Search Results

Search found 2821 results on 113 pages for 'curious jo'.

Page 100/113 | < Previous Page | 96 97 98 99 100 101 102 103 104 105 106 107  | Next Page >

  • Chrome: Dynamically created <style> tag does not have content?

    - by Shizhidi
    Hello. I encountered a weird problem when trying to write a cross-browser script. Basically my header looks like this <html> <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script> </head> Then in the body tag: <p id="hey">Hey</p> <input type="button" value="attachStyle" name="attachStyle" onclick="attachStyle();"></input> <script> function attachStyle() { var strVar=""; strVar += "<style type='text\/css'>#hey {border:5px solid red;}<\/script>"; $("head").append(strVar); } </script> The button works in Firefox, but not in Chrome. When I looked at the html DOM elements in the developer tool, the style tag was inserted but without content, like this: <html> <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script> <style type='text/css'></script> </head> I'm curious as to what causes this? And how to create CSS style in a way that is cross-browser? Thanks!

    Read the article

  • Two part question about submitting bluetooth-enabled apps for the iPhone

    - by Kyle
    I have a couple of questions about submitting Bluetooth-enabled apps on the iPhone. I want to say first that Bluetooth is merely an option in the application; the application does not rely completely on Bluetooth, as there are many modes the user can go into. First, do they require you to have the "peer-peer" key set in UIRequiredDeviceCapabilities even if the Bluetooth interface options can be disabled or hidden for non-Bluetooth devices? Basically, it's just an OPTION in the game and there are many other modes the player can play. Does Apple not allow you to do that? I'm just curious, because it seems like something they would do. Adding to that, how do you check for its functionality at runtime? In essence, how do you check UIRequiredDeviceCapabilities at runtime? I'm aware of checking iPhone device types, so would that be a proper way of going about it? I'm also not sure which devices can run the Bluetooth GameKit; there doesn't seem to be a proper reference at the SDK site, or I'm unable to find it. Thanks for reading! [edit] I can confirm that somebody was rejected for submitting a Bluetooth-enabled app which didn't work on an iPhone 2G. Of course, they didn't say if that was the MAIN function of the app, though.

    Read the article

  • What is happening in Crockford's object creation technique?

    - by Chris Noe
    There are only 3 lines of code, and yet I'm having trouble fully grasping this: Object.create = function (o) { function F() {} F.prototype = o; return new F(); }; newObject = Object.create(oldObject); (from Prototypal Inheritance) 1) Object.create() starts out by creating an empty function called F. I'm thinking that a function is a kind of object. Where is this F object being stored? Globally I guess. 2) Next our oldObject, passed in as o, becomes the prototype of function F. Function (i.e., object) F now "inherits" from our oldObject, in the sense that name resolution will route through it. Good, but I'm curious what the default prototype is for an object, Object? Is that also true for a function-object? 3) Finally, F is instantiated and returned, becoming our newObject. Is the "new" operation strictly necessary here? Doesn't F already provide what we need, or is there a critical difference between function-objects and non-function-objects? Clearly it won't be possible to have a constructor function using this technique. What happens the next time Object.create() is called? Is global function F overwritten? Surely it is not reused, because that would alter previously configured objects. And what happens if multiple threads call Object.create(), is there any sort of synchronization to prevent race conditions on F?

    Read the article

  • How does OpenStack Swift handle concurrent RESTful API requests?

    - by Chen Xie
    I installed a Swift service and was trying to find out how well it handles concurrent requests. So I created a massive number of threads in Java and sent requests via the RESTful API. Not surprisingly, when the number of requests climbed, the program started to throw exceptions. Caused by: java.net.ConnectException: Connection timed out: connect at java.net.DualStackPlainSocketImpl.connect0(Native Method) at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:69) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:157) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) at java.net.Socket.connect(Socket.java:579) at java.net.Socket.connect(Socket.java:528) at sun.net.NetworkClient.doConnect(NetworkClient.java:180) at sun.net.www.http.HttpClient.openServer(HttpClient.java:378) at sun.net.www.http.HttpClient.openServer(HttpClient.java:473) at sun.net.www.http.HttpClient.(HttpClient.java:203) But can anyone tell me how that timeout happened? I am curious how Swift handles those requests. Does it queue the requests, so that when there are too many in the queue and they wait too long they simply get kicked out of the queue? If so, does that mean it uses an asynchronous mechanism to handle requests? Thanks.

    Read the article

  • std::conditional compile-time branch evaluation

    - by cmannett85
    Compiling this: template < class T, class Y, class ...Args > struct isSame { static constexpr bool value = std::conditional< sizeof...( Args ), typename std::conditional< std::is_same< T, Y >::value, isSame< Y, Args... >, // Error! std::false_type >::type, std::is_same< T, Y > >::type::value; }; int main() { qDebug() << isSame< double, int >::value; return EXIT_SUCCESS; } Gives me this compiler error: error: wrong number of template arguments (1, should be 2 or more) The issue is that isSame< double, int > has an empty Args parameter pack, so isSame< Y, Args... > effectively becomes isSame< Y >, which does not match the signature. But my question is: why is that branch being evaluated at all? sizeof...( Args ) is false, so the inner std::conditional should not be evaluated. This isn't a runtime piece of code; the compiler knows that sizeof...( Args ) will never be true with the given template types. If you're curious, it's supposed to be a variadic version of std::is_same, not that it works...
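
    Both arms of a std::conditional are ordinary template arguments, so the compiler has to form both types before ::type selects one; the rejected branch is not discarded the way a runtime if would be. A common workaround is to move the recursion into a partial specialization so the problematic instantiation is never written down. A minimal sketch, assuming C++11, using an illustrative all_same name rather than the question's isSame:

        #include <type_traits>

        // Primary template: an empty (or exhausted) pack is trivially "all the same".
        template <class T, class... Args>
        struct all_same : std::true_type {};

        // The partial specialization peels off one argument per step, so the
        // recursive case only exists when the pack is non-empty.
        template <class T, class Y, class... Args>
        struct all_same<T, Y, Args...>
            : std::integral_constant<bool,
                  std::is_same<T, Y>::value && all_same<Y, Args...>::value> {};

        static_assert(!all_same<double, int>::value, "double and int differ");
        static_assert( all_same<int, int, int>::value, "all int");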

    Read the article

  • Changing where a resource is pulled during runtime?

    - by Brandon
    I have a website that goes out to multiple clients. Sometimes a client will insist on minor changes. For reasons beyond my control, I have to comply no matter how minor the request. Usually this isn't a problem; I would just create a client-specific version of the user control or page and overwrite the default one during build time, or make a configuration setting to handle it. Now that I am localizing the site, I'm curious about the best way to go about making minor wording changes. Let's say I have a resource file called Resources.resx that has 300 resources in it. It has a resource called Continue. The English value is "Continue", the French value is "Continuez". Now one client, for whatever reason, wants it to say "Next" and "Après" and the others want to keep it the same. What is the best way to accommodate a request like this? (This is just a simple example.) The only two ways I can think of are to: Create another Resources.resx specific to the client, and replace the .dll during build time. Since I'd be completely replacing the dll, the new resource file would have to contain all 300 strings. The obvious problem being that I now have 2 resource files, each with 300 strings to maintain. Create a custom user control/page and change it to use a custom resource file, e.g. SignIn.ascx would be replaced during the build and it would pull its resources from ClientName.resx instead of Resources.resx. Are there any other things I could try? Is there any way to change it so that the application will always look in a ClientResources.resx file for the overridden values before actually looking at the specified resource file?

    Read the article

  • Web development scheme for staging and production servers using Git Push

    - by ServAce85
    I am using git to manage a dynamic website (PHP + MySQL) and I want to send my files from my localhost to my staging and production servers in the most efficient and hassle-free way. I am currently convinced that the best way for me to approach this problem is to use this git branching model to organize my local git repo. From there, I will use the release branches to push to my staging server for testing. Once I am happy that the release code works on the staging server, I can then merge with my master branch and push that to my production server. Pushing to Staging Server: As noted in many introductory git posts, I could run into problems pushing into a non-bare repo, so, as suggested in this response, I plan to push the release branch to a bare repo on the server and have a post-receive hook that clones the bare repo to a non-bare repo that also acts as the web-hosted directory. Pushing to Production Server: Here's my newest source of confusion... In the response that I cited above, it made me curious as to why @Paul states that it's a completely different story when pushing to a live production server. I guess I don't see the problem. Would it be safe and hassle-free to follow the same steps as above, but for the master branch? Where are the potential pitfalls? Config Files: With respect to configuration files that are unique to each environment (.htaccess, config.php, etc.), it seems simplest to .gitignore each of those files in their respective repos on their respective servers. Can you see anything immediately wrong with this? Better solutions? Accessing Data: Finally, as I initially stated, the site uses MySQL databases to store data. How would you suggest I access that data (for testing purposes) from the staging server and localhost? I realize that I may have asked way too many questions for a single post, but since they're all related to the best way to set up this development scheme, I thought it was necessary.

    Read the article

  • Do database engines other than SQL Server behave this way?

    - by Yishai
    I have a stored procedure that goes something like this (pseudo code): storedprocedure param1, param2, param3, param4 begin if (param4 = 'Y') begin select * from SOME_VIEW order by somecolumn end else if (param1 is null) begin select * from SOME_VIEW where (param2 is null or param2 = SOME_VIEW.Somecolumn2) and (param3 is null or param3 = SOME_VIEW.SomeColumn3) order by somecolumn end else select somethingcompletelydifferent end All ran well for a long time. Suddenly, the query started running forever if param4 was 'Y'. Changing the code to this: storedprocedure param1, param2, param3, param4 begin if (param4 = 'Y') begin set param2 = null set param3 = null end if (param1 is null) begin select * from SOME_VIEW where (param2 is null or param2 = SOME_VIEW.Somecolumn2) and (param3 is null or param3 = SOME_VIEW.SomeColumn3) order by somecolumn end else select somethingcompletelydifferent And it runs again within expected parameters (15 seconds or so for 40,000+ records). This is with SQL Server 2005. The gist of my question is: is this particular "feature" specific to SQL Server, or is it common among RDBMSs in general, that queries which ran fine for two years just stop working as the data grows, and the "new" execution plan destroys the ability of the database server to execute the query even though a logically equivalent alternative runs just fine? This may seem like a rant against SQL Server, and I suppose to some degree it is, but I really do want to know if others experience this kind of reality with Oracle, DB2 or any other RDBMS. Although I have some experience with others, I have only seen this kind of volume and complexity on SQL Server, so I'm curious if others with large complex databases have similar experience in other products.

    Read the article

  • Tail recursion and memoization with C#

    - by Jay
    I'm writing a function that finds the full path of a directory based on a database table of entries. Each record contains a key, the directory's name, and the key of the parent directory (it's the Directory table in an MSI if you're familiar). I had an iterative solution, but it started looking a little nasty. I thought I could write an elegant tail recursive solution, but I'm not sure anymore. I'll show you my code and then explain the issues I'm facing. Dictionary<string, string> m_directoryKeyToFullPathDictionary = new Dictionary<string, string>(); ... private string ExpandDirectoryKey(Database database, string directoryKey) { // check for terminating condition string fullPath; if (m_directoryKeyToFullPathDictionary.TryGetValue(directoryKey, out fullPath)) { return fullPath; } // inductive step Record record = ExecuteQuery(database, "SELECT DefaultDir, Directory_Parent FROM Directory where Directory.Directory='{0}'", directoryKey); // null check string directoryName = record.GetString("DefaultDir"); string parentDirectoryKey = record.GetString("Directory_Parent"); return Path.Combine(ExpandDirectoryKey(database, parentDirectoryKey), directoryName); } This is how the code looked when I realized I had a problem (with some minor validation/massaging removed). I want to use memoization to short circuit whenever possible, but that requires me to make a function call to the dictionary to store the output of the recursive ExpandDirectoryKey call. I realize that I also have a Path.Combine call there, but I think that can be circumvented with a ... + Path.DirectorySeparatorChar + .... I thought about using a helper method that would memoize the directory and return the value so that I could call it like this at the end of the function above: return MemoizeHelper( m_directoryKeyToFullPathDictionary, Path.Combine(ExpandDirectoryKey(database, parentDirectoryKey)), directoryName); But I feel like that's cheating and not going to be optimized as tail recursion. Any ideas? Should I be using a completely different strategy? This doesn't need to be a super efficient algorithm at all, I'm just really curious. I'm using .NET 4.0, btw. Thanks!

    Read the article

  • Is there some formal way to update the browser detection files for ASP.Net?

    - by Deane
    I have an ASP.Net site on which we're using control adapters. We have the adapters mapped to a "refID" of "Default." These adapters are working fine on all browsers except Chrome and Safari. For those browsers, they do not execute. I've given up trying to figure out why -- I have a question here on SO that no one has been able to answer, and I've been researching it for days now. It's just inexplicable. I have tested the same code in my local environment, and it works just fine. Additionally, no one else can replicate my problem on other servers. It seems to be somehow confined to the machines at my client's site. Could they be somehow out of date? If this is the case, is there some way to "update" the .browser files? I'm half-tempted to just copy the .browser files out of the Framework directory from my machine over to theirs, but I'm curious if there's something more formal than this. Is there some other source of data that ASP.Net uses for browser detection other than these files?

    Read the article

  • How can I make nested string splits?

    - by Statement
    I have what at first seemed to be a trivial problem, but it turned out to be something I can't figure out how to solve easily. I need to be able to store lists of items in a string. Those items in turn can be a list, or some other value that may contain my separator character. I have two different methods that unpack the two different cases, but I realized I need to protect the contained values from any separator characters used with string.Split. To illustrate the problem: string[] nested = { "mary;john;carl", "dog;cat;fish", "plainValue" }; string list = string.Join(";", nested); string[] unnested = list.Split(';'); // EEK! returns 7 items, expected 3! This produces the string "mary;john;carl;dog;cat;fish;plainValue", a value I can't split to get the three original nested strings from. Indeed, instead of the three original strings, I'd get seven strings from the split, so this approach doesn't work at all. What I want is to allow the values in my string to be encoded so I can unpack/split the contents exactly as they were before I packed/joined them. I assume I might need to move away from string.Split and string.Join, and that is perfectly fine; I might just have overlooked some useful class or method. How can I allow any string values to be packed and unpacked into lists? I prefer neat, simple solutions over bulky ones if possible. For the curious mind: I am making extensions for PlayerPrefs in Unity3D, and I can only work with ints, floats and strings, so I chose strings to be my data carrier. This is why I am making this nested list of strings.
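
    The underlying fix is language-independent: escape the separator (and the escape character itself) inside each element before joining, and split only on unescaped separators afterwards. A rough sketch of that idea, written in C++ here rather than the question's C#, with ';' as the separator and '\' as the escape character (both arbitrary choices):

        #include <cstddef>
        #include <string>
        #include <vector>

        // Escape the escape character and the separator inside one element.
        std::string escapeItem(const std::string& s) {
            std::string out;
            for (char c : s) {
                if (c == '\\' || c == ';') out += '\\';
                out += c;
            }
            return out;
        }

        // Split a packed string on unescaped ';' only, undoing the escaping as we go.
        std::vector<std::string> splitEscaped(const std::string& packed) {
            std::vector<std::string> items(1);
            for (std::size_t i = 0; i < packed.size(); ++i) {
                if (packed[i] == '\\' && i + 1 < packed.size()) {
                    items.back() += packed[++i];   // keep the character that was escaped
                } else if (packed[i] == ';') {
                    items.emplace_back();          // unescaped separator starts a new item
                } else {
                    items.back() += packed[i];
                }
            }
            return items;
        }

    Joining the escaped items with a bare ';' then round-trips: splitEscaped returns the original three elements even when they themselves contain the separator.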

    Read the article

  • Do ORMs normally allow circular relations? If so, how would they handle it?

    - by SeanJA
    I was hacking around trying to make a basic ORM that has support for one => one and one => many relationships. I think I succeeded somewhat, but I am curious about how to handle circular relationships. Say you had something like this: user::hasOne('car'); car::hasMany('wheels'); car::property('type'); wheel::hasOne('car'); You could then do this (theoretically): $u = new user(); echo $u->car->wheels[0]->car->wheels[1]->car->wheels[2]->car->wheels[3]->type; #=> "monster truck" Now, I am not sure why you would want to do this. It seems like it wastes a whole pile of memory and time just to get to something that could have been done in a much shorter way. In my small ORM, I now have 4 copies of the wheel class and 4 copies of the car class in memory, which causes a problem: if I update one of them and save it back to the database, the rest get out of date and could overwrite the changes that were already made. How do other ORMs handle circular references? Do they even allow it? Do they go back up the tree and create a pointer to one of the parents? Do they let the coder shoot themselves in the foot if they are silly enough to go around in circles?

    Read the article

  • Is there a way to increase the efficiency of shared_ptr by storing the reference count inside the controlled object?

    - by BillyONeal
    Hello everyone :) This is becoming a common pattern in my code, for when I need to manage an object that needs to be noncopyable because either A. it is "heavy" or B. it is an operating system resource, such as a critical section: class Resource; class Implementation : public boost::noncopyable { friend class Resource; HANDLE someData; Implementation(HANDLE input) : someData(input) {}; void SomeMethodThatActsOnTheHandle() { //Do stuff }; public: ~Implementation() { FreeHandle(someData); } }; class Resource { boost::shared_ptr<Implementation> impl; public: explicit Resource(int argA) { HANDLE handle = SomeLegacyCApiThatMakesSomething(argA); if (handle == INVALID_HANDLE_VALUE) throw SomeTypeOfException(); impl.reset(new Implementation(handle)); }; void SomeMethodThatActsOnTheResource() { impl->SomeMethodThatActsOnTheHandle(); }; }; This way, shared_ptr takes care of the reference counting headaches, allowing Resource to be copyable, even though the underlying handle should only be closed once all references to it are destroyed. However, it seems like we could save the overhead of allocating shared_ptr's reference counts and such separately if we could move that data inside Implementation somehow, like boost's intrusive containers do. If this raises premature-optimization hackles for some people, I actually agree that I don't need this for my current project. But I'm curious if it is possible.
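
    For what the question literally asks (keeping the count inside the object), boost::intrusive_ptr is the usual tool: the pointee carries its own counter plus add-ref/release hooks found by argument-dependent lookup. A hedged sketch with illustrative names, using C++11's std::atomic for the counter; the non-intrusive alternative is make_shared, which already folds the count and the object into a single allocation:

        #include <boost/intrusive_ptr.hpp>
        #include <atomic>

        class Implementation {
            std::atomic<long> refCount{0};
            // Hooks used by boost::intrusive_ptr, found via ADL.
            friend void intrusive_ptr_add_ref(Implementation* p) { ++p->refCount; }
            friend void intrusive_ptr_release(Implementation* p) {
                if (--p->refCount == 0) delete p;
            }
        public:
            void SomeMethodThatActsOnTheHandle() { /* ... */ }
        };

        // Copyable handle, one allocation per object:
        // ImplPtr p(new Implementation); copies just bump the embedded count.
        using ImplPtr = boost::intrusive_ptr<Implementation>;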

    Read the article

  • Is it possible to create a Python iterator over pre-defined mutable data?

    - by Wilduck
    I might be doing this wrong; if I am, let me know, but I'm curious if the following is possible: I have a class that holds a number of dictionaries, each of which pairs names to a different set of objects of a given class. For example: items = {"ball" : ItemInstance1, "sword" : ItemInstance2} people = {"Jerry" : PersonInstance1, "Bob" : PersonInstance2, "Jill" : PersonInstance3} My class would then hold the current items and people that are available, and these would be subject to change as the state changes: class State: def __init__(self, items, people): self.items = items self.people = people I would like to define an iter() and next() method such that it iterates through all of the values in its attributes. My first question is whether or not this is possible. If it is, will it be able to support a situation as follows: I define items and people as above, then: state = State(items, people) for name, thing in state: print name + " is " + thing.color items["cheese"] = ItemInstance3 for name, thing in state: print name + " weighs " + thing.weight While I feel like this would be useful in the code I have, I don't know if it's either possible or the right approach. Everything I've read about user-defined iterators has suggested that each instance of them is one use only.

    Read the article

  • How do common web frameworks (Django, Rails, Symfony, etc) handle multiple instances of the same plugin?

    - by Steven Wei
    Do any of the popular web frameworks solve this problem well? Here's an example: suppose you're running one of these web frameworks and you want to install a blog plugin. Except instead of a single blog, you need to run two separate instances of the blog plugin, and you want to keep them segregated. Or say you want to install multiple instances of a user authentication plugin, because you want to segregate your administrative users from your customer user accounts. Or say you want to install multiple instances of a wiki plugin for different parts of your site, or multiple instances of a comments plugin, or whatever else. It seems to me that at the basic level, each instance of a plugin would need to be configured with a different set of database tables, and would need to be 'installed' at a different URL path. My experience is mostly with Django and Symfony, and I haven't seen a clean solution to this problem in either of them. They both tend to assume that each plugin (or app, in Django's case) is only ever going to be installed once. I'm curious if the Rails folks have figured out a clean solution to this problem, or any other framework authors (in any language). And if you were going to design a solution to this problem, what would it look like?

    Read the article

  • Storing Credit Card Numbers in SESSION - ways around it?

    - by JM4
    I am well aware of PCI compliance, so I don't need an earful about storing CC numbers (and especially CVV numbers) within our company database during the checkout process. However, I want to be as safe as possible when handling sensitive consumer information, and I am curious how to get around passing CC numbers from page to page WITHOUT using SESSION variables, if at all possible. My site is built in this way: Step 1) Collect credit card information from the customer. When the customer hits submit, the information is first run through JS validation, then through PHP validation; if all passes, he moves to step 2. Step 2) The information is displayed on a review page so the customer can confirm the details of the upcoming transaction. Only the first 6 and last 4 digits of the CC are shown on this page, but the card type and exp date are shown fully. If he clicks proceed, Step 3) The information is sent to another PHP page, which runs one last validation, sends the information through the secure payment gateway, and a string is returned with the details. Step 4) If all is well, the consumer information (personal, not CC) is stored in the DB and the customer is redirected to a completion page. If anything is bad, he is informed and told to revisit the CC processing page to try again (max of 3 times). Any suggestions?

    Read the article

  • Firefox render order problem with div tag

    - by flavour404
    I have a div tag which I populate dynamically. The problem is that in Firefox, when I do a test for size (height), I seem to need to run it twice in order to get the correct size. This is the code: alert("h = " + h + " height:" + document.getElementById("thumbDiv").clientHeight); Ignore 'h' for the time being; what I am curious to know is the correct way to get the div tag's height in Firefox. In IE I use offsetHeight, which works for my purposes perfectly. The other thing is the render order in Firefox. I populate the div and then query the height with .clientHeight and I get 102, which I am assuming is the empty height of the tag, as I have set no height via style. If I press the button again, I then get the height of the div with the enclosed HTML page which I am pushing into the div. It's odd, and slightly annoying. I am trying to determine if there is enough room in the browser to display the div contents in their entirety; if not, then I am disabling certain features, otherwise I get into an infinite scroll problem... Thanks, R.

    Read the article

  • Fun with casting and inheritance

    - by Vaccano
    NOTE: This question is written in C#-like pseudocode, but I am really going to ask which languages have a solution. Please don't get hung up on syntax. Say I have two classes: class AngleLabel: CustomLabel { public bool Bold; // code to allow the label to be on an angle } class Label: CustomLabel { public bool Bold; // Code for a normal label // Maybe has code not in an AngleLabel (align for example). } They both descend from this class: class CustomLabel: Control { protected bool Bold; } The Bold field is exposed as public in the descended classes. No interfaces are available on the classes. Now, I have a method that I want to be able to pass a CustomLabel into and set the Bold property. Can this be done without having to 1) find out what the real class of the object is, 2) cast to that object, and then 3) write separate code for each variable of each label type to set Bold? Kind of like this: public void SetBold(customLabel: CustomLabel) { AngleLabel angleLabel; NormalLabel normalLabel; if (customLabel is AngleLabel) { angleLabel = customLabel as AngleLabel angleLabel.Bold = true; } if (customLabel is Label) { normalLabel = customLabel as Label normalLabel.Bold = true; } } It would be nice to maybe make one cast and then set Bold on one variable. What I was musing about was to make a fourth class that just exposes the Bold variable and cast my custom label to that class. Would that work? If so, which languages would it work for? (This example is drawn from an old version of Delphi (Delphi 5).) I don't know if it would work for that language (I still need to try it out), but I am curious if it would work for C++, C# or Java. If not, any ideas on what would work? (Remember, no interfaces are provided and I cannot modify the classes.) Anyone have a guess?
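
    For the C++ part of the question specifically, a function template sidesteps the casting entirely: each instantiation resolves .Bold against the concrete label type at compile time, so no shared interface or writable base-class member is needed. A sketch with stand-in types (not the real Delphi/C# classes); the catch is that it only helps when the caller still knows the static type, not when all it holds is a CustomLabel pointer:

        // Stand-in types for illustration only.
        struct AngleLabel  { bool Bold = false; /* angle-specific members */ };
        struct NormalLabel { bool Bold = false; /* alignment, etc. */ };

        // One definition serves every label type that exposes a public Bold member.
        template <typename LabelT>
        void SetBold(LabelT& label) {
            label.Bold = true;
        }

        int main() {
            AngleLabel a;
            NormalLabel n;
            SetBold(a);
            SetBold(n);
        }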

    Read the article

  • Could I do this blind relative to absolute path conversion (for perforce depot paths) better?

    - by wonderfulthunk
    I need to "blindly" (i.e. without access to the filesystem, in this case the source control server) convert some relative paths to absolute paths. So I'm playing with dotdots and indices. For those that are curious I have a log file produced by someone else's tool that sometimes outputs relative paths, and for performance reasons I don't want to access the source control server where the paths are located to check if they're valid and more easily convert them to their absolute path equivalents. I've gone through a number of (probably foolish) iterations trying to get it to work - mostly a few variations of iterating over the array of folders and trying delete_at(index) and delete_at(index-1) but my index kept incrementing while I was deleting elements of the array out from under myself, which didn't work for cases with multiple dotdots. Any tips on improving it in general or specifically the lack of non-consecutive dotdot support would be welcome. Currently this is working with my limited examples, but I think it could be improved. It can't handle non-consecutive '..' directories, and I am probably doing a lot of wasteful (and error-prone) things that I probably don't need to do because I'm a bit of a hack. I've found a lot of examples of converting other types of relative paths using other languages, but none of them seemed to fit my situation. These are my example paths that I need to convert, from: //depot/foo/../bar/single.c //depot/foo/docs/../../other/double.c //depot/foo/usr/bin/../../../else/more/triple.c to: //depot/bar/single.c //depot/other/double.c //depot/else/more/triple.c And my script: begin paths = File.open(ARGV[0]).readlines puts(paths) new_paths = Array.new paths.each { |path| folders = path.split('/') if ( folders.include?('..') ) num_dotdots = 0 first_dotdot = folders.index('..') last_dotdot = folders.rindex('..') folders.each { |item| if ( item == '..' ) num_dotdots += 1 end } if ( first_dotdot and ( num_dotdots > 0 ) ) # this might be redundant? folders.slice!(first_dotdot - num_dotdots..last_dotdot) # dependent on consecutive dotdots only end end folders.map! { |elem| if ( elem !~ /\n/ ) elem = elem + '/' else elem = elem end } new_paths << folders.to_s } puts(new_paths) end
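
    The algorithm that also copes with non-consecutive '..' components is the usual stack walk: push each folder, and let every '..' pop the most recent one. Sketched below in C++ rather than the script's Ruby, purely to show the shape of the loop; the empty leading components of '//depot' are kept so the double slash survives the re-join:

        #include <iostream>
        #include <sstream>
        #include <string>
        #include <vector>

        std::string collapseDotDots(const std::string& path) {
            std::vector<std::string> kept;
            std::stringstream ss(path);
            std::string part;
            while (std::getline(ss, part, '/')) {
                if (part == "..") {
                    if (!kept.empty()) kept.pop_back();  // ".." cancels the previous folder
                } else {
                    kept.push_back(part);                // includes the empty parts before "//depot"
                }
            }
            std::string out;
            for (std::size_t i = 0; i < kept.size(); ++i) {
                if (i) out += '/';
                out += kept[i];
            }
            return out;
        }

        int main() {
            std::cout << collapseDotDots("//depot/foo/usr/bin/../../../else/more/triple.c") << "\n";
            // expected: //depot/else/more/triple.c
        }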

    Read the article

  • Does it ever make sense to make a fundamental (non-pointer) parameter const?

    - by Scott Smith
    I recently had an exchange with another C++ developer about the following use of const: void Foo(const int bar); He felt that using const in this way was good practice. I argued that it does nothing for the caller of the function (since a copy of the argument was going to be passed, there is no additional guarantee of safety with regard to overwrite). In addition, doing this prevents the implementer of Foo from modifying their private copy of the argument. So, it both mandates and advertises an implementation detail. Not the end of the world, but certainly not something to be recommended as good practice. I'm curious as to what others think on this issue. Edit: OK, I didn't realize that const-ness of the arguments didn't factor into the signature of the function. So, it is possible to mark the arguments as const in the implementation (.cpp), and not in the header (.h) - and the compiler is fine with that. That being the case, I guess the policy should be the same for making local variables const. One could make the argument that having different looking signatures in the header and source file would confuse others (as it would have confused me). While I try to follow the Principle of Least Astonishment with whatever I write, I guess it's reasonable to expect developers to recognize this as legal and useful.
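
    Since top-level const on a by-value parameter is not part of the function type, the compromise the question's edit alludes to is legal and common: declare without const in the header, define with const in the source file. A small sketch (the file names are illustrative):

        // foo.h (illustrative): callers never see the const.
        void Foo(int bar);

        // foo.cpp: the same function; const only constrains the body.
        void Foo(const int bar) {
            // bar = 42;   // would now fail to compile
        }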

    Read the article

  • PayPal sandbox anomalies

    - by Christian
    When testing some donations on my local machine, I set various key=value pairs to do various things (return to a specific thank-you page, get POST data from PayPal rather than GET data, and others). I also built my code around the response from the PayPal sandbox. BUT, when my code goes to the production server and we switch on live payments and test with real accounts and money, a few strange things happen: We get a GET response from PayPal - the URL is filled with crap. We get no transaction details. This is the biggie: no name, no txn_id, no dates, nothing. We get a handful of keys etc.; it's not totally empty, and the payment has gone through, but it's nowhere near the verbosity of the sandbox. Curious about why this might be? It doesn't really make sense to have a sandbox (or dev environment) that is substantially different from the production environment. Or am I missing something? EDIT: Still no response to my question in the PayPal Developer Forums. I don't even get a donation amount back from PayPal. Is this a setting maybe? EDIT #2: Two of you have suggested checking PDT and Auto-Return. The data analytics guy for the project suggested the same only 2 hrs ago. I have asked the client to confirm this. I can't see a setting for it in the Sandbox, so can I assume that it is enabled by default?

    Read the article

  • How safe and reliable are C++ String Literals?

    - by DoctorT
    So, I'm wanting to get a better grasp on how string literals in C++ work. I'm mostly concerned with situations where you're assigning the address of a string literal to a pointer, and passing it around. For example: char* advice = "Don't stick your hands in the toaster."; Now let's say I just pass this string around by copying pointers for the duration of the program. Sure, it's probably not a good idea, but I'm curious what would actually be going on behind the scenes. For another example, let's say we make a function that returns a string literal: char* foo() { // function does stuff return "Yikes!"; // somebody's feeble attempt at an error message } Now let's say this function is called very often, and the string literal is only used about half the time it's called: // situation #1: it's just randomly called without heed to the return value foo(); // situation #2: the returned string is kept and used for who knows how long char* retVal = foo(); In the first situation, what's actually happening? Is the string just created but not used, and never deallocated? In the second situation, is the string going to be maintained as long as the user finds need for it? What happens when it isn't needed anymore... will that memory be freed up then (assuming nothing points to that space anymore)? Don't get me wrong, I'm not planning on using string literals like this. I'm planning on using a container to keep my strings in check (probably std::string). I'm mostly just wanting to know if these situations could cause problems either for memory management or corrupted data.
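
    The short version of what actually happens: a string literal is an array with static storage duration, so it exists for the whole run of the program, is never allocated per call, and is never freed; returning a pointer to one is safe, though it should be const char* (writing through it is undefined behaviour). A small sketch covering both situations from the question:

        #include <cstdio>

        const char* foo() {
            return "Yikes!";            // no allocation happens here, on any call
        }

        int main() {
            const char* kept = foo();   // situation #2: pointer stays valid for the whole program
            foo();                      // situation #1: result ignored, nothing leaks
            std::puts(kept);
        }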

    Read the article

  • Multi-reader IPC solution?

    - by gct
    I'm working on a framework in C++ (just for fun for now) that lets the user write plugins that use a standard API to stream data between each other. There are going to be three basic transport mechanisms for the data: files, sockets, and some kind of IPC piping system. The system is set up so that, for the non-file transports, each stream can have multiple readers; i.e., once a server socket is set up, multiple computers can connect and stream the data. I'm a little stuck on the multi-reader IPC system though. All my plugins run in threads, so they live in the same address space and some kind of shared-memory system would work fine. I was thinking I'd write my own circular buffer with a write pointer and read pointers chasing it around the buffer, but I have my doubts that I can achieve the same performance as something like Linux pipes. I'm curious what people would suggest for a multi-reader solution to something like this. Is the overhead for pipes or domain sockets low enough that I could just open a connection to each reader and issue separate writes to each reader? This is intended for significant volumes of data (tens of mega-samples/sec), so performance is a must.
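
    Since the plugins share one address space, the single-writer ring buffer the question describes is a reasonable fit; the usual shape is one monotonically increasing write index plus an independent cursor per reader. A minimal sketch only, with placeholder types and no overrun policy, which a real implementation would have to add:

        #include <array>
        #include <atomic>
        #include <cstddef>

        template <typename Sample, std::size_t N>
        class MultiReaderRing {
            std::array<Sample, N> buf{};
            std::atomic<std::size_t> head{0};          // total samples ever written
        public:
            struct Reader { std::size_t tail = 0; };   // per-reader cursor

            void push(const Sample& s) {
                std::size_t h = head.load(std::memory_order_relaxed);
                buf[h % N] = s;
                head.store(h + 1, std::memory_order_release);  // publish after the write
            }

            // Returns false when this reader has caught up; a real version must also
            // detect overrun (head - tail > N) and decide whether to drop or block.
            bool pop(Reader& r, Sample& out) {
                if (r.tail == head.load(std::memory_order_acquire)) return false;
                out = buf[r.tail % N];
                ++r.tail;
                return true;
            }
        };

    Compared with opening a pipe per reader, this does one write per sample regardless of reader count, at the cost of the overrun bookkeeping for slow readers.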

    Read the article

  • Segmentation fault in Qt application framework

    - by yan bellavance
    This generates a segmentation fault because of "QColor colorMap[9]". If I remove colorMap, the segmentation fault goes away. If I put it back, it comes back. If I do a clean all then a build all, it goes away. If I increase its array size, it comes back; on the other hand, if I reduce it, it doesn't come back. I tried adding this array to another project. What could be happening? I am really curious to know. I have removed everything else in that class. This widget subclass is used to promote a widget in a QMainWindow. class LevelIndicator : public QWidget { public: LevelIndicator(QWidget * parent); void paintEvent(QPaintEvent * event ); float percent; QColor colorMap[9]; int NUM_GRADS; }; The error happens inside ui_mainwindow.h at one of these lines: hpaFwdPwrLvl->setObjectName(QString::fromUtf8("hpaFwdPwrLvl")); verticalLayout->addWidget(hpaFwdPwrLvl); I know I am not providing much, but I will give a link to the app. I'm trying to see if anyone has a quick answer for this.

    Read the article

  • How to solve the "Digg" problem in MongoDB

    - by user193116
    A while back, a Digg developer posted this blog entry, "http://about.digg.com/blog/looking-future-cassandra", where he described one of the issues that were not optimally solved in MySQL. This was cited as one of the reasons for their move to Cassandra. I have been playing with MongoDB and I would like to understand how to implement the MongoDB collections for this problem. From the article, the schema for this information in MySQL: CREATE TABLE Diggs ( id INT(11), itemid INT(11), userid INT(11), digdate DATETIME, PRIMARY KEY (id), KEY user (userid), KEY item (itemid) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; CREATE TABLE Friends ( id INT(10) AUTO_INCREMENT, userid INT(10), username VARCHAR(15), friendid INT(10), friendname VARCHAR(15), mutual TINYINT(1), date_created DATETIME, PRIMARY KEY (id), UNIQUE KEY Friend_unique (userid,friendid), KEY Friend_friend (friendid) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; This problem is ubiquitous in social networking implementations: people befriend a lot of people, and they in turn digg a lot of things. Quickly showing a user what his/her friends are up to is very critical. I understand that several blogs have since provided a pure RDBMS solution with indexes for this issue; however, I am curious how this could be solved in MongoDB.

    Read the article
