Search Results

Search found 8367 results on 335 pages for 'temporal difference'.


  • jQuery plugin: private functions inside vs. outside the each loop

    - by Pablo
    What is the difference between declaring the private functions of a jQuery plugin outside the each loop and declaring them inside it, as in the examples below?

    Outside the loop:

        (function( $ ){
            var defaults = {};
            $.fn.cmFlex = function(opts) {
                this.each(function() {
                    var $this = $(this);
                    //Element specific options
                    var o = $.extend({}, defaults, opts);
                    //Code here
                });
                function f1(){....
                function f3(){....
                function f2(){....
            };
        })( jQuery );

    Inside the loop:

        (function( $ ){
            var defaults = {};
            $.fn.cmFlex = function(opts) {
                this.each(function() {
                    var $this = $(this);
                    //Element specific options
                    var o = $.extend({}, defaults, opts);
                    function f1(){....
                    function f3(){....
                    function f2(){....
                });
            };
        })( jQuery );

    The advantage of declaring the functions inside the loop is that I can access the $this variable and the element-specific options from f1(), f2() and f3(). Are there any disadvantages to this?

    Read the article

  • cfdiv working in FF and Safari but does not show up in IE

    - by JS
    Within a form I have a button that launches a cfwindow and presents a search screen for the user to make a selection. Once the selection is made, the cfwindow closes and the selected content shows in the main page by being bound to a cfdiv. This all works fine in Firefox, but the cfdiv doesn't show at all in IE: the cfwindow works and the select works, but no bound page appears. I have tried setting bindonload and that made no difference (and I need it to be true if there is content pulled in via a query when the page loads). All I have found so far regarding this issue is advice to set bindonload to false and put the cfdiv outside of the form, but that isn't possible in my current design. Here is a snippet which sits inside a larger form (all necessary tags are cfimported):

        <td class="left" style="white-space:nowrap;">
          <cfoutput>#n#</cfoutput>.&nbsp;
          <cfinput type="button" value="Select" name="x#n#Select#i#"
            onClick="ColdFusion.Window.create('x#n#Select#i#', 'Exercise Lookup',
              'xSelect/xSelect.cfm?xNameVar=x#n#S#i#&window=x#n#Select#i#&workout=workout#i#',
              {x:100,y:100,height:500,width:720,modal:true,closable:true,draggable:true,resizable:true,center:true,initshow:true,minheight:200,minwidth:200})" />
          &nbsp;
          <cfdiv id="x#n#S#i#" tagName="x#n#S#i#" bind="url:xSelect/x.cfm"></cfdiv>
          <cfinput type="hidden" name="x#n#s#i#" value="#n#">
        </td>

    Thanks for any help.

    Read the article

  • Should I learn two (or more) programming languages in parallel?

    - by c_maker
    I found entries on this site about learning a new programming language, but I have not come across anything about the advantages and disadvantages of learning two languages at the same time. Let's say my goal is to learn two new languages in a year. I understand that the definition of learning a new language is different for everyone, and you can probably never know everything about a language. I believe in most cases the following are enough to include the language on your resume and say that you are proficient in it (in no particular order):

      - Know its syntax well enough to write a simple program in it
      - Compare its underlying concepts with concepts of other languages
      - Know its best practices
      - Know what libraries are available
      - Know in what situations to use it
      - Understand the flow of a more complex program
      - At least know most of what you do not know

    I would probably look for a good book and pick an open source project for each of these languages to start with. My questions:

      - Is it best to spend 5 months learning language #1, then 5 months learning language #2, or should you mix the two? By mixing them I mean working on them in parallel.
      - Should you pick two languages that are similar or different? Are there any advantages or disadvantages to, say, learning Lisp in tandem with Ruby?
      - Is it a good idea to pick two languages with similar syntax, or would that be too confusing?

    Please tell me what your experiences are regarding this. Does it make a difference if you are a beginner or a senior programmer?

    Read the article

  • How can I compare the performance of log() and fp division in C++?

    - by Ventzi Zhechev
    Hi, I'm using a log-based class in C++ to store very small floating-point values (the values would otherwise underflow a double). As I'm performing a large number of multiplications, this has the added benefit of converting the multiplications into sums. However, at a certain point in my algorithm I need to divide a standard double value by an integer value and then apply *= to a log-based value. I have overloaded the *= operator for my log-based class; the right-hand-side value is first converted to a log-based value by running log() and then added to the left-hand-side value. Thus the operations actually performed are a floating-point division, a log(), and a floating-point addition. My question is whether it would be faster to first convert the denominator to a log-based value, which would replace the floating-point division with a floating-point subtraction, yielding the following chain of operations: two log() calls, a floating-point subtraction, and a floating-point addition. In the end, this boils down to whether floating-point division is faster or slower than log(). I suspect the common answer is that this is compiler- and architecture-dependent, so I'll say that I use gcc 4.2 from Apple on darwin 10.3.0. Still, I hope to get an answer with a general remark on the relative speed of these two operations and/or an idea of how to measure the difference myself, as there might be more going on here, e.g. executing the constructors that do the type conversion. Cheers!
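
    One rough way to measure this is to time a large batch of each operation on the target machine and compare. Below is a minimal, hypothetical sketch: N, the loop bodies and the volatile accumulator are arbitrary choices meant only to keep the compiler from optimizing the work away, and a real measurement for this class would also need to account for the constructors that perform the type conversion.

        #include <cmath>
        #include <cstdio>
        #include <ctime>

        int main() {
            const int N = 10000000;
            volatile double sink = 0.0;          // volatile so the loops are not optimized away
            const double x = 1.2345;

            std::clock_t t0 = std::clock();
            for (int i = 1; i <= N; ++i)
                sink = sink + x / i;             // floating-point division
            std::clock_t t1 = std::clock();
            for (int i = 1; i <= N; ++i)
                sink = sink + std::log(x + i);   // log()
            std::clock_t t2 = std::clock();

            std::printf("div: %.0f ms, log: %.0f ms\n",
                        1000.0 * (t1 - t0) / CLOCKS_PER_SEC,
                        1000.0 * (t2 - t1) / CLOCKS_PER_SEC);
            return 0;
        }

    Compiling the benchmark with the same flags as the real application matters here, since the optimization level changes the relative cost of the two operations considerably.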

    Read the article

  • How often should network traffic/collisions cause SNMP Sets to fail?

    - by A. Levy
    My team has a situation where an SNMP SET will fail once every two weeks or so. Since this SET happens automatically, we don't necessarily notice immediately when it fails, and this can result in an inconsistent configuration and the associated wailing and gnashing of teeth. The plan is to fix this by having our software automatically retry the SET when it fails. The problem is, we aren't sure why the failure is happening. My (extremely limited) knowledge of SNMP isn't particularly helpful in diagnosing this problem, so I thought I'd ask StackOverflow for some advice. We think that every so often a spike in network traffic causes the SET to fail. Since SNMP uses UDP for communication, I would think it would be relatively easy for a command to be drowned out if traffic was high for a short period of time; however, I have no idea how common this is. We have a small network with a single Cisco router, and there are fewer than a dozen SNMP-controlled devices on that network. In addition to the SNMP traffic, some status web pages are being loaded from the various devices. In case it makes a difference, I believe we are using the AdventNet SNMP API version 4.0.4 for Java. Does it sound reasonable that some SET commands will be dropped occasionally, or should we be looking for other causes?

    Read the article

  • Is there a set-based solution for this problem?

    - by NYSystemsAnalyst
    We have a table set up as follows:

        |ID|EmployeeID|Date     |Category       |Hours|
        |1 |1         |1/1/2010 |Vacation Earned|2.0  |
        |2 |2         |2/12/2010|Vacation Earned|3.0  |
        |3 |1         |2/4/2010 |Vacation Used  |1.0  |
        |4 |2         |5/18/2010|Vacation Earned|2.0  |
        |5 |2         |7/23/2010|Vacation Used  |4.0  |

    The business rules are:

      - Vacation balance is calculated as vacation earned minus vacation used.
      - Vacation used is always applied against the oldest vacation earned amount first.

    We need to return the Vacation Earned rows that have not been offset by vacation used. If vacation used has only offset part of a Vacation Earned record, we need to return that record showing the difference. For example, using the above table, the result set would look like:

        |ID|EmployeeID|Date     |Category       |Hours|
        |1 |1         |1/1/2010 |Vacation Earned|1.0  |
        |4 |2         |5/18/2010|Vacation Earned|1.0  |

    Note that record 2 was eliminated because it was completely offset by used time, while records 1 and 4 were only partially used, so they were calculated and returned as such. The only way we have thought of to do this is to get all of the Vacation Earned records into a temporary table, then get the total vacation used and loop through the temporary table, deleting the oldest record and subtracting its value from the total vacation used until the total vacation used is zero. We could clean it up for the case where the remaining vacation used covers only part of the oldest Vacation Earned record. This would leave us with just the outstanding Vacation Earned records. This works, but it is very inefficient and performs poorly, and performance will only degrade over time as more records are added. Are there any suggestions for a better solution, preferably set-based? If not, we'll just have to go with this.

    Read the article

  • Domain model for an online WYSIWYG webpage generator / runtime

    - by CharlieBrown
    Hi all, I'm using C#, MVC, NHibernate and StructureMap as my IoC container, and need some ideas regarding my domain model. The application I'm working on has two parts: an Authoring part and a Runtime part. The idea is to allow the user to create a webpage in Authoring (mostly a form, actually) by choosing from a set of predefined controls. That webpage will later be used as a form in a call center environment (the Runtime part), or may be used in an intranet portal, etc. Basically something similar to what a CMS would do. The difference is, of course, that the webpage/form the author generates will be used and filled in at runtime, and that authors should be able to freely create the webpage they want without limitations. I have a draft working model that allows a RunController to iterate over the Controls collection of a ScriptPage (my class for the "generated webpage") and uses partial views to render each of them. It works reasonably well. Basically I have a common ScriptControl class, and then I can create, for example, a TextInputControl or a DropDownControl by inheriting from that base class. I can also figure out the Authoring part of the app, although that will surely be fun in itself. :) The biggest problem I have now is persistence. In order to stay flexible, I want to be able to add more controls, and template controls (think of an Address composite control), in separate DLLs, so I think a relational model that handles every possible control is not the way to go. My current thinking is to use a kind of ObjectStore: binary-serializing the ScriptPage object that contains the List collection and deserializing it at Runtime, but I'm not sure how well that will work with NHibernate and how good the performance will be. Serializing a small "page" with 10 controls results in 7964 bytes, for example. Any ideas out there? Thanks in advance, and excuse the length. ;)

    Read the article

  • How to make checkboxes have the same submit behavior as other inputs?

    - by Tim Santeford
    I have a search form where several checkboxes are checked by default. When the form submits as a GET, the URL only contains the checkboxes that were left checked:

        http://www.example.com/page/?checkbox1=yes&checkbox2=yes

    With this scenario it is difficult to tell the difference between a user first arriving at the search page and a user submitting the form with all checkboxes unchecked, because the query strings look the same. To combat this problem I have started injecting a hidden field before each checkbox with the same name and a 'no' value. When the checkbox is unchecked, the browser sends the hidden field's 'no' value; when the checkbox is checked, the browser overrides the hidden field with the checkbox's 'yes' value.

        <input type="hidden" name="checkbox1" value="no" />
        <input type="checkbox" name="checkbox1" value="yes" />

    When the user submits the form with all checkboxes unchecked I get this query string:

        http://www.example.com/page/?checkbox1=no&checkbox2=no

    This seems to work on Firefox, Chrome and IE 5.5+, so am I safe using this method, or is there a better way to make checkboxes submit like other inputs and selects?

    Read the article

  • Please tell me what is wrong with my threading!!!

    - by kiddo
    I have a function that compresses a bunch of files into a single compressed file. It was taking a long time, so I tried implementing threading in my application. Say I have 20 files to compress: I split them as 5*4=20, and to do that I keep separate variables (the ones used for compression) for all 4 threads in order to avoid locks, and I wait until the 4 threads finish. Now, the threads are working, but I see no improvement in performance: if it normally takes 1 minute for 20 files (for example), after implementing threading there is only a 3 to 5 second difference, and sometimes none at all. Here is the code for one thread (the other 3 threads are similar):

        //main thread
        myClassObject->thread1 = AfxBeginThread((AFX_THREADPROC)MyThreadFunction1, myClassObject);
        ....
        HANDLE threadHandles[4];
        threadHandles[0] = myClassObject->thread1->m_hThread;
        ....
        WaitForSingleObject(myClassObject->thread1->m_hThread, INFINITE);

        UINT MyThreadFunction(LPARAM lparam)
        {
            CMerger* myClassObject = (CMerger*)lparam;
            CString outputPath = myClassObject->compressedFilePath.GetAt(0); //contains the o/p path
            wchar_t* compressInputData[] = {myClassObject->thread1outPath, COMPRESS,
                                            (wchar_t*)(LPCTSTR)(outputPath)};
            HINSTANCE loadmyDll;
            loadmydll = LoadLibrary(myClassObject->thread1outPath);
            fp_Decompress callCompressAction = NULL;
            int getCompressResult = 0;
            myClassObject->MyCompressFunction(compressInputData, loadClient7zdll, callCompressAction,
                                              myClassObject->thread1outPath, getCompressResult, minIndex,
                                              myClassObject->firstThread, myClassObject);
            return 0;
        }
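
    Two things may be worth ruling out here. First, the snippet above only waits on thread1's handle, so the main thread should be checked to confirm it really waits for all four workers before the time is measured. Second, if all 20 files sit on the same disk, compression is often I/O-bound, and adding threads will not speed it up much. Below is a minimal, self-contained sketch of the split-work-then-join-everything pattern using standard C++ threads rather than the MFC calls above; compressChunk, kThreads and kFiles are hypothetical stand-ins for the real compression routine and counts.

        #include <cstdio>
        #include <thread>
        #include <vector>

        // Hypothetical stand-in for the per-chunk compression work.
        void compressChunk(int firstFile, int lastFile) {
            for (int f = firstFile; f <= lastFile; ++f) {
                // compress file f ...
            }
        }

        int main() {
            const int kThreads = 4;
            const int kFiles = 20;
            std::vector<std::thread> workers;
            for (int t = 0; t < kThreads; ++t) {
                int first = t * (kFiles / kThreads);
                int last  = first + (kFiles / kThreads) - 1;
                workers.emplace_back(compressChunk, first, last);   // each worker gets its own file range
            }
            for (std::thread& w : workers)
                w.join();                                           // wait for *all* workers, not just the first
            std::printf("all %d files processed\n", kFiles);
            return 0;
        }

    If the timing barely changes even with all threads correctly joined, that points at the disk (or the compression library serializing internally) rather than at the threading code.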

    Read the article

  • In Haskell, why do I need to specify type constraints? Why can't the compiler figure them out?

    - by Steve
    Consider the function

        add a b = a + b

    This works:

        *Main> add 1 2
        3

    However, if I add a type signature specifying that I want to add things of the same type:

        add :: a -> a -> a
        add a b = a + b

    I get an error:

        test.hs:3:10:
            Could not deduce (Num a) from the context ()
              arising from a use of `+' at test.hs:3:10-14
            Possible fix:
              add (Num a) to the context of the type signature for `add'
            In the expression: a + b
            In the definition of `add': add a b = a + b

    So GHC clearly can deduce that I need the Num type constraint, since it just told me. And this:

        add :: Num a => a -> a -> a
        add a b = a + b

    works. Why does GHC require me to add the type constraint? If I'm doing generic programming, why can't it just work for anything that knows how to use the + operator? In C++ template programming, you can do this easily:

        #include <string>
        #include <cstdio>
        using namespace std;

        template<typename T>
        T add(T a, T b) { return a + b; }

        int main() {
            printf("%d, %f, %s\n", add(1, 2), add(1.0, 3.4),
                   add(string("foo"), string("bar")).c_str());
            return 0;
        }

    The compiler figures out the types of the arguments to add and generates a version of the function for that type. There seems to be a fundamental difference in Haskell's approach; can you describe it and discuss the trade-offs? It seems to me like it would be resolved if GHC simply filled in the type constraint for me, since it obviously decided it was needed. Still, why the type constraint at all? Why not just compile successfully as long as the function is only used in a valid context where the arguments are in Num? Thank you.

    Read the article

  • Writing a program which uses voice recognition... where should I start?

    - by Katsideswide
    Hello! I'm a design student currently dabbling with Arduino code (based on C/C++) and Flash AS3. What I want to do is write a program with a voice-control input. The program prompts the user to spell a word, the user spells out the word, and the program recognizes whether this is right, adds one to a score if it's correct, and corrects the user if it's wrong. So I'm seeing a big list of words, each with an audio file of the word being read out, with the voice recognition part checking whether the reply matches the input. Ideally I'd like to interface this with an Arduino microcontroller so that a physical output with a motor could also be achieved in reaction. The thing is, I'm not sure if I can make this program in Flash, in Processing (associated with Arduino), or if I need another CS3 program-making program. I guess I need to download a good voice recognition program, but how can I interface it with anything else? Also, I'm on a Mac (not sure if this makes a difference). I apologize for my cluelessness; any hints would be great! -Susan

    Read the article

  • PHP Multi-Domain Sessions; ini_set Not Working?

    - by SumWon
    Hello, I'm trying to set things up so that if you log in to my website the session carries over to all subdomains. For example, if you go to domain.com and log in, then go to sub.domain.com, you'll already be logged in at sub.domain.com. To my understanding, you would want to call ini_set('session.cookie_domain', '.domain.com') and then session_start(), then set your session variables, but this isn't working. Example of what I'm doing:

    Code for domain.com:

        <?php
        ini_set('session.cookie_domain', '.domain.com');
        session_start();
        $_SESSION['variable'] = 1;
        ?>

    Code for sub.domain.com:

        <?php
        session_start();
        echo $_SESSION['variable'];
        ?>

    But $_SESSION['variable'] isn't set. I've also tried using ini_set() in the sub.domain.com code, but it made no difference. I've verified that session.cookie_domain is being set by using ini_get(). What am I doing wrong? Thanks!

    Read the article

  • Live image edit and crop

    - by 422
    I was just thinking, which is always dangerous. We use the Valums image uploader. Aside from that, I am looking for an inline image editor, but with a difference. The user uploads an image (let's say 800 x 600), but our system wants to show the image at 170 x 32. Now, I know we can use PHP to resize images, but I was thinking: does anyone know of a system where we can display the image and the user can scale and crop it (with, say, a predefined overlay)? The idea is that they scale the image down to the nearest acceptable size, then click the crop tool, which shows a div overlay (with say 70% transparency) that they can drag over the image, and then click crop. The image is then cropped to the exact size we need and can be saved. I am sure I have seen some jQuery work done like this, I just cannot for the life of me find it. Essentially, we would like to offer a simple, lightweight, client-side image processor, plus the ability to save what the user did. Sorry, no code to show, as this is more of a request. Regards

    Read the article

  • NHibernate lazy properties behavior?

    - by GeReV
    I've been trying to get NHibernate into development for a project I'm working on at my workplace. Since I have to put a strong emphasis on performance, I've been running a proof-of-concept stress test against an existing project's table with thousands of records, all of which contain a large text column. However, when selecting a collection of these records, the select statement takes a relatively long time to execute, apparently due to the aforementioned column. The first solution that comes to mind is mapping this property as lazy:

        <property name="Content" lazy="true"/>

    But there seems to be no difference in the SQL generated by NHibernate. My question is: how do lazy properties behave in NHibernate? Is there some kind of type limitation I could be missing? Should I take a different approach altogether? Using HQL's new Class(column1, column2) approach works, but lazy properties sound like a simpler solution. It's perhaps worth mentioning that I'm using NHibernate 2.1.2GA with the Castle DynamicProxy. Thanks!

    Read the article

  • How are association, aggregation and composition written?

    - by ajsie
    I have read some posts about the differences between these three relationships and I think I get the point. I just wonder: are all of these written the same way in code?

    Question 1: are all three just an object value held in an instance variable?

        class A {
            public $b = '';
            public function __construct($object) {
                $this->b = $object; // <-- could be an association, aggregation or composition relation?
            }
        }

    Question 2: does it have to be an instance variable, or can it be a static one?

        class A {
            public static $b = ''; // <-- nothing changed?
            public function __construct($object) {
                $this->b = $object;
            }
        }

    Question 3: is there a difference in where the object is created? I tend to think that a composed object is created inside the owning object:

        class A {
            public $b = '';
            public function __construct() {
                $this->b = new Object; // is created inside the object
            }
        }

    while an aggregated/associated object is passed in through the constructor or another method:

        class A {
            public $b = '';
            public function __construct($object) { // passed through a method
                $this->b = $object;
            }
        }

    Question 4: why/when is this important to know? Do I have to comment an object inside another to state which relation it is, or do you only do that in a UML diagram? Could someone shed a light on these questions? Thanks!

    Read the article

  • ARC and __unsafe_unretained

    - by J Shapiro
    I think I have a pretty good understanding of ARC and the proper use cases for selecting an appropriate lifetime qualifier (__strong, __weak, __unsafe_unretained, and __autoreleasing). However, in my testing I've found one example that doesn't make sense to me. As I understand it, neither __weak nor __unsafe_unretained adds a retain count. Therefore, if there are no other __strong pointers to the object, it is instantly deallocated. The only difference in this process is that __weak pointers are set to nil, while __unsafe_unretained pointers are left alone. If I create a __weak pointer to a simple custom object (composed of one NSString property), I see the expected (null) value when trying to access a property:

        Test * __weak myTest = [[Test alloc] init];
        myTest.myVal = @"Hi!";
        NSLog(@"Value: %@", myTest.myVal); // Prints Value: (null)

    Similarly, I would expect the __unsafe_unretained lifetime qualifier to cause a crash, due to the resulting dangling pointer. However, it doesn't. In this next test, I see the actual value:

        Test * __unsafe_unretained myTest = [[Test alloc] init];
        myTest.myVal = @"Hi!";
        NSLog(@"Value: %@", myTest.myVal); // Prints Value: Hi!

    Why doesn't the __unsafe_unretained object become deallocated?

    Read the article

  • Why is no encoding set in the response by Tomcat? How can I deal with it?

    - by Dishayloo
    I recently had a problem with the encoding of websites generated by a servlet, which occurred when the servlet was deployed under Tomcat but not under Jetty. I did a bit of research and simplified the problem to the following servlet:

        public class TestServlet extends HttpServlet implements Servlet {
            @Override
            public void service(HttpServletRequest request, HttpServletResponse response)
                    throws IOException {
                response.setContentType("text/plain");
                Writer output = response.getWriter();
                output.write("öäüÖÄÜß");
                output.flush();
                output.close();
            }
        }

    If I deploy this under Jetty and point the browser at it, it returns the expected result. The data is returned as ISO-8859-1, and if I look at the headers, Jetty returns:

        Content-Type: text/plain; charset=iso-8859-1

    The browser detects the encoding from this header. If I deploy the same servlet under Tomcat, the browser shows strange characters. Tomcat also returns the data as ISO-8859-1; the difference is that no header says so. So the browser has to guess the encoding, and that goes wrong. My question is: is this behaviour of Tomcat correct, or is it a bug? And if it is correct, how can I avoid the problem? Sure, I can always add response.setCharacterEncoding("UTF-8"); to the servlet, but that means I set a fixed encoding that the browser might or might not understand. The problem is more relevant if it is not a browser but another service accessing the servlet. So how should I deal with this problem in the most flexible way?

    Read the article

  • Velocity CTP: can we 'search' for objects?

    - by Stato Machino
    It appears that 'tags' allow us to associate a 'search term' with the objects placed into the Velocity cache space. However, these can only be queried within a 'region'. Further, regions seem to limit the locality of objects in the cache to a single server (or something along those lines). This appears to make it hard to perform any operation for which the unique id of the cached item is not persisted or continuously available to the application that stores and retrieves objects from the cache. In any case, I can't see an easy way to 'cleanse' the cache of objects, or to find objects across the entire cache that share some prefix, postfix or infix in their cache keys, so that I can clear out objects repeatedly created in unit tests, for example. And I am unsure about the consequences of regions being associated with single-server cache locations. So I would appreciate any help with the following questions:

      1. What is the difference between a 'distributed cache' (called a 'partitioned' cache??) when using regions, and a 'local cache'?
         1.a. In particular, are the region-oriented values in a distributed cache visible through a cache factory that is configured to 'see' the entire cache space?
      2. Are the operations of creating and removing 'regions' efficient enough that it would be reasonable to create a region and a group of tags for each bundle of objects that needs to be cached?
         2.a. Or does this just push the problem of scoping the 'search for objects' up the chain, because the ability of the DataCache object to query down through regions and tags is as limited as querying for the cache keys of the objects themselves?

    Thanks, Stato

    Read the article

  • Is `super` a local variable?

    - by Michael
        // A : Parent
        @implementation A
        -(id) init {
            // change self here then return it
        }
        @end

        A *a = [[A alloc] init];

    a. Just wondering, is self a local variable or a global one? If it's local, then what is the point of self = [super init] in init? I can successfully define a local variable and use it like this, so why would I need to assign it to self?

        -(id) init {
            id tmp = [super init];
            if (tmp != nil) {
                // do stuff
            }
            return tmp;
        }

    b. If [super init] returns some other object instance and I have to overwrite self, then I will not be able to access A's methods any more, since it will be a completely new object? Am I right?

    c. super and self point to the same memory, and the major difference between them is method lookup order. Am I right?

    Sorry, I don't have a Mac to try this on; learning the theory for now...

    Read the article

  • Template access of symbol in unnamed namespace

    - by Fred Larson
    We are upgrading our XL C/C++ compiler from V8.0 to V10.1 and found some code that now gives us an error, even though it compiled under V8.0. Here's a minimal example:

    test.h:

        #include <iostream>
        #include <string>

        template <class T>
        void f()
        {
            std::cout << TEST << std::endl;
        }

    test.cpp:

        #include <string>
        #include "test.h"

        namespace
        {
          std::string TEST = "test";
        }

        int main()
        {
          f<int>();
          return 0;
        }

    Under V10.1, we get the following error:

        "test.h", line 7.16: 1540-0274 (S) The name lookup for "TEST" did not find a declaration.
        "test.cpp", line 6.15: 1540-1303 (I) "std::string TEST" is not visible.
        "test.h", line 5.6: 1540-0700 (I) The previous message was produced while processing "f<int>()".
        "test.cpp", line 11.3: 1540-0700 (I) The previous message was produced while processing "main()".

    We found a similar difference between g++ 3.3.2 and 4.3.2. I also found that in g++, if I move the #include "test.h" to be after the unnamed namespace declaration, the compile error goes away. So here's my question: what does the Standard say about this? When a template is instantiated, is that instance considered to be declared at the point where the template itself was declared, or is the standard not that clear on this point? I did some looking through the n2461.pdf draft, but didn't come up with anything definitive.
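
    As the question already observes, moving the #include after the unnamed namespace makes the error go away. That is consistent with two-phase name lookup: TEST is a non-dependent name inside f(), so it has to be visible at the point where the template is defined, not merely at the point of instantiation, and the older compilers were simply lax about enforcing this. A minimal sketch of that workaround (same code, only the declaration order changed):

        // test.cpp
        #include <string>

        namespace
        {
          std::string TEST = "test";  // visible before the template definition is pulled in
        }

        #include "test.h"             // f() can now find TEST at its point of definition

        int main()
        {
          f<int>();
          return 0;
        }

    Declaring TEST (or passing it into f() as a parameter) before the header is included keeps test.h itself unchanged, which may matter if the header is shared.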

    Read the article

  • GIT: head has disappeared, want to merge it into master.

    - by samgoody
    The top image is the output of git reflog. The bottom is what gitk in Git GUI (msysgit) shows me when I look at all branch history. The last few commits do not show in Git GUI.

      1. Why do they not show in gitk (at least as a branch or something)?
      2. How do I merge them into master? I gather this happened when I checked out tag 0.42. Why is that not the same as master? (I had tagged master in its latest state.)
      3. When I click push, why does the remote repo claim to be up to date? Shouldn't it try to update these commits into whatever branch they are in?

    The first of these questions is the important one: I would like to begin to understand what Git is thinking. It's more oracle than logic at this point. If it makes a difference to see the earlier history, the project is a [pretty powerful] JS color picker that can be viewed here in its entirety.

    Read the article

  • Color banding only on Android 4.0+

    - by threeshinyapples
    On emulators running Android 4.0 or 4.0.3, I am seeing horrible colour banding which I can't seem to get rid of. On every other Android version I have tested, gradients look smooth. I have a SurfaceView which is configured as RGBX_8888, and the banding is not present in the rendered canvas. If I manually dither the image by overlaying a noise pattern at the end of rendering I can make the gradients smooth again, though obviously at a cost to performance which I'd rather avoid. So the banding is being introduced later. I can only assume that, on 4.0+, my SurfaceView is being quantized to a lower bit-depth at some point between being drawn and being displayed, and I can see from a screen capture that gradients are stepping 8 values at a time in each channel, suggesting a quantization to 555 (not 565). I added the following to my Activity onCreate function, but it made no difference:

        getWindow().setFormat(PixelFormat.RGBA_8888);
        getWindow().addFlags(WindowManager.LayoutParams.FLAG_DITHER);

    I also tried putting the above in onAttachedToWindow() instead, but there was still no change. (I believe that RGBA_8888 is the default window format anyway for 2.2 and above, so it's little surprise that explicitly setting that format has no effect on 4.0+.) Which leaves the question: if the source is 8888 and the destination is 8888, what is introducing the quantization/banding, and why does it only appear on 4.0+? Very puzzling. I wonder if anyone can shed some light?

    Read the article

  • Cookies not working for password-protected Pages on WordPress

    - by KaOSoFt
    Initially I had the issue reported in this question. Now, what I noticed is that some browsers accept the password and some don't. The difference? For some reason the cookie is generated when I log in to the Administration module, but it isn't when I enter the password to access the Page, which forces the page to simply reload. I can see the cookie created for the log-in, but I can see none for the password-protected Page. This happens on Internet Explorer, both version 7 and 8; only on some machines, though, and most of those fail this way. I have already tried white-listing the URL, and even letting the browser accept ALL cookies, to no avail. What may be the cause? If it perhaps has something to do with the question above, please help me! Thanks in advance. PS: If you know of another, cookie-free method of doing simple authentication, please link me to it. Thanks. Oh, and by the way, this is inside an intranet with static class C IPs.

    Read the article

  • HTML form never calls Javascript

    - by user1205577
    I have a set of radio buttons defined in HTML like so:

        <input type="radio" name="group" id="rad1" value="rad1" onClick="doStuff(this.id)">Option 1<br>
        <input type="radio" name="group" id="rad2" value="rad2" onClick="doStuff(this.id)">Option 2<br>

    Just before the </body> tag, I have the following JavaScript behavior defined:

        <script type="text/javascript">
        /*<![CDATA[*/
        function doStuff(var id){
            alert("Doing stuff!");
        }
        /*]]>*/
        </script>

    When I run the program, the page loads as expected in my browser window and the radio buttons allow me to click them. The doStuff() function, however, is never called (I validated this with breakpoints as well). I also tried the following, just to see if inline made the difference, but it seems the JavaScript is never called at all:

        <script type="text/javascript">
        /*<![CDATA[*/
        alert("JavaScript called!");
        main();

        function main(){
            var group = document.getElementsByName('group');
            for(var i=0; i<group.length; i++){
                group[i].onclick = function(){
                    doStuff(this.id);
                };
            }
        }
        /*]]>*/
        </script>

    My question is: is there something special I need to do when using HTML and JavaScript in this context? My question is not whether I should be using inline function calls or whether this is your favorite way to write code.

    Read the article

  • Socket Performance C++ Or C#

    - by modernzombie
    I have to write an application that is essentially a proxy server to handle all HTTP and HTTPS requests from our server (web browsing, etc.). I know very little C++ and am very comfortable writing the application features in C#. I have experimented with the proxy from Mentalis (a socket proxy), which seems to work fine for small webpages, but if I go to large sites like tigerdirect.ca and browse through a couple of layers it is very slow, sometimes requests don't complete, and I see broken images and JavaScript errors. This happens with all of our vendor sites and other content-heavy sites. Mentalis uses HTTP 1.0, which I know is not as efficient, but should a proxy be that slow? What is an acceptable amount of performance loss from using a proxy? Would HTTP 1.1 make a noticeable difference? Would a C++ proxy be much faster than one in C#? Is the Mentalis code just not efficient? Would I be able to use a premade C++ proxy and import the DLL into C# and still get good performance, or does this project call for all C++? Sorry if these are obvious questions, but I have not done network programming before.

    Read the article
