Search Results

Search found 6542 results on 262 pages for 'undocumented behavior'.

Page 51 of 262

  • How to hide the outline on a form

    - by justjoe
    I have to design a form with an input inside it. I use a background image on the input so it looks like a button, so every time somebody clicks it the form sends a POST, which is the behavior I want. The problem is the outline around the form: it shows whenever we click the form. It's minor, but it would be great to make the form (or input) lose its outline. I tested with Firefox 3.6 and Flock, and both show the outline behavior I want to avoid.
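
    A minimal CSS sketch of the usual fix (the image-button class name is hypothetical): suppress the focus outline on the input itself:

        input.image-button:focus {
            outline: none;                 /* hides the focus ring in most browsers */
        }
        input.image-button::-moz-focus-inner {
            border: 0;                     /* Firefox also draws an inner focus border on form buttons */
        }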

    Read the article

  • Is the lock returned by ReentrantReadWriteLock equivalent to its read and write locks?

    - by Todd
    Hello, I have been looking around for the answer to this, but no joy. In Java, is using the lock created by ReentrantReadWriteLock equivalent to getting the read and write locks as returned by readLock.lock() and writeLock.lock()? In other words, can I expect the read and write locks associated with the ReentrantReadWriteLock to be requested and held by synchronizing on the ReentrantReadWriteLock? My gut says "no" since any object can be used for synchronization. I wouldn't think that there would be special behavior for ReentrantReadWriteLock. However, special behavior is the corner case of which I may not be aware. Thanks, Todd
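
    A hedged sketch of why the answer is most likely "no": synchronized uses the object's intrinsic monitor, which is independent of the read and write locks the object hands out, so the two kinds of threads do not exclude each other:

        import java.util.concurrent.locks.ReentrantReadWriteLock;

        class LockDemo {
            private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

            void writer() {
                rw.writeLock().lock();      // excludes readers and other writers
                try { /* mutate shared state */ } finally { rw.writeLock().unlock(); }
            }

            void monitorUser() {
                synchronized (rw) {         // intrinsic monitor of the lock object:
                    // does NOT exclude threads holding rw.readLock()/rw.writeLock()
                }
            }
        }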

    Read the article

  • Automatically Loading XIB for UITableViewController

    - by ACBurk
    Ran into something interesting; I want to know if I'm doing something wrong or if this is the correct behavior. I have a custom UITableViewController. I ASSUMED (first mistake) that if you initialize it as such: [[CustomTableController alloc] init]; it would automatically load from a XIB of the same name, CustomTableController.xib, if it is in the same directory. HOWEVER, this does not work; it doesn't load the XIB. BUT, if I change the parent class of my controller from 'UITableViewController' to 'UIViewController', EVERYTHING WORKS FINE! Calling [[CustomTableController alloc] init]; loads the controller and view from my XIB. Am I doing something wrong? Is this a bug? Expected behavior?
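
    A plausible explanation, hedged: UIViewController's plain -init ends up looking for a nib named after the class, while UITableViewController's designated initializer builds its own plain table view instead. A workaround sketch that names the XIB explicitly:

        // load the nib explicitly instead of relying on -init to find it
        CustomTableController *controller =
            [[CustomTableController alloc] initWithNibName:@"CustomTableController"
                                                    bundle:nil];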

    Read the article

  • Referring to the type of an inner class in Scala

    - by saucisson
    The following code tries to mimic Polymorphic Embedding of DSLs: rather than giving the behavior in Inner, it is encoded in the useInner method of its enclosing class. I added the enclosing method so that the user only has to keep a reference to Inner instances, but can always get their enclosing instance. By doing this, all Inner instances from a specific Outer instance are bound to only one behavior (but that is intended here).

        abstract class Outer {
          sealed class Inner {
            def enclosing = Outer.this
          }
          def useInner(x: Inner): Boolean
        }

        def toBoolean(x: Outer#Inner): Boolean = x.enclosing.useInner(x)

    It does not compile, and Scala 2.8 complains about:

        type mismatch;
          found:    sandbox.Outer#Inner
          required: _81.Inner where val _81: sandbox.Outer

    From Programming Scala: Nested classes and A Tour of Scala: Inner Classes, it seems to me that the problem is that useInner expects as its argument an Inner instance from a specific Outer instance. What is the true explanation, and how can this problem be solved?
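
    One hedged workaround sketch: recover the enclosing instance first, then cast the argument to its path-dependent type. The cast is unchecked by the compiler but safe at runtime, since x really does belong to x.enclosing:

        def toBoolean(x: Outer#Inner): Boolean = {
          val o = x.enclosing                   // o is a stable identifier, so o.Inner is a legal type
          o.useInner(x.asInstanceOf[o.Inner])   // cast Outer#Inner to the path-dependent o.Inner
        }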

    Read the article

  • Load vs Get in NHibernate

    - by Quintin Par
    The master page in my web application does authentication and loads the user entity using a Get. After this, whenever the user object is needed by the user controls or any other class, I do a Load. Normally NHibernate is supposed to load the object from cache or return the already-loaded persistent object whenever Load is called. But this is not the behavior shown by my web application: NHProf always shows the SQL whenever Load is called. How do I verify the correct behavior of Load? I use the S#arp Architecture framework.
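
    A hedged sketch of the contract being tested here: Load should return an uninitialized proxy and issue no SQL until a non-identifier member is touched, so a query showing up in NHProf at the Load call usually means something (a non-virtual member, an immediate ToString or equality check) is forcing initialization at that point:

        var user = session.Load<User>(userId);   // proxy only: no SQL expected here
        var id   = user.Id;                      // identifier access: still no SQL
        var name = user.Name;                    // first real member access: the SELECT fires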

    Read the article

  • Which Win32 API reports the Format preference in the Region and Language control panel?

    - by Integer Poet
    Windows 7 and Windows Vista have a Region and Language control panel which contains a Formats tab which contains a popup menu titled Format. This menu allows the user to select from among many language-oriented sets of number, currency, time, and date formatting preferences regardless of the language of the base system. For example, I could decide I prefer the default currency symbol to be Japanese yen on a US English system. The Windows Contacts application changes its behavior depending on these format preferences. For example, if I select Japanese formatting preferences, Windows Contacts displays and lets me edit phonetic names (AKA "ruby", "yomi", and "furigana") but not middle names. If I select US English formatting preferences, Windows Contacts displays and lets me edit middle names but not phonetic names. I need to write code (native C calling Win32) which mirrors the behavior of the Windows Contacts application in this respect. Which API should I call?
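
    A hedged native sketch: the Format popup maps to the current user locale, which GetUserDefaultLCID returns and GetLocaleInfo can interrogate:

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            LCID lcid = GetUserDefaultLCID();      /* reflects the Formats selection */
            wchar_t currency[16];
            GetLocaleInfoW(lcid, LOCALE_SCURRENCY, currency, 16);
            wprintf(L"currency symbol: %ls\n", currency);
            return 0;
        }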

    Read the article

  • In Android Browser, a link does not always execute onClick, causing focus instead

    - by Artem
    I am trying to program a very standard JS behavior for a link using an HREF onClick handler, and I am facing a strange problem caused by what I believe to be focus/touch-mode behavior on Android. Sometimes when I click the link, instead of executing the action it simply becomes selected/focused, with either just a focus rectangle or a filled focus rectangle (selected, as opposed to just focused?). The pseudo-code right now is

        <a href="#" onClick="toggleDivBelowToShowHide(); return false;">go</a>

    I have tried something like

        <a href="#" onTouchStart="toggleDivBelowToShowHide(); return false;">go</a>

    but I still get the same pesky problem some of the time.
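
    A hedged sketch of one workaround: bind the handler in script and suppress the default focus/navigation side effects of the "#" href explicitly (the toggle id is hypothetical):

        var link = document.getElementById('toggle');
        link.addEventListener('click', function (e) {
            e.preventDefault();              // stop the "#" navigation and its focus side effects
            toggleDivBelowToShowHide();
        }, false);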

    Read the article

  • -sizeWithFont Functions Differently on Device

    - by LucasTizma
    So I am seemingly encountering some strange behavior when using NSString's -sizeWithFont family of method calls, depending on whether I invoke it on the iPhone Simulator or an actual device. Simply enough, when the receiver of the -sizeWithFont method call is nil, the resulting CGSize passed back on the Simulator is {0, 0}. However, on the device, it is the size of the bounding rectangle I specified in the method call. See the following log statements:

        Simulator: someString: (null) someStringSize: {0, 0}
        Device:    someString: (null) someStringSize: {185, 3.40282e+38}

    The behavior on the Simulator is what I would expect. Not that this issue is difficult to circumvent, but 1) I'm a little confused why this family of functions would behave differently on the Simulator and an actual device, and 2) why does calling a method on a nil receiver return a particular result? Thanks for any pointers or insight you guys can provide! EDIT: I suppose I should mention that I'm building against the 3.1 SDK.
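
    The usual explanation, hedged: messaging nil only guarantees a zero result for certain return types, and on the ARM device runtime of this era a struct-returning message to nil could hand back garbage, which is consistent with the log above. A defensive sketch:

        CGSize size = CGSizeZero;
        if (someString != nil) {             // don't trust struct returns from a nil receiver
            size = [someString sizeWithFont:font
                           constrainedToSize:constraintSize
                               lineBreakMode:UILineBreakModeWordWrap];
        }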

    Read the article

  • Strange problem with WPF TextBox StringFormat - cursor moves back

    - by Emad
    I am using a WPF 4.0 TextBox with binding, and StringFormat to format the number as currency. The XAML looks like this:

        <TextBox Text="{Binding Path=ValueProperty,
                                ValidatesOnDataErrors=True,
                                ValidatesOnExceptions=True,
                                StringFormat={}{0:C},
                                UpdateSourceTrigger=PropertyChanged}">
        </TextBox>

    Everything seems to work correctly except for a strange behavior: when, for example, a user types in 12, right after typing 1 the value in the textbox becomes $1.00, and the weird thing is that the cursor is moved between the $ and the 1. So when a user simply types 12, the result becomes $21.00. How can I fix this strange behavior?
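
    A hedged sketch of the common workaround: stop reformatting on every keystroke by updating the source only when focus leaves, so the caret is never repositioned mid-typing:

        <TextBox Text="{Binding Path=ValueProperty,
                                ValidatesOnDataErrors=True,
                                ValidatesOnExceptions=True,
                                StringFormat={}{0:C},
                                UpdateSourceTrigger=LostFocus}" />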

    Read the article

  • C++ Program Always Crashes While doing a std::string assign

    - by bbazso
    I have been trying to debug a crash in my application, which crashes (i.e. aborts with *** glibc detected *** free(): invalid pointer: 0x000000000070f0c0 ***) while doing a simple assign to a string. Note that I'm compiling on a Linux system with gcc 4.2.4, with the optimization level set to -O2. With -O0 the application no longer crashes. E.g.

        std::string abc;
        abc = "testString";

    but if I change the code as follows, it no longer crashes:

        std::string abc("testString");

    So again I scratched my head! The interesting pattern was that the crash moved later on in the application, AGAIN at another string. I found it weird that the application was continuously crashing on a string assign. A typical crash backtrace looks as follows:

        #0 0x00007f2c2663bfb5 in raise () from /lib64/libc.so.6
        (gdb) bt
        #0 0x00007f2c2663bfb5 in raise () from /lib64/libc.so.6
        #1 0x00007f2c2663dbc3 in abort () from /lib64/libc.so.6
        #2 0x00000000004d8cb7 in people_streamingserver_sighandler (signum=6) at src/peoplestreamingserver.cpp:487
        #3 <signal handler called>
        #4 0x00007f2c2663bfb5 in raise () from /lib64/libc.so.6
        #5 0x00007f2c2663dbc3 in abort () from /lib64/libc.so.6
        #6 0x00007f2c26680ce0 in ?? () from /lib64/libc.so.6
        #7 0x00007f2c270ca7a0 in std::string::assign (this=0x7f2c21bc8d20, __str=<value optimized out>) at /home/bbazso/ThirdParty/sources/gcc-4.2.4/x86_64-pc-linux-gnu/libstdc++-v3/include/bits/basic_string.h:238
        #8 0x00007f2c21bd874a in PEOPLESProtocol::GetStreamName (this=<value optimized out>, pRawPath=0x2342fd8 "rtmp://127.0.0.1/mp4:pop.mp4", lStreamName=@0x7f2c21bc8d20) at /opt/trx-HEAD/gcc/4.2.4/lib/gcc/x86_64-pc-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/basic_string.h:491
        #9 0x00007f2c21bd9daa in PEOPLESProtocol::SignalProtocolCreated (pProtocol=0x233a4e0, customParameters=@0x7f2c21bc8de0) at peoplestreamer/src/peoplesprotocol.cpp:240

    This was really weird behavior, so I started to poke around further in my application to see if there was some sort of memory corruption (either heap or stack) that could be causing it. I even checked for pointer corruption and came up empty-handed. In addition to visual inspection of the code I also tried the following tools: Valgrind using both memcheck and exp-ptrcheck, electric fence, libsafe, compiling with -fstack-protector-all in gcc, MALLOC_CHECK_ set to 2, lint checks as well as cppcheck (to check for mistakes), and stepping through the code using gdb. So I tried a lot of stuff and still came up empty-handed. I was wondering if it could be something like a linker issue or a library issue of some sort. Are there any known issues with std::string that make it susceptible to crashing at -O2, or maybe it has nothing to do with the optimization level? The only pattern I can see thus far is that it always seems to crash on a string, so I was wondering if anyone knew of any issues that may be causing this type of behavior. Thanks a lot!
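
    One more diagnostic worth adding to the list above, hedged: libstdc++'s debug mode, which instruments strings and containers with consistency checks and often flags the corrupting write instead of its victim (output name and source layout below are illustrative):

        g++ -O2 -g -D_GLIBCXX_DEBUG -D_GLIBCXX_DEBUG_PEDANTIC -o app src/*.cpp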

    Read the article

  • Why does the rename() syscall prohibit moving a directory that I can't write to a different directory?

    - by Daniel Papasian
    I am trying to understand why this design decision was made with the rename() syscall in 4.2BSD. There's nothing I'm trying to solve here, just understand the rationale for the behavior itself. 4.2BSD saw the introduction of the rename() syscall for the purpose of allowing atomic renames/moves of files. From 4.3BSD-Reno/src/sys/ufs/ufs_vnops.c:

        /*
         * If ".." must be changed (ie the directory gets a new
         * parent) then the source directory must not be in the
         * directory heirarchy above the target, as this would
         * orphan everything below the source directory. Also
         * the user must have write permission in the source so
         * as to be able to change "..". We must repeat the call
         * to namei, as the parent directory is unlocked by the
         * call to checkpath().
         */
        if (oldparent != dp->i_number)
                newparent = dp->i_number;
        if (doingdirectory && newparent) {
                VOP_LOCK(fndp->ni_vp);
                error = ufs_access(fndp->ni_vp, VWRITE, tndp->ni_cred);
                VOP_UNLOCK(fndp->ni_vp);

    So clearly this check was added intentionally. My question is: why? Is this behavior supposed to be intuitive? The effect of this is that one cannot atomically move a directory (located in a directory that one can write) that one cannot write into another directory that one can write. You can, however, create a new directory, move the links over (assuming one has read access to the directory), and then remove one's write bit on the directory. You just can't do so atomically.

        % cd /tmp
        % mkdir stackoverflow-question
        % cd stackoverflow-question
        % mkdir directory-1
        % mkdir directory-2
        % mkdir directory-1/directory-i-cant-write
        % echo "foo" > directory-1/directory-i-cant-write/contents
        % chmod 000 directory-1/directory-i-cant-write/contents
        % chmod 000 directory-1/directory-i-cant-write
        % mv directory-1/directory-i-cant-write directory-2
        mv: rename directory-1/directory-i-cant-write to directory-2/directory-i-cant-write: Permission denied

    We now have a directory I can't write, with contents I can't read, that I can't move atomically. I can, however, achieve the same effect non-atomically by changing permissions, making the new directory, using ln to create the new links, and changing permissions. (Left as an exercise to the reader.) . and .. are special-cased already, so I don't particularly buy that it is intuitive that if I can't write a directory I can't "change ..", which is what the source suggests. Is there any reason for this besides it being the perceived correct behavior by the author of the code? Is there anything bad that can happen if we let people atomically move directories (that they can't write) between directories that they can write?

    Read the article

  • How not to include a Thumb in ListBoxItem selection

    - by Elad
    I have a collection of items that I present in a WPF ListBox using a DataTemplate. Part of the DataTemplate is a Thumb, used for resizing and visually separating items. The Thumb.Visibility is bound to a property of the item (for example, the last item doesn't have a visible Thumb). The problem is that selecting an item in the ListBox selects the Thumb as well, as it is part of the ListBoxItem. The desired behavior is to select only the data, without the Thumb. How is it possible to get this behavior? I don't want to add items to the ListBox in code or to handle the visibility of the Thumb manually; currently I get all this from the DataTemplate.
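
    A hedged XAML sketch of one approach: retemplate ListBoxItem so the selection highlight is painted on an inner border wrapping only the content, leaving the Thumb outside it (names, sizes and the ThumbVisibility binding are illustrative):

        <ControlTemplate TargetType="ListBoxItem">
            <StackPanel>
                <Border x:Name="Highlight">
                    <ContentPresenter />
                </Border>
                <Thumb Height="4" Visibility="{Binding ThumbVisibility}" />
            </StackPanel>
            <ControlTemplate.Triggers>
                <Trigger Property="IsSelected" Value="True">
                    <Setter TargetName="Highlight" Property="Background"
                            Value="{DynamicResource {x:Static SystemColors.HighlightBrushKey}}" />
                </Trigger>
            </ControlTemplate.Triggers>
        </ControlTemplate>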

    Read the article

  • Why does the current working directory change when using the Open file dialog in XP?

    - by RRUZ
    I have found a strange behavior when using the open file dialog in C#. If you use this code on Windows XP, the current working directory changes to the path of the selected file; however, if you run this code on Windows 7, the current working directory does not change.

        private void button1_Click(object sender, EventArgs e)
        {
            MessageBox.Show(string.Format("Current Directory {0}", Directory.GetCurrentDirectory()),
                "My Application", MessageBoxButtons.OK, MessageBoxIcon.Asterisk);

            DialogResult result = openFileDialog1.ShowDialog(); // Show the dialog and get result.
            if (result == DialogResult.OK)
            {
            }

            MessageBox.Show(string.Format("Current Directory {0}", Directory.GetCurrentDirectory()),
                "My Application", MessageBoxButtons.OK, MessageBoxIcon.Asterisk);
        }

    Does anybody know the reason for this behavior? Why does the current directory change on XP and not on Windows 7?
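
    Whatever the OS-level reason, a hedged sketch of the usual guard: FileDialog.RestoreDirectory asks the dialog to put the working directory back when it closes:

        openFileDialog1.RestoreDirectory = true;   // restore the CWD after the dialog closes
        DialogResult result = openFileDialog1.ShowDialog();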

    Read the article

  • WCF. BasicHttpBinding Certificates.

    - by Andrew Kalashnikov
    Hello colleagues. I've got some problems. I've created a WCF service with basicHttpBinding, hosted by IIS 6.0.

        <system.serviceModel>
          <bindings>
            <basicHttpBinding>
              <binding name="BindingConfiguration1" maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647">
                <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647"
                              maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647"/>
                <security mode="Transport">
                  <transport clientCredentialType="None" />
                </security>
              </binding>
            </basicHttpBinding>
          </bindings>
          <services>
            <service name="RegistratorService.Registrator" behaviorConfiguration="RegistratorService.Service1Behavior">
              <endpoint address="" binding="basicHttpBinding" contract="RegistratorService.IRegistrator"
                        bindingConfiguration="BindingConfiguration1">
                <identity>
                  <dns value="localhost" />
                </identity>
              </endpoint>
              <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange"/>
            </service>
          </services>
          <behaviors>
            <serviceBehaviors>
              <behavior name="RegistratorService.Service1Behavior">
                <serviceCredentials>
                  <clientCertificate>
                    <authentication certificateValidationMode="PeerOrChainTrust" revocationMode="NoCheck"/>
                  </clientCertificate>
                  <serviceCertificate storeLocation="LocalMachine" storeName="My" findValue="CN=Server" />
                </serviceCredentials>
                <serviceMetadata httpsGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="true" />
              </behavior>
            </serviceBehaviors>
          </behaviors>

    Also, I have a cert authority on this server, and I issue certs for the server and the client: the server cert is installed on the server and the client cert on the client. When I try to consume the service from the client I get the famous: "Could not establish trust relationship for the SSL/TLS secure channel with authority". All sites recommend overriding ServicePointManager.ServerCertificateValidationCallback to return true, but I want to resolve this issue the right way. My client config:

        <system.serviceModel>
          <behaviors>
            <endpointBehaviors>
              <behavior name="ClientBehavior">
                <clientCredentials>
                  <serviceCertificate>
                    <authentication certificateValidationMode="ChainTrust" revocationMode="NoCheck"/>
                  </serviceCertificate>
                  <clientCertificate findValue="CN=PharmPortal" storeLocation="LocalMachine" storeName="My"/>
                </clientCredentials>
              </behavior>
            </endpointBehaviors>
          </behaviors>
          <bindings>
            <basicHttpBinding>
              <binding name="BasicHttpBinding_IRegistrator" closeTimeout="00:01:00" openTimeout="00:01:00"
                       receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false"
                       hostNameComparisonMode="StrongWildcard" maxBufferSize="65536" maxBufferPoolSize="524288"
                       maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8"
                       transferMode="Buffered" useDefaultWebProxy="true">
                <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                              maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                <security mode="Transport">
                  <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
                </security>
              </binding>
            </basicHttpBinding>
          </bindings>
          <client>
            <endpoint address="https://aurit-server2/Registrator.svc" binding="basicHttpBinding"
                      behaviorConfiguration="ClientBehavior" bindingConfiguration="BasicHttpBinding_IRegistrator"
                      contract="ServiceReference1.IRegistrator" name="BasicHttpBinding_IRegistrator">
              <identity>
                <dns value="Server" />
              </identity>
            </endpoint>
          </client>
        </system.serviceModel>

    I set up the client certificate. Why do I get the error?
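
    Two things worth checking, hedged: with Transport security the host name in the client's address (aurit-server2) must match the server certificate's subject (CN=Server), and the issuing CA must be in the client's Trusted Root store. A diagnostic sketch that logs why validation fails while still enforcing the default policy, instead of blindly returning true:

        using System;
        using System.Net;
        using System.Net.Security;

        ServicePointManager.ServerCertificateValidationCallback =
            (sender, cert, chain, errors) =>
            {
                Console.WriteLine("SSL policy errors: " + errors);  // diagnose the failure
                return errors == SslPolicyErrors.None;              // same outcome as the default check
            };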

    Read the article

  • Getting "on the wire" Size of Messages in WCF

    - by Mystagogue
    While I'm making SOAP or REST invocations to WCF, I'd like to have the channel stack on either end (client and server) record the on-the-wire size of the data received. So I'm guessing I need to add a custom behavior to the channel stack on either side. That is, on the server side I'd record the IP-header advertised size that was received. On the client side I'd record the IP-header advertised size that was returned from the server. But this presupposes that this information is visible to a custom WCF behavior at the channel stack level. Perhaps it is only visible at the level of ASP.NET (at a layer beneath WCF)? In short, does anyone have any further insight on if and how this information is accessible? I must qualify that this "size" data will be collected in a production environment, as part of regular business logic calls. This question is related to my earlier bandwidth question.
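
    A hedged starting point at the WCF layer: an IDispatchMessageInspector can measure the serialized message on the server side, though a true on-the-wire byte count (after encoding and HTTP framing) would need a custom message encoder or a hook below WCF:

        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.ServiceModel.Dispatcher;

        public class SizeInspector : IDispatchMessageInspector
        {
            public object AfterReceiveRequest(ref Message request,
                IClientChannel channel, InstanceContext instanceContext)
            {
                MessageBuffer buffer = request.CreateBufferedCopy(int.MaxValue);
                request = buffer.CreateMessage();              // hand the message back to WCF
                int xmlLength = buffer.CreateMessage().ToString().Length;
                // xmlLength is the XML infoset size, not the encoded wire size
                return xmlLength;
            }

            public void BeforeSendReply(ref Message reply, object correlationState) { }
        }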

    Read the article

  • Question on Win32 LogonUser API and the Logon Type

    - by Lalit_M
    We have developed an ASP.NET web application and implemented a custom authentication solution using Active Directory as the credentials store. Our front end uses a normal login form to capture the user name and password and leverages the Win32 LogonUser method to authenticate the user's credentials. When we call LogonUser, we use LOGON32_LOGON_NETWORK as the logon type. The issue we have found is that user profile folders are being created under the C:\Users folder of the web server. A folder seems to be created whenever a user who has never logged on before logs in for the first time. As the number of new users logging into the application grows, disk space is shrinking due to the large number of new user folders getting created. Has anyone seen this behavior with the Win32 LogonUser method? Does anyone know how to disable this behavior?
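
    A hedged observation with a sketch: LOGON32_LOGON_NETWORK by itself should not create a local profile; folders under C:\Users typically appear when something later calls LoadUserProfile on the returned token, so that call (or a component making it implicitly) is worth hunting for. The credentials below are placeholders:

        #include <windows.h>

        BOOL LogonCheck(void)
        {
            HANDLE token = NULL;
            BOOL ok = LogonUserW(L"user", L"DOMAIN", L"password",
                                 LOGON32_LOGON_NETWORK,        /* cheapest logon type */
                                 LOGON32_PROVIDER_DEFAULT,
                                 &token);
            /* as long as LoadUserProfile(token, ...) is never called, no
               C:\Users\<name> folder should be created for first-time users */
            if (ok) CloseHandle(token);
            return ok;
        }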

    Read the article

  • How to close an Oracle DB connection from PHP on an Apache server? I mean close instantly.

    - by Valentin Jacquemin
    Usually closing a connection is simply done by oci_close($connection); or, in the worst case, the connection goes away when the PHP script ends. In my case, however, I see different behavior. If I access my application, which uses PHP 5.2.8, Apache 2.2.11 and oci8 1.2.5, the connection is kept alive for several minutes: if I run netstat -b, I see that the httpd.exe process remains in the ESTABLISHED state on the database's URL for a while (a few minutes). Could someone enlighten me on this behavior? P.S. I do not use persistent connections.
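
    A hedged checklist sketch: oci_close() only releases the underlying connection when nothing else references it, so free outstanding statements first and make sure the handle came from oci_connect() rather than oci_pconnect() (connection details below are placeholders):

        <?php
        $conn = oci_connect('user', 'pass', '//dbhost/ORCL');  // not oci_pconnect()
        $stid = oci_parse($conn, 'SELECT 1 FROM dual');
        oci_execute($stid);
        oci_free_statement($stid);   // open statements keep the connection alive
        oci_close($conn);            // refcount hits zero: the socket actually closes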

    Read the article

  • RijndaelManaged Padding when data matches block size

    - by trampster
    If I use PKCS7 padding in RijndaelManaged with 16 bytes of data, I get 32 bytes of data out. It appears that for PKCS7, when the data size matches the block size, it adds a whole extra block of padding. If I use Zeros padding for 16 bytes of data, I get 16 bytes out. So for Zeros padding, if the data matches the block size, it doesn't pad. I have searched through the documentation and it says nothing about this difference in padding behavior. Can someone please point me to some kind of documentation that specifies what the padding behavior should be for the different padding modes when the data size matches the block size?
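
    The difference is by design under the PKCS #7 rule (see RFC 5652): always append between 1 and BlockSize padding bytes, each holding the pad length, so a plaintext that already fills the block gains a whole padding block; otherwise the unpadder could not tell padding from data. Zeros padding is not self-describing, which is why it can skip the extra block but also cannot be removed unambiguously. A hedged sketch:

        using System.Security.Cryptography;

        using (var aes = new RijndaelManaged { Padding = PaddingMode.PKCS7 })
        using (var enc = aes.CreateEncryptor())
        {
            byte[] plain  = new byte[16];   // exactly one 128-bit block
            byte[] cipher = enc.TransformFinalBlock(plain, 0, plain.Length);
            // cipher.Length == 32: the data block plus a full block of 0x10 pad bytes
        }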

    Read the article

  • Why would an ASP.NET site become veeeeeery slow after the network connection dropped?

    - by Joon
    I have an ASP.NET 3.5 site published in IIS 7.5 on Windows Server 2008 R2 64-bit. The pages are accessed over SSL. One of our testers has determined that if, during a postback, he blocks network access on his PC and then reconnects after a few seconds, our site becomes excruciatingly slow: around 30 seconds per page load. If he hits the refresh button in his browser, it stays slow. If he closes the tab and re-opens it, it becomes fast again. This behavior happens with both IE 8 and the latest Firefox, and there are no event log entries on the server when it happens. My question: has anyone seen this same behavior? Does anyone have a theory as to what causes it?

    Read the article

  • If you delete a DOM element, do any events that started with that element continue to bubble?

    - by Matt
    What behavior should I expect if I delete a DOM element that was used to start an event bubble, or whose child started the event bubble? Will the event continue to bubble after the element is removed? For example, say you have a table and want to detect click events on its cells. Another piece of JS has executed an AJAX request that will eventually replace the table, in full, once the request completes. What happens if I click the table and, immediately after, the table gets replaced by the successful completion of that AJAX request? I ask because I am seeing some behavior where the click events don't seem to be bubbling, but it is hard to duplicate. I am watching for the event on a parent element of the table (instead of attaching the event to every TD), and it just doesn't seem to reach it sometimes.
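
    A hedged test sketch: per the DOM events model the propagation path is fixed when dispatch starts, so removing the table mid-bubble should not prevent the event from reaching the ancestor, but browsers of this era were not uniformly faithful to that:

        <div id="parent">
            <table id="t"><tr><td>cell</td></tr></table>
        </div>
        <script>
            document.getElementById('parent').addEventListener('click', function (e) {
                console.log('parent saw click on', e.target.tagName);
            }, false);
            document.getElementById('t').addEventListener('click', function () {
                this.parentNode.removeChild(this);   // remove the table mid-bubble
            }, false);
        </script>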

    Read the article

  • If I specify a system property multiple times when invoking the JVM, which value is used?

    - by RobV
    If I specify a system property multiple times when invoking the JVM, which value will I actually get when I retrieve the property? E.g.

        java -Dprop=A -Dprop=B -jar my.jar

    What will be the result when I call System.getProperty("prop");? The Java documentation does not really tell me anything useful on this front. In my non-scientific testing on a couple of machines running different JVMs, it seems like the last value is the one returned (which is actually the behavior I need), but I wondered whether this behavior is actually defined officially anywhere, or can it vary between JVMs?
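
    A hedged check, since the java launcher documentation does not pin the ordering down:

        // run: java -Dprop=A -Dprop=B PropCheck
        // commonly prints "B" (last one wins), but that is observed
        // rather than specified behavior
        public class PropCheck {
            public static void main(String[] args) {
                System.out.println(System.getProperty("prop"));
            }
        }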

    Read the article

  • Browser back acts on a nested iframe before the page itself - is there a way to avoid it?

    - by kfiroo
    Hi, I have a page with dynamic data loaded by some AJAX and lots of JavaScript. The page contains a list from which the user can choose, and each selected value loads new data into the page. One of these data items is a URL provided to an iframe. I use the jQuery BBQ: Back Button & Query Library to simulate the browser-back behavior. All works well except that when I click the back button for the first time, the iframe goes back to its previous location, and then I need to click back again to make the page itself go back. Is there a way to disable the iframe's back behavior?
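
    A hedged sketch of the usual workaround: load new URLs into the iframe with location.replace(), which swaps the document without pushing a history entry, so the back button skips the iframe:

        function setFrameUrl(frame, url) {
            // unlike assigning frame.src, replace() adds no history entry
            frame.contentWindow.location.replace(url);
        }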

    Read the article

  • Editing Multiple files in vi with Wildcards

    - by Alan Storm
    When using the programmer's text editor vi, I'll often use a wildcard to be lazy about typing the name of the file I want to edit:

        vi ThisIsAReallLongFi*.txt

    When this matches a single file it works great. However, if it matches multiple files, vi does something weird. First, it opens the first file for editing. Second, when I :wq out of the file, I get a message at the bottom of the terminal that looks like this:

        E173: 4 more files to edit
        Hit ENTER or type command to continue

    When I hit enter, it returns me to edit mode in the file I was just in. The behavior I'd expect here would be for vi to move on to the next file to edit. So: what's the logic behind vi's behavior here, and is there a way to move on and edit the next file that's been matched? And yes, I know about tab completion; this question is based on curiosity and wanting to understand the shell better.
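
    The argument-list commands are what move you through the matched files; a quick sketch (all standard in vi and vim, see :help arglist in vim):

        :args    " list the files the wildcard matched
        :n       " edit the next file in the argument list
        :rew     " rewind to the first file
        :wn      " write the current file, then edit the next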

    Read the article

  • Django: problem with merging querysets after annotation

    - by Björn Lilja
    Hi, I have a manager for "Dialog" looking like this:

        class AnnotationManager(models.Manager):
            def get_query_set(self):
                return super(AnnotationManager, self).get_query_set().annotate(
                    num_votes=Count('vote', distinct=True),
                    num_comments=Count('comment', distinct=True),
                    num_commentators=Count('comment__user', distinct=True),
                )

    Vote and Comment have a ForeignKey to Dialog; Comment has a ForeignKey to User. When I do this:

        dialogs_queryset = Dialog.public.filter(organization=organization)
        dialogs_popularity = dialogs_queryset.exclude(num_comments=0) | dialogs_queryset.exclude(num_votes=0)

    dialogs_popularity never returns the combination, but only the dialogs with more than 0 comments, or, if I change the order of the OR, the dialogs with more than 0 votes! To me, the expected behavior would be to get both the dialogs with more than 0 votes and the dialogs with more than 0 comments. What am I missing? Or is there a bug in the annotation behavior here?
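
    A hedged workaround sketch: express the OR inside a single filter with Q objects, so the two annotated querysets never need to be merged:

        from django.db.models import Q

        dialogs_popularity = dialogs_queryset.filter(
            ~Q(num_comments=0) | ~Q(num_votes=0)
        )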

    Read the article
