Search Results

Search found 10328 results on 414 pages for 'behavior tree'.


  • How to hide the outline on a form

    - by justjoe
    I have to design a form with an input inside it. I use a background image on the input so it looks like a button, so every time somebody clicks it, the form sends its POST data, which is the behavior I want to achieve. The problem is the outline around the input: it shows whenever the form is clicked. It's minor, but it would be great to make the form (or input) lose that outline. I tested with Firefox 3.6 and Flock, and both show the outline behavior I want to avoid.
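    A minimal CSS sketch of one common fix (the selector and class name are illustrative, not from the original post): suppress the focus outline on the input itself, plus Firefox's inner dotted border on button-like inputs. Note that removing the outline also removes the visible keyboard-focus indicator.

        input.image-button:focus {
            outline: none;   /* hides the focus ring shown by Firefox and most other browsers */
        }
        input.image-button::-moz-focus-inner {
            border: 0;       /* removes Firefox's extra inner dotted border on button-like inputs */
        }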


  • Is the lock returned by ReentrantReadWriteLock equivalent to its read and write locks?

    - by Todd
    Hello, I have been looking around for the answer to this, but no joy. In Java, is using the lock created by ReentrantReadWriteLock equivalent to acquiring the read and write locks returned by readLock() and writeLock()? In other words, can I expect the read and write locks associated with the ReentrantReadWriteLock to be requested and held by synchronizing on the ReentrantReadWriteLock itself? My gut says "no", since any object can be used for synchronization and I wouldn't expect special behavior for ReentrantReadWriteLock. However, special behavior is exactly the kind of corner case I may not be aware of. Thanks, Todd
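    For reference, a hedged Java sketch of how the two inner locks are normally acquired; synchronizing on the ReentrantReadWriteLock object itself only takes that object's monitor and does not touch these locks (the class and field names are illustrative):

        import java.util.concurrent.locks.ReentrantReadWriteLock;

        class Counter {
            private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
            private int value;

            int get() {
                rwLock.readLock().lock();      // shared read lock, unrelated to synchronized (rwLock)
                try { return value; } finally { rwLock.readLock().unlock(); }
            }

            void increment() {
                rwLock.writeLock().lock();     // exclusive write lock
                try { value++; } finally { rwLock.writeLock().unlock(); }
            }
        }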


  • Referring to the type of an inner class in Scala

    - by saucisson
    The following code tries to mimic Polymorphic Embedding of DSLs: rather than giving the behavior in Inner, it is encoded in the useInner method of its enclosing class. I added the enclosing method so that the user only has to keep a reference to Inner instances, but can always get their enclosing instance. By doing this, all Inner instances from a specific Outer instance are bound to only one behavior (but that is wanted here). abstract class Outer { sealed class Inner { def enclosing = Outer.this } def useInner(x:Inner) : Boolean } def toBoolean(x:Outer#Inner) : Boolean = x.enclosing.useInner(x) It does not compile, and Scala 2.8 complains about: type mismatch; found: sandbox.Outer#Inner required: _81.Inner where val _81:sandbox.Outer From Programming Scala: Nested classes and A Tour of Scala: Inner Classes, it seems to me that the problem is that useInner expects as argument an Inner instance from a specific Outer instance. What is the true explanation, and how do I solve this problem?
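    One hedged way out, assuming it is acceptable for useInner to take an Inner from any Outer instance, is to declare the parameter as the type projection Outer#Inner so the call in toBoolean type-checks; the trade-off is losing the compile-time guarantee that the Inner belongs to that particular Outer:

        abstract class Outer {
          sealed class Inner {
            def enclosing = Outer.this
          }
          // Outer#Inner accepts an Inner from any Outer, so x.enclosing.useInner(x) compiles
          def useInner(x: Outer#Inner): Boolean
        }

        def toBoolean(x: Outer#Inner): Boolean = x.enclosing.useInner(x)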


  • Automatically Loading XIB for UITableViewController

    - by ACBurk
    Ran into something interesting and want to know if I'm doing something wrong or if this is the correct behavior. I have a custom UITableViewController. I assumed (first mistake) that if you initialize it as such: [[CustomTableController alloc] init]; it would automatically load from a XIB of the same name, CustomTableController.xib, if it is in the same directory. However, this does not work; it doesn't load the XIB. But if I change the parent class of my controller from 'UITableViewController' to 'UIViewController', everything works fine: calling [[CustomTableController alloc] init]; loads the controller and view from my XIB. Am I doing something wrong? Is this a bug? Expected behavior?
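    Plain init leaves the nib lookup to UIKit's defaults, which evidently differ between the two base classes here; a hedged workaround is to name the nib explicitly with initWithNibName:bundle: (the nib name below is assumed to match the class name):

        CustomTableController *controller =
            [[CustomTableController alloc] initWithNibName:@"CustomTableController" bundle:nil];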


  • XML: When to use attributes instead of child nodes?

    - by Rosarch
    For tree leaves in XML, when is it better to use attributes, and when is it better to use descendant nodes? For example, in the following XML document: <?xml version="1.0" encoding="utf-8" ?> <savedGame> <links> <link rootTagName="zombies" packageName="zombie" /> <link rootTagName="ghosts" packageName="ghost" /> <link rootTagName="players" packageName="player" /> <link rootTagName="trees" packageName="tree" /> </links> <locations> <zombies> <zombie> <positionX>41</positionX> <positionY>100</positionY> </zombie> <zombie> <positionX>55</positionX> <positionY>56</positionY> </zombie> </zombies> <ghosts> <ghost> <positionX>11</positionX> <positionY>90</positionY> </ghost> </ghosts> </locations> </savedGame> The <link> tag has attributes, but it could also be written as: <link> <rootTagName>trees</rootTagName> <packageName>tree</packageName> </link> Similarly, the location tags could be written as: <zombie positionX="55" positionY="56" /> instead of: <zombie> <positionX>55</positionX> <positionY>56</positionY> </zombie> What reasons are there to prefer one over the other? Is it just a stylistic issue? Any performance considerations?


  • How to find NSOutlineView row index when using NSTreeController

    - by velocityb0y
    I'm using an NSTreeController to manage nodes for an NSOutlineView. When the user adds a new item, I create a new object and insert it: EntityViewEntityNode *newNode = [EntityViewEntityNode nodeWithName:@"New entity" entity:newObject]; // Insert at end of group // NSIndexPath *insertAt = [pathOfGroupNode indexPathByAddingIndex:[selected.children count]]; [entityCollectionTreeController insertObject:newNode atArrangedObjectIndexPath:insertAt]; Now I'd like to open the table column for edit so the user can name the new item. This seems logical: NSInteger row = [entityCollectionOutlineView rowForItem:newNode]; [entityCollectionOutlineView editColumn:0 row:row withEvent:nil select:YES]; However, row is always -1 indicating the object isn't found. Poking around reveals that the tree controller is not actually putting my objects directly in the tree, but is wrapping them in a node object of its own. Anyone have insight into how I would go about getting a row index relative to the outline view, so I can do this (without, hopefully, enumerating everything in the outline view and figuring out the mapping back to my node?)
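    One hedged alternative, since the index path used for the insertion is already known: let the tree controller select that path and read the resulting row back from the outline view, instead of mapping the model object to the controller's private wrapper nodes. This assumes the outline view's selection indexes are bound to the tree controller.

        [entityCollectionTreeController setSelectionIndexPath:insertAt];   // select the newly inserted wrapper node
        NSInteger row = [entityCollectionOutlineView selectedRow];          // -1 if the selection did not propagate
        if (row != -1) {
            [entityCollectionOutlineView editColumn:0 row:row withEvent:nil select:YES];
        }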


  • Load vs Get in NHibernate

    - by Quintin Par
    The master page in my web application does authentication and loads the user entity using a Get. After this, whenever the user object is needed by the user controls or any other class, I do a Load. Normally NHibernate is supposed to load the object from the cache or return the already-loaded persistent object whenever Load is called, but this is not the behavior shown by my web application: NHProf always shows the SQL whenever Load is called. How do I verify the correct behavior of Load? I use the S#arp Architecture framework.
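    For comparison, a small C# sketch of the documented contract (entity and variable names are illustrative): Get queries the database (or cache) immediately and may return null, while Load normally hands back an uninitialized proxy and defers the SQL until a non-identifier property is touched.

        // Get: hits the database (or second-level cache) right away; returns null if the row is missing
        User fromGet = session.Get<User>(userId);

        // Load: usually returns a lazy proxy; SQL is issued only when a property is accessed,
        // and a missing row surfaces later as an ObjectNotFoundException
        User fromLoad = session.Load<User>(userId);
        string name = fromLoad.Name;   // proxy initialization would happen here

    If NHProf shows SQL on every Load, one common cause (hedged, not verifiable from the post) is that the entity cannot be proxied, for example because it is mapped with lazy="false" or has non-virtual members, in which case Load degenerates into an immediate fetch.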


  • -sizeWithFont Functions Differently on Device

    - by LucasTizma
    I am seemingly encountering some strange behavior when using NSString's -sizeWithFont family of method calls, depending on whether I'm invoking it on the iPhone Simulator or an actual device. Simply enough, when the receiver of the -sizeWithFont method call is nil, the resulting CGSize passed back on the Simulator is {0, 0}. However, on the device, it is the size of the bounding rectangle I specified in the method call. See the following log statements: Simulator: someString: (null) someStringSize: {0, 0} Device: someString: (null) someStringSize: {185, 3.40282e+38} The behavior on the Simulator is what I would expect. Not that this issue is difficult to circumvent, but 1) I'm a little confused why this family of functions would behave differently on the Simulator and an actual device, and 2) why does calling a method on a nil receiver return a particular result? Thanks for any pointers or insight you guys can provide! EDIT: I suppose I should mention that I'm building against the 3.1 SDK.
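    The likely explanation, offered as an assumption rather than documented fact, is that struct-returning messages to nil were only guaranteed to return zero-filled values on some runtimes of that era, so the device's ARM ABI hands back garbage where the Simulator happens to return {0, 0}. A hedged workaround is simply to guard the receiver (the font and constraint values below are illustrative):

        CGSize someStringSize = CGSizeZero;
        if (someString != nil) {
            someStringSize = [someString sizeWithFont:[UIFont systemFontOfSize:14.0]
                                    constrainedToSize:CGSizeMake(185.0, CGFLOAT_MAX)
                                        lineBreakMode:UILineBreakModeWordWrap];
        }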


  • Which Win32 API reports the Format preference in the Region and Language control panel?

    - by Integer Poet
    Windows 7 and Windows Vista have a Region and Language control panel containing a Formats tab, which in turn contains a pop-up menu titled Format. This menu allows the user to select from among many language-oriented sets of number, currency, time, and date formatting preferences, regardless of the language of the base system. For example, I could decide I prefer the default currency symbol to be Japanese yen on a US English system. The Windows Contacts application changes its behavior depending on these format preferences. For example, if I select Japanese formatting preferences, Windows Contacts displays and lets me edit phonetic names (AKA "ruby", "yomi", and "furigana") but not middle names. If I select US English formatting preferences, Windows Contacts displays and lets me edit middle names but not phonetic names. I need to write code (native C calling Win32) which mirrors the behavior of the Windows Contacts application in this respect. Which API should I call?
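    The Format drop-down corresponds to the current user's default locale, so a likely candidate (hedged; not verified against Windows Contacts itself) is GetUserDefaultLocaleName or GetUserDefaultLCID, with GetLocaleInfoEx for individual format settings:

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            WCHAR locale[LOCALE_NAME_MAX_LENGTH];
            if (GetUserDefaultLocaleName(locale, LOCALE_NAME_MAX_LENGTH))   /* e.g. L"ja-JP" when Japanese formats are selected */
                wprintf(L"User format locale: %s\n", locale);

            WCHAR currency[16];
            if (GetLocaleInfoEx(LOCALE_NAME_USER_DEFAULT, LOCALE_SCURRENCY, currency, 16))
                wprintf(L"Currency symbol: %s\n", currency);
            return 0;
        }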


  • C++ Unlocking a std::mutex before calling std::unique_lock wait

    - by Sant Kadog
    I have a multithreaded application (using std::thread) with a manager (class Tree) that executes some piece of code on different subtrees (embedded struct SubTree) in parallel. The basic idea is that each instance of SubTree has a deque that store objects. If the deque is empty, the thread waits until a new element is inserted in the deque or the termination criteria is reached. One subtree can generate objects and push them in the deque of another subtree. For convenience, all my std::mutex, std::locks and std::variable_condition are stored in a struct called "locks". The class Tree creates some threads that run the following method (first attempt) : void Tree::launch(SubTree & st, Locks & locks ) { /* some code */ std::lock_guard<std::mutex> deque_lock(locks.deque_mutex_[st.id_]) ; // lock the access to the deque of subtree st if (st.deque_.empty()) // check that the deque is still empty { // some threads are still running, wait for them to terminate std::unique_lock<std::mutex> wait_lock(locks.restart_mutex_[st.id_]) ; locks.restart_condition_[st.id_].wait(wait_lock) ; } /* some code */ } The problem is that "deque_lock" is still locked while the thread is waiting. Hence no object can be added in the deque of the current thread by a concurrent one. So I turned the lock_guard into a unique_lock and managed the lock/unlock manually : void launch(SubTree & st, Locks & locks ) { /* some code */ std::unique_lock<std::mutex> deque_lock(locks.deque_mutex_[st.id_]) ; // lock the access to the deque of subtree st if (st.deque_.empty()) // check that the deque is still empty { deque_lock.unlock() ; // unlock the access to the deque to enable the other threads to add objects // DATA RACE : nothing must happen to the unprotected deque here !!!!!! // some threads are still running, wait for them to terminate std::unique_lock<std::mutex> wait_lock(locks.restart_mutex_[st.id_]) ; locks.restart_condition_[st.id_].wait(wait_lock) ; } /* some code */ } The problem now, is that there is a data race, and I would like to make sure that the "wait" instruction is performed directly after the "deque_lock.unlock()" one. Would anyone know a way to create such a critical instruction sequence with the standard library ? Thanks in advance.
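    A hedged sketch of the standard pattern: let the deque and the condition variable share a single mutex, and pass a predicate to wait on a std::unique_lock of that mutex. wait atomically releases the mutex while blocked and re-acquires it before returning, which closes the unprotected window (terminationRequested() stands in for whatever stop condition the real code uses):

        std::unique_lock<std::mutex> deque_lock(locks.deque_mutex_[st.id_]);
        // wait() unlocks deque_mutex_ while sleeping and relocks it before returning,
        // so the deque is never observable in an unprotected state between the check and the wait
        locks.restart_condition_[st.id_].wait(deque_lock, [&] {
            return !st.deque_.empty() || terminationRequested();
        });
        // the lock is held again here and the predicate holds

    This assumes the producers lock the same deque_mutex_[st.id_] when pushing work and then notify restart_condition_[st.id_], so the separate restart_mutex_ would no longer be needed.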


  • Strange problem with WPF TextBox StringFormat - cursor moves back

    - by Emad
    I am using a WPF 4.0 TextBox and binding, with StringFormat to format the number as currency. The XAML looks like this: <TextBox Text="{Binding Path=ValueProperty, ValidatesOnDataErrors=True, ValidatesOnExceptions=True, StringFormat={}{0:C}, UpdateSourceTrigger=PropertyChanged}"> </TextBox> Everything seems to work correctly except for a strange behavior: when a user types in 12, for example, right after typing 1 the value in the textbox becomes $1.00, and the weird thing is that the cursor is moved to sit between the $ and the 1. So when a user simply types in 12, the result becomes $21.00. How can I fix this strange behavior?
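    One hedged workaround, if per-keystroke source updates are not actually required, is to defer the currency formatting until focus leaves the field, so the text is never rewritten (and the caret never repositioned) while the user is typing:

        <TextBox Text="{Binding Path=ValueProperty,
                                ValidatesOnDataErrors=True,
                                ValidatesOnExceptions=True,
                                StringFormat={}{0:C},
                                UpdateSourceTrigger=LostFocus}" />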


  • In the Android browser a link does not always execute onClick, causing focus instead

    - by Artem
    I am trying to program a very standard JS behavior for a link using an HREF onClick handler, and I am facing a strange problem caused by what I believe to be focus/touch-mode behavior on Android. Sometimes when I click on the link, instead of executing the action, it simply becomes selected/focused, sometimes with just a focus rectangle and sometimes with a filled focus rectangle (selected as opposed to just focused?). The pseudo-code right now is <a href="#" onClick="toggleDivBelowToShowHide(); return false;">go</a> I have tried doing something like: <a href="#" onTouchStart="toggleDivBelowToShowHide(); return false;">go</a> But I still get the same pesky problem some of the time.
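    A hedged sketch of one workaround: attach the handler with addEventListener and cancel the default behavior on touchend as well as click, so the tap is consumed before the browser's focus handling takes over (the element id is illustrative):

        var link = document.getElementById('toggle-link');
        function onTap(evt) {
            evt.preventDefault();        // stop the default link/focus behaviour
            toggleDivBelowToShowHide();  // same toggle function as in the markup above
        }
        link.addEventListener('click', onTap, false);
        link.addEventListener('touchend', onTap, false);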


  • Why does the rename() syscall prohibit moving a directory that I can't write to a different directory?

    - by Daniel Papasian
    I am trying to understand why this design decision was made with the rename() syscall in 4.2BSD. There's nothing I'm trying to solve here, just understand the rationale for the behavior itself. 4.2BSD saw the introduction of the rename() syscall for the purpose of allowing atomic renames/moves of files. From 4.3BSD-Reno/src/sys/ufs/ufs_vnops.c: /* * If ".." must be changed (ie the directory gets a new * parent) then the source directory must not be in the * directory heirarchy above the target, as this would * orphan everything below the source directory. Also * the user must have write permission in the source so * as to be able to change "..". We must repeat the call * to namei, as the parent directory is unlocked by the * call to checkpath(). */ if (oldparent != dp->i_number) newparent = dp->i_number; if (doingdirectory && newparent) { VOP_LOCK(fndp->ni_vp); error = ufs_access(fndp->ni_vp, VWRITE, tndp->ni_cred); VOP_UNLOCK(fndp->ni_vp); So clearly this check was added intentionally. My question is - why? Is this behavior supposed to be intuitive? The effect of this is that one cannot move a directory (located in a directory that one can write) that one cannot write to another directory that one can write to atomically. You can, however, create a new directory, move the links over (assuming one has read access to the directory), and then remove one's write bit on the directory. You just can't do so atomically. % cd /tmp % mkdir stackoverflow-question % cd stackoverflow-question % mkdir directory-1 % mkdir directory-2 % mkdir directory-1/directory-i-cant-write % echo "foo" > directory-1/directory-i-cant-write/contents % chmod 000 directory-1/directory-i-cant-write/contents % chmod 000 directory-1/directory-i-cant-write % mv directory-1/directory-i-cant-write directory-2 mv: rename directory-1/directory-i-cant-write to directory-2/directory-i-cant-write: Permission denied We now have a directory I can't write with contents I can't read that I can't move atomically. I can, however, achieve the same effect non-atomically by changing permissions, making the new directory, using ln to create the new links, and changing permissions. (Left as an exercise to the reader) . and .. are special cased already, so I don't particularly buy that it is intuitive that if I can't write a directory I can't "change .." which is what the source suggests. Is there any reason for this besides it being the perceived correct behavior by the author of the code? Is there anything bad that can happen if we let people atomically move directories (that they can't write) between directories that they can write?


  • Best way to fingerprint and verify HTML structure

    - by Lukas Šalkauskas
    Hello there, I just want to know your opinion on how to fingerprint/verify HTML/link structure. The problem I want to solve: fingerprint, for example, 10 different sites (HTML pages), and after some time be able to verify them, so that if a site has changed or its links have changed, verification fails; otherwise verification succeeds. My basic idea is to analyze the link structure by splitting it in some way, building some kind of tree, and generating some kind of code from that tree. But I'm still in the brainstorming stage, where I need to discuss this with someone and hear other ideas. So any ideas, algorithms, and suggestions would be useful.
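    A minimal Python sketch of the idea, assuming a plain digest over the normalized link structure is enough (the library choice, Python 2 style, and normalization rules are assumptions, not from the post):

        import hashlib
        from urllib2 import urlopen
        from lxml import html

        def fingerprint(url):
            tree = html.fromstring(urlopen(url).read())
            # order-preserving list of every link's tag name and normalized href
            links = [(a.tag, (a.get('href') or '').strip()) for a in tree.iter('a')]
            return hashlib.sha1(repr(links)).hexdigest()

        def verify(url, expected_digest):
            return fingerprint(url) == expected_digest

    Storing one digest per page keeps verification cheap; hashing per subtree instead (for example one digest per top-level section) would additionally tell you roughly where a page changed.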


  • C++ Program Always Crashes While doing a std::string assign

    - by bbazso
    I have been trying to debug a crash in my application that crashes (i.e. asserts a * glibc detected free(): invalid pointer: 0x000000000070f0c0 **) while I'm trying to do a simple assign to a string. Note that I'm compiling on a linux system with gcc 4.2.4 with an optimization level set to -O2. With -O0 the application no longer crashes. E.g. std::string abc; abc = "testString"; but if I changed the code as follows it no longer crashes std::string abc("testString"); So again I scratched my head! But the interesting pattern was that the crash moved later on in the application, AGAIN at another string. I found it weird that the application was continuously crashing on a string assign. A typical crash backtrace would look as follows: #0 0x00007f2c2663bfb5 in raise () from /lib64/libc.so.6 (gdb) bt #0 0x00007f2c2663bfb5 in raise () from /lib64/libc.so.6 #1 0x00007f2c2663dbc3 in abort () from /lib64/libc.so.6 #2 0x00000000004d8cb7 in people_streamingserver_sighandler (signum=6) at src/peoplestreamingserver.cpp:487 #3 <signal handler called> #4 0x00007f2c2663bfb5 in raise () from /lib64/libc.so.6 #5 0x00007f2c2663dbc3 in abort () from /lib64/libc.so.6 #6 0x00007f2c26680ce0 in ?? () from /lib64/libc.so.6 #7 0x00007f2c270ca7a0 in std::string::assign (this=0x7f2c21bc8d20, __str=<value optimized out>) at /home/bbazso/ThirdParty/sources/gcc-4.2.4/x86_64-pc-linux-gnu/libstdc++-v3/include/bits/basic_string.h:238 #8 0x00007f2c21bd874a in PEOPLESProtocol::GetStreamName (this=<value optimized out>, pRawPath=0x2342fd8 "rtmp://127.0.0.1/mp4:pop.mp4", lStreamName=@0x7f2c21bc8d20) at /opt/trx-HEAD/gcc/4.2.4/lib/gcc/x86_64-pc-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/basic_string.h:491 #9 0x00007f2c21bd9daa in PEOPLESProtocol::SignalProtocolCreated (pProtocol=0x233a4e0, customParameters=@0x7f2c21bc8de0) at peoplestreamer/src/peoplesprotocol.cpp:240 This was really weird behavior and so I started to poke around further in my application to see if there was some sort of memory corruption (either heap or stack) error that could be occurring that could be causing this weird behavior. I even checked for ptr corruptions and came up empty handed. In addition to visual inspection of the code I also tried the following tools: Valgrind using both memcheck and exp-ptrcheck electric fence libsafe I compiled with -fstack-protector-all in gcc I tried MALLOC_CHECK_ set to 2 I ran my code through lint checks as well as cppcheck (to check for mistakes) And I stepped through the code using gdb So I tried a lot of stuff and still came up empty handed. So I was wondering if it could be something like a linker issue or a library issue of some sort that could be causing this problem. Are there any know issues with the std::string that make is susceptible to crashing in -O2 or maybe it has nothing to do with the optimization level? But the only pattern that I can see thus far in my problem is that it always seems to crash on a string and so I was wondering if anyone knew of any issues that my be causing this type of behavior. Thanks a lot!


  • How to structure a class to support an imported 3D model?

    - by brainydexter
    Hello, I've written a C++ library that reads in a 3D model file (COLLADA DAE). Up until now, I would output a list of triangles and handle each at the rendering stage. But now I need to attach bounding sphere information to the imported model, and I need some advice on how I should organize this in code. Here are some specs of the 3D file format:
    - the 3D model is represented as a tree consisting of nodes
    - each node can contain other nodes, geometry information, transformation etc.
    My requirements:
    - a bounding sphere associated with each node, thereby yielding a bounding sphere hierarchy for the model itself
    - the actual vertex information
    What would be the recommended way to deal with this situation? Thanks
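    A hedged C++ sketch of one way to mirror the DAE node tree while attaching a bounding sphere to every node (the names and the flat vertex layout are assumptions):

        #include <memory>
        #include <vector>

        struct BoundingSphere {
            float center[3];
            float radius;
        };

        struct ModelNode {
            float transform[16];                             // local transform from the DAE node
            std::vector<float> vertices;                     // interleaved x, y, z for this node's own geometry
            std::vector<std::unique_ptr<ModelNode>> children;
            BoundingSphere localBounds;                      // sphere around this node's own vertices
            BoundingSphere subtreeBounds;                    // localBounds merged with every child's subtreeBounds
        };

    After import, localBounds can be computed per node and then merged bottom-up into subtreeBounds, giving the bounding sphere hierarchy for free; culling or picking then only descends into children whose spheres are hit.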


  • How not to include a Thumb in ListBoxItem selection

    - by Elad
    I have a collection of items that I present in a WPF ListBox using a DataTemplate. Part of the DataTemplate is a Thumb, which is used for resizing and for visually separating items. The Thumb.Visibility is bound to a property of the item (for example, the last item doesn't have a visible Thumb). The problem is that selecting an item in the ListBox selects the Thumb as well, since it is part of the ListBoxItem. The desired behavior is to select only the data, without the Thumb. How is it possible to get this behavior? I don't want to add items to the ListBox in code or to handle the visibility of the Thumb manually; currently I get all of this from the DataTemplate.


  • Difference between halo and mx namespace

    - by Andree
    Hi there! As far as I know, support for the library://ns.adobe.com/flex/halo namespace has been dropped, and now we have to use library://ns.adobe.com/flex/mx instead (reference). Can someone explain whether there's any difference between the two namespaces? I am just starting to learn Flex and this change confuses me. For example, if I have an <mx:Tree> tag in my MXML document, the compiler complains that <mx:Tree> could not be resolved to a component implementation. But if I change my mx namespace to use the old one instead (halo), it compiles without error. Thanks. Andree Updated: By the way, I use the Flex SDK command-line compiler on Windows. mxmlc --version reports Version 4.0.0 build 10485.
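    For reference, a hedged MXML sketch of the Flex 4 namespace declarations under which <mx:Tree> is expected to resolve; if it still fails with these, one possible cause is that the mx component library is missing from the build path:

        <?xml version="1.0" encoding="utf-8"?>
        <s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
                       xmlns:s="library://ns.adobe.com/flex/spark"
                       xmlns:mx="library://ns.adobe.com/flex/mx">
            <mx:Tree id="myTree" width="200" height="300"/>
        </s:Application>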


  • String of KML needs to be converted to Java objects

    - by spartikus
    I have a string of KML coming in on a request object. I have used xjc to create the KML Java objects, and I am looking for an easy way to create the nested KML Java objects from this string. I could parse the string and create each object in the tree by hand, but wouldn't it be cool if there was a library or something that would create the Java objects for me? Something like KmlType type = parseKML(mykmlStringFromTheRequest); then type would be a tree of KML objects. Thanks for the help, all.
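    Since the classes came from xjc, a hedged sketch using JAXB itself to unmarshal the string; the package name is a placeholder for whatever xjc generated, and KML's root element is assumed to unmarshal as a JAXBElement wrapping KmlType:

        import java.io.StringReader;
        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.JAXBElement;
        import javax.xml.bind.JAXBException;

        class KmlParser {
            static KmlType parseKML(String kmlString) throws JAXBException {
                // "com.example.kml" stands in for the package xjc generated the classes into
                JAXBContext context = JAXBContext.newInstance("com.example.kml");
                @SuppressWarnings("unchecked")
                JAXBElement<KmlType> root = (JAXBElement<KmlType>)
                        context.createUnmarshaller().unmarshal(new StringReader(kmlString));
                return root.getValue();   // nested tree of generated KML objects
            }
        }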


  • Why does the current working directory change when using the Open File dialog in XP?

    - by RRUZ
    I have found a strange behavior when using the open file dialog in C#. If you use this code in Windows XP, the current working directory changes to the path of the selected file; however, if you run the same code in Windows 7, the current working directory does not change. private void button1_Click(object sender, EventArgs e) { MessageBox.Show(string.Format("Current Directory {0}",Directory.GetCurrentDirectory()), "My Application",MessageBoxButtons.OK, MessageBoxIcon.Asterisk); DialogResult result = openFileDialog1.ShowDialog(); // Show the dialog and get result. if (result == DialogResult.OK) { } MessageBox.Show(string.Format("Current Directory {0}", Directory.GetCurrentDirectory()), "My Application", MessageBoxButtons.OK, MessageBoxIcon.Asterisk); } Does anybody know the reason for this behavior? Why does the current directory change in XP and not in Windows 7?
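    On XP the common file dialog changes the process working directory by default; a hedged fix that should give the same result on both systems is to ask the dialog to put the original directory back when it closes:

        openFileDialog1.RestoreDirectory = true;   // restore the previous working directory after the dialog closes
        DialogResult result = openFileDialog1.ShowDialog();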


  • Python lxml problem

    - by David ???
    I'm trying to print/save a certain element's HTML from a web-page. I've retrieved the requested element's XPath from firebug. All I wish is to save this element to a file. I don't seem to succeed in doing so. (tried the XPath with and without a /text() at the end) I would appreciate any help, or past experience. 10x, David import urllib2,StringIO from lxml import etree url='http://www.tutiempo.net/en/Climate/Londres_Heathrow_Airport/12-2009/37720.htm' seite = urllib2.urlopen(url) html = seite.read() seite.close() parser = etree.HTMLParser() tree = etree.parse(StringIO.StringIO(html), parser) xpath = "/html/body/table/tbody/tr/td[2]/div/table/tbody/tr[6]/td/table/tbody/tr/td[3]/table/tbody/tr[3]/td/table/tbody/tr/td/table/tbody/tr/td/table/tbody/text()" elem = tree.xpath(xpath) print elem[0].strip().encode("utf-8")


  • Question on Win32 LogonUser API and the Logon Type

    - by Lalit_M
    We have developed an ASP.NET web application and implemented a custom authentication solution using Active Directory as the credentials store. Our front-end application uses a normal login form to capture the user name and password and leverages the Win32 LogonUser method to authenticate the user's credentials. When we call the LogonUser method, we use LOGON32_LOGON_NETWORK as the logon type. The issue we have found is that user profile folders are being created under the C:\Users folder of the web server. A folder seems to be created when a new user who has never logged on before logs in for the first time. As the number of new users logging into the application grows, disk space is shrinking due to the large number of new user folders being created. Has anyone seen this behavior with the Win32 LogonUser method? Does anyone know how to disable it?


  • WCF BasicHttpBinding certificates

    - by Andrew Kalashnikov
    Hello colleagues. I've got some problems. I've created WCF service with basicHttpBinding and hosted by IIS 6.0. <system.serviceModel> <bindings> <basicHttpBinding> <binding name="BindingConfiguration1" maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647"> <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647"/> <security mode="Transport"> <transport clientCredentialType="None" /> </security> </binding> </basicHttpBinding> </bindings> <services> <service name="RegistratorService.Registrator" behaviorConfiguration="RegistratorService.Service1Behavior"> <endpoint address="" binding="basicHttpBinding" contract="RegistratorService.IRegistrator" bindingConfiguration="BindingConfiguration1"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange"/> </service> </services> <behaviors> <serviceBehaviors> <behavior name="RegistratorService.Service1Behavior"> <serviceCredentials> <clientCertificate> <authentication certificateValidationMode="PeerOrChainTrust" revocationMode="NoCheck"/> </clientCertificate> <serviceCertificate storeLocation="LocalMachine" storeName="My" findValue="CN=Server" /> </serviceCredentials> <serviceMetadata httpsGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="true" /> </behavior> </serviceBehaviors> </behaviors> Also I have cert authority on this server and I issue certs for server and client. I server cert at server and client cert at client. When I try consume service from client I get famous: "Could not establish trust relationship for the SSL/TLS secure channel with authority" All sites recommend override ServicePointManager.ServerCertificateValidationCallback by set return value to true. Bu I want decide this issue other right way. My client config: <system.serviceModel> <behaviors> <endpointBehaviors> <behavior name="ClientBehavior"> <clientCredentials> <serviceCertificate> <authentication certificateValidationMode="ChainTrust" revocationMode="NoCheck"/> </serviceCertificate> <clientCertificate findValue="CN=PharmPortal" storeLocation="LocalMachine" storeName="My"/> </clientCredentials> </behavior> </endpointBehaviors> </behaviors> <bindings> <basicHttpBinding> <binding name="BasicHttpBinding_IRegistrator" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <security mode="Transport"> <transport clientCredentialType="None" proxyCredentialType="None" realm="" /> </security> </binding> </basicHttpBinding> </bindings> <client > <endpoint address="https://aurit-server2/Registrator.svc" binding="basicHttpBinding" behaviorConfiguration="ClientBehavior" bindingConfiguration="BasicHttpBinding_IRegistrator" contract="ServiceReference1.IRegistrator" name="BasicHttpBinding_IRegistrator" > <identity> <dns value="Server" /> </identity> </endpoint> </client> </system.serviceModel> I set up client certificate. Why i get error?


  • Getting "on the wire" Size of Messages in WCF

    - by Mystagogue
    While I'm making SOAP or REST invocations to WCF, I'd like to have the channel stack on either end (client and server) record the on-the-wire size of the data received. So I'm guessing I need to add a custom behavior to the channel stack on either side. That is, on the server side I'd record the IP-header advertised size that was received. On the client side I'd record the IP-header advertised size that was returned from the server. But this presupposes that this information is visible to a custom WCF behavior at the channel stack level. Perhaps it is only visible at the level of ASP.NET (at a layer beneath WCF)? In short, does anyone have any further insight on if and how this information is accessible? I must qualify that this "size" data will be collected in a production environment, as part of regular business logic calls. This question is related to my earlier bandwidth question.
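    A hedged C# sketch of a dispatch-side message inspector that measures the serialized size of each request and reply. Note that this counts the bytes of the serialized message body, not the literal on-the-wire size including HTTP and IP headers, which (as far as I know) WCF does not expose at this layer; hooking it up still requires a service or endpoint behavior that adds the inspector to the DispatchRuntime.

        using System;
        using System.IO;
        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.ServiceModel.Dispatcher;
        using System.Xml;

        public class SizeLoggingInspector : IDispatchMessageInspector
        {
            public object AfterReceiveRequest(ref Message request, IClientChannel channel,
                                              InstanceContext instanceContext)
            {
                MessageBuffer buffer = request.CreateBufferedCopy(int.MaxValue);
                request = buffer.CreateMessage();                  // hand an unread copy back to WCF
                Console.WriteLine("Request bytes: " + Measure(buffer.CreateMessage()));
                return null;
            }

            public void BeforeSendReply(ref Message reply, object correlationState)
            {
                MessageBuffer buffer = reply.CreateBufferedCopy(int.MaxValue);
                reply = buffer.CreateMessage();
                Console.WriteLine("Reply bytes: " + Measure(buffer.CreateMessage()));
            }

            static long Measure(Message message)
            {
                using (var stream = new MemoryStream())
                using (var writer = XmlDictionaryWriter.CreateTextWriter(stream))
                {
                    message.WriteMessage(writer);                  // serialize the copy just to count its bytes
                    writer.Flush();
                    return stream.Length;
                }
            }
        }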


  • RijndaelManaged Padding when data matches block size

    - by trampster
    If I use PKCS7 padding in RijndaelManaged with 16 bytes of data, I get 32 bytes of output. It appears that for PKCS7, when the data size matches the block size, a whole extra block of padding is added. If I use Zeros padding for 16 bytes of data, I get 16 bytes of output, so for Zeros padding, if the data matches the block size, it doesn't pad at all. I have searched through the documentation and it says nothing about this difference in padding behavior. Can someone please point me to documentation that specifies what the padding behavior should be for the different padding modes when the data size matches the block size?
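    For what it's worth, the observed behavior matches the PKCS#7 definition (RFC 5652, section 6.3): the pad length is always between 1 and the block size, so input that is already a block multiple receives a whole extra block of 0x10 bytes, while PaddingMode.Zeros only pads up to the next boundary and therefore adds nothing here. A hedged C# illustration:

        using System;
        using System.Security.Cryptography;

        class PaddingDemo
        {
            static void Main()
            {
                byte[] data = new byte[16];                        // exactly one 128-bit block
                using (var aes = new RijndaelManaged())
                {
                    aes.GenerateKey();
                    aes.GenerateIV();

                    aes.Padding = PaddingMode.PKCS7;
                    Console.WriteLine(Encrypt(aes, data).Length);  // 32: a full block of 0x10 padding was appended

                    aes.Padding = PaddingMode.Zeros;
                    Console.WriteLine(Encrypt(aes, data).Length);  // 16: nothing added when the length already fits
                }
            }

            static byte[] Encrypt(SymmetricAlgorithm alg, byte[] data)
            {
                using (ICryptoTransform enc = alg.CreateEncryptor())
                {
                    return enc.TransformFinalBlock(data, 0, data.Length);
                }
            }
        }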

