Search Results

Search found 10548 results on 422 pages for 'standard deviation'.

Page 385/422 | < Previous Page | 381 382 383 384 385 386 387 388 389 390 391 392  | Next Page >

  • Dropdown OnSelectedIndexChanged not firing

    - by Jim
    The OnSelectedIndexChanged event is not firing for my dropdown box. All the forums I have looked at told me to add AutoPostBack="true", but that didn't change the results.

    HTML:

        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <asp:Label ID="Label1" runat="server" Text="Current Time: " /> <br />
                <asp:Label ID="lblCurrent" runat="server" Text="Label" /><br /><br />
                <asp:DropDownList ID="cboSelectedLocation" runat="server" AutoPostBack="true"
                    OnSelectedIndexChanged="cboSelectedLocation_SelectedIndexChanged" /><br /><br />
                <asp:Label ID="lblSelectedTime" runat="server" Text="Label" />
            </div>
            </form>
        </body>
        </html>

    Code behind:

        public partial class _Default : System.Web.UI.Page
        {
            string _sLocation = string.Empty;
            string _sCurrentLoc = string.Empty;
            TimeSpan _tsSelectedTime;

            protected void Page_Load(object sender, EventArgs e)
            {
                AddTimeZones();
                cboSelectedLocation.Focus();
                lblCurrent.Text = "Currently in " + _sCurrentLoc + Environment.NewLine + DateTime.Now;
                lblSelectedTime.Text = _sLocation + ":" + Environment.NewLine + DateTime.UtcNow.Add(_tsSelectedTime);
            }

            // Adds all time zone display names to the combo box.
            // Defaults the combo location to Seoul, South Korea.
            // Defaults the current location to the local time zone.
            private void AddTimeZones()
            {
                foreach (TimeZoneInfo tz in System.TimeZoneInfo.GetSystemTimeZones())
                {
                    string s = tz.DisplayName;
                    cboSelectedLocation.Items.Add(s);
                    if (tz.StandardName == "Korea Standard Time")
                        cboSelectedLocation.Text = s;
                    if (tz.StandardName == System.TimeZone.CurrentTimeZone.StandardName)
                        _sCurrentLoc = tz.StandardName;
                }
            }

            // Changes the time zone name and time depending on what is selected in the combo box.
            protected void cboSelectedLocation_SelectedIndexChanged(object sender, EventArgs e)
            {
                foreach (TimeZoneInfo tz in System.TimeZoneInfo.GetSystemTimeZones())
                {
                    if (cboSelectedLocation.Text == tz.DisplayName)
                    {
                        _sLocation = tz.StandardName;
                        _tsSelectedTime = tz.GetUtcOffset(DateTime.UtcNow);
                    }
                }
            }
        }

    Any advice on what to look at, for a rookie ASP.NET coder?

    EDIT: added more code behind.
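
    One thing worth checking, sketched below as a guess rather than a confirmed fix: because Page_Load runs on every postback before the change event is raised, calling AddTimeZones() unconditionally rebuilds the list and resets its selected text on each request, which can hide or undo the selection change. A minimal sketch of guarding the initialization:

        protected void Page_Load(object sender, EventArgs e)
        {
            // Populate the list only on the first request; on postbacks the items are
            // restored from view state and the posted selection is preserved.
            if (!IsPostBack)
            {
                AddTimeZones();
            }

            // Note: _sCurrentLoc is assigned inside AddTimeZones(), so with this guard it
            // would also need to be recomputed (or stored) for postback requests.
            cboSelectedLocation.Focus();
            lblCurrent.Text = "Currently in " + _sCurrentLoc + Environment.NewLine + DateTime.Now;
            lblSelectedTime.Text = _sLocation + ":" + Environment.NewLine + DateTime.UtcNow.Add(_tsSelectedTime);
        }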

    Read the article

  • How to create a Uri instance parsed with GenericUriParserOptions.DontCompressPath

    - by Andrew Arnott
    When the .NET System.Uri class parses strings it performs some normalization on the input, such as lower-casing the scheme and hostname. It also trims trailing periods from each path segment. This latter feature is fatal to OpenID applications because some OpenIDs (like those issued from Yahoo) include base64-encoded path segments which may end with a period. How can I disable this period-trimming behavior of the Uri class?

    Registering my own scheme using UriParser.Register with a parser initialized with GenericUriParserOptions.DontCompressPath avoids the period trimming, and some other operations that are also undesirable for OpenID. But I cannot register a new parser for existing schemes like HTTP and HTTPS, which I must do for OpenIDs.

    Another approach I tried was registering my own new scheme, and programming the custom parser to change the scheme back to the standard HTTP(S) schemes as part of parsing:

        public class MyUriParser : GenericUriParser
        {
            private string actualScheme;

            public MyUriParser(string actualScheme)
                : base(GenericUriParserOptions.DontCompressPath)
            {
                this.actualScheme = actualScheme.ToLowerInvariant();
            }

            protected override string GetComponents(Uri uri, UriComponents components, UriFormat format)
            {
                string result = base.GetComponents(uri, components, format);

                // Substitute our actual desired scheme in the string if it's in there.
                if ((components & UriComponents.Scheme) != 0)
                {
                    string registeredScheme = base.GetComponents(uri, UriComponents.Scheme, format);
                    result = this.actualScheme + result.Substring(registeredScheme.Length);
                }

                return result;
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                UriParser.Register(new MyUriParser("http"), "httpx", 80);
                UriParser.Register(new MyUriParser("https"), "httpsx", 443);
                Uri z = new Uri("httpsx://me.yahoo.com/b./c.#adf");
                var req = (HttpWebRequest)WebRequest.Create(z);
                req.GetResponse();
            }
        }

    This actually almost works. The Uri instance reports https instead of httpsx everywhere -- except the Uri.Scheme property itself. That's a problem when you pass this Uri instance to the HttpWebRequest to send a request to this address. Apparently it checks the Scheme property and doesn't recognize it as 'https', because it just sends plaintext to the 443 port instead of SSL.

    I'm happy for any solution that:

    - Preserves trailing periods in path segments in Uri.Path
    - Includes these periods in outgoing HTTP requests
    - Ideally works under ASP.NET medium trust (but not absolutely necessary)
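
    A workaround that is sometimes suggested for this class of problem is to flip the relevant flag on the already-registered http/https parsers via reflection. Treat the sketch below as an assumption-laden illustration only: the member names ("GetSyntax", "m_Flags") and the flag value are non-public implementation details that differ between .NET Framework versions, and reflecting over non-public members will not run under ASP.NET medium trust.

        using System;
        using System.Reflection;

        static class UriPeriodWorkaround
        {
            public static void PreserveTrailingDots(string scheme)
            {
                // Assumed internal members of System.Uri's parser machinery.
                MethodInfo getSyntax = typeof(UriParser).GetMethod("GetSyntax",
                    BindingFlags.Static | BindingFlags.NonPublic);
                FieldInfo flagsField = typeof(UriParser).GetField("m_Flags",
                    BindingFlags.Instance | BindingFlags.NonPublic);
                if (getSyntax == null || flagsField == null)
                    return; // internals changed in this framework version; do nothing

                object parser = getSyntax.Invoke(null, new object[] { scheme });
                int flags = (int)flagsField.GetValue(parser);

                // 0x1000000 is assumed to be the path-canonicalization bit; verify against
                // the framework sources for the exact version you target.
                flagsField.SetValue(parser, flags & ~0x1000000);
            }
        }

        // Usage (once, at application startup):
        //   UriPeriodWorkaround.PreserveTrailingDots("http");
        //   UriPeriodWorkaround.PreserveTrailingDots("https");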

    Read the article

  • Minimalistic tools for developer documentation

    - by Pekka
    I am currently working on a large PHP CMS / framework and documenting it extensively as I go along. In addition to phpdoc-style inline comments, I need to document XML structures, details on concepts and practices, write HOWTOs and so on. At the moment, I am using simple OpenOffice documents for that, but I'm unhappy with it and looking for a "real" documentation system. So, I am looking for recommendations for robust, minimalistic, easy-to-use documentation software.

    I have tried a number of wikis, most prominently DokuWiki. I like the open-minded approach, the freedom in editing, and the simplicity, but they provide little support for structuring a multi-chapter documentation, and make basic reorganisation tasks very difficult (e.g. moving pages to a different namespace). Working with the plugins is cumbersome, and they are not really easy to use. Open source would be a plus but is not a requirement.

    Thanks for all the suggestions. I have not had time to look into each one in detail. I will be trying Sphinx, especially because it provides so much support for a good structure. I may update this post later when I'm done and report how it worked out.

    The suggestions:

    - Trac's built-in wiki, which is great but for my taste provides too little support for keeping a structure - it's perfect though for "normal", smaller-size project documentation.
    - Markdown, my current favourite because of its minimalism, however I'm not sure yet whether maintaining a structure will be easy enough. A Markdown-based system would of course be very easy to extend, e.g. to look up cross references from the project's code base. Of course it would be great to find something that already has that out of the box.
    - The DocBook format and, to edit, the commercial Oxygen XML Editor - a great standard for building documentation, no doubt. Maybe too "technical" for my purposes, as I need something to open quickly, write into and go on coding. Still always worth a mention.
    - Sphinx, an open source, Python-based documentation generator, promising structured documentation and extensive cross-referencing. Interesting and will take a look.
    - Confluence, a commercial but very affordable wiki.
    - XWiki, an open source wiki playing in Confluence's league, with numerous extensions and connectors to Eclipse and Microsoft Office.
    - TiddlyWiki, an open source wiki.

    Read the article

  • Is the design notion of layers contrived?

    - by Bruce
    Hi all. I'm reading through Eric Evans' awesome work, Domain-Driven Design. However, I can't help feeling that the 'layers' model is contrived. To expand on that statement, it seems as if it tries to shoe-horn various concepts into a specific, neat model, that of layers talking to each other. It seems to me that the layers model is too simplified to actually capture the way that (good) software works.

    To expand further, Evans says: "Partition a complex program into layers. Develop a design within each layer that is cohesive and that depends only on the layers below. Follow standard architectural patterns to provide loose coupling to the layers above."

    Maybe I'm misunderstanding what 'depends' means, but as far as I can see, it can either mean a) class X (in the UI, for example) has a reference to a concrete class Y (in the main application), or b) class X has a reference to a class Y-ish object providing class Y-ish services (i.e. a reference held as an interface). If it means (a), then this is clearly a bad thing, since it defeats re-using the UI as a front-end to some other application that provides Y-ish functionality. But if it means (b), then how is the UI any more dependent on the application than the application is dependent on the UI? Both are decoupled from each other as much as they can be while still talking to each other.

    Evans' layer model of dependencies going one way seems too neat. First, isn't it more accurate to say that each area of the design provides a module that is pretty much an island to itself, and that ideally all communication is through interfaces, in a contract-driven/responsibility-driven paradigm? (i.e., the 'dependency only on lower layers' is contrived). Likewise with the domain layer talking to the database - the domain layer is as decoupled (through DAOs etc.) from the database as the database is from the domain layer. Neither is dependent on the other; both can be swapped out. Second, the idea of a conceptual straight line (as in from one layer to the next) is artificial - isn't there more a network of intercommunicating but separate modules, including external services, utility services and so on, branching off at different angles?

    Thanks all - hoping that your responses can clarify my understanding of this.
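
    For concreteness, here is a minimal C# sketch of the distinction drawn between (a) and (b) above. Every type name is hypothetical; the only point is where the compile-time references go when the "dependency" is held as an interface:

        // Contract the UI compiles against; neither side references the other's concrete types.
        public interface IOrderService
        {
            void PlaceOrder(string productCode, int quantity);
        }

        // "UI layer": case (b) - it depends only on the interface.
        public class OrderController
        {
            private readonly IOrderService orders;

            public OrderController(IOrderService orders)
            {
                this.orders = orders;
            }

            public void OnSubmitClicked()
            {
                orders.PlaceOrder("ABC-1", 2);
            }
        }

        // "Application/domain layer": implements the contract and can be swapped out.
        public class OrderService : IOrderService
        {
            public void PlaceOrder(string productCode, int quantity)
            {
                // domain logic goes here
            }
        }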

    Read the article

  • UIImagePickerController Memory Leak

    - by Watson
    I am seeing a huge memory leak when using UIImagePickerController in my iPhone app. I am using standard code from the Apple documents to implement the control:

        UIImagePickerController* imagePickerController = [[UIImagePickerController alloc] init];
        imagePickerController.delegate = self;

        if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
            switch (buttonIndex) {
                case 0:
                    imagePickerController.sourceType = UIImagePickerControllerSourceTypeCamera;
                    [self presentModalViewController:imagePickerController animated:YES];
                    break;
                case 1:
                    imagePickerController.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
                    [self presentModalViewController:imagePickerController animated:YES];
                    break;
                default:
                    break;
            }
        }

    And for the cancel:

        - (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker {
            [[picker parentViewController] dismissModalViewControllerAnimated:YES];
            [picker release];
        }

    The didFinishPickingMediaWithInfo callback is just as standard, although I do not even have to pick anything to cause the leak. Here is what I see in Instruments when all I do is open the UIImagePickerController, pick photo library, and press cancel, repeatedly. As you can see, the memory keeps growing, and eventually this causes my iPhone app to slow down tremendously. As you can see, I opened the image picker 24 times, and each time it malloc'd 128kb which was never released. Basically 3mb out of my total 6mb is never released. This memory stays leaked no matter what I do. Even after navigating away from the current controller, it remains the same. I have also implemented the picker control as a singleton, with the same results. Here is what I see when I drill down into those two lines:

    Any help here would be greatly appreciated! Again, I do not even have to choose an image. All I do is present the controller, and press cancel.

    Update 1: I downloaded and ran Apple's example of using the UIImagePickerController and I see the same leak happening there when running Instruments (both in the simulator and on the phone). http://developer.apple.com/library/ios/#samplecode/PhotoPicker/Introduction/Intro.html%23//apple_ref/doc/uid/DTS40010196 All you have to do is hit the photo library button and hit cancel over and over; you'll see the memory keep growing. Any ideas?

    Update 2: I only see this problem when viewing the photo library. I can choose take photo, and open and close that one over and over, without a leak.

    Read the article

  • Django conditional template inheritance

    - by Ed
    I have a template that displays object elements with hyperlinks to other parts of my site. I have another function that displays past versions of the same object. In this display, I don't want the hyperlinks. I'm under the assumption that I can't dynamically switch off the hyperlinks, so I've included both versions in the same template. I use an if statement to either display the hyperlinked version or the plain-text version. I prefer to keep them in the same template because if I need to change the format of one, it will be easy to apply it to the other right there.

    The template extends framework.html. Framework has a breadcrumb system and it extends base.html. Base has a simple top menu system. So here's my dilemma. When viewing the standard hyperlink data, I want to see the top menu and the breadcrumbs. But when viewing the past-version plain-text data, I only want the data: no menu, no breadcrumbs. I'm unsure if this is possible given my current design.

    I tried having framework inherit the primary template so that I could choose to call either framework (and display the breadcrumbs), or the template itself, thus skipping the breadcrumbs, but I want framework.html available for other templates as well. If framework.html extends a specific template, I lose the ability to display it in other templates.

    I tried writing an if statement that would display the top_menu block and the nav_menu block from base.html and framework.html respectively. This would overwrite their blocks and allow me to turn off those elements conditional on the if. Unfortunately, it doesn't appear to be conditional; if the block elements are in the template at all, surrounded by an if or not, I lose the menus.

    I thought about using {% include %} to pick up the breadcrumbs and a split-out top menu. In that case though, I'll have to include it all the time. No more inheritance. Is this the best option given my requirement?

    Read the article

  • stdio's remove() not always deleting on time.

    - by Kyte
    For a particular piece of homework, I'm implementing a basic data storage system using sequential files under standard C, which cannot load more than 1 record at a time. So, the basic part is creating a new file where the results of whatever we do with the original records are stored. The previous file's renamed, and a new one under the working name is created. The code's compiled with MinGW 5.1.6 on Windows 7.

    Problem is, this particular version of the code (I've got nearly-identical versions of this floating around my functions) doesn't always remove the old file, so the rename fails and hence the stored data gets wiped by the fopen().

        FILE *archivo, *antiguo;

        remove("IndiceNecesidades.old");                            // This randomly fails to work in time.
        rename("IndiceNecesidades.dat", "IndiceNecesidades.old");   // So rename() fails.
        antiguo = fopen("IndiceNecesidades.old", "rb");             // But apparently it still gets deleted, since this
                                                                    // turns out null (and I never find the .old in my
                                                                    // working folder after the program's done).
        archivo = fopen("IndiceNecesidades.dat", "wb");             // And here the data gets wiped.

    Basically, any time the .old previously exists, there's a chance it's not removed in time for the rename() to take effect successfully. No possible name conflicts both internally and externally. The weird thing's that it's only with this particular file. Identical snippets except with the name changed to Necesidades.dat (which happen in 3 different functions) work perfectly fine.

        // I'm yet to see this snippet fail.
        FILE *antiguo, *archivo;

        remove("Necesidades.old");
        rename("Necesidades.dat", "Necesidades.old");
        antiguo = fopen("Necesidades.old", "rb");
        archivo = fopen("Necesidades.dat", "wb");

    Any ideas on why this would happen, and/or how I can ensure the remove() command has taken effect by the time rename() is executed? (I thought of just using a while loop to force-call remove() again so long as fopen() returns a non-null pointer, but that sounds like begging for a crash due to overflowing the OS with delete requests or something.)

    Read the article

  • Wildcard searching and highlighting with Solr 1.4

    - by andy
    Hey guys, I've got a pretty much vanilla install of Solr 1.4, apart from a few small config and schema changes.

        <requestHandler name="standard" class="solr.SearchHandler" default="true">
            <!-- default values for query parameters -->
            <lst name="defaults">
                <str name="defType">dismax</str>
                <str name="echoParams">explicit</str>
                <str name="qf">text</str>
                <str name="spellcheck.dictionary">default</str>
                <str name="spellcheck.onlyMorePopular">false</str>
                <str name="spellcheck.extendedResults">false</str>
                <str name="spellcheck.count">1</str>
            </lst>
        </requestHandler>

    The main field type I'm using for indexing is this:

        <fieldType name="textNoHTML" class="solr.TextField" positionIncrementGap="100">
            <analyzer type="index">
                <charFilter class="solr.HTMLStripCharFilterFactory" />
                <tokenizer class="solr.WhitespaceTokenizerFactory"/>
                <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
                <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
                <filter class="solr.LowerCaseFilterFactory"/>
                <filter class="solr.SnowballPorterFilterFactory" language="English" protected="protwords.txt"/>
            </analyzer>
            <analyzer type="query">
                <tokenizer class="solr.WhitespaceTokenizerFactory"/>
                <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
                <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
                <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
                <filter class="solr.LowerCaseFilterFactory"/>
                <filter class="solr.SnowballPorterFilterFactory" language="English" protected="protwords.txt"/>
            </analyzer>
        </fieldType>

    Now, when I perform a search using "q=search+term&hl=on" I get highlighting, and nice accurate scores. BUT, for wildcard, I'm assuming you need to use "q.alt"? Is that true? If so, my query looks like this: "q.alt=search*&hl=on". When I use the above query, highlighting doesn't work, and all the scores are "1.0". What am I doing wrong? Is what I want possible without bypassing some of the really cool Solr optimizations? Cheers!

    Read the article

  • Why is my javascript function sometimes "not defined"?

    - by harpo
    Problem: I call my javascript function, and sometimes I get the error 'myFunction is not defined'. But it is defined. For example, I'll occasionally get 'copyArray is not defined' even in this example:

        function copyArray( pa ) {
            var la = [];
            for (var i=0; i < pa.length; i++)
                la.push( pa[i] );
            return la;
        }

        Function.prototype.bind = function( po ) {
            var __method = this;
            var __args = [];
            // sometimes errors -- in practice I inline the function as a workaround
            __args = copyArray( arguments );
            return function() { /* bind logic omitted for brevity */ }
        }

    As you can see, copyArray is defined right there, so this can't be about the order in which script files load. I've been getting this in situations that are harder to work around, where the calling function is located in another file that should be loaded after the called function. But this was the simplest case I could present, and appears to be the same problem. It doesn't happen 100% of the time, so I do suspect some kind of load-timing-related problem. But I have no idea what.

    @Hojou: That's part of the problem. The function in which I'm now getting this error is itself my addLoadEvent, which is basically a standard version of the common library function.

    @James: I understand that, and there is no syntax error in the function. When that is the case, the syntax error is reported as well. In this case, I am getting only the 'not defined' error.

    @David: The script in this case resides in an external file that is referenced using the normal <script src="file.js"></script> method in the page's head section.

    @Douglas: Interesting idea, but if this were the case, how could we ever call a user-defined function with confidence? In any event, I tried this and it didn't work.

    @sk: This technique has been tested across browsers and is basically copied from the prototype library.

    Read the article

  • How to remove lowercase sentence fragments from text?

    - by Aaron
    Hello: I'm trying to remove lowercase sentence fragments from standard text files using regular expressions or a simple Perl one-liner. These are commonly referred to as speech or attribution tags, for example - he said, she said, etc. This example shows before and after using manual deletion:

    Original:

        "Ah, that's perfectly true!" exclaimed Alyosha.
        "Oh, do leave off playing the fool! Some idiot comes in, and you put us to shame!" cried the girl by the window, suddenly turning to her father with a disdainful and contemptuous air.
        "Wait a little, Varvara!" cried her father, speaking peremptorily but looking at them quite approvingly. "That's her character," he said, addressing Alyosha again. "Where have you been?" he asked him.
        "I think," he said, "I've forgotten something... my handkerchief, I think.... Well, even if I've not forgotten anything, let me stay a little." He sat down. Father stood over him. "You sit down, too," said he.

    All lowercase sentence fragments manually removed:

        "Ah, that's perfectly true!"
        "Oh, do leave off playing the fool! Some idiot comes in, and you put us to shame!"
        "Wait a little, Varvara!"
        "That's her character,"
        "Where have you been?"
        "I think," "I've forgotten something... my handkerchief, I think.... Well, even if I've not forgotten anything, let me stay a little." He sat down. Father stood over him. "You sit down, too,"

    I've changed straight quotes (") to balanced and tried:

        ” (...)+[.]

    Of course, this removes some fragments but deletes some text in balanced quotes and text starting with uppercase letters. [^A-Z] didn't work in the above expression. I realize that it may be impossible to achieve 100% accuracy, but any useful expression, Perl, or Python script would be deeply appreciated.

    Cheers, Aaron

    Read the article

  • How do gitignore exclusion rules actually work?

    - by meowsqueak
    I'm trying to solve a gitignore problem on a large directory structure, but to simplify my question I have reduced it to the following. I have the following directory structure of two files (foo, bar) in a brand new git repository (no commits so far):

        a/b/c/foo
        a/b/c/bar

    Obviously, a 'git status -u' shows:

        # Untracked files:
        ...
        #       a/b/c/bar
        #       a/b/c/foo

    What I want to do is create a .gitignore file that ignores everything inside a/b/c but does not ignore the file 'foo'. If I create a .gitignore thus:

        c/

    Then a 'git status -u' shows both foo and bar as ignored:

        # Untracked files:
        ...
        #       .gitignore

    Which is as I expect. Now if I add an exclusion rule for foo, thus:

        c/
        !foo

    According to the gitignore manpage, I'd expect this to work. But it doesn't - it still ignores foo:

        # Untracked files:
        ...
        #       .gitignore

    This doesn't work either:

        c/
        !a/b/c/foo

    Neither does this:

        c/*
        !foo

    Gives:

        # Untracked files:
        ...
        #       .gitignore
        #       a/b/c/bar
        #       a/b/c/foo

    In that case, although foo is no longer ignored, bar is also not ignored. The order of the rules in .gitignore doesn't seem to matter either. This also doesn't do what I'd expect:

        a/b/c/
        !a/b/c/foo

    That one ignores both foo and bar. One situation that does work is if I create the file a/b/c/.gitignore and put in there:

        *
        !foo

    But the problem with this is that eventually there will be other subdirectories under a/b/c and I don't want to have to put a separate .gitignore into every single one - I was hoping to create 'project-based' .gitignore files that can sit in the top directory of each project, and cover all the 'standard' subdirectory structure. This also seems to be equivalent:

        a/b/c/*
        !a/b/c/foo

    This might be the closest thing to "working" that I can achieve, but the full relative paths and explicit exceptions need to be stated, which is going to be a pain if I have a lot of files of name 'foo' in different levels of the subdirectory tree.

    Anyway, either I don't quite understand how exclusion rules work, or they don't work at all when directories (rather than wildcards) are ignored - by a rule ending in a /

    Can anyone please shed some light on this? Is there a way to make gitignore use something sensible like regular expressions instead of this clumsy shell-based syntax? I'm using and observe this with git-1.6.6.1 on Cygwin/bash3 and git-1.7.1 on Ubuntu/bash3.

    Read the article

  • ImageMagick on Mac OSX Snow Leopard. Is there any way to get it to work?

    - by ?????
    It seems that I have more trouble getting standard Unix things to run on Snow Leopard than on any other platform - including Windows cygwin. For the past couple of days, I've been trying to get ImageMagick to run on Snow Leopard. The most obvious way, MacPorts, fails:

        tppllc-Mac-Pro:ImageMagick-sl swirsky$ sudo port install imagemagick
        --->  Computing dependencies for p5-locale-gettext
        --->  Configuring p5-locale-gettext
        Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_perl_p5-locale-gettext/work/gettext-1.05" && /opt/local/bin/perl Makefile.PL INSTALLDIRS=vendor " returned error 2
        Command output: checking for gettext... no
        checking for gettext in -I/opt/local/include -arch i386 -L/opt/local/lib -lintl...gettext function not found. Please install libintl at Makefile.PL line 18.
        no
        Error: Unable to upgrade port: 1
        Error: Unable to execute port: upgrade xorg-libXt failed
        Before reporting a bug, first run the command again with the -d flag to get complete output.
        tppllc-Mac-Pro:ImageMagick-sl swirsky$

    Not wanting to spend another two days figuring out why my libintl doesn't have a "gettext" function, I tried a different route: the script mentioned here: http://github.com/masterkain/ImageMagick-sl This script downloads and installs ImageMagick independently of MacPorts issues.

        tppllc-Mac-Pro:ImageMagick-sl swirsky$ /usr/local/bin/convert
        dyld: Library not loaded: /opt/local/lib/libiconv.2.dylib
          Referenced from: /opt/local/lib/libfontconfig.1.dylib
          Reason: Incompatible library version: libfontconfig.1.dylib requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0
        Trace/BPT trap

    It downloads everything and compiles fine, but fails when I try to run it, with the message above. So now I'm two steps away from ImageMagick, trying to get a newer libiconv on my machine. I downloaded the latest libiconv, compiled and built it. I put the resulting library in /opt/local/lib, and I still get the same error message:

        tppllc-Mac-Pro:.libs swirsky$ sudo mv libiconv.2.dylib /opt/local/lib/libiconv.2.dylib
        tppllc-Mac-Pro:.libs swirsky$ convert
        dyld: Library not loaded: /opt/local/lib/libiconv.2.dylib
          Referenced from: /opt/local/lib/libfontconfig.1.dylib
          Reason: Incompatible library version: libfontconfig.1.dylib requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0
        Trace/BPT trap

    So here's my question: is it possible to get ImageMagick to run on OS X Snow Leopard? Are there any binary distributions that have static libraries baked in, so I don't have to worry about these issues?

    Read the article

  • WCF. Https on basicHttpBinding

    - by Andrew Kalashnikov
    Hello, colleagues. I've written a WCF service. Unfortunately I have to use basicHttpBinding for PHP callers, but I need security, so I've decided to use transport security with HTTPS. I host it on my IIS 6.0. I enabled SSL in IIS and assigned a certificate. But when I try to open the service through a browser I get a standard error. What's wrong? Please help - I've been stuck on this for several hours.

        <system.serviceModel>
          <bindings>
            <basicHttpBinding>
              <binding name="BindingConfiguration1" maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647">
                <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647"
                              maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647"/>
                <security mode="Transport">
                  <transport clientCredentialType="None" />
                </security>
              </binding>
            </basicHttpBinding>
          </bindings>
          <services>
            <service name="RegistratorService.Registrator" behaviorConfiguration="RegistratorService.Service1Behavior">
              <endpoint address="https://192.168.0.8/MyService.svc" binding="basicHttpBinding"
                        contract="RegistratorService.IRegistrator" bindingConfiguration="BindingConfiguration1">
                <identity>
                  <dns value="localhost" />
                </identity>
              </endpoint>
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
            </service>
          </services>
          <behaviors>
            <serviceBehaviors>
              <behavior name="RegistratorService.Service1Behavior">
                <serviceMetadata httpsGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="true" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
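
    Not a definitive diagnosis, but one combination worth double-checking when the site is SSL-only: the metadata endpoint above uses mexHttpBinding (plain HTTP) while metadata publishing is enabled over HTTPS. Below is a hedged C# sketch of the same transport-secured binding plus an HTTPS metadata endpoint, written as self-hosting code purely to illustrate the settings (under IIS the configuration file stays; the address is just the placeholder from the question, and the service/contract types are the ones named in the config):

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Description;

        class HostSketch
        {
            static void Main()
            {
                var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
                binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.None;

                var host = new ServiceHost(typeof(Registrator),               // service type from the config
                    new Uri("https://192.168.0.8/MyService.svc"));            // placeholder address
                host.AddServiceEndpoint(typeof(IRegistrator), binding, "");   // contract from the config

                // Publish metadata over HTTPS and expose the mex endpoint with an HTTPS mex binding,
                // since a plain mexHttpBinding endpoint cannot be reached on an SSL-only site.
                host.Description.Behaviors.Add(new ServiceMetadataBehavior { HttpsGetEnabled = true });
                host.AddServiceEndpoint(ServiceMetadataBehavior.MexContractName,
                    MetadataExchangeBindings.CreateMexHttpsBinding(), "mex");

                host.Open();
            }
        }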

    Read the article

  • What is the best workaround for the WCF client `using` block issue?

    - by Eric King
    I like instantiating my WCF service clients within a using block as it's pretty much the standard way to use resources that implement IDisposable:

        using (var client = new SomeWCFServiceClient())
        {
            // Do something with the client
        }

    But, as noted in this MSDN article, wrapping a WCF client in a using block could mask any errors that result in the client being left in a faulted state (like a timeout or communication problem). Long story short, when Dispose() is called, the client's Close() method fires, but throws an error because it's in a faulted state. The original exception is then masked by the second exception. Not good.

    The suggested workaround in the MSDN article is to completely avoid using a using block, and to instead instantiate your clients and use them something like this:

        try
        {
            ...
            client.Close();
        }
        catch (CommunicationException e)
        {
            ...
            client.Abort();
        }
        catch (TimeoutException e)
        {
            ...
            client.Abort();
        }
        catch (Exception e)
        {
            ...
            client.Abort();
            throw;
        }

    Compared to the using block, I think that's ugly. And a lot of code to write each time you need a client. Luckily, I found a few other workarounds, such as this one on IServiceOriented. You start with:

        public delegate void UseServiceDelegate<T>(T proxy);

        public static class Service<T>
        {
            public static ChannelFactory<T> _channelFactory = new ChannelFactory<T>("");

            public static void Use(UseServiceDelegate<T> codeBlock)
            {
                IClientChannel proxy = (IClientChannel)_channelFactory.CreateChannel();
                bool success = false;
                try
                {
                    codeBlock((T)proxy);
                    proxy.Close();
                    success = true;
                }
                finally
                {
                    if (!success)
                    {
                        proxy.Abort();
                    }
                }
            }
        }

    Which then allows:

        Service<IOrderService>.Use(orderService =>
        {
            orderService.PlaceOrder(request);
        });

    That's not bad, but I don't think it's as expressive and easily understandable as the using block. The workaround I'm currently trying to use I first read about on blog.davidbarret.net. Basically, you override the client's Dispose() method wherever you use it. Something like:

        public partial class SomeWCFServiceClient : IDisposable
        {
            void IDisposable.Dispose()
            {
                if (this.State == CommunicationState.Faulted)
                {
                    this.Abort();
                }
                else
                {
                    this.Close();
                }
            }
        }

    This appears to allow the using block again without the danger of masking a faulted-state exception. So, are there any other gotchas I have to look out for using these workarounds? Has anybody come up with anything better?
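
    For comparison, here is one more hedged variation on the same idea: a small static helper that keeps call sites nearly as terse as a using block, but aborts the channel whenever Close() cannot run cleanly. The class name is hypothetical, and unlike the MSDN sample above, this version rethrows in every catch so the original exception is never swallowed:

        public static class Wcf
        {
            public static void Use<TClient>(TClient client, Action<TClient> work)
                where TClient : System.ServiceModel.ICommunicationObject
            {
                try
                {
                    work(client);
                    client.Close();
                }
                catch (System.ServiceModel.CommunicationException) { client.Abort(); throw; }
                catch (TimeoutException)                           { client.Abort(); throw; }
                catch (Exception)                                  { client.Abort(); throw; }
            }
        }

        // Usage (SomeWCFServiceClient and PlaceOrder are the placeholders from the examples above):
        // Wcf.Use(new SomeWCFServiceClient(), client => client.PlaceOrder(request));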

    Read the article

  • TFS CM resource recommendations / some questions

    - by John
    I am working with a small development shop that consists of a group of 5 developers and 1 QA person. We are using TFS and need to get more sophisticated on how we use this tool. Currently the development team checks in their code each evening. A nightly build runs and pushes the output out on a network share. Our QA person uses this build for testing the next day. Sometimes the build off the trunk codebase has issues/bugs that hinder the QA process, and it hasn’t been a giant issue in the past, but we now want to get to a state where we have our QA person testing on a stable QA build.

    So I believe we need to create a branch (call it QA), and the developers will continue to develop off the trunk, but the QA person will use builds created from code in the QA branch. Seems simple enough, but we have started doing code reviews as well. So we have another desire in that only code that has been code reviewed can be promoted to the QA branch. Each developer works off a TFS item, and when they check in a changeset, they do it against a TFS item which creates a link between a checked in code file and a TFS item. Eventually the TFS item becomes complete and ready for code review. All code attached to the TFS item is reviewed. How can the versions of these files get promoted to the QA branch?

    In the QA branch, if a bug is found, we want to fix it in the QA branch and have the changes migrated back to the trunk. I believe TFS has a way to automatically do this doesn’t it?

    Long story short, we want to get to a build and CM environment that I believe is pretty standard, but we are unaware of how to make this happen with TFS. Given our situation above, can someone point out a book or website(s) that would address our specific needs? We would like to make this happen without having to get too deep in CM theory or TFS. I very much appreciate any and all suggestions! Thanks, John

    Read the article

  • Network communication for a turn based board game

    - by randooom
    Hi all, my first question here, so please don't be too harsh if something goes wrong :)

    I'm currently a CS student (from Germany, if this info is of any use ;) ) and we got a freely selectable programming assignment, which we have to write as a C++/CLI Windows Forms application. My team, two others and me, decided to go for a network-compatible port of the board game Risk. We divided the work into 3 parts, namely UI, game logic and network. Now we're at the part where we have to get everything working together, and the big question mark is: how do we get the clients synchronized with each other?

    Our approach so far is that each client has all information necessary to calculate and/or execute all possible actions. Actually the clients have all information available at all, aside from the game-initializing phase (add players, select map, etc.), which needs one "super-client" with some extra stuff to control things. This is the standard scenario of our approach:

    - player performs an action, the action is valid and gets executed on the player's client
    - the action is sent over the network
    - the action is executed on the other clients

    The design (i.e. no code so far) we came up with is something like the following pseudo sequence diagram. Gui, Controller and Network implement all possible actions (i.e. all actions which change data) as methods from an interface, so each part can implement the method in a way that gets its job done.

    Example with Action(), on the player's client:

        Player --> Gui.Action()
        Gui --> Controller.Action()
        Controller --> Logic.Action()
        (Logic.Action() == NoError)? Controller --> Network.Action()
        Network --> Parser.ParseAction()
        Network.Send(msg)

    On all other clients:

        Network.Recv(msg)
        Network --> Parser.Deparse(msg)
        Parser --> Logic.Action()
        Logic --> Gui.Action()

    The questions: Is this a viable approach to our task? Any better/easier way to do this? Recommendations, critique? Our knowledge (so you can better target your answer): we are on the beginner side with regard to programming somewhat larger projects with a small team. All of us have some general programming experience and a basic understanding of the .NET libraries and Windows Forms. If you need any further information, please feel free to ask.
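
    Here is a hedged sketch, in C#, of the "every layer implements the same action interface" idea described above (the shape translates directly to C++/CLI). Every name is hypothetical; the point is only that the network layer's implementation of an action serializes it into a message that the remote side parses and replays against its own Logic:

        public interface IGameActions
        {
            void PlaceArmies(int playerId, string territory, int count);
            void Attack(int playerId, string from, string to);
        }

        // The message that actually travels over the wire.
        public class GameMessage
        {
            public string Action;
            public int PlayerId;
            public string[] Args;
        }

        public class NetworkLayer : IGameActions
        {
            public void PlaceArmies(int playerId, string territory, int count)
            {
                Send(new GameMessage { Action = "PlaceArmies", PlayerId = playerId,
                                       Args = new[] { territory, count.ToString() } });
            }

            public void Attack(int playerId, string from, string to)
            {
                Send(new GameMessage { Action = "Attack", PlayerId = playerId,
                                       Args = new[] { from, to } });
            }

            private void Send(GameMessage msg)
            {
                // serialize msg and broadcast it; the receiving side parses the message
                // and calls the same method on its local Logic implementation
            }
        }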

    Read the article

  • How to run an application as root without asking for an admin password?

    - by kvaruni
    I am writing a program in Objective-C (XCode 3.2, on Snow Leopard) that is capable of either selectively blocking certain sites for a duration or only allow certain sites (and thus block all others) for a duration. The reasoning behind this program is rather simple. I tend to get distracted when I have full internet access, but I do need internet access during my working hours to get to a number of work-related websites. Clearly, this is not a permanent block, but only helps me to focus whenever I find myself wandering a bit too much.

    At the moment, I am using a Unix script that is called via AppleScript to obtain Administrator permissions. It then activates a number of ipfw rules and clears those after a specific duration to restore full internet access. Simple and effective, but since I am running as a standard user, it gets cumbersome to enter my administrator password each and every time I want to go "offline". Furthermore, this is a great opportunity to learn to work with XCode and Objective-C.

    At the moment, everything works as expected, minus the actual blocking. I can add a number of sites in a list, specify whether or not I want to block or allow these websites and I can "start" the blocking by specifying a time until which I want to stay "offline". However, I find it hard to obtain clear information on how I can run a privileged Unix command from Objective-C.

    Ideally, I would like to be able to store information with respect to the Administrator account into the Keychain to use these later on, so that I can simply move into "offline" mode with the convenience of clicking a button. Even more ideally, there might be some class in Objective-C with which I can block access to some/all websites for this particular user without needing to rely on privileged Unix commands. A third possibility is in starting this program with root permissions and then reducing the permissions until I need them, but since this is a GUI application that is nested in the menu bar of OS X, the results are rather awkward and getting it to run each and every time with root permission is no easy task.

    Anyone who can offer me some pointers or advice? Please, no security warnings; I am fully aware that what I want to do is a potential security threat.

    Read the article

  • Html Agility Pack: DescendantsOrSelf() not returning HTML element

    - by Program.X
    I have some HTML, e.g.:

        <%@ Page Title="About Us" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true"
            CodeBehind="ContentManagedTargetPage.aspx.cs" Inherits="xxx.ContentManagedTargetPage" %>
        <%@ Register TagPrefix="CxCMS" Namespace="xxx.ContentManagement.ASPNET.UI" Assembly="xxx.ContentManagement.ASPNET" %>

        <asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent">
        </asp:Content>
        <asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
            <h2>
                Content Managed
            </h2>
            <p>
                Put content here.
                [<CxCMS:ContentManagedPlaceHolder Key="keyThingy" runat="server" />]
            </p>
        </asp:Content>

    And I want to find all the instances of the CxCMS:ContentManagedPlaceHolder element. I'm using HTML Agility Pack, which seems the best fit. However, despite looking at the [meagre] documentation, I can't get my code to work. I would expect the following to work:

        string searchForElement = "CxCMS:ContentManagedPlaceHolder";
        IEnumerable<HtmlNode> contentPlaceHolderHtmlNodes = HtmlDocument.DocumentNode.Descendants(searchForElement);
        int count = contentPlaceHolderHtmlNodes.Count();

    But I get nothing back. If I change to DescendantsOrSelf, I get the document node back, "#document" - which is incorrect:

        string searchForElement = "CxCMS:ContentManagedPlaceHolder";
        IEnumerable<HtmlNode> contentPlaceHolderHtmlNodes = HtmlDocument.DocumentNode.DescendantsOrSelf(searchForElement);
        int count = contentPlaceHolderHtmlNodes.Count();

    I also tried using LINQ:

        string searchForElement = "CxCMS:ContentManagedPlaceHolder";
        IEnumerable<HtmlNode> contentPlaceHolderHtmlNodes = HtmlDocument.DocumentNode.DescendantsOrSelf().Where(q => q.Name == searchForElement);
        int count = contentPlaceHolderHtmlNodes.Count();

    As neither of these methods work, I moved on to using SelectNodes instead:

        string searchForElement = "CxCMS:ContentManagedPlaceHolder";
        string xPath = "//" + searchForElement; // "//CxCMS:ContentManagedPlaceHolder"
        var nodes = HtmlDocument.DocumentNode.SelectNodes(xPath);

    This just throws the exception: "Namespace Manager or XsltContext needed. This query has a prefix, variable, or user-defined function.". I can't find any way of adding namespace management to the HtmlDocument object.

    What am I missing here? The DescendantsOrSelf() method works if using a "standard" HTML tag, such as "p", but not the one I have. Surely it should work? (It needs to!)
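
    Two hedged guesses worth ruling out, sketched below rather than asserted as the fix: Html Agility Pack tends to store parsed element names lower-cased, so an exact mixed-case name comparison can come up empty, and an XPath name() test avoids the namespace-manager requirement that the "CxCMS:" prefix triggers in a plain //CxCMS:... query. The variable and method names in the sketch are placeholders:

        using System;
        using System.Linq;
        using HtmlAgilityPack;

        static class PlaceholderSearch
        {
            public static void Find(string pageMarkup) // pageMarkup: the ASPX source shown above
            {
                var doc = new HtmlDocument();
                doc.LoadHtml(pageMarkup);

                // 1) Compare names case-insensitively in case the parsed name is stored lower-cased.
                var byName = doc.DocumentNode
                    .Descendants()
                    .Where(n => string.Equals(n.Name, "CxCMS:ContentManagedPlaceHolder",
                                              StringComparison.OrdinalIgnoreCase))
                    .ToList();

                // 2) Or use an XPath name() test, which needs no namespace manager; if it
                //    returns null, try the original casing as well.
                var byXPath = doc.DocumentNode
                    .SelectNodes("//*[name()='cxcms:contentmanagedplaceholder']");
            }
        }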

    Read the article

  • Why am I seeing a crash when trying to call CDHtmlDialog::OnInitDialog()

    - by Tim
    I added a Help > About menu item to my MFC app. I decided to make the dialog derive from CDHtmlDialog. I override the OnInitDialog() method in my derived class and the first thing I do is call the parent's OnInitDialog() method. I then put in code that sets the title. On some machines this works fine, but on others it crashes in the call to CDHtmlDialog::OnInitDialog() - trying to read a null pointer. The call stack has nothing useful - it is in mfc90.dll. Is this a potential problem with mismatches of MFC/Win32 DLLs? It works on my Vista machines but crashes on a Win2003 Server box.

        BOOL HTMLAboutDlg::OnInitDialog()
        {
            // CRASHES on the following line
            CDHtmlDialog::OnInitDialog();

            CString title = "my title"; // example of setting title

            // I try to get version info and set the title
            CModuleVersion ver;
            char filename[ _MAX_PATH ];
            GetModuleFileName( AfxGetApp()->m_hInstance, filename, _MAX_PATH );
            ver.GetFileVersionInfo(filename);

            // get version from VS_FIXEDFILEINFO struct
            CString s;
            s.Format("Version: %d.%d.%d.%d\n",
                HIWORD(ver.dwFileVersionMS), LOWORD(ver.dwFileVersionMS),
                HIWORD(ver.dwFileVersionLS), LOWORD(ver.dwFileVersionLS));

            CString version = ver.GetValue(_T("ProductVersion"));
            version.Remove(' ');
            version.Replace(",", ".");
            title = "MyApp - Version " + version;
            SetWindowText(title);

            return TRUE; // return TRUE unless you set the focus to a control
        }

    And here is the relevant header file:

        class HTMLAboutDlg : public CDHtmlDialog
        {
            DECLARE_DYNCREATE(HTMLAboutDlg)

        public:
            HTMLAboutDlg(CWnd* pParent = NULL); // standard constructor
            virtual ~HTMLAboutDlg();

            // Overrides
            HRESULT OnButtonOK(IHTMLElement *pElement);
            HRESULT OnButtonCancel(IHTMLElement *pElement);

            // Dialog Data
            enum { IDD = IDD_DIALOG_ABOUT, IDH = IDR_HTML_HTMLABOUTDLG };

        protected:
            virtual void DoDataExchange(CDataExchange* pDX); // DDX/DDV support
            virtual BOOL OnInitDialog();

            DECLARE_MESSAGE_MAP()
            DECLARE_DHTML_EVENT_MAP()
        };

    I can't figure out what is going on - why it works on some machines and crashes on others. Both have VS2008 installed.

    EDIT: VS/MFC runtime versions:

        Vista (no crashes): 9.0.30729.1 SP
        2003 Server (crashes): 9.0.21022.8 RTM

    Read the article

  • quartz: preventing concurrent instances of a job in jobs.xml

    - by Jason S
    This should be really easy. I'm using Quartz running under Apache Tomcat 6.0.18, and I have a jobs.xml file which sets up my scheduled job that runs every minute. What I would like to do, is if the job is still running when the next trigger time rolls around, I don't want to start a new job, so I can let the old instance complete. Is there a way to specify this in jobs.xml (prevent concurrent instances)? If not, is there a way I can share access to an in-memory singleton within my application's Job implementation (is this through the JobExecutionContext?) so I can handle the concurrency myself? (and detect if a previous instance is running)

    update: After floundering around in the docs, here's a couple of approaches I am considering, but either don't know how to get them to work, or there are problems.

    - Use StatefulJob. This prevents concurrent access... but I'm not sure what other side-effects would occur if I use it, also I want to avoid the following situation: Suppose trigger times would be every minute, i.e. trigger #0 = at time 0, trigger #1 = 60000 msec, #2 = 120000, #3 = 180000, etc., and the trigger #0 at time 0 fires my job which takes 130000 msec. With a plain Job, this would execute triggers #1 and #2 while job trigger #0 is still running. With a StatefulJob, this would execute triggers #1 and #2 in order, immediately after #0 finishes at 130000. I don't want that; I want #1 and #2 not to run, and the next trigger that runs a job should take place at #3 (180000 msec). So I still have to do something else with StatefulJob to get it to work the way I want, so I don't see much of an advantage to using it.

    - Use a TriggerListener to return true from vetoJobExecution(). Although implementing the interface seems straightforward, I have to figure out how to set up one instance of a TriggerListener declaratively. Can't find the docs for the xml file.

    - Use a static shared thread-safe object (e.g. a semaphore or whatever) owned by my class that implements Job. I don't like the idea of using singletons via the static keyword under Tomcat/Quartz, not sure if there are side effects. Also I really don't want them to be true singletons, just something that is associated with a particular job definition.

    - Implement my own Trigger which extends SimpleTrigger and contains shared state that could run its own TriggerListener. Again, I don't know how to set up the XML file to use this trigger rather than the standard <trigger><simple>...</simple></trigger>.

    Read the article

  • How to modularize a JSF/Facelets/Spring application with OSGi?

    - by lexicore
    I'm working with very large JSF/Facelets applications which use Spring for DI/bean management. My applications have modular structure and I'm currently looking for approaches to standardize the modularization. My goal is to compose a web application from a number of modules (possibly depending on each other). Each module may contain the following: Classes; Static resources (images, CSS, scripts); Facelet templates; Managed beans - Spring application contexts, with request, session and application-scoped beans (alternative is JSF managed beans); Servlet API stuff - servlets, filters, listeners (this is optional). What I'd like to avoid (almost at all costs) is the need to copy or extract module resources (like Facelets templates) to the WAR or to extend the web.xml for module's servlets, filters, etc. It must be enough to add the module (JAR, bundle, artifact, ...) to the web application (WEB-INF/lib, bundles, plugins, ...) to extend the web application with this module. Currently I solve this task with a custom modularization solution which is heavily based on using classpath resources: Special resources servlet serves static resources from classpath resources (JARs). Special Facelets resource resolver allows loading Facelet templates from classpath resources. Spring loads application contexts via the pattern classpath*:com/acme/foo/module/applicationContext.xml - this loads application contexts defined in module JARs. Finally, a pair of delegating servlets and filters delegate request processing to the servlets and filters configured in Spring application contexts from modules. Last days I read a lot about OSGi and I was considering, how (and if) I could use OSGi as a standardized modularization approach. I was thinking about how individual tasks could be solved with OSGi: Static resources - OSGi bundles which want to export static resources register a ResourceLoader instances with the bundle context. A central ResourceServlet uses these resource loaders to load resources from bundles. Facelet templates - similar to above, a central ResourceResolver uses services registered by bundles. Managed beans - I have no idea how to use an expression like #{myBean.property} if myBean is defined in one of the bundles. Servlet API stuff - use something like WebExtender/Pax Web to register servlets, filters and so on. My questions are: Am I inventing a bicycle here? Are there standard solutions for that? I've found a mentioning of Spring Slices but could not find much documentation about it. Do you think OSGi is the right technology for the described task? Is my sketch of OSGI application more or less correct? How should managed beans (especially request/session scope) be handled? I'd be generally graefult for your comments.

    Read the article

  • TinyMCE is modifying the XHTML 1.0 Strict HTML I input. How can I stop it?

    - by Matt
    The code I want to have saved through TinyMCE is as follows:

        <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" width="550" height="90" id="homepage-banner">
          <param name="movie" value="/images/header.swf" />
          <param name="wmode" value="transparent" />
          <!--[if !IE]>-->
          <object type="application/x-shockwave-flash" data="/images/header.swf" width="550" height="90">
            <param name="wmode" value="transparent" />
            <!--<![endif]-->
            <img src="/images/header.jpg" width="550" height="90" alt="" border="0" />
            <!--[if !IE]>-->
          </object>
          <!--<![endif]-->
        </object>

    Sadly, what I end up with is:

        <object data="/images/header.swf" height="90" type="application/x-shockwave-flash" width="550">
          <param name="id" value="homepage-banner" />
          <param name="wmode" value="transparent" />
          <param name="src" value="/images/header.swf" />
        </object>

    The purpose of the stripped parts of the code is to provide a fallback image if Flash is not available on the client. In my tinyMCE.init({ ... }); I am using verify_html: true and valid_elements is set as per this forum topic, whereby all valid XHTML 1.0 Strict elements are allowed. I have checked and the above code does comply with the XHTML 1.0 Strict standard. I have tried just setting verify_html to false but it had no effect. How can TinyMCE be configured to leave my HTML alone?!

    Read the article

  • Can't create file in Ada 95

    - by duder
    Hello, I'm trying to follow a standard reference for opening files but running into a constraint_error at the line when I call Ada.Text_IO.Create(). It says "range check failed". Any help appreciated; here's the code:

        WITH Ada.Text_IO;
        WITH Ada.Integer_Text_IO;
        USE Ada.Text_IO;
        USE Ada.Integer_Text_IO;

        PROCEDURE FileManip IS

           --Variables
           Start_Int  : Integer;
           Stop_Int   : Integer;
           Max_Length : Integer;

           --Output File
           MaxName : CONSTANT Positive := 80;
           SUBTYPE NameRange IS Positive RANGE 1..MaxName;
           OutFileName   : String(NameRange) := (OTHERS => '#');
           OutNameLength : NameRange;
           OutData       : File_Type;

           --Array
           TYPE Chain_Array IS ARRAY(1..500) OF Integer;
           Sum : Integer := 1;

        BEGIN

           --Inputs
           Ada.Text_IO.Put(Item => "Enter a starting Integer: ");
           Ada.Integer_Text_IO.Get(Item => Start_Int);
           Ada.Text_IO.New_Line;

           Ada.Text_IO.Put(Item => "Enter a stopping Integer: ");
           Ada.Integer_Text_IO.Get(Item => Stop_Int);
           Ada.Text_IO.New_Line;

           Ada.Text_IO.Put(Item => "Enter a Maximum Length to search: ");
           Ada.Integer_Text_IO.Get(Item => Max_Length);
           Ada.Text_IO.New_Line;

           Ada.Text_IO.Put(Item => "Enter a output file name > ");
           Ada.Text_IO.Get_Line(Item => OutFileName, Last => OutNameLength);

           Ada.Text_IO.Create(File => OutData,
                              Mode => Ada.Text_IO.Out_File,
                              Name => OutFileName(1..OutNameLength));
           Ada.Text_IO.New_Line;

    Read the article

  • How to get jQuery to work with Prototype

    - by thinkfuture
    Ok so here is the situation. Been pulling my hair out on this one. I'm a noob at this. Only been using Rails for about 6 weeks.

    I'm using the standard setup package, and my code leverages Prototype helpers heavily. Like I said, noob ;) So I'm trying to put in some jQuery effects, like PrettyPhoto. But what happens is that when the page is first loaded, PrettyPhoto works great. However, once someone uses a Prototype helper, like a link created with link_to_remote, PrettyPhoto stops working.

    I've tried jRails, all of the fixes proposed on the jQuery site to stop conflicts... http://docs.jquery.com/Using_jQuery_with_Other_Libraries ...even done some crazy things like renaming all of the $ in prototype.js to $$$ to no avail. Either the Prototype helpers break, or jQuery breaks. Seems nothing I do can get these to work together. Any ideas?

    Here is part of my application.html.erb:

        <%= javascript_include_tag 'application' %>
        <%= javascript_include_tag 'tooltip' %>
        <%= javascript_include_tag 'jquery' %>
        <%= javascript_include_tag 'jquery-ui' %>
        <%= javascript_include_tag "jquery.prettyPhoto" %>
        <%= javascript_include_tag 'prototype' %>
        <%= javascript_include_tag 'scriptalicious' %>
        </head>
        <body>
        <script type="text/javascript" charset="utf-8">
          jQuery(document).ready( function() {
            jQuery("a[rel^='prettyPhoto']").prettyPhoto();
          });
        </script>

    If I put prototype before jquery, the prototype helpers don't work. If I put the noconflict clause in, neither works. Thanks in advance! Chris

    BTW: when I try this, from the jQuery site:

        <script>
          jQuery.noConflict();

          // Use jQuery via jQuery(...)
          jQuery(document).ready(function(){
            jQuery("div").hide();
          });

          // Use Prototype with $(...), etc.
          $('someid').hide();
        </script>

    my page disappears!

    Read the article

  • PHP-based LaTeX parser -- where to begin?

    - by Alex Basson
    The project: I want to build a LaTeX-to-MathML translator in PHP. Why? Because I'm a mathematician, and I want to publish math on my Drupal site. It doesn't have to translate all of LaTeX, since the basic document-level stuff is ably handled by the CMS and wouldn't be written in LaTeX to begin with; it just has to translate math written in LaTeX into math written in MathML.

    Although I feel as though I've done my due diligence, this doesn't seem to exist already. Maybe I'm wrong---if you know of something that would serve this purpose, by all means let me know, and thank you in advance. But assuming it doesn't exist, I guess I have to go write it myself.

    Here's the thing, though: I've never done anything this ambitious. I don't really know where to begin. I've used PHP for years, but just to do the standard "build a CMS with PHP and MySQL"-type of stuff. I've never attempted anything as seemingly sophisticated as translation from one language to another.

    I'm just dumb enough to consider doing it with regex---after all, LaTeX is a much more formal language, and it doesn't allow for nearly the kinds of pathological edge-cases as, say, HTML. But on the other hand, I'm just smart enough to realize this is probably a terrible idea: now I have two problems, and I sure don't want to end up like this guy.

    So if that's not the way to go (right?), what is? How should I start thinking about this problem? Am I essentially writing a LaTeX compiler in PHP, and if so, what do I need to know to do that (like, should I just go read the Purple Dragon book first?)?

    I'm both really excited and pretty intimidated by the prospect of this project, but hey, this is how we all learn to be programmers, right? If something we need doesn't exist, we go and build it; necessity is the mother of... you get the point. Tremendous thanks to everyone in advance for any and all guidance you can offer.

    Read the article
