Search Results

Search found 12047 results on 482 pages for 'general debugging tidbits'.


  • How to write C++ audio processing applications?

    - by cesko82
    Hi everyone, I'm an Electronics and Telecommunications student close to graduation. I'm going to work on a project that draws on my knowledge of DSP, music, and audio in general. I already know the basic mathematical tools I need to manage it, such as the FFT, circular convolution, etc. I want to learn C++ basically for one reason: it's very important in the professional world! And I think it's one of the languages most used for audio applications, especially for real-time processing. OK, after this small introduction, I would first like to know: which libraries are most commonly used for audio processing in C++? I searched the web for a long time but couldn't find a lot of working material. (I work under Linux with the Eclipse CDT environment.) Then I would like to know if there are good sources for learning how to write some working code, for example how to write a simple low-pass filter. For now I won't be writing real-time applications; I would like to start with the processing of a WAV file, or even better an MP3 file, so basically with vectors of samples. Let's say that for now I would like to extract the waveform from an audio file and save it to a thumbnail or a PNG image. OK, for now I think that's all I need. Any ideas, advice, libraries, books, or interesting sources on this? Thanks a lot in advance for any kind of answer. Giovanni.
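
    A minimal sketch of the filter part, assuming the WAV/MP3 has already been decoded to a vector of floats (e.g. with a library such as libsndfile); the function name and parameters are illustrative:

        #include <cstddef>
        #include <vector>

        // One-pole low-pass filter: y[n] = y[n-1] + a * (x[n] - y[n-1]).
        // 'cutoffHz' and 'sampleRate' determine the smoothing coefficient 'a'.
        std::vector<float> lowPass(const std::vector<float>& in,
                                   float cutoffHz, float sampleRate)
        {
            const float twoPi = 6.2831853f;
            const float dt = 1.0f / sampleRate;
            const float rc = 1.0f / (twoPi * cutoffHz);
            const float a  = dt / (dt + rc);

            std::vector<float> out(in.size());
            float y = 0.0f;
            for (std::size_t n = 0; n < in.size(); ++n) {
                y += a * (in[n] - y);   // move y toward the current sample
                out[n] = y;
            }
            return out;
        }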

    Read the article

  • How best to use XPath with very large XML files in .NET?

    - by glenatron
    I need to do some processing on fairly large XML files (large here being potentially upwards of a gigabyte) in C#, including performing some complex XPath queries. The problem I have is that the standard way I would normally do this through the System.Xml libraries likes to load the whole file into memory before it does anything with it, which can cause memory problems with files of this size. I don't need to update the files at all, just read them and query the data contained in them. Some of the XPath queries are quite involved and go across several levels of parent-child relationships; I'm not sure whether this will affect the ability to use a stream reader rather than loading the data into memory as a block. One way I can see of making it work is to perform the simple analysis using a stream-based approach, and perhaps wrap the XPath statements into XSLT transformations that I could run across the files afterward, although it seems a little convoluted. Alternatively, I know that there are some elements that the XPath queries will not run across, so I guess I could break the document up into a series of smaller fragments based on its original tree structure, which could perhaps be small enough to process in memory without causing too much havoc. I've tried to explain my objective here, so if I'm barking up totally the wrong tree in terms of general approach I'm sure you folks can set me right...
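
    For what it's worth, the fragment idea can be sketched with XmlReader plus XPathDocument: stream the file and materialize only one candidate subtree at a time for querying. The element names here are placeholders, assuming a repeating record element:

        using System;
        using System.Xml;
        using System.Xml.XPath;

        class FragmentQuery
        {
            static void Main()
            {
                // Stream the large file; only one <record> subtree is in memory at a time.
                using (XmlReader reader = XmlReader.Create("big.xml"))
                {
                    while (reader.ReadToFollowing("record"))
                    {
                        using (XmlReader sub = reader.ReadSubtree())
                        {
                            // The subtree is small enough to load for full XPath support.
                            XPathNavigator nav = new XPathDocument(sub).CreateNavigator();
                            XPathNodeIterator hits =
                                nav.Select("descendant::item[@status='active']");
                            while (hits.MoveNext())
                                Console.WriteLine(hits.Current.Value);
                        }
                    }
                }
            }
        }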

    Read the article

  • XPath with optional tbody element

    - by Phrogz
    As in this Stack Overflow answer imagine that you need to select a particular table and then all the rows of it. Due to the permissiveness of HTML, all three of the following are legal markup: <table id="foo"><tr>...</tr></table> <table id="foo"><tbody><tr>...</tr></tbody></table> <table id="foo"><tr>...</tr><tbody><tr>...</tr></tbody></table> You are worried about tables nested in tables, and so don't want to use an XPath like table[@id="foo"]//tr. If you could specify your desired XPath as a regex, it might look something like: table[@id="foo"](/tbody)?/tr In general, how can you specify an XPath expression that allows an optional element in the hierarchy of a selector? To be clear, I'm not trying to solve a real-world problem or select a specific element of a specific document. I'm asking for techniques to solve a class of problems.
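
    One standard technique for a single optional level is a union of the explicit alternatives, which works in plain XPath 1.0:

        table[@id="foo"]/tr | table[@id="foo"]/tbody/tr

    For deeper or repeated optional layers this gets verbose quickly, which is what makes the general question interesting.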

    Read the article

  • Javascript scope chain

    - by Geromey
    Hi, I am trying to optimize my program. I think I understand the basics of closures, but I am confused about the scope chain. I know that in general you want variables resolved low in the scope chain (to access them quickly). Say I have the following object:

        var my_object = (function () {
            // private variables
            var a_private = 0;
            return {
                // public variables
                a_public: 1,
                // public methods
                some_public: function () {
                    debugger;
                    alert(this.a_public);
                    alert(a_private);
                }
            };
        })();

    My understanding is that if I am in the some_public method I can access the private variables faster than the public ones. Is this correct? My confusion comes with the scope level of this. When the code is stopped at debugger, Firebug shows the public variable inside the this keyword, but this itself is not inside a scope level. How fast is accessing this? Right now I am storing any this properties in local variables to avoid accessing them multiple times. Thanks very much!

    Read the article

  • Map large integer to a phrase

    - by Alexander Gladysh
    I have a large and "unique" integer (actually a SHA1 hash). I want (for no other reason than to have fun) to find an algorithm to convert that SHA1 hash to a (pseudo-)English phrase. The conversion should be reversible (i.e., knowing the algorithm, one must be able to convert the phrase back to the SHA1 hash). The possible usage of the generated phrase: a human-readable version of a Git commit ID, like a motto for a given program version (which is built from that commit). (As I said, this is "for fun". I don't claim that this is very practical, or much more readable than the SHA1 itself.) A better algorithm would produce shorter, more natural-looking, more unique phrases. The phrase need not make sense; I would even settle for a whole paragraph of nonsense. (Though the quality, the English-ness, of a paragraph should probably be better than for a mere phrase.) A variation: it is OK if I can work with only a part of the hash. Say, the first six digits are OK. Possible approach: in the past I've attempted to build a probability table (of words) and generate phrases as Markov chains, seeding the generator (picking branches from the probability tree) according to the bits I read from the SHA. This was not very successful; the resulting phrases were too long and ugly. I'm not sure if this was a bug or a general flaw in the algorithm, since I had to abandon it early enough. Now I'm thinking about attempting to solve the problem once again. Any advice on how to approach this? Do you think the Markov chain approach can work here? Something else?
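
    For contrast with the Markov idea, here is a sketch of the fixed word-list approach used by mnemonic encoders (in the spirit of the PGP word list): each word encodes a fixed number of bits, so the mapping is trivially reversible. The 16-word list below is a toy giving 4 bits per word; a 2048-word list would encode 11 bits per word and shorten the phrase considerably.

        WORDS = ["ant", "bear", "crow", "deer", "fish", "goat", "hawk", "ibex",
                 "jay", "kite", "lark", "mole", "newt", "owl", "pike", "ram"]
        BITS = 4  # log2(len(WORDS))

        def to_phrase(hexdigest, nbits=24):
            # Encode the top 'nbits' of the hash, most significant word first.
            value = int(hexdigest, 16) >> (len(hexdigest) * 4 - nbits)
            words = []
            for _ in range(nbits // BITS):
                words.append(WORDS[value & (2 ** BITS - 1)])
                value >>= BITS
            return " ".join(reversed(words))

        def from_phrase(phrase):
            value = 0
            for word in phrase.split():
                value = (value << BITS) | WORDS.index(word)
            return "%06x" % value

        digest = "356a192b7913b04c54574d18c28d46e6395428ab"  # sha1("1")
        phrase = to_phrase(digest)                            # "deer goat hawk lark bear jay"
        assert from_phrase(phrase) == digest[:6]

    This covers the "first six digits" variation directly; naturalness would come from choosing word lists per position (adjective, noun, verb, ...) rather than one flat list.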

    Read the article

  • Backwards compatibility when using Core Data

    - by Alex
    Could anybody shed some light on why my app is crashing with the following error on iPhone OS 2.2.1?

        dyld: Symbol not found: _OBJC_CLASS_$_NSPredicate
          Referenced from: /var/mobile/Applications/456F243F-468A-4969-9BB7-A4DF993AE89C/AppName.app/AppName
          Expected in: /System/Library/Frameworks/Foundation.framework/Foundation

    I have weak-linked CoreData.framework, and have the Base SDK set to 3.0 and the Deployment Target set to SDK 2.2. The app already uses other 3.0 features when available, and I did not have any problems with those. But apparently the backward-compatibility techniques used for the other features do not work with Core Data. The app crashes before the app delegate's applicationDidFinishLaunching gets called. Here's the debugger log:

        [Session started at 2010-05-25 20:17:03 -0400.]
        GNU gdb 6.3.50-20050815 (Apple version gdb-1119) (Thu May 14 05:35:37 UTC 2009)
        Copyright 2004 Free Software Foundation, Inc.
        GDB is free software, covered by the GNU General Public License, and you are
        welcome to change it and/or distribute copies of it under certain conditions.
        Type "show copying" to see the conditions.
        There is absolutely no warranty for GDB. Type "show warranty" for details.
        This GDB was configured as "--host=i386-apple-darwin --target=arm-apple-darwin".
        tty /dev/ttys001
        Loading program into debugger…
        sharedlibrary apply-load-rules all
        warning: Unable to read symbols from "MessageUI" (not yet mapped into memory).
        warning: Unable to read symbols from "CoreData" (not yet mapped into memory).
        Program loaded.
        target remote-mobile /tmp/.XcodeGDBRemote-12038-42
        Switching to remote-macosx protocol
        mem 0x1000 0x3fffffff cache
        mem 0x40000000 0xffffffff none
        mem 0x00000000 0x0fff none
        run
        Running…
        [Switching to thread 10755]
        [Switching to thread 10755]
        Re-enabling shared library breakpoint 1
        Re-enabling shared library breakpoint 2
        Re-enabling shared library breakpoint 3
        Re-enabling shared library breakpoint 4
        Re-enabling shared library breakpoint 5
        (gdb) continue
        warning: Unable to read symbols for ""/Users/alex/iPhone Projects/AppName/build/Debug-iphoneos"/AppName.app/AppName" (file not found).
        dyld: Symbol not found: _OBJC_CLASS_$_NSPredicate
          Referenced from: /var/mobile/Applications/456F243F-468A-4969-9BB7-A4DF993AE89C/AppName.app/AppName
          Expected in: /System/Library/Frameworks/Foundation.framework/Foundation
        (gdb)
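
    (For reference, a hedged sketch of the usual guard: a direct reference to a 3.0-only class makes dyld resolve the class symbol at load time even on 2.x, whereas looking the class up by name avoids emitting the reference at all. The fetch line and someName variable are purely illustrative.)

        // Avoid any compile-time reference to NSPredicate, so dyld on 2.x
        // never has to resolve _OBJC_CLASS_$_NSPredicate.
        Class predicateClass = NSClassFromString(@"NSPredicate");
        if (predicateClass != nil) {
            // 3.0+ path: Core Data / NSPredicate are safe to use here.
            NSPredicate *predicate =
                [predicateClass predicateWithFormat:@"name == %@", someName];
            // ... run the fetch ...
        } else {
            // 2.x fallback: plain SQLite or property-list storage.
        }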

    Read the article

  • Mercurial Hg Clone fails on C# project with GUID

    - by AnneTheAgile
    UPDATE: In trying to replicate this problem one more time to answer your questions, I could not! I can only conclude that my initial setup of Mercurial was problematic, and/or that I was trying to check in a build that had failed compilation before the checkin. Sigh! Thank you so very much for your help. I gave credit for the help on how to do a script; I need to try that for general purposes. Hi all, I hope you can help me :). I am trying to see if Mercurial would be a good DVCS for my project at work, and I'm surely a newbie to many things. We have a fairly large codebase in C# (.NET 3.0, not 3.5, on Windows XP) and it utilizes the GUID feature. I confess to knowing little about how or why we use the GUID, but I do know that I cannot touch it. So when I try hg clone, it fails unless I change the GUID in the cloned directory (i.e. create a new GUID in Visual Studio and then paste that new GUID to replace the old one). To me, this completely defeats the purpose and utility of quick, easy clones. It also makes difficult all the many workflows that require multiple clones. Is there a workaround, or is there something I'm doing wrong? How can I simplify and/or remove this problem? Would Bazaar make this easier? Thank you!

    Read the article

  • How can I do batch image processing with ImageJ in Java or clojure?

    - by Robert McIntyre
    I want to use ImageJ to do some processing on several thousand images. Is there a way to take any general ImageJ plugin and apply it to hundreds of images automatically? For example, say I want to take my thousand images and apply a polar transformation to each. A polar transformation plugin for ImageJ can be found here: http://rsbweb.nih.gov/ij/plugins/polar-transformer.html Great! Let's use it. From [http://albert.rierol.net/imagej_programming_tutorials.html#How%20to%20automate%20an%20ImageJ%20dialog] I find that I can apply a plugin using the following:

        (defn x-polar [imageP]
          (let [thread (Thread/currentThread)
                options ""]
            (.setName thread "Run$_polar-transform")
            (Macro/setOptions thread options)
            (IJ/runPlugIn imageP "Polar_Transformer" "")))

    This is good because it suppresses the dialog which would otherwise pop up for every image. But running this always brings up a window containing the transformed image, when what I want is simply to return the transformed image. The crudest way to do what I want is to just close the window that comes up and return the image which it was displaying. This does what I want but is an ugly hack:

        (defn x-polar [imageP]
          (let [thread (Thread/currentThread)
                options ""]
            (.setName thread "Run$_polar-transform")
            (Macro/setOptions thread options)
            (IJ/runPlugIn imageP "Polar_Transformer" "")
            (let [return-image (IJ/getImage)]
              (.hide return-image)
              return-image)))

    I'm obviously missing something about how to use ImageJ plugins in a programming context. Does anyone know the right way to do this? Thanks, --Robert McIntyre
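
    For the batch half of the question, a hedged sketch of a driver loop, assuming x-polar is fixed to return the transformed ImagePlus, and assuming ij.IJ/openImage and ij.IJ/saveAs behave as in recent ij.jar builds (the paths and "png" format are illustrative):

        (defn batch-polar [in-dir out-dir]
          (doseq [f (.listFiles (java.io.File. in-dir))]
            (let [imp   (ij.IJ/openImage (.getAbsolutePath f))
                  polar (x-polar imp)]
              ;; save the transformed image instead of showing it
              (ij.IJ/saveAs polar "png" (str out-dir "/" (.getName f))))))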

    Read the article

  • PHP (A few questions) OO, refactoring, eclipse

    - by jax
    I am using PHP in Eclipse. It works OK: I can connect to my remote site, there is colour coding of code elements, and there are some code hints. I realise this may be too long to answer in full; if you have a good answer for one part, answering just that is fine. Firstly, general coding: I have found that it is easy to lose track of included files and their variables. For example, if there was a database $cursor, it is difficult to remember, or even know, that it was declared in an included file (this becomes much worse the more files you include). How are people dealing with this? How are people documenting their code, in particular the required GET and POST data? Secondly, OO development: should I be going fully OO in my development? Currently I have a functions library which I can include, and I have separated each "task" into a separate file. It is a bit nasty but it works. If I go OO, how do I structure the directories in PHP? Java uses packages; what about PHP? How should I name my files: all lower case with underscores for spaces ("hello_world.php")? Should I name classes with uppercase like Java ("HelloWorld.php")? Is there a different naming convention for classes and regular function files? (See the sketch below for one common convention.) Thirdly, refactoring: I must say this is a real pain. If I change the name of a variable in one place, I have to go through the whole document, and each file that included this file, and change the name there too. Of course, errors everywhere is what results. How are people dealing with this problem? In Java, if you change the name in one place it changes everywhere. Are there any plugins to improve PHP refactoring? I am using the official PHP version of Eclipse from their website. Thanks
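
    On the directory-structure question, a minimal sketch of one common convention: one class per file, StudlyCaps class names, and an autoloader instead of scattered includes. The classes/ path and the DatabaseConnection class are invented for illustration, and the closure syntax assumes PHP 5.3+:

        <?php
        spl_autoload_register(function ($class) {
            // new DatabaseConnection() pulls in classes/DatabaseConnection.php
            require __DIR__ . '/classes/' . $class . '.php';
        });

        $db = new DatabaseConnection(); // loaded on demand, no include needed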

    Read the article

  • Can I create template-based library objects in Dreamweaver CS5?

    - by Danjah
    At work we need two 'streams' of template. The first is general layout templates, like the ones already available in the MX through CS5 packages (except we'd have our own customised ones). The second is more granular objects, some of which are functional. In both cases, I don't want Jimmy to be able to wreak havoc inside anything other than the 'editable regions' which make up the templates. Now this is fine if I stick with the first scenario (layout templates), where there's simply a big chunk of editable region for good ole Jim to sprawl into; think of this as the 'body content' area. But I really do need these granular library (or snippet) objects to work in the same way. Unfortunately, with my attempts so far they don't work as I'd have thought, perhaps for good reason? When I create a blank template and throw in my chunk of HTML (unobtrusive JS and external CSS use selectors in this HTML to provide style and function) and save it as a new library item or snippet, all looks well. Then I create a new document based on a layout template and save it as a plain HTML file (still all good so far). Next I drop in my custom library item... still all good... but then I go to save the document and it only allows me to save it as a new template! I expected it would just allow me to save it as HTML and have it simply respect the defined editable regions, as happens in the containing page's 'body content' editable region. Apologies if that got specific and technical quite quickly, but it is quite particular. If you want some example files, let me know and I'll zip some up. Many thanks :) P.S. It is not a requirement that library objects must somehow inject their dependency files into the newly created page; I already know what they'll be. Also, I know I must 'detach from original' once I drop a library item into a document, which then allows customisation of the library object.

    Read the article

  • How to properly implement cheat codes?

    - by Axarydax
    Hi, what would be the best way to implement cheat codes in general? I have a WinForms application in mind, where a cheat code would unlock an easter egg, but the implementation details are not relevant. The best approach that comes to my mind is to keep an index for each code. Let's consider the famous DOOM codes, IDDQD and IDKFA, in a fictional C# app:

        string[] CheatCodes = { "IDDQD", "IDKFA" };
        int[] CheatIndexes = { 0, 0 };
        const int CHEAT_COUNT = 2;

        void KeyPress(char c)
        {
            for (int i = 0; i < CHEAT_COUNT; i++) // for each cheat code
            {
                if (CheatCodes[i][CheatIndexes[i]] == c)
                {
                    // we have hit the next key in the sequence
                    if (++CheatIndexes[i] == CheatCodes[i].Length) // are we at the end?
                    {
                        // do cheat work
                        MessageBox.Show(CheatCodes[i]);
                        // reset cheat index so we can enter it next time
                        CheatIndexes[i] = 0;
                    }
                }
                else // mistyped, reset cheat index
                {
                    CheatIndexes[i] = 0;
                }
            }
        }

    Is this the right way to do it?
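
    An alternative sketch (a common simplification, not from the question): keep the last few keystrokes in a buffer and test whether it ends with any code. This trades the per-code index state for a short string comparison on each keypress.

        private string recentKeys = "";
        private static readonly string[] Codes = { "IDDQD", "IDKFA" };
        private const int MaxCodeLength = 5; // length of the longest code

        void OnKeyPress(char c)
        {
            recentKeys += c;
            if (recentKeys.Length > MaxCodeLength) // keep only the tail we can match
                recentKeys = recentKeys.Substring(recentKeys.Length - MaxCodeLength);

            foreach (string code in Codes)
            {
                if (recentKeys.EndsWith(code))
                {
                    MessageBox.Show(code); // unlock the easter egg
                    recentKeys = "";
                }
            }
        }

    It also sidesteps a subtlety of the index approach: after a mistyped key, the buffer version still matches a code whose prefix overlaps the failed attempt (e.g. "IDIDDQD").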

    Read the article

  • Replicating SQL's 'Join' in Python

    - by Daniel Mathews
    I'm in the process of trying to switch from R to Python (mainly for issues around general flexibility). With NumPy, matplotlib and IPython, I've been able to cover all my use cases save for merging 'datasets'. I would like to simulate SQL's join by clause (inner, outer, full) purely in Python. R handles this with the 'merge' function. I've tried numpy.lib.recfunctions' join_by, but it has critical issues with duplicates along the 'key':

        join_by(key, r1, r2, jointype='inner', r1postfix='1', r2postfix='2',
                defaults=None, usemask=True, asrecarray=False)

    Join arrays r1 and r2 on key key. The key should be either a string or a sequence of string corresponding to the fields used to join the array. An exception is raised if the key field cannot be found in the two input arrays. Neither r1 nor r2 should have any duplicates along key: the presence of duplicates will make the output quite unreliable. Note that duplicates are not looked for by the algorithm.

    source: http://presbrey.mit.edu:1234/numpy.lib.recfunctions.html

    Any pointers or help will be most appreciated!
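
    A hedged sketch of doing the inner case by hand with a dictionary index, which tolerates duplicate keys on both sides the way SQL does (rows here are plain dicts; adapting it to NumPy record arrays is left aside):

        from collections import defaultdict

        def inner_join(left, right, key):
            # Index the right side by key; duplicates simply stack up.
            index = defaultdict(list)
            for row in right:
                index[row[key]].append(row)
            # Emit one merged dict per matching pair, like SQL's inner join.
            for lrow in left:
                for rrow in index.get(lrow[key], []):
                    merged = dict(rrow)
                    merged.update(lrow)  # left side wins on column clashes
                    yield merged

        orders = [{"cust": 1, "item": "pen"}, {"cust": 2, "item": "ink"}]
        names  = [{"cust": 1, "name": "Ann"}, {"cust": 1, "name": "Ann B."}]
        print(list(inner_join(orders, names, "cust")))  # two rows for cust 1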

    Read the article

  • PHP - Database schema: version control, branching, migrations.

    - by Billiam
    I'm trying to come up with (or find) a reusable system for database schema versioning in PHP projects. There are a number of Rails-style migration projects available for PHP; http://code.google.com/p/mysql-php-migrations/ is a good example. It uses timestamps for migration files, which helps with conflicts between branches. The general problem with this kind of system: when development branch A is checked out and you want to check out branch B instead, B may have new migration files. This is fine; migrating to newer content is straightforward. But if branch A has newer migration files, you need to migrate downwards to the nearest shared patch. If branch A and B have significantly different code bases, you may have to migrate down even further. This may mean: check out B, determine the shared patch number, check out A, migrate downwards to this patch (this must be done from A, since the actual applied patches are not available in B), then check out branch B and migrate to the newest B patch. Reverse the process again when going from B to A. Proposed system: when migrating upwards, instead of just storing the patch version, serialize the whole patch in the database for later use, though I'd probably only need the down() method. When changing branches, compare the patches that have been run to the patches available in the destination branch. Determine the nearest shared patch (or oldest difference, maybe) between the db table of run patches and the patches in the destination branch, by ID or hash. It could also look for new or missing patches that are buried under a number of shared patches between the two branches. Automatically merge down to the nearest shared patch, using the down() methods stored in the db table, and then merge up to the branch's latest patch. My question is: is this system too crazy and/or fraught with consequences to bother developing? My experience with database schema versioning is limited to PHP autopatch, which is an up()-only system requiring filenames with sequential IDs.
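
    For what the proposed bookkeeping could look like, a minimal sketch using PDO; the table and column names are invented for illustration:

        <?php
        $pdo->exec("CREATE TABLE IF NOT EXISTS applied_patch (
            id         CHAR(40) PRIMARY KEY,  -- content hash of the patch file
            applied_at DATETIME NOT NULL,
            down_sql   TEXT NOT NULL          -- serialized down() step
        )");

        // Roll back, newest first, until we reach a patch both branches share.
        function migrate_down_to($pdo, array $sharedIds)
        {
            $rows = $pdo->query(
                "SELECT id, down_sql FROM applied_patch ORDER BY applied_at DESC");
            foreach ($rows as $row) {
                if (in_array($row['id'], $sharedIds)) {
                    break; // reached the common ancestor
                }
                $pdo->exec($row['down_sql']);
                $stmt = $pdo->prepare("DELETE FROM applied_patch WHERE id = ?");
                $stmt->execute(array($row['id']));
            }
        }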

    Read the article

  • Need help choosing database server

    - by The Pretender
    Good day everyone. Recently I was given the task of developing an application to automate some aspects of stock trading. While working on the initial architecture, a database dilemma emerged. What I need is a fast database engine which can process huge amounts of data coming in very fast. I'm fairly experienced in general programming, but I have never faced the task of developing a high-load database architecture. I developed a simple MSSQL database schema with several many-to-many relationships during one of my projects, but that's it. What I'm looking for is some advice on choosing the most suitable database engine, and some pointers to manuals or books which describe high-load database development. Specifics of the project are as follows. OS: Windows NT family (Server 2008 / 7). Primary platform: .NET with C#. Database structure: one table to hold primary items, and two or three tables with foreign keys to the first table to hold additional information. SELECT requirements: super-fast selection by foreign keys, and by a combination of a foreign key and one of the columns (presumably DATETIME). INSERT requirements: the faster the better :). If there will be a significant performance gain, some parts can be written in C++ with managed interfaces to the rest of the system. So once again: given all that stuff I just typed, please give me some advice on what the best database for my project is. Links or references to manuals and books on the subject are also greatly appreciated. EDIT: I'll need to insert 3-5 rows into 2 tables approximately once every 30-50 milliseconds, and I'll need to run SELECT queries with 0-2 WHERE clauses at a similar rate.

    Read the article

  • Properly using Log4r in Ruby Application

    - by Spencer
    I must really be missing something obvious, but I'm having trouble with the general use of Log4r in my Ruby application. I am able to log without issue, but the overhead seems clunky the way I have it set up. I'm basically passing the full path of the log file to each class in my application. The Ruby script that is called pulls the log file path from one of the arguments in ARGV, which is then passed around and set in each class that I call. In each class I use the PatternFormatter to insert the class/file name into the log statement. Is there a better way to make this work? It feels like no matter what I think of, it will require something to be passed to each class in my Ruby application. I could set the log file in a YAML configuration file instead, but then I would be passing the configuration file around to each class as well. Any advice? If this doesn't make sense I could try to post some more specific code samples to further explain what I mean. Thanks!
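
    One hedged sketch of an alternative: Log4r keeps a global registry of loggers by name, so you can configure once at startup and have every class look the logger up instead of receiving the path. The 'app' name and pattern string are illustrative:

        require 'log4r'

        module AppLog
          def self.setup(path)
            log = Log4r::Logger.new('app')
            out = Log4r::FileOutputter.new('file', :filename => path)
            out.formatter = Log4r::PatternFormatter.new(:pattern => '%d %l: %m')
            log.add(out)
          end
        end

        AppLog.setup(ARGV[0] || 'app.log')

        # anywhere else in the application, no path needed:
        Log4r::Logger['app'].info 'started'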

    Read the article

  • How to set different width for INPUT and DIV elements with Scriptaculous Ajax.Autocompleter?

    - by Grzegorz Gierlik
    Hello, I am working on an autocomplete box based on Scriptaculous' Ajax.Autocompleter. Here is what my HTML/JS code looks like:

        <input type="text" maxlength="255" class="input iSearchInput"
               name="isearch_value" id="isearch"
               value="<wl@txt>Search</wl@txt>" onfocus="this.select()">
        <br>
        <div id='isearch_choices' class='iSearchChoices'></div>

        <script>
        function iSearchGetSelectedId(text, li) {
          console.log([text, li.innerHTML].join("\n"));
          document.location.href = li.getAttribute("url");
        }

        document.observe('dom:loaded', function() {
          new Ajax.Autocompleter("isearch", "isearch_choices", "/url", {
            paramName: "phrase",
            minChars: 1,
            afterUpdateElement: iSearchGetSelectedId
          });
          $("isearch_choices").setStyle({width: "320px"});
        });
        </script>

    and the CSS classes:

        input.iSearchInput {
          width: 155px;
          height: 26px;
          margin-top: 7px;
          line-height: 20px;
        }

        div.iSearchChoices {
          position: absolute;
          background-color: white;
          border: 1px solid #888;
          margin: 0;
          padding: 0;
          width: 320px;
        }

    It works in general, but I need the list of choices to be wider than the input box. My first try was to set different widths with the CSS classes (as above), but it didn't work: the list of choices became as wide as the input box. According to Firebug, the width defined by my CSS class was overwritten by a width set in the element's inline style, which seems to be set by Ajax.Autocompleter. My second try was to set the width of the choices list after creating the Ajax.Autocompleter:

        $("isearch_choices").setStyle({width: "320px"});

    but that didn't work either :(. No more ideas :(. How do I set a different width for the list of choices with Scriptaculous' Ajax.Autocompleter?
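
    A hedged guess at the cause, sketched below: Autocompleter's default onShow callback clones the input's geometry onto the choices div (Position.clone with setWidth left at its default of true), which would keep overwriting the CSS width. Supplying a custom onShow that passes setWidth: false should let the stylesheet win. The callback body follows the shape of the script.aculo.us 1.8 default; treat the details as an assumption to verify against your version:

        new Ajax.Autocompleter("isearch", "isearch_choices", "/url", {
          paramName: "phrase",
          minChars: 1,
          afterUpdateElement: iSearchGetSelectedId,
          onShow: function(element, update) {
            if (!update.style.position || update.style.position == 'absolute') {
              update.style.position = 'absolute';
              // Same as the default, but without copying the input's width.
              Position.clone(element, update, {
                setHeight: false,
                setWidth: false,
                offsetTop: element.offsetHeight
              });
            }
            Effect.Appear(update, { duration: 0.15 });
          }
        });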

    Read the article

  • Suppress Eclipse compiler errors in a plug-in

    - by Jan Gorzny
    Hi, I'm currently working on a plug-in for Eclipse that translates some custom Java code (which doesn't necessarily run/compile) into runnable Java code. In particular, the plug-in allows code to be written using classes created or imported during the translation. In general, the pre-translation code runs/compiles fine provided the writer uses import statements at the top of their class files. However, it would be convenient for my users if it were not necessary to import these classes. At the moment, the lack of import statements results in (obvious) compiler errors. Would it be possible to empower my plug-in to either a) suppress/ignore these errors, or b) have Eclipse find these classes automatically, without the use of import statements? I should point out that the translated code would then include the required import statements, but this is not a problem for me. I'm also aware that this could lead to lazy programmers and some bad habits. To clarify, consider the following example of pre-translated code: File f = new File("Somefilename.txt"); which clearly requires the possibly imported class File. Without an import statement (import java.io.File;), Eclipse reports that File cannot be resolved to a type. This is the error I'd like to hide in files pertaining to projects created for use with my plug-in. (The translated code would include import java.io.File; so that it would be runnable.) In closing, I should point out that I'm not necessarily looking for code (though I wouldn't be opposed to it), but rather some links to relevant tutorials (if they exist), or helpful tips/ideas. Also, as this is my first plug-in, it's entirely possible that what I'd like to do is not possible and that I don't realize it; if this is the case, please let me know, preferably with some justification. Thanks!

    Read the article

  • using Autofac in a multi-layered architecture

    - by Kamyar
    I'm fairly new to the DI/IoC concept and would like to use Autofac in a 3-layered ASP.NET WebForms application. UI layer: an ASP.NET WebForms website. BLL: business logic layer, which calls the repositories in the DAL. DAL: .EDMX file (entity model) and ObjectContext, with repository classes which abstract the CRUD operations for each entity. Entities: the POCO entities, persistence-ignorant, generated by Microsoft's ADO.NET POCO Entity Generator. I have asked a more general question here. Basically, I'd like to create an ObjectContext per HttpContext in my DAL, but I don't want to add a reference to the DAL in the UI, or access HttpContext in the DAL directly. I guess this is where IoC tools come into play. The answer to my previous question is a very good example of using Castle Windsor. I'd like to use Autofac as my IoC tool and don't know how to achieve this. (How do I access the DAL in Application_Start to register the component when I don't want to reference it in my UI? What are the proper references to be able to use the DAL component in the BLL with Autofac? Should I register the BLL as a component with Autofac too?) Sorry folks for not providing an explicit question and for requesting a kind of working example, but I'm very unfamiliar with the whole IoC concept and I don't think I can learn it well enough to use in my current time-limited project.
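
    A sketch of how the composition root could look with the Autofac web integration (Autofac.Integration.Web's IContainerProviderAccessor pattern); type names like MyObjectContext and CustomerService are placeholders. The point is that only Global.asax.cs references the DAL, while the BLL and pages just take constructor parameters:

        public class Global : HttpApplication, IContainerProviderAccessor
        {
            static IContainerProvider _provider;

            public IContainerProvider ContainerProvider
            {
                get { return _provider; }
            }

            protected void Application_Start(object sender, EventArgs e)
            {
                var builder = new ContainerBuilder();

                // DAL: one ObjectContext per request lifetime scope.
                builder.Register(c => new MyObjectContext())
                       .InstancePerLifetimeScope();
                builder.RegisterType<CustomerRepository>()
                       .As<ICustomerRepository>();

                // BLL: registered too, so its DAL dependencies are injected.
                builder.RegisterType<CustomerService>()
                       .As<ICustomerService>();

                _provider = new ContainerProvider(builder.Build());
            }
        }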

    Read the article

  • Loading and binding a serialized view model to a WPF window?

    - by generalt
    Hello all. I'm writing a one-window UI for a simple ETL tool. The UI consists of the window, the code-behind for the window, a view model for the window, and the business logic. I wanted to provide functionality for users to save the state of the UI, because the contents of about 10-12 text boxes will be reused between sessions but are specific to the user. I figured I could serialize the view model, which contains all the data from the textboxes, and this works fine, but I'm having trouble loading the information in the serialized XML file back into the text boxes. Constructor of the window:

        public ETLWindow()
        {
            InitializeComponent();
            _viewModel = new ViewModel();
            this.DataContext = _viewModel;
            _viewModel.State = Constants.STATE_IDLE;
            Loaded += new RoutedEventHandler(MainWindow_Loaded);
        }

    XAML:

        <TextBox x:Name="targetDirectory" IsReadOnly="true"
                 Text="{Binding TargetDatabaseDirectory, UpdateSourceTrigger=PropertyChanged}"/>

    The corresponding view model property:

        private string _targetDatabaseDirectory;

        [XmlElement()]
        public string TargetDatabaseDirectory
        {
            get { return _targetDatabaseDirectory; }
            set
            {
                _targetDatabaseDirectory = value;
                OnPropertyChanged(DataUtilities.General.Utilities.GetPropertyName(
                    () => new ViewModel().TargetDatabaseDirectory));
            }
        }

    The load event in the code-behind:

        private void loadState_Click(object sender, RoutedEventArgs e)
        {
            string statePath = this.getFilePath();
            _viewModel = ViewModel.LoadModel(statePath);
        }

    As you can guess, the LoadModel method deserializes the serialized file on the user's drive. I couldn't find much on the web regarding this issue. I know this probably has something to do with my bindings. Is there some way to refresh the bindings in the XAML after I deserialize the view model? Or perhaps refresh all properties on the view model? Or am I completely insane thinking any of this could be done? Thanks.
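
    For what it's worth, a likely culprit sketched here: the window's bindings keep pointing at the old view model instance, so reassigning the _viewModel field isn't enough. Re-pointing DataContext at the freshly deserialized object makes every binding re-resolve:

        private void loadState_Click(object sender, RoutedEventArgs e)
        {
            string statePath = this.getFilePath();
            _viewModel = ViewModel.LoadModel(statePath);
            this.DataContext = _viewModel; // rebind the UI to the new instance
        }

    The alternative is to copy the deserialized values into the already-bound instance, letting the existing PropertyChanged notifications update the text boxes.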

    Read the article

  • Excel COM Add-In dialog interrupts script

    - by usac
    Hi all! I have written an Excel COM Add-In in C++ for automation of Excel with VBA. It contains its own dialog showing some general information about the Add-In. Now I create a button in Excel that opens the dialog. Leaving the dialog with the Escape key leads to an Excel message that the script is being interrupted, instead of just closing the dialog. I could suppress the interruption message with:

        Application.EnableCancelKey = xlDisabled

    but that doesn't seem to be the solution, as the script can then no longer be interrupted at all. Here is an example of how I use VBA to open the dialog:

        Private Sub ShowAboutDialog_Click()
            Dim oComAddIn As COMAddIn
            Set oComAddIn = Application.COMAddIns.Item("MyComAddIn.Example")
            oComAddIn.Connect = True
            Call oComAddIn.Object.ShowAboutDlg
        End Sub

    My guess is that the problem is somewhere in the message handler of the dialog:

        INT_PTR CALLBACK CAboutDialog::AboutDlg(
            HWND hwndDlg, UINT uMsg, WPARAM wParam, LPARAM lParam)
        {
            switch (uMsg)
            {
            ...
            case WM_COMMAND:
                if (LOWORD(wParam) == IDOK || LOWORD(wParam) == IDCANCEL)
                {
                    // Here, the ESCAPE key should also be trapped?
                    EndDialog(hwndDlg, LOWORD(wParam));
                    return TRUE;
                }
            ...
            }
            return FALSE;
        }

    The dialog is created with:

        DialogBox(g_hModule, MAKEINTRESOURCE(IDD_ABOUT), hWndParent, (DLGPROC)AboutDlg)

    Thanks a lot!
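
    A hedged workaround sketch on the VBA side: instead of disabling the cancel key globally, route it to an error handler just around the dialog call. Under xlErrorHandler the Esc key raises run-time error 18 (user interrupt), which can be swallowed here while leaving interruption intact everywhere else:

        Private Sub ShowAboutDialog_Click()
            Dim oComAddIn As COMAddIn
            On Error GoTo Handler
            Application.EnableCancelKey = xlErrorHandler
            Set oComAddIn = Application.COMAddIns.Item("MyComAddIn.Example")
            oComAddIn.Connect = True
            Call oComAddIn.Object.ShowAboutDlg
        Cleanup:
            Application.EnableCancelKey = xlInterrupt
            Exit Sub
        Handler:
            If Err.Number = 18 Then
                Resume Cleanup   ' swallow the Esc-triggered interrupt
            Else
                Err.Raise Err.Number
            End If
        End Sub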

    Read the article

  • What objects would get defined in a Bridge scoring app (Javascript)

    - by Alex Mcp
    I'm writing a Bridge (card game) scoring application as practice in JavaScript, and am looking for suggestions on how to set up my objects. I'm pretty new to OO in general, and would love a survey of how and why people would structure a program to do this (the "survey" begets the CW mark; additionally, I'll happily close this if it's out of range of typical SO discussion). The platform is going to be a web app for WebKit on the iPhone, so local storage is an option. My basic structure is like this (the empty slots are intentional; this is a sketch of the fields, not valid syntax):

        var Team = {
          player1: ,
          player2: ,
          vulnerable: ,      // a flag for whether or not you've lost
                             // a game yet; affects scoring
          scoreAboveLine: ,
          scoreBelowLine: ,
          gamesWon:
        };

        var Game = {
          init: ,        // function to get old scores and teams from DB
          currentBid: ,
          score: ,       // function to accept whether bid was made
                         // and apply to Teams{}
          save:          // auto-run, called after each game to commit to DB
        };

    So basically I'll instantiate two teams, and then run loops of game.currentBid() and game.score(). Functionally the scoring is working fine, but I'm wondering whether this is how someone else would choose to break down the scoring of Bridge, and whether there are any oversights I've made with regard to OO-ness and how I've abstracted the game. Thanks!

    Read the article

  • Asynchronous crawling F#

    - by jlezard
    When crawling web pages I need to be careful not to make too many requests to the same domain; for example, I want to put 1 s between requests. From what I understand, it is the time between requests that is important. So to speed things up I want to use async workflows in F#, the idea being to make requests at 1-second intervals, but to avoid blocking while waiting for the responses.

        let getHtmlPrimitiveAsyncTimer (uri : System.Uri) (timer : int) =
            async {
                let req = (WebRequest.Create(uri)) :?> HttpWebRequest
                req.UserAgent <- "Mozilla"
                try
                    Thread.Sleep(timer)
                    let! resp = req.AsyncGetResponse()
                    Console.WriteLine(uri.AbsoluteUri + " got response")
                    use stream = resp.GetResponseStream()
                    use reader = new StreamReader(stream)
                    let html = reader.ReadToEnd()
                    return html
                with
                | _ as ex -> return "Bad Link"
            }

    Then I do something like:

        let uri1 = System.Uri "http://rue89.com"
        let timer = 1000

        let jobs = [| for i in 1..10 -> getHtmlPrimitiveAsyncTimer uri1 timer |]

        jobs
        |> Array.mapi (fun i job ->
            Console.WriteLine("Starting job " + string i)
            Async.StartAsTask(job).Result)

    Is this all right? I am very unsure about two things: does the Thread.Sleep call work for delaying the request? And is using StartAsTask a problem? I am a beginner (as you may have noticed) in F# (and in coding in general, actually), and everything involving threads scares me :) Thanks!!
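
    A sketch of the non-blocking variant (same opens as above): do! Async.Sleep releases the thread while waiting, unlike Thread.Sleep, and staggering the delays gives roughly one request per second without the Task round-trip:

        let getHtmlAsyncTimer (uri : System.Uri) (delayMs : int) =
            async {
                let req = WebRequest.Create(uri) :?> HttpWebRequest
                req.UserAgent <- "Mozilla"
                do! Async.Sleep delayMs          // non-blocking delay
                try
                    use! resp = req.AsyncGetResponse()
                    use stream = resp.GetResponseStream()
                    use reader = new StreamReader(stream)
                    return reader.ReadToEnd()
                with _ -> return "Bad Link"
            }

        // Job i fires after i seconds, so requests are ~1 s apart.
        [| for i in 1 .. 10 -> getHtmlAsyncTimer uri1 (i * 1000) |]
        |> Async.Parallel
        |> Async.RunSynchronously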

    Read the article

  • svn import: don't modify revision, OR modify the list of files in a transaction

    - by Vaughan Durno
    Hi, I've gained so much knowledge/insight from this site in the past few years; now I'm actually hoping to get some enlightenment. The scenario is as follows: you have the general structure of the repo (trunk, branches, tags), but added to the layout you have another directory called 'db_revs'. Now, in the pre-commit hook, you take a dump of a specific database (the specifics are irrelevant) into a temporary file, say /tmp/REV.sql (REV being the HEAD revision number of the repo, or the transaction). OK, all is well, and you can just import that temp file into the repo at /db_revs/REV.sql. Now obviously that import, even though it's happening during a commit, increments the revision of the repo. So when you commit at some point to, say, 'test.php' in the trunk, and it completes at say revision 159, the pre-commit hook runs as it should and the DB dump gets imported; but then you are sitting with a tree in the repo browser where 'trunk' is at revision 159, and 'db_revs', which has the imported dump, is at 158 (I've made it so that the filename matches the revision, i.e. 159.sql, but that file is then at revision 158). NB: if you're doing an import in a pre-commit hook, you need to add some logic to not perform the import recursively, say by checking first for the existence of the temp file; otherwise it will cause, um, a stack overflow and your PC will quickly crawl to a standstill. So I wanted to know whether it is possible to make an import not commit its changes. I realise I might be barking up the wrong tree to begin with, so I have another idea, which brings me to the second part of my question: would it be possible to modify the list of files that the transaction is about to commit to the repo? I know this can be done to a WC, but that won't help, as a WC is a checked-out copy of, say, the trunk, so I'm not sure how you would add a file to the 'db_revs' folder, which is above trunk? Any help is greatly appreciated. Cheers, Vaughan

    Read the article

  • Why use an SQL database?

    - by martinthenext
    I'm not quite sure Stack Overflow is the place for such a general question, but let's give it a try. Being exposed to the need to store application data somewhere, I've always used MySQL or SQLite, just because it's always done like that, and it seems the whole world is using these databases in most software products, frameworks, etc. It is rather hard for a beginning developer like me to ask the question: why? OK, say we have some object-oriented logic in our application, and the objects are related to each other somehow. We need to map this logic to the storage logic, so we need relations between database objects too. This leads us to using a relational database, and I'm OK with that: to put it simply, our database rows sometimes need to have references to other tables' rows. But why use the SQL language for interaction with such a database? An SQL query is a text message. I can understand that this is good for actually understanding what it does, but isn't it silly to use text table and column names for a part of the application that no one ever sees after deployment? If you had to write a data storage layer from scratch, you would never have used this kind of solution. Personally, I would have used some 'compiled db query' bytecode that would be assembled once inside the client application and passed to the database. And it would surely name tables and columns by ID numbers, not ASCII strings. In the case of changes in table structure, those byte queries could be recompiled according to the new DB schema, stored in XML or something like that. What are the problems with my idea? Is there any reason for me not to write it myself, and to use an SQL database instead?

    Read the article

  • LaTeX printing only first two pages of a document

    - by Peter Flom
    I am working in LaTeX, and when I create a PDF file (using the LaTeX button or the pdfLaTeX button, or using yap) the PDF has only the first two pages. No errors; it just stops. If I make the first page longer by adding text, it still stops at the end of the 2nd page. Any ideas? OK, responding to the first comment, here is the code:

        \documentclass{article}
        \title{Outline of Book}
        \author{Peter L. Flom}
        \begin{document}
        \maketitle
        \section*{Preface}
        \subsection*{Audience}
        \subsection*{What makes this book different?}
        \subsection*{Necessary background}
        \subsection*{How to read this book}
        \section{Introduction}
        \subsection{The purpose of logistic regression}
        \subsection{The need for logistic regression}
        \subsection{Types of logistic regression}
        \section{General issues in logistic regression}
        \subsection{Transforming independent and dependent variables}
        \subsection{Interactions}
        \subsection{Model selection}
        \subsection{Parameter estimates, confidence intervals, p values}
        \subsection{Summary and further reading}
        \section{Dichotomous logistic regression}
        \subsection{Introduction, theory, examples}
        \subsection{Exploratory plots and analysis}
        \subsection{Basic model fitting}
        \subsection{Advanced and special issues in model fitting}
        \subsection{Diagnostic and descriptive plots and analysis}
        \subsection{Traps and gotchas}
        \subsection{Power analysis}
        \subsection{Summary and further reading}
        \subsection{Exercises}
        \section{Ordinal logistic regression}
        \subsection{Introduction, theory, examples}
        \subsubsection{Introduction - what are ordinal variables?}
        \subsubsection{Theory of the model}
        \subsubsection{Examples for this chapter}
        \subsection{Exploratory plots and analysis}
        \subsection{Basic model fitting}
        \subsection{Advanced and special issues in model fitting}
        \subsection{Diagnostic and descriptive plots and analysis}
        \subsection{Traps and gotchas}
        \subsection{Power analysis}
        \subsection{Summary and further reading}
        \subsection{Exercises}
        \section{Multinomial logistic regression}
        \subsection{Introduction, theory, examples}
        \subsection{Exploratory plots and analysis}
        \subsection{Basic model fitting}
        \subsection{Advanced and special issues in model fitting}
        \subsection{Diagnostic and descriptive plots and analysis}
        \subsection{Traps and gotchas}
        \subsection{Power analysis}
        \subsection{Summary and further reading}
        \subsection{Exercises}
        \section{Choosing a model}
        \subsection{NOIR and its problems}
        \subsection{Linear vs. ordinal}
        \subsection{Ordinal vs. multinomial}
        \subsection{Summary and further reading}
        \subsection{Exercises}
        \section{Extensions and related models}
        \subsection{Other logistic models}
        \subsection{Multilevel models - PROC NLMIXED and GLIMMIX}
        \subsection{Loglinear models - PROC CATMOD}
        \section{Summary}
        \end{document}

    thanks Peter

    Read the article
