Search Results

Search found 5578 results on 224 pages for 'global asax'.

Page 178 of 224

  • Trouble binding WPF Menu to ItemsSource

    - by chaiguy
    I would like to avoid having to build a menu manually in XAML or code, by binding to a list of ICommand-derived objects. However, I'm experiencing a bit of a problem where the resulting menu has two levels of menu items (i.e. each MenuItem is contained in a MenuItem). My guess is that this is happening because WPF automatically generates a MenuItem container for each bound item, but the "viewer" I'm using actually already is a MenuItem (it's derived from MenuItem):

      <ContextMenu x:Name="selectionContextMenu"
                   ItemsSource="{Binding Source={x:Static OrangeNote:Note.MultiCommands}}"
                   ItemContainerStyleSelector="{StaticResource separatorStyleSelector}">
        <ContextMenu.ItemTemplate>
          <DataTemplate>
            <Viewers:NoteCommandMenuItemViewer
                CommandParameter="{Binding Source={x:Static OrangeNote:App.Screen}, Path=SelectedNotes}" />
          </DataTemplate>
        </ContextMenu.ItemTemplate>
      </ContextMenu>

    (The ItemContainerStyleSelector is from http://bea.stollnitz.com/blog/?p=23, which allows me to have Separator elements inside my bound source.) So, the menu is bound to a collection of ICommands, and each item's CommandParameter is set to the same global target (which happens to be a collection, but that's not important). My question is: is there any way I can bind this such that WPF doesn't automatically wrap each item in a MenuItem?
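    One direction often suggested for this kind of double-wrapping is to drop the ItemTemplate and put the command bindings on the containers WPF generates, via ItemContainerStyle. Below is a rough C# sketch of that idea; the property names (SelectedNotes, App.Screen) are taken from the XAML above, while the helper itself is hypothetical and does not preserve the custom NoteCommandMenuItemViewer (keeping a derived container would instead mean subclassing ContextMenu and overriding GetContainerForItemOverride).

      using System.Windows;
      using System.Windows.Controls;
      using System.Windows.Data;

      static class MenuBindingSketch
      {
          // Hypothetical helper: style the MenuItem containers WPF generates instead of
          // supplying an ItemTemplate, so each bound ICommand is not wrapped twice.
          public static void ApplyContainerStyle(ContextMenu menu, object parameterSource)
          {
              var style = new Style(typeof(MenuItem));

              // The data item itself (the ICommand) becomes the generated MenuItem's Command.
              style.Setters.Add(new Setter(MenuItem.CommandProperty, new Binding()));

              // Every item points at the same global CommandParameter (e.g. App.Screen.SelectedNotes).
              style.Setters.Add(new Setter(
                  MenuItem.CommandParameterProperty,
                  new Binding("SelectedNotes") { Source = parameterSource }));

              menu.ItemTemplate = null;            // let the containers present the items directly
              menu.ItemContainerStyle = style;
          }
      }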

    Read the article

  • Cast to delegate type fails in JScript.NET

    - by dnewcome
    I am trying to do async IO using BeginRead() in JScript.NET, but I can't get the callback function to work correctly. Here is the code:

      function readFileAsync() {
        var fs : FileStream = new FileStream( 'test.txt', FileMode.Open, FileAccess.Read );
        var result : IAsyncResult = fs.BeginRead( new byte[8], 0, 8, readFileCallback, fs );
        Thread.Sleep( Timeout.Infinite );
      }

      var readFileCallback = function( result : IAsyncResult ) : void {
        print( 'ListenerCallback():' );
      }

    The exception is a cast failure:

      Unhandled Exception: System.InvalidCastException: Unable to cast object of type
      'Microsoft.JScript.Closure' to type 'System.AsyncCallback'.
        at JScript 0.readFileAsync(Object this, VsaEngine vsaEngine)
        at JScript 0.Global Code()
        at JScript Main.Main(String[] )

    I have tried doing an explicit cast both to AsyncCallback and to the base MulticastDelegate and Delegate types, to no avail. Delegates are supposed to be created automatically, obviating the need for creating a new AsyncCallback explicitly, e.g.:

      BeginRead( ... new AsyncCallback( readFileCallback ), object );

    And in fact, if you try to create the delegate explicitly, the compiler issues an error. I must be missing something here.
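    For comparison, here is a minimal C# sketch of the BeginRead/AsyncCallback pattern the JScript code is mirroring (the file name and buffer size are just the values from the question). In C# the method group converts to an AsyncCallback implicitly, which is the conversion that appears to fail for the JScript closure.

      using System;
      using System.IO;

      class AsyncReadSketch
      {
          static void Main()
          {
              var fs = new FileStream("test.txt", FileMode.Open, FileAccess.Read);
              var buffer = new byte[8];

              // The compiler converts the method group to an AsyncCallback delegate here.
              IAsyncResult result = fs.BeginRead(buffer, 0, 8, ReadFileCallback, fs);

              result.AsyncWaitHandle.WaitOne();   // wait for the callback instead of sleeping forever
          }

          static void ReadFileCallback(IAsyncResult result)
          {
              var fs = (FileStream)result.AsyncState;
              int bytesRead = fs.EndRead(result);
              Console.WriteLine("ReadFileCallback(): {0} bytes", bytesRead);
          }
      }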

    Read the article

  • Should a Perl constructor return an undef or an "invalid" object?

    - by DVK
    Question: What is considered to be "best practice" - and why - for handling errors in a constructor? "Best practice" can be a quote from Schwartz, or "50% of CPAN modules use it", etc.; but I'm happy with a well reasoned opinion from anyone, even if it explains why the common best practice is not really the best approach. As far as my own view of the topic (informed by software development in Perl for many years), I have seen three main approaches to error handling in a Perl module (listed from best to worst in my opinion):

    1. Construct an object and set an invalid flag (usually an "is_valid" method), often coupled with setting an error message via your class's error handling.
       Pros: Allows for standard (compared to other method calls) error handling, since it lets you use $obj->errors() type calls after a bad constructor just like after any other method call. Allows additional info to be passed (e.g. 1 error, warnings, etc.). Allows for lightweight "redo"/"fixme" functionality: if the object that is constructed is very heavy, with many complex attributes that are 100% always OK, and the only reason it is not valid is because someone entered an incorrect date, you can simply do $obj->setDate() instead of paying the overhead of re-executing the entire constructor again. This pattern is not always needed, but can be enormously useful in the right design.
       Cons: None that I'm aware of.

    2. Return undef.
       Cons: Cannot achieve any of the pros of the first solution (per-object error messages outside of global variables, and lightweight "fixme" capability for heavy objects).

    3. Die inside the constructor.
       Outside of some very narrow edge cases, I personally consider this an awful choice, for too many reasons to list in the margins of this question.

    UPDATE: Just to be clear, I consider the (otherwise very worthy and a great design) solution of having a very simple constructor that can't fail at all plus a heavy initializer method where all the error checking occurs to be merely a subset of either case #1 (if the initializer sets error flags) or case #3 (if the initializer dies) for the purposes of this question. Obviously, by choosing such a design you automatically reject option #2.

    Read the article

  • Shared Server Dreamhost

    - by Jseb
    I am trying to install an app on a shared server. If I understand properly, because I am using a shared server and Dreamhost doesn't support Rails 3.2.8, I must use FCGI, although I am not sure how to install it and make it run properly. I followed this tutorial: http://wiki.dreamhost.com/Rails_3. To my understanding, here is what I did:

    1. In Dreamhost, activated PHP 5.x.x FastCGI and made sure Phusion Passenger is unchecked.
    2. Created an app on my local machine.
    3. Because Rails doesn't create a dispatch and access file, I created the two following files in my /public folder.

    dispatch.fcgi:

      #!/home/username/.rvm/rubies/ruby-1.9.3-p327/bin/ruby
      ENV['RAILS_ENV'] ||= 'production'
      ENV['HOME'] ||= `echo ~`.strip
      ENV['GEM_HOME'] = File.expand_path('~/.rvm/gems/ruby 1.9.3-p327')
      ENV['GEM_PATH'] = File.expand_path('~/.rvm/gems/ruby 1.9.3-p327') + ":" + File.expand_path('~/.rvm/gems/ruby 1.9.3-p327@global')

      require 'fcgi'
      require File.join(File.dirname(__FILE__), '../config/environment')

      class Rack::PathInfoRewriter
        def initialize(app)
          @app = app
        end

        def call(env)
          env.delete('SCRIPT_NAME')
          parts = env['REQUEST_URI'].split('?')
          env['PATH_INFO'] = parts[0]
          env['QUERY_STRING'] = parts[1].to_s
          @app.call(env)
        end
      end

    4. Then created the file .htaccess:

      <IfModule mod_fastcgi.c>
        AddHandler fastcgi-script .fcgi
      </IfModule>
      <IfModule mod_fcgid.c>
        AddHandler fcgid-script .fcgi
      </IfModule>
      Options +FollowSymLinks +ExecCGI
      RewriteEngine On
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteRule ^(.*)$ dispatch.fcgi/$1 [QSA,L]
      ErrorDocument 500 "Rails application failed to start properly"

    5. Uploaded it to a folder and pointed the domain to the public folder in Dreamhost.
    6. Made sure dispatch.fcgi has 777 permissions.
    7. SSHed in and ran the following command in the public folder: ./dispatch.fcgi

    Crossing my fingers, but it doesn't work. I get the following errors:

      ./dispatch.fcgi: line 1: ENV[RAILS_ENV]: command not found
      ./dispatch.fcgi: line 1: =: command not found
      ./dispatch.fcgi: line 2: ENV[HOME]: command not found
      ./dispatch.fcgi: line 2: =: command not found
      ./dispatch.fcgi: line 3: syntax error near unexpected token `('
      ./dispatch.fcgi: line 3: `ENV['GEM_HOME'] = File.expand_path('~/.rvm/gems/ruby 1.9.3-p327')'

    What am I doing wrong? Oh, and if I go to the server I get this: Rails application failed to start properly

    Read the article

  • How do you create a non-Thread-based Guice custom Scope?

    - by Russ
    It seems that all Guice's out-of-the-box Scope implementations are inherently Thread-based (or ignore Threads entirely): Scopes.SINGLETON and Scopes.NO_SCOPE ignore Threads and are the edge cases: global scope and no scope. ServletScopes.REQUEST and ServletScopes.SESSION ultimately depend on retrieving scoped objects from a ThreadLocal<Context>. The retrieved Context holds a reference to the HttpServletRequest that holds a reference to the scoped objects stored as named attributes (where name is derived from com.google.inject.Key). Class SimpleScope from the custom scope Guice wiki also provides a per-Thread implementation using a ThreadLocal<Map<Key<?>, Object>> member variable. With that preamble, my question is this: how does one go about creating a non-Thread-based Scope? It seems that something that I can use to look up a Map<Key<?>, Object> is missing, as the only things passed in to Scope.scope() are a Key<T> and a Provider<T>. Thanks in advance for your time.

    Read the article

  • Keyboard hook returns different symbols from the card reader depending on whether my app is in focus or not

    - by user363868
    I am coding a WinForms application where one of the inputs is a magnetic stripe card reader (CR). I am using the code from George Mamaladze's article "Processing Global Mouse and Keyboard Hooks in C#" on codeproject.com to listen to the keyboard (the USB card reader acts the same way as a keyboard), and I have a weird situation. One card reader, CR1 (Unitech MS240-2UG), produces keystrokes which I intercept in the KeyPress event; I analyze them for a certain pattern like %ABCD-6EFJHI? and trigger some logic. The analysis is required because the user can type something else into the application, or into another application, while my app is open. When I use another card reader, CR2 (IdTech IDBM-334133), the keystrokes intercepted by the hook start with the number 5 instead of % (it is actually the same key on the keyboard). Since it is the starting sentinel, it is very important for me to be able to recognize input from the card reader. Moreover, if my app is running in the background and I have focus on Notepad when I swipe a card, the string %ABCD-6EFJHI? appears in Notepad and, in the same way (with the proper starting character), is intercepted by the keyboard hook. If swiped when focus is on my Form, it is 5ABCD-6EFJHI? A user who tried the app with another card reader has the same result as me with CR2. Only CR1 works for me as expected. I was looking into Windows Device Manager and both devices use the same HID driver supplied by MS. I checked the devices through the respective software from the CR makers, and the starting and ending sentinels are set to % and ? respectively on both. I would appreciate any ideas and thoughts, as I have hit the wall myself. Thank you
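    Not an answer to why the sentinel differs, but one way to make the detection tolerant of either start character is to buffer the hooked keystrokes and match them with a regular expression that accepts '%' or '5'. A rough sketch follows; the class, the loose pattern, and the wiring to the KeyPress hook are assumptions, not code from the question.

      using System.Text;
      using System.Text.RegularExpressions;

      // Hypothetical helper: accumulate hooked keystrokes and detect a swipe that may
      // start with either '%' or '5' (the shifted/unshifted form of the same key).
      class SwipeDetector
      {
          private readonly StringBuilder _buffer = new StringBuilder();

          // Loosely matches e.g. %ABCD-6EFJHI? or 5ABCD-6EFJHI?
          private static readonly Regex SwipePattern = new Regex(@"[%5][A-Z0-9\-]+\?");

          // Call this from the global KeyPress hook handler with each character received.
          public string Feed(char keyChar)
          {
              _buffer.Append(keyChar);

              Match m = SwipePattern.Match(_buffer.ToString());
              if (!m.Success)
                  return null;

              _buffer.Length = 0;   // reset once a full swipe has been seen
              return m.Value;       // the raw card data, whichever sentinel arrived
          }
      }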

    Read the article

  • jquery .live() event interactions

    - by ddango
    Let's say I have a scenario where I have a global plugin (or at least a plugin that binds to a wider array of events). This plugin takes a selector and binds a live change handler to it. In pseudo-jQuery it might look like this:

      $.fn.changeSomething = function() {
        $(this).live("change", function() { alert("yo"); });
      }

    On another page, I have an additional live binding, something like this:

      $("input[type='checkbox']").live("click", function() { alert("ho"); });

    Within this scenario, the checkbox would ideally end up being bound to both live events. What I'm seeing is that the change event fires as it should, and I'm alerted "yo". However, using this live click event, I never trigger it. Using an explicit click binding, I DO hit it. The easy workaround is to trigger a click event at the end of the live change handler, but this seems janky to me. Any ideas? Note that this is using jQuery 1.4.2 and only occurs in IE8 (I suppose 6/7 would too, but I haven't tested them).

    Read the article

  • PHP, We have sessions, and cookies....I love cookies, but they are blowing my mind right now.

    - by Matt
    I am not sure how to go about accessing the variable I need to set on a cookie. I was thinking about using the $_POST global, but I don't know, based on my design, if it will work. I am using a master-page type design, separating index.php from my function includes, database information, and individual pages (which will be returned to an include in index.php based on a $_GET). Okay, so back to my question. What is the most efficient way to set a cookie in a design that has a main page that everything branches from? How would I pull the value? Is $_POST a good enough way to go about it? Also, by saying it must be the first thing sent, does that mean I cannot run any server-side scripts before that? I could definitely utilize a login query, I think, but I don't want to write code just to be disappointed, based on my lack of time and knowledge. I did search for answers; I know this most likely feels like a generic question that could be answered in a different place, but I know I will get an accurate and professional answer here, so I don't want to bet on the half answers I found otherwise. Of course I will sanitize everything and not store any sensitive information (passwords, address, phone, or anything really for that matter besides some kind of session ID and the username). If this is confusing I am sorry, but I am on a gov computer and they lock these tighter than Ft. Knox, so getting my code on here will be a chore until I get back to my room. Thanks, Matt

    Read the article

  • How Does a COM Program Locate a .NET DLL Registered for COM Interop?

    - by Eric J.
    One customer wants to consume our .NET DLLs from VB6. They are designed to support reverse interop and all works fine... except: There are two separate VB6 programs in two different directories. It seems it's necessary to do one of: Copy the .NET DLL into both directories, or Install the .NET DLL in the GAC This is the customer's observation and also supported by the RegAsm documentation: After registering an assembly using Regasm.exe, you can install it in the global assembly cache so that it can be activated from any COM client. If the assembly is only going to be activated by a single application, you can place it in that application's directory. I'm confused on this point. First point of confusion: As far as I understand, the COM runtime locates the DLL using the Prog ID / Class ID. When I look in the registry at the Class ID entry, I see the full path to the .NET DLL in the CodeBase key. Why is it that a COM program using the Prog ID / Class ID doesn't locate the .NET DLL using the CodeBase? Second point of confusion: The GAC is specific to .NET. How is it involved in resolving COM references?
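    For context, a minimal sketch of what a COM-visible .NET class looks like; the type names and GUIDs here are illustrative, not from the question. Registering it with regasm's /codebase switch is what writes the full path into the CodeBase value mentioned above, while registering without it leaves resolution to the normal probing locations (the client's directory or the GAC).

      using System.Runtime.InteropServices;

      // Illustrative COM-visible interface and class. Register the built DLL with:
      //   regasm MyLib.dll /codebase   (records the assembly's full path in CodeBase)
      //   regasm MyLib.dll             (assembly must then be in the client's directory or the GAC)
      [ComVisible(true)]
      [Guid("11111111-2222-3333-4444-555555555555")]
      public interface IGreeter
      {
          string Hello(string name);
      }

      [ComVisible(true)]
      [Guid("66666666-7777-8888-9999-000000000000")]
      [ProgId("MyLib.Greeter")]
      [ClassInterface(ClassInterfaceType.None)]
      public class Greeter : IGreeter
      {
          public string Hello(string name)
          {
              return "Hello, " + name;
          }
      }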

    Read the article

  • Accessing the selected element inside a templated textblock bound to a wpf listbox

    - by black sensei
    Hello good people, I'm trying to achieve some functionality but I don't know how to start. I'm using VS 2008 SP1 and I'm consuming a web service which returns a collection (contactInfo[]) that I bind to a ListBox with a little DataTemplate on it:

      <ListBox Margin="-146,-124,-143,-118.808" Name="contactListBox" MaxHeight="240" MaxWidth="300" MinHeight="240" MinWidth="300">
        <ListBox.ItemTemplate>
          <DataTemplate>
            <TextBlock>
              <CheckBox Name="contactsCheck" Uid="{Binding fullName}" Checked="contacts_Checked" /><Label Content="{Binding fullName}" FontSize="15" FontWeight="Bold"/>
              <LineBreak/>
              <Label Content="{Binding mobile}" FontSize="10" FontStyle="Italic" Foreground="DimGray" />
              <Label Content="{Binding email}" FontStyle="Italic" FontSize="10" Foreground="DimGray"/>
            </TextBlock>
          </DataTemplate>
        </ListBox.ItemTemplate>
      </ListBox>

    Everything works fine so far. When a checkbox is checked, I'd like to access the information in the labels belonging to the same row (or attached to it) and append that information to a global variable, for each checkbox checked. My problem right now is that I don't know how to do that. Can anyone shed some light on how to do that? If you notice Checked="contacts_Checked", that's where I planned to perform the operations. Thanks for reading and helping out.
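    A common way to reach the row's data from the handler is through the CheckBox's DataContext, which inside the DataTemplate is the bound contactInfo object. A minimal sketch, assuming contactInfo exposes fullName, mobile and email as in the bindings above; the window class name and the string field standing in for the "global variable" are hypothetical.

      using System.Windows;
      using System.Windows.Controls;

      public partial class ContactsWindow : Window   // hypothetical code-behind class
      {
          private string selectedContactSummary = "";   // stand-in for the "global variable"

          // Handler named in the XAML: Checked="contacts_Checked".
          private void contacts_Checked(object sender, RoutedEventArgs e)
          {
              var checkBox = (CheckBox)sender;

              // The CheckBox lives inside the row's DataTemplate, so its DataContext
              // is the contactInfo object for that row.
              var contact = checkBox.DataContext as contactInfo;
              if (contact == null)
                  return;

              selectedContactSummary += contact.fullName + " " + contact.mobile + " " + contact.email + "; ";
          }
      }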

    Read the article

  • How does real-time collaboration with multiple clients work in a system using operational transformation?

    - by Saikat Chakrabarti
    I just finished reading "High-Latency, Low-Bandwidth Windowing in the Jupiter Collaboration System" and I mostly followed everything until part 6: global consistency. This part describes how the system described in the paper can be extended to accommodate multiple clients connected to the server. However, the explanation is very short and essentially says the system will work if the central server merely forwards client messages to all the other clients. I don't really understand how this works, though. What state vector would be sent in the message that is sent to all the other clients? Does the server maintain separate state vectors for each client? Does it maintain a separate copy of the widgets locally for each client? The simple example I can think of is this setup: imagine client A, a server, and client B, with client A and client B both connected to the server. To start, all three have the state object "ABCD". Then client A sends the message "insert character F at position 0" at the same time client B sends the message "insert character G at position 0" to the server. It seems like simply relaying client A's message to client B and vice versa doesn't actually handle this case. So what exactly does the server do?

    Read the article

  • Mode specific key bindings

    - by rejeep
    Hey, I have a minor mode that also comes with a global mode. The mode has some key bindings, and I want the user to have the possibility to specify which bindings should work in each mode:

      (my-minor-mode-bindings-for-mode 'some-mode '(key1 key2 ...))
      (my-minor-mode-bindings-for-mode 'some-other-mode '(key3 key4 ...))

    So I need some kind of mode/buffer-local key map. Buffer-local is a bit problematic since the user can change the major mode. I have tried some solutions, none of which works very well:

    1. Bind all possible keys always, and when the user types a key, check if the key should be active in that mode. Execute the action if true, otherwise fall back.
    2. Like the previous case, only no keys are bound. Instead I use a pre-command hook and check if the key pressed should do anything.
    3. For each buffer update (whatever that means), run a function that first clears the key map and then updates it with the bindings for that particular mode.

    I have tried these approaches and I found problems with all of them. Do you know of any good way to solve this? Thanks!

    Read the article

  • iPhone accelerometer:didAccelerate: seems to not get called while I am running a loop

    - by AJ
    An accelerometer related question. (Sorry if the formatting doesn't look right; it's the first time I am using this site.) I got the accelerometer working as expected using the standard code:

      UIAccelerometer *accel = [UIAccelerometer sharedAccelerometer];
      accel.delegate = self;
      accel.updateInterval = 0.1; // I also tried other update values

    I use NSLog to log every time the accelerometer:didAccelerate: method in my class is called. The method gets called as expected and everything works fine up to here. However, when I run a loop, the above method doesn't seem to get called. Something like this:

      float firstAccelValue = globalAccel; // this is the x-accel value (stored in a global by the above method)
      float nextAccelValue = firstAccelValue;
      while (nextAccelValue == firstAccelValue) {
          // do something
          nextAccelValue = globalAccel; // note globalAccel is updated by the accelerometer method
      }

    The above loop never exits, expectedly, since the accelerometer:didAccelerate: method is not getting called and hence globalAccel never changes value. If I use a fixed condition to break the while loop, I can see that after the loop ends the method calls work fine again. Am I missing something obvious here? Or does the accelerometer method not fire when certain processing is being done? Any help would be greatly appreciated! Thanks!

    Read the article

  • Compilation error while compiling an existing code base

    - by brijesh
    Hi, while building an existing code base on Mac OS using its native build setup, I am getting some strange basic errors during the compilation phase:

      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/bits/locale_facets.h: In constructor 'std::collate_byname<_CharT>::collate_byname(const char*, size_t)':
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/bits/locale_facets.h:1072: error: '_M_c_locale_collate' was not declared in this scope
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/ppc-darwin/bits/messages_members.h: In constructor 'std::messages_byname<_CharT>::messages_byname(const char*, size_t)':
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/ppc-darwin/bits/messages_members.h:79: error: '_M_c_locale_messages' was not declared in this scope
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits: At global scope:
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:897: error: 'float __builtin_huge_valf()' cannot appear in a constant-expression
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:897: error: a function call cannot appear in a constant-expression
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:897: error: 'float __builtin_huge_valf()' cannot appear in a constant-expression
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:897: error: a function call cannot appear in a constant-expression
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:899: error: 'float __builtin_nanf(const char*)' cannot appear in a constant-expression
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:899: error: a function call cannot appear in a constant-expression
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:899: error: 'float __builtin_nanf(const char*)' cannot appear in a constant-expression
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:899: error: a function call cannot appear in a constant-expression
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:900: error: field initializer is not constant
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:915: error: field initializer is not constant

    Read the article

  • How do I initialize the vector I have defined in my header file?

    - by FrankTheTank
    I have the following in my Puzzle.h:

      class Puzzle
      {
        private:
          vector<int> puzzle;
        public:
          Puzzle() : puzzle (16) {}
          bool isSolved();
          void shuffle(vector<int>& );
      };

    and then my Puzzle.cpp looks like:

      Puzzle::Puzzle()
      {
        // Initialize the puzzle (0,1,2,3,...,14,15)
        for(int i = 0; i <= puzzle.size(); i++)
        {
          puzzle[i] = i;
        }
      }

      // ... other methods

    Am I using the initializer list wrong in my header file? I would like to define a vector of ints and initialize its size to 16. How should I do this? G++ output:

      Puzzle.cpp:16: error: expected unqualified-id before ')' token
      Puzzle.cpp: In constructor `Puzzle::Puzzle()':
      Puzzle.cpp:16: error: expected `)' at end of input
      Puzzle.cpp:16: error: expected `{' at end of input
      Puzzle.cpp: At global scope:
      Puzzle.cpp:24: error: redefinition of `Puzzle::Puzzle()'
      Puzzle.cpp:16: error: `Puzzle::Puzzle()' previously defined here

    Read the article

  • Best Practice: QT4 QList<Mything*>... on Heap, or QList<Mything> using reference?

    - by Mike Crowe
    Hi folks, I'm learning C++, so be gentle :) ... I have been designing my application primarily using heap variables (coming from C), so I've designed structures like this:

      QList<Criteria*> _Criteria;
      // ...
      Criteria *c = new Criteria(....);
      _Criteria.append(c);

    All through my program, I'm passing pointers to a specific Criteria, or often the list. So, I have functions declared like this:

      QList<Criteria*> Decision::addCriteria(int row, QString cname, QString ctype);
      Criteria * Decision::getCriteria(int row, int col)

    which insert a Criteria into a list, and return the list so my GUI can display it. I'm wondering if I should have used references somehow. Since I'm always wanting that exact Criteria back, should I have done:

      QList<Criteria> _Criteria;
      // ....
      Criteria c(....);
      _Criteria.append(c);
      ...
      QList<Criteria>& Decision::addCriteria(int row, QString cname, QString ctype);
      Criteria& Decision::getCriteria(int row, int col)

    (not sure if the latter line is syntactically correct yet, but you get the drift). All these items are specific, quasi-global items that are the core of my program. So, the question is this: I can certainly allocate/free all my memory w/o an issue in the method I'm using now, but is there a more C++ way? Would references have been a better choice (it's not too late to change on my side)? TIA, Mike

    Read the article

  • When and how should independent hierarchies be used in clojure?

    - by Rob Lachlan
    Clojure's system for creating an ad hoc hierarchy of keywords is familiar to most people who have spent a bit of time with the language. For example, most demos and presentations of the language include examples such as (derive ::child ::parent), and they go on to show how this can be used for multi-method dispatch. In all of the slides and presentations that I've seen, they use the global hierarchy. But it is possible to put keyword relationships in independent hierarchies, by using (derive h ::child ::parent), where h is created by (make-hierarchy). Some questions, therefore:

    1. Are there any guidelines on when this is useful or necessary?
    2. Are there any functions for manipulating hierarchies? Merging is particularly useful, so I do this:

      (defn merge-h [& hierarchies]
        (apply merge-with (cons #(merge-with clojure.set/union %1 %2) hierarchies))

    But I was wondering if such functions already exist somewhere. EDIT: Changed "custom" hierarchy to "independent" hierarchy, since that term better describes this animal. Also, I've done some research and included my own answer below. Further comments are welcome.

    Read the article

  • Flex - Flash Builder Design View not showing embedded fonts correctly

    - by Crusader
    Tools: AIR 2, Flex SDK 4.1, Flash Builder, Halo component set (NO Spark being used at all). Fonts work perfectly when running the application, but not in Design view. This effectively makes Design view WORTHLESS, because it's impossible to properly position components or labels without knowing their true size (it changes depending on the font...). The CSS is included in the root component like so:

      <fx:Style source="style.css"/>

    CSS file:

      /* CSS file */
      @namespace mx "library://ns.adobe.com/flex/mx";

      global {
        font-family:Segoe;
        font-size:14;
        color:#FFFFFF;
      }

      mx|Application, mx|VBox, mx|HBox, mx|Canvas {
        font-family:Segoe;
        background-color:#660000;
        border-color:#222277;
        color:#FFFFFF;
      }

      mx|Button {
        font-family:Segoe;
        fill-colors:#660000, #660000, #660000, #660000;
        color:#FFFFFF;
      }

    Interestingly (or buggily?), when I try pasting the style tag into a subcomponent (lower than the top-level container), I get a bunch of warnings in the subcomponent editor view stating that CSS type selectors are not supported in... (all the components in the style sheet). Yet they do work when the application is executed. Huh? This is how I'm embedding the fonts in the root-level container:

      [Embed(source="/assets/segoepr.ttf", fontName="Segoe", mimeType="application/x-font-truetype", embedAsCFF='false')]
      public static const font:Class;

      [Embed(source="/assets/segoeprb.ttf", fontName="Segoe", mimeType="application/x-font-truetype", fontWeight="bold", embedAsCFF='false')]
      public static const font2:Class;

    So, is there a way to get embedded fonts to work in Design view, or what?

    Read the article

  • Process Close does not close a process when performed in a SetTimer label

    - by NbdNnm
    The script below creates a child process to perform FileExist() separately from the main script, so that it avoids halting the script when an unknown network path is passed. However, the command "Process, Close" does not close the given process, even though its ErrorLevel indicates it succeeded in closing it. Task Manager shows the child process still exists.

      FileExistTimeout() ; this checks if the necessary parameters are passed

      ; this checks if the given path exists by creating a child process with the given timeout
      PathToCheck := "\\1.2.3.4\test"
      msgbox, 64, Test Result, % FileExistTimeout(PathToCheck) "`n" ErrorLevel

      FileExistTimeout(strPath="", nTimeout=1000, strSwitch="/fe") {
          global
          strParam1 = %1%
          strParam2 = %2%
          ; if the function is used to check the script parameters
          if (strPath="") {
              if (strParam2 = strSwitch) && strParam1 ; this means the function is placed at the beginning of the script.
                  ExitApp % (FileExist(strParam1) != "") + 1 ; Return 1 or 2.
              Return
          }
          ; create a child process to check the path
          tooltip, RunWait is performing....
          SetTimer, FileExistTimeout, % -1 * nTimeout
          RunWait "%A_AhkPath%" "%A_ScriptFullPath%" "%strPath%" "%strSwitch%",,, nPID
          SetTimer, FileExistTimeout, Off ; Disable the timer (if it hasn't fired already).
          tooltip
          ; ErrorLevel contains the exit code of the process (from RunWait).
          if ErrorLevel = 2 ; found
              return true
          else if ErrorLevel = 1 ; not found
              return
          return ; timeout

      FileExistTimeout:
          Process Close, %nPID% ; Timed out - terminate the process.
          tooltip, % "Process Closed: " ErrorLevel "`n" "Used Pid: " nPID,,, 2
          return
      }

    Read the article

  • How can I display a language according to the user's browser's language inside this code?

    - by janoChen
    How can I display a language according to the user's browser's language inside this mini-framework for my multilingual website? Basically, it has to display the user's default language if there are no cookies. Example of index.php (rendered output):

      <h2><?php echo l('tagline_h2'); ?></h2>

    common.php (controls which language to output):

      <?php
      session_start();
      header('Cache-control: private'); // IE 6 FIX

      if(isSet($_GET['lang'])) {
          $lang = $_GET['lang'];
          // register the session and set the cookie
          $_SESSION['lang'] = $lang;
          setcookie("lang", $lang, time() + (3600 * 24 * 30));
      } else if(isSet($_SESSION['lang'])) {
          $lang = $_SESSION['lang'];
      } else if(isSet($_COOKIE['lang'])) {
          $lang = $_COOKIE['lang'];
      } else {
          $lang = 'en';
      }

      // use the appropriate lang.xx.php file according to the value of $lang
      switch ($lang) {
          case 'en':
              $lang_file = 'lang.en.php';
              break;
          case 'es':
              $lang_file = 'lang.es.php';
              break;
          case 'tw':
              $lang_file = 'lang.tw.php';
              break;
          case 'cn':
              $lang_file = 'lang.cn.php';
              break;
          default:
              $lang_file = 'lang.en.php';
      }

      // translation helper function
      function l($translation) {
          global $lang;
          return $lang[$translation];
      }

      include_once 'languages/'.$lang_file;
      ?>

    Example of /languages/lang.en.php (where the multilingual content is stored):

      <?php
      $lang = array(
          'tagline_h2' => '...',

    Read the article

  • IUsable: controlling resources in a better way than IDisposable

    - by Ilya Ryzhenkov
    I wish we have "Usable" pattern in C#, when code block of using construct would be passed to a function as delegate: class Usable : IUsable { public void Use(Action action) // implements IUsable { // acquire resources action(); // release resources } } and in user code: using (new Usable()) { // this code block is converted to delegate and passed to Use method above } Pros: Controlled execution, exceptions The fact of using "Usable" is visible in call stack Cons: Cost of delegate Do you think it is feasible and useful, and if it doesn't have any problems from the language point of view? Are there any pitfalls you can see? EDIT: David Schmitt proposed the following using(new Usable(delegate() { // actions here }) {} It can work in the sample scenario like that, but usually you have resource already allocated and want it to look like this: using (Repository.GlobalResource) { // actions here } Where GlobalResource (yes, I know global resources are bad) implements IUsable. You can rewrite is as short as Repository.GlobalResource.Use(() => { // actions here }); But it looks a little bit weird (and more weird if you implement interface explicitly), and this is so often case in various flavours, that I thought it deserve to be new syntactic sugar in a language.

    Read the article

  • Problem with Unit testing of ASP.NET project (NullReferenceException when running the test)

    - by Alex
    Hi, I'm trying to create a bunch of MS Visual Studio unit tests for my n-tiered web app, but for some reason I can't run those tests and I get the following error: "Object reference not set to an instance of an object". What I'm trying to do is test my data access layer, where I use a LINQ data context class to execute a certain function and return a result. However, during debugging I found out that all the tests fail as soon as they get to the LINQ data context class, and it has something to do with the connection string, but I can't figure out what the problem is. The debugging of tests fails here (the second line):

      public EICDataClassesDataContext() :
          base(global::System.Configuration.ConfigurationManager.ConnectionStrings["EICDatabaseConnectionString"].ConnectionString, mappingSource)
      {
          OnCreated();
      }

    And my test is as follows:

      [TestMethod()]
      public void OnGetCustomerIDTest()
      {
          FrontLineStaffDataAccess target = new FrontLineStaffDataAccess(); // TODO: Initialize to an appropriate value
          string regNo = "jonh"; // TODO: Initialize to an appropriate value
          int expected = 10; // TODO: Initialize to an appropriate value
          int actual;
          actual = target.OnGetCustomerID(regNo);
          Assert.AreEqual(expected, actual);
      }

    The method which I call from the DAL is:

      public int OnGetCustomerID(string regNo)
      {
          using (LINQDataAccess.EICDataClassesDataContext dataContext = new LINQDataAccess.EICDataClassesDataContext())
          {
              IEnumerable<LINQDataAccess.GetCustomerIDResult> sProcCustomerIDResult = dataContext.GetCustomerID(regNo);
              int customerID = sProcCustomerIDResult.First().CustomerID;
              return customerID;
          }
      }

    So basically everything fails after it reaches the first line of the DAL method, when it tries to instantiate the LINQ data access class... I've spent around 10 hours trying to troubleshoot the problem but with no result... I would really appreciate any help! UPDATE: Finally I've fixed this! I don't know why, but for some reason in the app.config file the connection to my database was as follows: AttachDbFilename=|DataDirectory|\EICDatabase.MDF. So what I did is I just changed the path, and instead of |DataDirectory| I put the actual path where my MDF file sits, i.e. C:\Users\1\Documents\Visual Studio 2008\Projects\EICWebSystem\EICWebSystem\App_Data\EICDatabase.mdf. After I had done that, it worked! But it's still a bit unclear what the problem was... probably an incorrect path to the database? The web.config of my ASP.NET project contains the |DataDirectory|\EICDatabase.MDF path, though.
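    A related workaround sometimes used in this situation is to bypass the config lookup in the test and hand the generated context an explicit connection string (LINQ to SQL also generates a constructor overload that takes one). A rough sketch; the test class name, expected values and MDF path are illustrative, not from the question.

      using System.Linq;
      using Microsoft.VisualStudio.TestTools.UnitTesting;

      [TestClass]
      public class FrontLineStaffDataAccessTests
      {
          // Hypothetical variant of the failing test: pass the connection string explicitly
          // instead of relying on the test project's config file.
          [TestMethod]
          public void OnGetCustomerIDTest_WithExplicitConnection()
          {
              string connection =
                  @"Data Source=.\SQLEXPRESS;" +
                  @"AttachDbFilename=C:\path\to\EICDatabase.mdf;" +   // adjust to the real MDF path
                  "Integrated Security=True;User Instance=True";

              using (var dataContext = new LINQDataAccess.EICDataClassesDataContext(connection))
              {
                  var customerId = dataContext.GetCustomerID("jonh").First().CustomerID;
                  Assert.AreEqual(10, customerId);
              }
          }
      }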

    Read the article

  • DNS-Based Environment Determination

    - by zvolkov
    Found the following here. The question is: where can I find more details on how exactly to implement this on Windows? Any guide or how-to? Or maybe you can provide your invaluable suggestions? Specifically, how do I make it so that "all QA servers would first resolve entries in qa.example.com first and then if that lookup failed they would try example.com"? (I'm a dev, not a DNS specialist, but our IT support has refused to help with this.)

      Use DNS Based Environment Determination for your servers. Do this by initially splitting your top level domain into a number of sub domains depending on their function, and then creating DNS Service Names in each of the sub domains pointing to the relevant server for that service. Based on the list above we would then have:

        * clientdb.prod.example.com for Production
        * clientdb.perf.example.com for Performance Testing
        * clientdb.qa.example.com for QA
        * clientdb.dev.example.com for Development

      Servers then resolve entries in their relevant sub domain by function. That is, all QA servers would first resolve entries in qa.example.com first and then if that lookup failed they would try example.com. This allows you to have a single configuration entry for your client database hostname (clientdb) that would resolve correctly in all environments. This technique has the added advantage of still having global services defined in a common top level domain.

    Here's one related (but not equivalent) SO question: http://stackoverflow.com/questions/774490/dns-resolving-based-on-client-ip This seems to be related to providing "split horizon" DNS service. Reading that, I see that I will probably need a separate DNS server for each environment. Is this true, or does Windows support some form of "tagging" the records to be visible depending on the requestor's IP? Also, cross-posted on ServerFault.

    Read the article

  • Need to search email body for regex using VBA

    - by user6620
    I am trying to write a script that I can run from a rule within Outlook. The goal of this script is to search the email body for the regex pattern \d{8}-\d{3}\(E\d\). I want to take that pattern and see if a folder with the name of the match exists, and if not create one. I am not super familiar with VBA, as I write mostly in Python. I have copied and pasted the following bits of code together, but at this time I am unable to get the MsgBox to appear when I send an email with 20120812-001(E3) in the body. I have two different versions below.

    Version 2:

      Sub Filter(Item As Outlook.MailItem)
          Dim Matches, Match
          Dim RegEx As New RegExp
          RegEx.IgnoreCase = True
          RegEx.Pattern = "\d{8}-\d{3}(E\d)"
          If RegEx.Test(Item.Body) Then
              MsgBox "Pattern Detected"
          End If
      End Sub

    Version 1:

      Sub ProcessMessage(myMail As Outlook.MailItem)
          Dim strID As String
          Dim objNS As Outlook.NameSpace
          Dim objMsg As Outlook.MailItem
          Dim objMatch As Match
          Dim RetStr As String
          Set objRegExp = New RegExp
          objRegExp.Pattern = "\d{8}-\d{3}\(E\d\)"
          objRegExp.IgnoreCase = True
          objRegExp.Global = True
          strID = myMail.EntryID
          Set objNS = Application.GetNamespace("MAPI")
          Set objMsg = objNS.GetItemFromID(strID)
          MsgBox objMsg.Body
          Set objMatch = objRegExp.Execute(objMsg.Body)
          MsgBox objMatch
      End Sub

    Read the article

  • https google urlshortener request missing body

    - by Peter
    Hi, I'm just trying to get the new API for the goo.gl URL shortening service working on my iPhone, following the instructions on http://code.google.com/apis/urlshortener/v1/getting_started.html. I'm set up and the API is enabled etc., but when I send a request in the recommended format:

      POST https://www.googleapis.com/urlshortener/v1/url
      Content-Type: application/json

      {"longUrl": "http://www.google.com/"}

    I get an error returned. The error is exactly the one listed on that page in the errors section for when you haven't passed in a longUrl param. This makes me think that I'm not setting up the body of the POST request properly. Here's the code if you have any pointers:

      NSString *longURLString=@"http://www.stackoverflow.com";
      NSString *googlRequestString=@"https://www.googleapis.com/urlshortener/v1/url";
      NSMutableURLRequest *request=[NSMutableURLRequest requestWithURL:[NSURL URLWithString:googlRequestString]];
      [request setHTTPMethod:@"POST"];
      [request addValue:@"application/json" forHTTPHeaderField:@"Content-Type:"];
      NSString *bodyString=[NSString stringWithFormat:@"{\"longUrl\": \"%@\"}",longURLString];
      [request setHTTPBody:[bodyString dataUsingEncoding:NSUTF8StringEncoding]];
      NSURLResponse *theResponse;
      NSError *error=nil;
      NSData *receivedData=[NSURLConnection sendSynchronousRequest:request returningResponse:&theResponse error:&error];
      NSString *receivedString=[[NSString alloc] initWithData:receivedData encoding:NSUTF8StringEncoding];
      NSLog(@"Received data: %@",receivedString);
      [receivedString release];

    The NSLog returns:

      Received data: {
        "error": {
          "errors": [
            {
              "domain": "global",
              "reason": "required",
              "message": "Required",
              "locationType": "parameter",
              "location": "resource.longUrl"
            }
          ],
          "code": 400,
          "message": "Required"
        }
      }

    which is exactly what Google says you get if you have not passed a longUrl parameter... My guess is I'm missing something very obvious here :-) P

    Read the article
