Search Results

Search found 5378 results on 216 pages for 'spell checking'.

Page 171/216

  • Drupal Views API: add a simple argument handler

    - by LanguaFlash
    Background: I have a complex search form that stores the query and its hash in a cache. Once the cache is set, I redirect to something like /searchresults/e6c86fadc7e4b7a2d068932efc9cc358, where that big long string on the end is the md5 hash of my query. I need to make a new argument for Views so it knows what the hash is good for. The reason for all this hassle is that my original search form is way too complex and has way too many arguments to consider putting them all into the path and doing the filtering with the normal Views arguments. Now for my question: I have been reading the Views 2 documentation but can't figure out how to build this custom argument. It doesn't seem like it should be as hard as it appears to be. Leaving aside any knowledge of the Views API, it would seem that all I need is a callback function that takes the argument from the path as its only argument and returns a list of node IDs to filter to. Can anyone point me to a solution or give me some example code? Thanks for your help! You guys are great. PS: I am pretty sure my design is the best I can come up with, so let's not get off my question and into cross-checking my design logic if we can help it.
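
    A rough sketch of what such a handler could look like in Views 2 (the class name, cache key and table reference below are assumptions, and the class still has to be registered via hook_views_handlers() and attached to a field in hook_views_data() before it appears as an argument in the UI):

      // mymodule_handler_argument_queryhash.inc (hypothetical name)
      class mymodule_handler_argument_queryhash extends views_handler_argument {
        function query() {
          // The argument is the md5 hash from the path; the matching node IDs
          // are assumed to be stored in the cache under 'search_<hash>'.
          $cached = cache_get('search_' . $this->argument);
          $nids = ($cached && !empty($cached->data)) ? $cached->data : array(0);
          // Restrict the view to exactly those nodes.
          $this->query->add_where(0, 'node.nid IN (' . implode(',', array_map('intval', $nids)) . ')');
        }
      }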

    Read the article

  • Zend_Form using subforms getValues() problem

    - by wiseguydigital
    Hi all, I am building a form in Zend Framework 1.9 using subforms, with Zend_JQuery enabled on those forms. The form itself is fine and all the error checking etc. works as normal. But the issue I am having is that when I try to retrieve the values in my controller, I receive only the form entry for the last subform. E.g. my master form class (abbreviated for speed): class Master_Form extends Zend_Form { public function init() { ZendX_JQuery::enableForm($this); $this->setAction('actioninhere') ... ->setAttrib('id', 'mainForm'); $sub_one = new Form_One(); $sub_one->setDecorators(... in here I add the jQuery as per the docs); $this->addSubForm($sub_one, 'form-one'); $sub_two = new Form_Two(); $sub_two->setDecorators(... in here I add the jQuery as per the docs); $this->addSubForm($sub_two, 'form-two'); } } So that all works as it should in the display, and when I submit without filling in the required values, the correct errors are returned. However, in my controller I have this: class My_Controller extends Zend_Controller_Action { public function createAction() { $request = $this->getRequest(); $form = new Master_Form(); if ($request->isPost()) { if ($form->isValid($request->getPost())) { // This is where I am having the problems print_r($form->getValues()); } } } } When I submit this and it gets past isValid(), $form->getValues() returns only the elements from the second subform, not the entire form.
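
    Since isValid() passes, the values are probably there but keyed differently rather than missing; a rough sketch (standard Zend_Form API, not from the post) of pulling them per subform to check:

      if ($form->isValid($request->getPost())) {
          $values = array();
          // Collect each subform's values under the name it was registered with
          // ('form-one' and 'form-two' above).
          foreach ($form->getSubForms() as $name => $subForm) {
              $values[$name] = $subForm->getValues();
          }
          print_r($values);
      }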

    Read the article

  • Advice on method overloads

    - by Muhammad Kashif Nadeem
    Please see the following methods. public static ProductsCollection GetDummyData(int? customerId, int? supplierId) { try { if (customerId != null && customerId > 0) { Filter.Add(Customers.CustomerId == customerId); } if (supplierId != null && supplierId > 0) { Filter.Add(Suppliers.SupplierId == supplierId); } ProductsCollection products = new ProductsCollection(); products.FetchData(Filter); return products; } catch { throw; } } public static ProductsCollection GetDummyData(int? customerId) { return GetDummyData(customerId, (int?)null); } public static ProductsCollection GetDummyData() { return GetDummyData((int?)null); } 1- Please advise how I can make overloads for both CustomerId and SupplierId, because only one overload can be created with GetDummyData(int?). Should I add another argument to indicate whether the first argument is CustomerId or SupplierId, for example GetDummyData(int?, string)? Or should I use an enum as the 2nd argument to say whether the first argument is CustomerId or SupplierId? 2- Is this condition correct, or is just checking > 0 sufficient: if (customerId != null && customerId > 0)? 3- Is using try/catch like this correct? 4- Is passing (int?)null correct, or is there a better approach? Edit: I have found some other posts like this, and because I have no knowledge of generics that is why I am facing this problem. Am I right? Following is the post. http://stackoverflow.com/questions/422625/overloaded-method-calling-overloaded-method
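
    On point 1, a hedged sketch of one alternative to a marker argument: give the single-parameter entry points distinct names (the names below are illustrative) and let both delegate to the two-parameter method:

      public static ProductsCollection GetDummyDataByCustomer(int customerId)
      {
          // Only the customer filter is applied.
          return GetDummyData(customerId, null);
      }

      public static ProductsCollection GetDummyDataBySupplier(int supplierId)
      {
          // Only the supplier filter is applied.
          return GetDummyData(null, supplierId);
      }

    On point 2, with int? the comparison customerId > 0 already evaluates to false when the value is null, so the explicit null check is redundant; and on point 3, catch { throw; } adds nothing over simply letting the exception propagate.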

    Read the article

  • Can I specify the files to commit in subversion in a file rather than on the command line?

    - by René Nyffenegger
    I have renamed (with svn move) a lot of files in a Subversion project. Now I am trying to commit these from Windows' cmd.exe. It seems that I have hit a limit (probably imposed by cmd.exe) in that the list of files is too long for the command line to swallow. So I thought and hoped that I could list the files to commit in a separate file that I could pass to the commit command (something like svn ci --files-to-commit=renamed-files.txt -m "Renamed a lot of files"). Yet either such an option does not exist or I am unable to find it. Unfortunately, I cannot do svn ci . as I have made other changes in the project as well. Neither can I do svn ci *pattern-of-renamed-files*, since this would only check in the added files, not the deleted ones. Before I start checking in the files in smaller chunks (and thus increase the revision number unnecessarily without giving a hint as to the 'atomicity' of the operation), I thought I would ask whether this is indeed impossible to do.
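
    For what it's worth, svn does have an option for exactly this: commit targets can be read from a file, one path per line, with --targets (a sketch; list both the new and the old names, since each move produces an add plus a delete):

      svn ci --targets renamed-files.txt -m "Renamed a lot of files"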

    Read the article

  • Port Win32 DLL hook to Linux

    - by peachykeen
    I have a program (NWShader) which hooks into a second program's OpenGL calls (NWN) to do post-processing effects and whatnot. NWShader was originally built for Windows, generally modern versions (win32), and uses both DLL exports (to get Windows to load it and grab some OpenGL functions) and Detours (to hook into other functions). I'm using the trick where Windows will look in the current directory for any DLLs before checking the system directory, so it loads mine. I have one DLL that redirects with this method: #pragma comment(linker, "/export:oldFunc=nwshader.newFunc") to send calls to a differently named function in my own DLL. I then do any processing and call the original function from the system DLL. I need to port NWShader to Linux (NWN exists in both flavors). As far as I can tell, what I need to make is a shared library (.so file). If this is preloaded before the NWN executable (I found a shell script to handle this), my functions will be called. The only problem is that I need to call the original function (I would use various dynamic loading methods for this, I think) and need to be able to do Detours-like hooking of internal functions. At the moment I'm building on Ubuntu 9.10 x64 (with the 32-bit compiler flags). I haven't been able to find much on Google to help with this, but I don't know exactly what the *nix community calls it. I can code C++, but I'm more used to Windows. Being OpenGL, the only parts that need to be modified to be compatible with Linux are the hooking code and the calls. Is there a simple and easy way to do this, or will it involve recreating Detours and dynamically loading the original function addresses?
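
    The usual *nix name for this is symbol interposition via LD_PRELOAD, with dlsym(RTLD_NEXT, ...) to reach the original. A minimal sketch (the hooked function and file names are only examples):

      /* nwshader_hook.c - build: gcc -m32 -shared -fPIC -o nwshader.so nwshader_hook.c -ldl */
      #define _GNU_SOURCE
      #include <dlfcn.h>
      #include <GL/gl.h>

      /* Because the .so is preloaded, this definition wins over libGL's. */
      void glViewport(GLint x, GLint y, GLsizei width, GLsizei height)
      {
          static void (*real_glViewport)(GLint, GLint, GLsizei, GLsizei) = 0;
          if (!real_glViewport)   /* look up the next (real) definition once */
              real_glViewport = (void (*)(GLint, GLint, GLsizei, GLsizei))
                                    dlsym(RTLD_NEXT, "glViewport");

          /* ... post-processing setup would go here ... */
          real_glViewport(x, y, width, height);
      }

    Run with LD_PRELOAD=./nwshader.so ./nwn (binary name illustrative). This covers exported functions; hooking internal, non-exported functions still needs Detours-style inline patching, for which there is no drop-in equivalent.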

    Read the article

  • Help making a userscript work in Chrome

    - by Vishal Shah
    I've written a userscript for Gmail, Pimp.my.Gmail, and I'd like it to be compatible with Google Chrome too. I have tried a couple of things, to the best of my JavaScript knowledge (which is very weak), and have been successful up to a certain extent, though I'm not sure if it's the right way. Here's what I tried to make it work in Chrome: The very first thing I found is that contentWindow.document doesn't work in Chrome, so I tried contentDocument, which works. BUT I noticed one thing: checking the console messages in Firefox and Chrome, I saw that the script gets executed multiple times in Firefox, whereas in Chrome it executes just once! So I had to abandon the window.addEventListener('load', init, false); line and replace it with window.setTimeout(init, 5000); and I'm not sure if this is a good idea. The other thing I tried is keeping the window.addEventListener('load', init, false); line and using window.setTimeout(init, 1000); inside init() in case the canvas frame is not found. So please do let me know what would be the best way to make this script cross-browser compatible. Oh, and I'm all ears for making this script better/more efficient code-wise (which I'm sure it could be).
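
    Instead of a fixed 5-second delay, a common pattern is to poll until the frame you need actually exists and then run init() once (a sketch, not from the script; the element id is a placeholder):

      function waitForGmail() {
          var canvas = document.getElementById("canvas_frame");   // placeholder id
          if (canvas && canvas.contentDocument) {
              init();                                  // target is ready, run once
          } else {
              window.setTimeout(waitForGmail, 250);    // not there yet, retry shortly
          }
      }
      window.addEventListener("load", waitForGmail, false);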

    Read the article

  • .NET 4.0 Implementing OutputCacheProvider

    - by azamsharp
    I am checking out the OutputCacheProvider in ASP.NET 4.0 and using it to store my output cache in a MongoDB database. I am not able to understand the purpose of the Add method, which is one of the override methods of OutputCacheProvider. The Add method is invoked when you have VaryByParam set to something. So, if I have VaryByParam = "id" then the Add method will be invoked. But after Add, Set is also invoked, and I can insert into the MongoDB database inside the Set method. public override void Set(string key, object entry, DateTime utcExpiry) { // if there is something in the query, use the path and query to generate the key var url = HttpContext.Current.Request.Url; if (!String.IsNullOrEmpty(url.Query)) { key = url.PathAndQuery; } Debug.WriteLine("Set(" + key + "," + entry + "," + utcExpiry + ")"); _service.Set(new CacheItem() { Key = MD5(key), Item = entry, Expires = utcExpiry }); } Inside the Set method I use PathAndQuery to get the query string parameters, then do an MD5 of the key and save it into the MongoDB database. It seems like the Add method would be useful if I were doing something like VaryByParam = "custom". Can anyone shed some light on the Add method of OutputCacheProvider?
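
    Add is usually described as "add or get existing": insert only if the key is not cached yet, and return whatever ends up stored, so concurrent requests agree on a single entry. A hedged sketch against the same hypothetical _service used above (its Get method is an assumption):

      public override object Add(string key, object entry, DateTime utcExpiry)
      {
          // If another request already cached this key, return the existing entry
          // untouched; only otherwise store and return the new one.
          var existing = _service.Get(MD5(key));
          if (existing != null)
          {
              return existing.Item;
          }
          Set(key, entry, utcExpiry);
          return entry;
      }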

    Read the article

  • Why doesn't WinMain set the ERRORLEVEL?

    - by Brian R. Bondy
    int APIENTRY _tWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPTSTR lpCmdLine, int nCmdShow) { MessageBox(NULL, _T("This should return 90 no?"), _T("OK"), MB_OK); return 90; } Why does the above program correctly display the message box, but not set the error level? I compile this program as a.exe. Then from a command prompt I type: c:\> a.exe (message box is displayed, I press OK) c:\> echo %ERRORLEVEL% 0 I get the same result if I call exit(90); right before the return - it still says 0. I also tried to start the program via CreateProcess and obtain the result with GetExitCodeProcess, but it also returns 0 to me. I did error checking to ensure it was all started correctly. I originally saw this problem in a more complex program, but made this simple program to verify it. The results are the same: both programs that have WinMain always return 0. I tried x64 and x86, and both Unicode and MBCS compile options. All give 0 as the error level/status code.
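
    One thing worth ruling out (a guess, not something the post confirms): cmd.exe does not wait for GUI-subsystem programs before printing the next prompt, so the exit code of a WinMain application may simply never reach %ERRORLEVEL% when the exe is launched directly. Forcing a wait shows whether the code is actually being returned:

      start /wait a.exe
      echo %ERRORLEVEL%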

    Read the article

  • What is the best way to use Guice and JMock together?

    - by Yishai
    I have started using Guice to do some dependency injection on a project, primarily because I need to inject mocks (using JMock currently) a layer away from the unit test, which makes manual injection very awkward. My question is what is the best approach for introducing a mock? What I currently have is to make a new module in the unit test that satisfies the dependencies and bind them with a provider that looks like this: public class JMockProvider<T> implements Provider<T> { private T mock; public JMockProvider(T mock) { this.mock = mock; } public T get() { return mock; } } Passing the mock in the constructor, so a JMock setup might look like this: final CommunicationQueue queue = context.mock(CommunicationQueue.class); final TransactionRollBack trans = context.mock(TransactionRollBack.class); Injector injector = Guice.createInjector(new AbstractModule() { @Override protected void configure() { bind(CommunicationQueue.class).toProvider(new JMockProvider<QuickBooksCommunicationQueue>(queue)); bind(TransactionRollBack.class).toProvider(new JMockProvider<TransactionRollBack>(trans)); } }); context.checking(new Expectations() {{ oneOf(queue).retrieve(with(any(int.class))); will(returnValue(null)); never(trans); }}); injector.getInstance(RunResponse.class).processResponseImpl(-1); Is there a better way? I know that AtUnit attempts to address this problem, although I'm missing how it auto-magically injects a mock that was created locally like the above, but I'm looking for either a compelling reason why AtUnit is the right answer here (other than its ability to change DI and mocking frameworks around without changing tests) or if there is a better solution to doing it by hand.
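
    If the provider's only job is to hand back an existing object, Guice can do that directly with toInstance, which removes the JMockProvider class entirely (a sketch, assuming nothing else needs the provider indirection):

      Injector injector = Guice.createInjector(new AbstractModule() {
          @Override
          protected void configure() {
              // Bind the dependencies straight to the locally created mocks.
              bind(CommunicationQueue.class).toInstance(queue);
              bind(TransactionRollBack.class).toInstance(trans);
          }
      });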

    Read the article

  • Has form post behavior changed in modern browsers? (Or: how are double clicks handled by the browser?)

    - by Alex Czarto
    Background: We are in the process of writing a registration/payment page, and our philosophy was to code all validation and error checking on the server side first, and then add client-side validation as a second step (unobtrusive jQuery). We wanted to disable double clicks server side, so we wrote some locking, thread-safe code to handle simultaneous posts/race conditions. When we tried to test this, we realized that we could not cause a simultaneous post or race condition to occur. I thought that (in older browsers anyway) double clicking a submit button worked as follows: User double clicks submit button. Browser sends a post on the first click. On the second click, browser cancels/ignores initial post, and initiates a second post (before the first post has returned with a response). Browser waits for second post to return, ignoring initial post response. I thought that from the server side it looked like this: Server gets two simultaneous post requests, executes and responds to them both (unaware that no one is listening to the first response). From our testing (Firefox 3.0, IE 8.0) this is what actually happens: User double clicks submit button. Browser sends a post for the first click. Browser queues up second click, but waits for the response from the first click. Response returns from first click (response is ignored?). Browser sends a post for the second click. So from the server side: Server receives a single post which it executes and responds to. Then, server receives a second request which it executes and responds to. My question is, has this always worked this way (and I'm losing my mind)? Or is this a new feature in modern browsers that prevents simultaneous posts from being sent to the server? It seems that for server-side double-click prevention, we don't have to worry about simultaneous posts or race conditions - only about queued-up posts. Thanks in advance for any feedback / comments. Alex
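
    Not an answer to the browser-behavior question, but since the page already uses jQuery, a client-side guard that makes the queued second post moot is only a few lines (a sketch; the form id is a placeholder):

      $("#paymentForm").submit(function () {
          // Ignore any further submits once the first one is on its way.
          if ($(this).data("submitted")) {
              return false;
          }
          $(this).data("submitted", true);
      });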

    Read the article

  • Registry remotely hacked on Win 7 - need help tracking the perp

    - by user577229
    I was writing some .VBS code at the office that would allow certain file extensions to be downloaded without a warning dialog on a W7 x32 system. The system I was writing this on is in a lab on a segmented subnet. All web access is via a proxy server. The only means of accessing my machine is via the internet or from within the lab's MSFT AD domain. While writing and testing my code I found a message of sorts. Upon refreshing the registry to verify my code had changed a dword, instead the message HELLO was written and visible in regedit where the dword value was called for. I took a screen shot and proceeded to edit my code. This same weird behavior occurred the last time I was writing registry code, except on another internal server. I understand that remote registry access exists for Windows systems. I will block this immediately once I return to the office. What I want to know is: can I trace who made this connection? How would I do this? I suspect the cause of this is the cause of other "odd" behaviors I'm experiencing at work, such as losing control of my Input Director master control for over an hour, and unchanged code that all of a sudden fails for no logical reason. These failures occur at funny times, whenever I'm about to give a demonstration of my test code. I know this sounds crazy, however knowledge of the registry component makes this believable. Once the registry can be accessed, the entire system is compromised. Any help or sanity checking is appreciated.

    Read the article

  • Can't install do_mysql gem?

    - by maccy1
    I'm trying to install the do_mysql gem on my Snow Leopard system (MacBook Pro 13"), but I keep getting this error: n216-160:~ myself$ sudo gem1.9 install do_mysql Password: Building native extensions. This could take a while... ERROR: Error installing do_mysql: ERROR: Failed to build gem native extension. /opt/local/bin/ruby1.9 extconf.rb checking for mysql_query() in -lmysqlclient... no *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/opt/local/bin/ruby1.9 --with-mysql-config --without-mysql-config --with-mysql-dir --without-mysql-dir --with-mysql-include --without-mysql-include=${mysql-dir}/include --with-mysql-lib --without-mysql-lib=${mysql-dir}/lib --with-mysqlclientlib --without-mysqlclientlib Gem files will remain installed in /opt/local/lib/ruby1.9/gems/1.9.1/gems/do_mysql-0.10.0 for inspection. Results logged to /opt/local/lib/ruby1.9/gems/1.9.1/gems/do_mysql-0.10.0/ext/do_mysql_ext/gem_make.out n216-160:~ myself$ I have no idea why. I also reinstalled my version of MySQL with the MySQL 5.4.3 beta, 64-bit, as others suggested, but no dice. Does anyone have any idea what is wrong?
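
    The extconf output lists --with-mysql-config among the accepted options; the usual fix is to pass it through gem so the native build can find the 64-bit client library (the path below is a guess for a standard MySQL.com package, so adjust it to wherever mysql_config actually lives):

      sudo gem1.9 install do_mysql -- --with-mysql-config=/usr/local/mysql/bin/mysql_config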

    Read the article

  • How do I use the sed command to remove all lines between 2 phrases (including the phrases themselves)?

    - by fzkl
    I am generating a log from which I want to remove X startup output which looks like this: X.Org X Server 1.7.6 Release Date: 2010-03-17 X Protocol Version 11, Revision 0 Build Operating System: Linux 2.6.31-607-imx51 armv7l Ubuntu Current Operating System: Linux nvidia 2.6.33.2 #1 SMP PREEMPT Mon May 31 21:38:29 PDT 2010 armv7l Kernel command line: mem=448M@0M nvmem=64M@448M mem=512M@512M chipuid=097c81c6425f70d7 vmalloc=320M video=tegrafb console=ttyS0,57600n8 usbcore.old_scheme_first=1 tegraboot=nand root=/dev/nfs ip=:::::usb0:on rw tegra_ehci_probe_delay=5000 smp dvfs tegrapart=recovery:1b80:a00:800,boot:2680:1000:800,environment:3780:40:800,system:38c0:2bc00:800,cache:2f5c0:4000:800,userdata:336c0:c840:800 envsector=3080 Build Date: 23 April 2010 05:19:26PM xorg-server 2:1.7.6-2ubuntu7 (Bryce Harrington <[email protected]>) Current version of pixman: 0.16.4 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.0.log", Time: Wed Jun 16 19:52:00 2010 (==) Using config file: "/etc/X11/xorg.conf" (==) Using config directory: "/usr/lib/X11/xorg.conf.d" Is there any way to do this without manually checking pattern for each line?
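
    sed can delete an inclusive address range given a start pattern and an end pattern (a sketch; pick patterns that only occur on the first and last lines of the X startup block):

      sed '/^X\.Org X Server/,/^(==) Using config directory/d' mylog.txt > cleaned.txt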

    Read the article

  • Reading Code - helpful visualizers and browser tools

    - by wishi_
    Hi! I find myself reading 10 times more code than writing it. My IDEs are all optimized to make me edit code - with completion, code assist, outlines etc. However, if I'm checking out a completely new project, getting into the application's logic isn't helped much by these IDE features, because I cannot extend what I don't fully understand. If you, for example, check out a relatively new project, frama-c, you'll see that it has plugins that help you gain insight into unfamiliar code: http://frama-c.com/plugins.html - though of course that project has a different scope, which I'm fully aware of. I'm looking for something that helps with code reading. Like: providing a graph, e.g. reverse-engineered UML; showing variable scopes; showing which parts are affected by attempted modifications; visualizing data-flow semantics; showing tag lists of heavily used functions... My hope is that something like that exists - that there are some Eclipse plugins I don't know of, or that there's a code browser with some of these features?

    Read the article

  • Problem when getting pageContent of an unavailable URL in Java

    - by tiendv
    I have code to get the page content from a URL: import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; import java.net.URL; import java.net.URLConnection; public class GetPageFromURLAction extends Thread { public String stringPageContent; public String targerURL; public String getPageContent(String targetURL) throws IOException { String returnString=""; URL urlString = new URL(targetURL); URLConnection openConnection = urlString.openConnection(); String temp; BufferedReader in = new BufferedReader(new InputStreamReader(openConnection.getInputStream())); while ((temp = in.readLine()) != null) { returnString += temp + "\n"; } in.close(); // String nohtml = sb.toString().replaceAll("\\<.*?>",""); return returnString; } public String getStringPageContent() { return stringPageContent; } public void setStringPageContent(String stringPageContent) { this.stringPageContent = stringPageContent; } public String getTargerURL() { return targerURL; } public void setTargerURL(String targerURL) { this.targerURL = targerURL; } @Override public void run() { try { this.stringPageContent=this.getPageContent(targerURL); } catch (IOException e) { e.printStackTrace(); } } } Sometimes I receive an HTTP error of 405 or 403 and the result string is null. I have tried checking permission to connect to the URL with: URLConnection openConnection = urlString.openConnection(); openConnection.getPermission() but it usually returns null. Does that mean I don't have permission to access the link? I have tried stripping off the query portion of the URL with: String nohtml = sb.toString().replaceAll("\\<.*?>",""); where sb is a StringBuilder, but it doesn't seem to strip off the whole query substring. In an unrelated question, I'd like to use threads here because I must retrieve many URLs; how can I create a multi-threaded client to improve the speed?
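
    A 403/405 from code but not from a browser is often just missing request headers; a sketch (header value illustrative) of switching to java.net.HttpURLConnection to send a User-Agent and inspect the status code instead of ending up with an empty result:

      HttpURLConnection conn = (HttpURLConnection) urlString.openConnection();
      conn.setRequestProperty("User-Agent", "Mozilla/5.0 (compatible; MyFetcher/1.0)");
      int status = conn.getResponseCode();          // a 403/405 shows up here
      if (status != HttpURLConnection.HTTP_OK) {
          System.err.println("Server refused " + targetURL + " with HTTP " + status);
      }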

    Read the article

  • How To Get the Name of the Current Procedure/Function in Delphi (As a String)

    - by Andreas Rejbrand
    Is it possible to obtain the name of the current procedure/function as a string, within a procedure/function? I suppose there would be some "macro" that is expanded at compile-time. My scenario is this: I have a lot of procedures that are given a record and they all need to start by checking the validity of the record, and so they pass the record to a "validator procedure". The validator procedure raises an exception if the record is invalid, and I want the message of the exception to include not the name of the validator procedure, but the name of the function/procedure that called the validator procedure (naturally). That is, I have procedure ValidateStruct(const Struct: TMyStruct; const Sender: string); begin if <StructIsInvalid> then raise Exception.Create(Sender + ': Structure is invalid.'); end; and then procedure SomeProc1(const Struct: TMyStruct); begin ValidateStruct(Struct, 'SomeProc1'); ... end; ... procedure SomeProcN(const Struct: TMyStruct); begin ValidateStruct(Struct, 'SomeProcN'); ... end; It would be somewhat less error-prone if I instead could write something like procedure SomeProc1(const Struct: TMyStruct); begin ValidateStruct(Struct, {$PROCNAME}); ... end; ... procedure SomeProcN(const Struct: TMyStruct); begin ValidateStruct(Struct, {$PROCNAME}); ... end; and then each time the compiler encounters a {$PROCNAME}, it simply replaces the "macro" with the name of the current function/procedure as a string literal.

    Read the article

  • svnsync loses revision properties although hook installed

    - by roesslerj
    Hello all! I have a pretty weird problem. We have set up an SVN mirror via a cronjob (because it needs to go from inside to outside of a firewall, so no post-commit hook is possible) and svnsync. We installed a pre-revprop-change hook just as instructed. Everything seems to work fine, except that it doesn't. E.g. when manually executing the script: # svnsync --non-interactive sync file://<path-to-mirror> --source-username <usr> --source-password <pwd> Committed revision 19817. Copied properties for revision 19817. No error, no complaints. But when checking the revision properties it says: # svnlook info <path-to-mirror> 0 # svn info -r HEAD file://<path-to-mirror> 2>&1 Path: <root-of-mirror> URL: file://<path-to-mirror> Repository Root: file://<path-to-mirror> Repository UUID: <uid> Revision: 19817 Node Kind: directory Last Changed Rev: 19817 So somehow the author and timestamp information gets lost. But we need that information for our internal processes. Since no error or warning is produced I have absolutely no idea even where to start looking. Everything is local (except for the remote master), so there are no server logs to look at. I also tried to manually recopy via svnsync copy-revprops (http://chestofbooks.com/computers/revision-control/subversion-svn/svnsync-Copy-revprops-Ref-svnsync-C-Copy-revprops.html). It says Copied properties for revision 19885. But when I query them, it's just the same. Any ideas how I could approach that problem, or even better - how to solve it? Any ideas appreciated.

    Read the article

  • Filtering most out of XML with XSL?

    - by Gnudiff
    I need to transform a lot of XML files (Fedora export) into a different kind of XML. I'm trying to do it with XSL stylesheets and checking the result with the msxsl transformer. Suppose I have an XML file like this (assuming there are actually other nodes inside AAA, OBJ, and all the other nodes), Source.XML: <DOC> <AAA> <STUFF>example</STUFF> <OBJ> <OBJVERS id="A1" CREATED="2008-02-18T13:28:08.245Z"/> <OBJVERS id="A2" CREATED="2008-02-19T10:42:41.965Z"/> <OBJVERS id="A13" CREATED="2009-03-16T12:43:11.703Z"/> </OBJ> </AAA> <FFF/> <GGG/> <DDD> <FILE /> </DDD> </DOC> which I need to look something like this (Target.XML): <MYOBJ> <ELEM>contents of OBJVERS with the biggest id OR creation date (whichever is easier to do) go here</ELEM> <IMAGE> contents of <FILE> node go here</IMAGE> </MYOBJ> The main problem I have, since I am new to XSL (and for this particular task do not have enough time to learn it properly), is that I can't understand how to tell the XSL processor not to process anything else; I keep getting output from the other nodes, for example. Update: basically, I solved this problem meanwhile. I will post my own answer and close the question. Update2: OK, Andrew's answer works, too, so I am just accepting it. :)
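
    The stray output comes from XSLT's built-in template rules, which copy text nodes through by default; taking control at the root (or adding an empty <xsl:template match="text()"/>) suppresses it. A rough sketch of the whole transform, using "last OBJVERS in document order" for simplicity:

      <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:output method="xml" indent="yes"/>
        <!-- Matching /DOC explicitly means FFF, GGG etc. are never visited. -->
        <xsl:template match="/DOC">
          <MYOBJ>
            <ELEM><xsl:copy-of select="AAA/OBJ/OBJVERS[last()]"/></ELEM>
            <IMAGE><xsl:copy-of select="DDD/FILE/node()"/></IMAGE>
          </MYOBJ>
        </xsl:template>
      </xsl:stylesheet>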

    Read the article

  • SVN checkout or export for production environment?

    - by Eran Galperin
    In a project I am working on, we have an ongoing discussion in the dev team - should the production environment be deployed as a checkout from the SVN repository or as an export? The development environment is obviously a checkout, since it is constantly updated. For production, I'm personally for checking out the main trunk, since it makes future updates easier (just run svn update). However, some of the devs are against it, as svn creates files with the group/owner and permissions of the svn process (this is on a Linux OS, so those things matter), and having the .svn directories on production also seems to them somewhat dirty. Also, if it is a checkout - how do you push individual features to production without including in-development code? Do you use tags or branch out for each feature? Any alternatives? EDIT: I might not have been clear - one of the requirements is to be able to constantly push fixes to the production environment. We want to avoid a complete build (which takes much longer than a simple update) just for pushing critical fixes.
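
    For the "push individual fixes" part, one common pattern (a sketch; URLs and paths are illustrative) is to tag exactly what goes live and point the production checkout or export at the tag:

      svn copy https://repo.example.com/svn/project/trunk \
               https://repo.example.com/svn/project/tags/release-1.4 -m "Tag for production"
      svn export --force https://repo.example.com/svn/project/tags/release-1.4 /var/www/site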

    Read the article

  • How to intercept processing when Session.IsNewSession is true

    - by Cen
    I have a small 4-page application for my customers to work through. They fill out information. If they let it sit too long and the Session times out, I want to pop up a JavaScript alert that their session has expired and that they need to start over, and then redirect them to the beginning page of the application. I'm getting some strange behavior. I'm stepping through code, forcing my Session.IsNewSession to be true. At that point, I write out a call to JavaScript into a Literal control placed at the bottom of the page. The JavaScript is called, and the redirection occurs. However, what is happening is this: I am pressing a button which is more or less a "Next Page" button and triggering this code. The next page is displayed, and only then do the alert and redirection occur. The result I was expecting was to stay on the same page where I got the timeout, with the alert popping up over it, then the redirection. I'm checking for Session.IsNewSession in a base class for these pages, overriding the OnInit event. Any ideas why I am getting this behavior? Thanks!
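
    A commonly used refinement for the base class (a sketch; the page name is a placeholder): IsNewSession is also true for genuinely new visitors, so checking whether the browser sent an old session cookie separates "first visit" from "timed out", and the redirect ends the current request so the next page never renders:

      protected override void OnInit(EventArgs e)
      {
          base.OnInit(e);
          if (Session != null && Session.IsNewSession)
          {
              string cookies = Request.Headers["Cookie"];
              // An ASP.NET_SessionId cookie from the browser means there *was*
              // a session, i.e. it expired rather than never existed.
              if (cookies != null && cookies.IndexOf("ASP.NET_SessionId") >= 0)
              {
                  Response.Redirect("~/Page1.aspx?expired=1", true); // ends this request
              }
          }
      }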

    Read the article

  • Theme change doesn't work on Android <4.0 as it should

    - by user1717276
    I am having some difficulty setting up a "theme switcher" programmatically. I would like to switch themes from the app (between White (Theme.Light.NoTitleBar) and Dark (Theme.Black.NoTitleBar)), and what I do is: I set a SharedPreference: final String PREFS_NAME = "MyPrefsFile"; final SharedPreferences settings = getSharedPreferences(PREFS_NAME, 0); final SharedPreferences.Editor editor = settings.edit(); and then I have two buttons to switch themes (the second one is almost identical): Button ThemeWhite = (Button) findViewById(R.id.ThemeWhite); ThemeWhite.setOnClickListener(new OnClickListener() { public void onClick(View v) { editor.putBoolean("Theme", false); editor.commit(); System.exit(2); } }); and at the beginning of each activity I check the SharedPreference: boolean theme = settings.getBoolean("Theme", false); if(theme){ this.setTheme(R.style.Theme_NoBarBlack); } else{ this.setTheme(R.style.Theme_NoBar); } setContentView(R.layout.aplikacja); I define the themes in styles.xml in the values folder: <resources> <style name="Theme.NoBar" parent="@android:style/Theme.Light.NoTitleBar" /> <style name="Theme.NoBarBlack" parent="@android:style/Theme.NoActionBar" /> in values-v11: <resources> <style name="Theme.NoBar" parent="@android:style/Theme.Holo.Light.NoActionBar" /> <style name="Theme.NoBarBlack" parent="@android:style/Theme.Holo.NoActionBar" /> in values-v14: <resources> <style name="Theme.NoBar" parent="@android:style/Theme.DeviceDefault.Light.NoActionBar" /> <style name="Theme.NoBarBlack" parent="@android:style/Theme.DeviceDefault.NoActionBar" /> manifest file: <application android:theme="@style/Theme.NoBar" > Everything works perfectly on Android 4.0, but when I use 2.2 it doesn't change the theme - only the font turns white as it should, but there is no dark background. I tried checking whether it at least works by changing Theme.NoBarBlack in values (for Android <3.0) to the same value as Theme.NoBar, and then when I pressed the button the font wasn't changed, which is what should happen.
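
    Independent of the theme parents, System.exit() may be working against you here; a sketch of restarting the activity instead, so setTheme() runs again in onCreate() on every API level (recreate() would do the same but is API 11+ only):

      Button themeWhite = (Button) findViewById(R.id.ThemeWhite);
      themeWhite.setOnClickListener(new OnClickListener() {
          public void onClick(View v) {
              editor.putBoolean("Theme", false);
              editor.commit();
              // Relaunch this activity so the new theme is applied from scratch.
              Intent intent = getIntent();
              finish();
              startActivity(intent);
          }
      });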

    Read the article

  • Questions about using SVN

    - by ajsie
    I have set up an SVN repository on a remote Ubuntu server using WebDAV (http://ip/svn), but I have some questions: Should I create one SVN repository for each project folder, i.e. use "svnadmin create" 5 times for 5 projects? Or should I import them into separate folders of one SVN repository? After I have created an SVN repo and imported one project folder into it with "svn import", should I DELETE my original local folder, thus only having it in the SVN repo on the remote Ubuntu server? Is this safe, or should I still keep a local copy for some reason (since I won't work in that one)? I use NetBeans to check out projects from the SVN repo, then I edit the files and commit the changes. But how should I update the live web server content (in /var/www)? Should I use "svn checkout" on my Ubuntu server and check it out to /var/www, or should I use NetBeans to check out to a local folder and then upload the files to /var/www with FTP or WebDAV (and which one of them should I use)? Say I have an SVN repo and 4 programmers are working with it, checking out and committing. How do I see what is happening in the repo - who made which changes, and all the changes from day 1 to day 213? Do NetBeans/Eclipse let me see this kind of information, or do I need another application for it? Should I have one SVN user for each programmer on the Ubuntu server, i.e. is this accomplished by using "htpasswd" 4 times for 4 programmers? And how do I couple all these users to the same group so that I can modify file access specifically for the SVN group and all its members?
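
    For the history part, the client already records everything; rough sketches of the usual commands (URLs and paths are placeholders):

      svnadmin create /var/svn/project1                  # one repository per project
      svn log -v -r 1:HEAD http://ip/svn/project1        # who changed what, from day 1 on
      svn blame http://ip/svn/project1/trunk/index.php   # per-line author of one file
      htpasswd /etc/apache2/dav_svn.passwd alice         # one entry per programmer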

    Read the article

  • File locked by service (after the service reads the text file)

    - by rvpals
    I have a Windows service written in C# .NET. The service runs on an internal timer; every time the interval hits, it goes and tries to read a log file into a string. My issue is that every time the log file is read, the service seems to lock the log file. The lock on that log file remains until I stop the Windows service. At the same time the service is checking the log file, the same log file needs to be continuously updated by another program. While the lock is held, the other program cannot update the log file. Here is the code I use to read the text log file: private string ReadtextFile(string filename) { string res = ""; try { System.IO.FileStream fs = new System.IO.FileStream(filename, System.IO.FileMode.Open, System.IO.FileAccess.Read); System.IO.StreamReader sr = new System.IO.StreamReader(fs); res = sr.ReadToEnd(); sr.Close(); fs.Close(); } catch (System.Exception ex) { HandleEx(ex); } return res; } Thank you.
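
    A sketch of the usual fix: open the stream read-only with FileShare.ReadWrite so the writer is never blocked, and let using dispose the handles even if ReadToEnd() throws:

      private string ReadTextFile(string filename)
      {
          try
          {
              // FileShare.ReadWrite lets the other program keep writing to the log
              // while this service reads it; 'using' guarantees the handles close.
              using (var fs = new System.IO.FileStream(filename,
                         System.IO.FileMode.Open,
                         System.IO.FileAccess.Read,
                         System.IO.FileShare.ReadWrite))
              using (var sr = new System.IO.StreamReader(fs))
              {
                  return sr.ReadToEnd();
              }
          }
          catch (System.Exception ex)
          {
              HandleEx(ex);
              return "";
          }
      }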

    Read the article

  • Legitimate uses of the Function constructor

    - by Marcel Korpel
    As repeatedly said, it is considered bad practice to use the Function constructor (also see the ECMAScript Language Specification, 5th edition, § 15.3.2.1): new Function ([arg1[, arg2[, … argN]],] functionBody) (where all arguments are strings containing argument names and the last (or only) string contains the function body). To recapitulate, it is said to be slow, as explained by the Opera team: Each time […] the Function constructor is called on a string representing source code, the script engine must start the machinery that converts the source code to executable code. This is usually expensive for performance – easily a hundred times more expensive than a simple function call, for example. (Mark ‘Tarquin’ Wilton-Jones) Though it's not that bad, according to this post on MDC (I didn't test this myself using the current version of Firefox, though). Crockford adds that [t]he quoting conventions of the language make it very difficult to correctly express a function body as a string. In the string form, early error checking cannot be done. […] And it is wasteful of memory because each function requires its own independent implementation. Another difference is that a function defined by a Function constructor does not inherit any scope other than the global scope (which all functions inherit). (MDC) Apart from this, you have to be attentive to avoid injection of malicious code, when you create a new Function using dynamic contents. Lots of disadvantages and it is intelligible that ECMAScript 5 discourages the use of the Function constructor by throwing an exception when using it in strict mode (§ 13.1). That said, T.J. Crowder says in an answer that [t]here's almost never any need for the similar […] new Function(...), either, again except for some advanced edge cases. So, now I am wondering: what are these “advanced edge cases”? Are there legitimate uses of the Function constructor?
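
    One concrete illustration of such an edge case (an illustration only, not taken from the cited answers): compiling a tiny function once from data, the way template engines do, where building code from a string is the whole point:

      // Turn a dotted path like "user.name" into a compiled accessor function.
      function compileAccessor(path) {
          // Validate first so arbitrary code can never be injected.
          if (!/^[A-Za-z_$][\w$]*(\.[A-Za-z_$][\w$]*)*$/.test(path)) {
              throw new Error("Invalid property path: " + path);
          }
          return new Function("obj", "return obj." + path + ";");
      }
      var getName = compileAccessor("user.name");
      getName({ user: { name: "Ada" } });   // "Ada"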

    Read the article

  • How can I speed up Subversion checkins? (Using ANKH, latest, Visual Studio 2010)

    - by Timothy Khouri
    I've started working on a new web project with some friends... we are using the latest Subversion server (installed last week) and the latest version of AnkhSVN. My web project is a whopping 1.5 megabytes (that's with all images, css files, dlls after compiling, pdb files... etc). Checking in even super small changes (literally adding the letter "x" to a few files for testing)... takes FOREVER! (about 10 seconds - I almost killed myself). The AnkhSVN client is measuring in BYTES PER SECOND ... BYTES? per second... I must be doing something wrong. Does anyone know what config file has a joke totallyMessWithPeople=true so that I can turn that off or something? Oh, also, changing one "big" file of about 10k gets the speed up to nearly the speed of light (which is apparently 857 bytes per second). Help me Obi-Wan Kenobi, you're my only hope! EDIT: As a note... my real work project that uses Visual Source Safe 2005 (I know, ouch) uploads files at about 200-500 kbps from this very same computer/internet connection.

    Read the article
