Search Results

Search found 23346 results on 934 pages for 'clean url'.


  • Warning: preg_match() [function.preg-match]: Unknown modifier '/' problem

    - by SonOfOmer
    I am building a custom implementation of a PHP MVC routing engine, and I have custom routes like the one in the $routes array below. Each time I send an asynchronous GET request such as xmlhttp.open("GET","someurl"); I get the following message:

        Warning: preg_match() [function.preg-match]: Unknown modifier '/'

    but with a synchronous (normal) request it all works fine.

        <?php
        $routes = array(
            array('url' => '/^someurl$/', 'controller' => 'somecontroller', 'view' => 'someview')
        );
        $url = $_SERVER['REQUEST_URI'];
        $url = substr($url, 1);
        $params = array();
        $route_match = false;
        foreach ($routes as $urls => $route) {
            if (preg_match($route['url'], $url, $matches)) {
                $params = array_merge($params, $matches);
                $route_match = true;
                break;
            }
        }
        require_once(CONTROLLER_PATH.$route['controller'].'.php');
        ?>

    string(11) "/^someurl$/" is the result of var_dump($route['url']). Thanks.
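
    The "Unknown modifier" warning generally means preg_match() found text after the closing pattern delimiter, which happens as soon as an extra '/' ends up inside a slash-delimited pattern. A minimal, untested sketch of one defensive option, using '#' as the delimiter and preg_quote() (the helper name and sample fragment are made up for illustration):

        <?php
        // Hypothetical helper: build a route regex that stays valid even when
        // the fragment contains '/' characters.
        function build_route_pattern($fragment) {
            return '#^' . preg_quote($fragment, '#') . '$#';
        }

        $pattern = build_route_pattern('some/url');
        var_dump($pattern);                          // something like "#^some/url$#"
        var_dump(preg_match($pattern, 'some/url'));  // int(1) – it matches
        ?>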

    Read the article

  • Programmatically clean Word generated HTML while preserving styles?

    - by GeReV
    In my current company, we have this decade-old... let's call it a "Hello World" application. While wanting to create a newer version of it, we also want to preserve older entries. These older entries contain hideous Word-generated HTML which was never filtered before. If and when we move to a newer system, I'd generally prefer to have that HTML cleaned and filtered so that the site complies with HTML standards as much as possible. However, just cleaning that code like Jeff Atwood described in his blog, or in any other way I know of, would also ruin the style and formatting. Now, that just might cause our users to revolt and then all hell will break loose... not a very good idea. The question is: can Word's HTML be cleaned while preserving basic formatting (e.g. coloring, italicized and bold text, and so on)? Preferably using publicly available code or a library such as HTML Tidy; examples in C# would be much appreciated. Thanks!

    Read the article

  • jQuery Cluetip - clean up between AJAX-loaded content

    - by ted776
    I'm using the jQuery Cluetip plugin and trying to figure out how to remove any open Cluetip dialogs once I load new content via AJAX. I am either stuck with the dialog boxes still showing on top of the new content, or the ways I've tried to fix this prevent all future Cluetip dialogs from showing at all. Here's my code; thanks for any help. On DOM ready I instantiate Cluetip as below:

        //activate cluetip
        $('a.jTip').cluetip({
            attribute: 'href', cluetipClass: 'jtip', arrows: true,
            activation: 'click', ajaxCache: false, dropShadow: true,
            sticky: true, mouseOutClose: false, closePosition: 'title'
        });

    When I'm loading new content, I have the following code. The problem is that $('.cluetip-jtip').empty() prevents dialog boxes from opening on any of the newly loaded content, while the destroy call doesn't remove any open dialog boxes but just destroys the current object.

        $('.next a').live("click", function(){
            var toLoad = $(this).attr('href');
            var $data = $('#main_body #content');
            $.validationEngine.closePrompt('body'); //close any validation messages
            $data.fadeOut('fast', function(){
                $data.load(toLoad, function(){
                    $data.animate({ opacity: 'show' }, 'fast');
                    //reinitialise datepicker and tooltip
                    $(".date").date_input(); //JT_init();
                    $('.hidden').hide();
                    //scroll to top of form
                    $("html,body").animate({ "scrollTop": $('#content').offset().top + "px" });
                    //remove existing instance
                    //$('a.jTip').cluetip('destroy');
                    //remove any opened popups
                    $('.cluetip-jtip').empty();
                    //reinitialise cluetip
                    $('a.jTip').cluetip({
                        attribute: 'href', cluetipClass: 'jtip', arrows: true,
                        activation: 'click', ajaxCache: false, dropShadow: true,
                        sticky: true, mouseOutClose: false, closePosition: 'title'
                    });
                });
            });
            return false;
        });

    Read the article

  • Launch ClickOnce via URL but not checking for updates

    - by Jeff Kotula
    I have a ClickOnce app that is frequently launched from another application via a URL. The URL includes some command-line arguments that load data, etc. Since the frequency of launching the app is so high, I want to cut out the check for version updates, so I implemented my own checking through the ApplicationDeployment class to avoid it. It works fine if you launch from the Start Menu once the app is installed. However, we also want to preserve the launch-via-URL behavior because it is advantageous in so many ways. But when launching via URL, the update check is always performed -- it seems IE isn't smart enough to look for the app in the local download area to see if it is already installed or not... Does anyone know of a way to get the "don't check for updates automatically" behavior while still using the URL launch mechanism?

    Actually, it looks like the issue is a Catch-22 in the ClickOnce model. If you launch with a URL, IE will always touch base with the host and check the version, updating if necessary, regardless of whether or not the app is flagged as "Don't check version". However, if you launch from the Start Menu, ClickOnce disables command-line arguments. Has anyone found a way around this, or does anyone know of a Microsoft plan to fix it?

    Read the article

  • Spring Security 3.0 - Intercept-URL - All pages require authentication but one

    - by gav
    I want any user to be able to submit their name to a volunteer form, but only administrators to be able to view any other URL. Unfortunately I don't seem to be able to get this right. My resources.xml is as follows:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans:beans xmlns="http://www.springframework.org/schema/security"
            xmlns:beans="http://www.springframework.org/schema/beans"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://www.springframework.org/schema/beans
                http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                http://www.springframework.org/schema/security
                http://www.springframework.org/schema/security/spring-security-3.0.xsd">

            <http realm="BumBumTrain Personnel list requires you to login"
                  auto-config="true" use-expressions="true">
                <http-basic/>
                <intercept-url pattern="/person/volunteer*" access=""/>
                <intercept-url pattern="/**" access="isAuthenticated()" />
            </http>

            <authentication-manager alias="authenticationManager">
                <authentication-provider>
                    <user-service>
                        <user name="admin" password="admin" authorities="ROLE_ADMIN"/>
                    </user-service>
                </authentication-provider>
            </authentication-manager>
        </beans:beans>

    Specifically, I am trying to achieve the access settings I described via:

        <intercept-url pattern="/person/volunteer*" access=""/>
        <intercept-url pattern="/**" access="isAuthenticated()" />

    Could someone please describe how to use intercept-url to achieve the outcome I've described? Thanks, Gav
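
    With use-expressions="true" each access attribute is evaluated as a Spring EL expression, so the empty value on /person/volunteer* is likely the problem. One common arrangement (an untested sketch against the configuration above) is:

        <intercept-url pattern="/person/volunteer*" access="permitAll" />
        <intercept-url pattern="/**" access="hasRole('ROLE_ADMIN')" />

    Note that the more specific pattern has to come before the /** catch-all, since the first matching intercept-url wins.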

    Read the article

  • Encrypting an id in a URL in ASP.NET MVC

    - by Chuck Conway
    I'm attempting to encode the encrypted id in the URL, like this:

        http://www.calemadr.com/Membership/Welcome/9xCnCLIwzxzBuPEjqJFxC6XJdAZqQsIDqNrRUJoW6229IIeeL4eXl5n1cnYapg+N

    However, it either doesn't encode correctly and I get slashes '/' in the encryption, or I receive an error from IIS: "The request filtering module is configured to deny a request that contains a double escape sequence." I've tried different encodings; each fails: HttpUtility.HtmlEncode, HttpUtility.UrlEncode, HttpUtility.UrlPathEncode, HttpUtility.UrlEncodeUnicode.

    Update: The problem was that when I encrypted a Guid and converted it to a base64 string, it would contain unsafe URL characters. Of course, when I tried to navigate to a URL containing unsafe characters, IIS (7.5 / Windows 7) would blow up. URL-encoding the base64 encrypted string would raise an error in IIS ("The request filtering module is configured to deny a request that contains a double escape sequence."). I'm not sure how it detects double-encoded strings, but it did. After trying the above methods to encode the base64 encrypted string, I decided to remove the base64 encoding. However, this leaves the encrypted text as a byte[]. I tried URL-encoding the byte[] (it's one of the overloads hanging off the HttpUtility.UrlEncode method). Again, while it was URL-encoded, IIS did not like it and served up a "page not found." After digging around the net I came across a hex encoding/decoding class. Applying the hex encoding to the encrypted bytes did the trick: the output is URL-safe. On the other side, I haven't had any problems with decoding and decrypting the hex strings.

    Read the article

  • How to clean sys.conversation_endpoints

    - by Manjoor
    I have a table with a trigger on it, implemented using Service Broker. More than half a million records are inserted into the table daily. An asynchronous stored procedure uses the inserted data to check several conditions and update other tables. It ran fine for the last month, with the procedure executing within 2-3 seconds of a record being inserted, but now it takes more than 90 minutes. At present sys.conversation_endpoints has too many records. (Note that all the tables are truncated daily, as I do not need those records the day after.) Other database activity is normal (around 60% CPU utilization). Where do I need to look? I can re-create the database without any problem, but I don't think that is a good way to resolve the problem.
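
    A build-up in sys.conversation_endpoints usually means dialogs are never being ended; the lasting fix is to END CONVERSATION in the activated procedure (and on the initiator side), but as a one-off purge something like the following untested sketch can clear the backlog. WITH CLEANUP simply discards the dialogs without notifying the other side, so only use it when the queued work really is disposable:

        DECLARE @handle UNIQUEIDENTIFIER;
        DECLARE handles CURSOR LOCAL FAST_FORWARD FOR
            SELECT conversation_handle FROM sys.conversation_endpoints;
        OPEN handles;
        FETCH NEXT FROM handles INTO @handle;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            END CONVERSATION @handle WITH CLEANUP;
            FETCH NEXT FROM handles INTO @handle;
        END
        CLOSE handles;
        DEALLOCATE handles;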

    Read the article

  • standard way to perform a clean shutdown with Boost.Asio

    - by Timothy003
    I'm writing a cross-platform server program in C++ using Boost.Asio. Following the HTTP Server example on this page, I'd like to handle a user termination request without using implementation-specific APIs. I've initially attempted to use the standard C signal library, but have been unable to find a design pattern suitable for Asio. The Windows example's design seems to resemble the signal library closest, but there's a race condition where the console ctrl handler could be called after the server object has been destroyed. I'm trying to avoid race conditions that cause undefined behavior as specified by the C++ standard. Is there a standard (correct) way to stop the server? So far:

        #include <csignal>
        #include <functional>
        #include <boost/asio.hpp>

        using std::signal;
        using boost::asio::io_service;

        extern "C" { static void handle_signal(int); }

        namespace {
            std::function<void ()> sighandler;
        }

        void handle_signal(int) { sighandler(); }

        int main()
        {
            io_service s;
            sighandler = std::bind(&io_service::stop, &s);

            auto res = signal(SIGINT, &handle_signal);
            // race condition? SIGINT raised before I could set ignore back
            if (res == SIG_IGN)
                signal(SIGINT, SIG_IGN);
            res = signal(SIGTERM, &handle_signal);
            // race condition? SIGTERM raised before I could set ignore back
            if (res == SIG_IGN)
                signal(SIGTERM, SIG_IGN);

            s.run();

            // reset signals
            signal(SIGTERM, SIG_DFL);
            signal(SIGINT, SIG_DFL);

            // is it defined whether handle_signal can still be in execution at this point?
            sighandler = nullptr;
        }
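
    For reference, newer Boost releases (1.47 and later) ship boost::asio::signal_set, which delivers signals through the io_service's own event loop and so sidesteps the shared-state race entirely; it may not have been available when this was written, but a minimal sketch looks like:

        #include <csignal>
        #include <iostream>
        #include <boost/asio.hpp>

        int main()
        {
            boost::asio::io_service s;

            // The handler runs inside s.run(), so nothing is shared with an
            // asynchronous C signal handler.
            boost::asio::signal_set signals(s, SIGINT, SIGTERM);
            signals.async_wait(
                [&s](const boost::system::error_code&, int) { s.stop(); });

            s.run();
            std::cout << "clean shutdown\n";
        }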

    Read the article

  • Problem when getting pageContent of an unavailable URL in Java

    - by tiendv
    I have code to get the page content from a URL:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.net.URL;
        import java.net.URLConnection;

        public class GetPageFromURLAction extends Thread {
            public String stringPageContent;
            public String targerURL;

            public String getPageContent(String targetURL) throws IOException {
                String returnString = "";
                URL urlString = new URL(targetURL);
                URLConnection openConnection = urlString.openConnection();
                String temp;
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(openConnection.getInputStream()));
                while ((temp = in.readLine()) != null) {
                    returnString += temp + "\n";
                }
                in.close();
                // String nohtml = sb.toString().replaceAll("\\<.*?>","");
                return returnString;
            }

            public String getStringPageContent() {
                return stringPageContent;
            }

            public void setStringPageContent(String stringPageContent) {
                this.stringPageContent = stringPageContent;
            }

            public String getTargerURL() {
                return targerURL;
            }

            public void setTargerURL(String targerURL) {
                this.targerURL = targerURL;
            }

            @Override
            public void run() {
                try {
                    this.stringPageContent = this.getPageContent(targerURL);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }

    Sometimes I receive an HTTP error of 405 or 403 and the result string is null. I have tried checking the permission to connect to the URL with:

        URLConnection openConnection = urlString.openConnection();
        openConnection.getPermission()

    but it usually returns null. Does that mean I don't have permission to access the link? I have also tried stripping the HTML tags from the result with:

        String nohtml = sb.toString().replaceAll("\\<.*?>","");

    where sb is a StringBuilder, but it doesn't seem to remove all of the HTML tags. Finally, I'd like to use threads here because I must retrieve many URLs; how can I create a multi-threaded client to improve the speed?
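
    On the threading part, one option (an untested sketch reusing the GetPageFromURLAction class above; the URLs are placeholders) is to hand the fetches to a fixed thread pool instead of managing raw Thread objects:

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class PageFetcher {
            public static void main(String[] args) throws Exception {
                List<String> urls = Arrays.asList(
                        "http://example.com/a", "http://example.com/b"); // placeholders
                ExecutorService pool = Executors.newFixedThreadPool(4);
                List<Future<String>> results = new ArrayList<Future<String>>();
                for (final String url : urls) {
                    // Each fetch runs on a pool thread and yields its page content.
                    results.add(pool.submit(new Callable<String>() {
                        public String call() throws Exception {
                            return new GetPageFromURLAction().getPageContent(url);
                        }
                    }));
                }
                for (Future<String> f : results) {
                    System.out.println(f.get().length()); // get() blocks until that fetch finishes
                }
                pool.shutdown();
            }
        }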

    Read the article

  • JDOM Parser and Namespace: how to get clean Content

    - by senzacionale
    My XML:

        <?xml version="1.0"?>
        <company xmlns="http://www.xx.com/xx">
            <staff>
                <firstname>yong</firstname>
                <lastname>mook kim</lastname>
                <nickname>mkyong</nickname>
                <salary>100000</salary>
            </staff>
            <staff>
                <firstname>low</firstname>
                <lastname>yin fong</lastname>
                <nickname>fong fong</nickname>
                <salary>200000</salary>
            </staff>
        </company>

    My code:

        Reader in = new StringReader(message);
        Document document = (Document) saxBuilder.build(in);
        Element rootNode = document.getRootElement();
        List<?> list = rootNode.getChildren("staff",
                Namespace.getNamespace("http://www.xx.com/xx"));

        XMLOutputter outp = new XMLOutputter();
        outp.setFormat(Format.getCompactFormat());
        for (int ii = 0; ii < list.size(); ii++) {
            Element node = (Element) list.get(ii);
            StringWriter sw = new StringWriter();
            outp.output(node.getContent(), sw);
            StringBuffer sb = sw.getBuffer();
            String xml = sb.toString();
        }

    But my XML output looks like this:

        <firstname xmlns="http://www.xx.com/xx">yong</firstname>
        <lastname xmlns="http://www.xx.com/xx">mook kim</lastname>
        <nickname xmlns="http://www.xx.com/xx">mkyong</nickname>
        <salary xmlns="http://www.xx.com/xx">100000</salary>

    Every element has the namespace declaration. Why is this? I don't want the namespace; I want the same output as in the XML example, like:

        <firstname>yong</firstname>
        <lastname>mook kim</lastname>
        <nickname>mkyong</nickname>
        <salary>100000</salary>
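
    The child elements inherit the document's default namespace, so XMLOutputter has to redeclare it on each element it writes in isolation. One workaround (an untested sketch against JDOM 1.x) is to move the elements you are about to serialize into the empty namespace first:

        // Recursively drop the namespace so the outputter has nothing to declare.
        // This mutates the elements in place, so run it on a clone if the original
        // Document is still needed with its namespace intact.
        private static void stripNamespace(Element element) {
            element.setNamespace(Namespace.NO_NAMESPACE);
            for (Object child : element.getChildren()) {
                stripNamespace((Element) child);
            }
        }

    Calling stripNamespace(node) before outp.output(node.getContent(), sw) should then produce output like <firstname>yong</firstname> without the xmlns attribute.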

    Read the article

  • Clean way to perform commands in the Emacs minibuffer

    - by Christopher Monsanto
    Consider the following example: I want to read a file using ido from the minibuffer, but merge in all of the directories I use often. I can't just execute

        (ido-find-file)
        (ido-merge-work-directories)

    because the second sexp will only execute after the user is finished selecting the file. The question then is: what is the best/cleanest way to execute commands in the minibuffer's command loop? The only way I know to do this is to bind my desired command to a key sequence, and add that sequence to unread-command-events so the key runs once we enter the minibuffer command loop:

        (setq unread-command-events
              (append (listify-key-sequence (kbd "M-s"))
                      unread-command-events)) ; std key-binding for ido-merge-work-directories
        (ido-find-file)

    But that is very hacky, and I would like to know if there is a better solution. Thanks!

    Read the article

  • Java SWT: wrapping syncExec and asyncExec to clean up code

    - by jonescb
    I have a Java application using SWT as the toolkit, and I'm getting tired of all the ugly boilerplate code it takes to update a GUI element. Just to set a disabled button to be enabled I have to go through something like this:

        shell.getDisplay().asyncExec(new Runnable() {
            public void run() {
                buttonOk.setEnabled(true);
            }
        });

    I prefer keeping my source code as flat as I possibly can, but I need a whopping 3 indentation levels just to do something simple. Is there some way I can wrap it? I would like a class like:

        public class UIUpdater {
            public static void updateUI(Shell shell, *function_ptr*) {
                shell.getDisplay().asyncExec(new Runnable() {
                    public void run() {
                        //Execute function_ptr
                    }
                });
            }
        }

    which can be used like so:

        UIUpdater.updateUI(shell, buttonOk.setEnabled(true));

    Something like this would be great for hiding that horrible mess SWT seems to think is necessary to do anything. As I understand it, Java cannot do function pointers, but Java 7 will have something called closures which should be what I want. In the meantime, is there anything at all I can do to pass a function pointer or callback to another function to be executed? As an aside, I'm starting to think it'd be worth the effort to redo this application in Swing, so I don't have to put up with this ugly crap and the non-cross-platformyness of SWT.
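
    Until closures arrive, the usual workaround is to accept a Runnable (or a tiny one-method interface) instead of a function pointer; a rough sketch along those lines (class and method names are made up):

        import org.eclipse.swt.widgets.Display;

        public final class UIUpdater {
            private UIUpdater() {}

            // Run the given update on the UI thread, skipping it if the display
            // has already been disposed (e.g. during shutdown).
            public static void updateUI(Display display, Runnable update) {
                if (display != null && !display.isDisposed()) {
                    display.asyncExec(update);
                }
            }
        }

    The call site still needs an anonymous Runnable -- UIUpdater.updateUI(shell.getDisplay(), new Runnable() { public void run() { buttonOk.setEnabled(true); } }); -- which only saves one nesting level; with Java 8 lambdas it collapses to UIUpdater.updateUI(shell.getDisplay(), () -> buttonOk.setEnabled(true));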

    Read the article

  • small code redundancy within while-loops (doesn't feel clean)

    - by wallacoloo
    So, in Python (though I think it can be applied to many languages), I find myself with something like this quite often:

        the_input = raw_input("what to print?\n")
        while the_input != "quit":
            print the_input
            the_input = raw_input("what to print?\n")

    Maybe I'm being too picky, but I don't like how the line the_input = raw_input("what to print?\n") has to get repeated. It decreases maintainability and organization, but I don't see any workaround for avoiding the duplicate code without making the problem worse. In some languages, I could write something like this:

        while ((the_input = raw_input("what to print?\n")) != "quit") {
            print the_input
        }

    This is definitely not Pythonic, and Python doesn't even allow assignment within loop conditions AFAIK. This valid code fixes the redundancy:

        while 1:
            the_input = raw_input("what to print?\n")
            if the_input == "quit":
                break
            print the_input

    but it doesn't feel quite right either. The while 1 implies that this loop will run forever; I'm using a loop, but giving it a fake condition and putting the real one inside it. Am I being too picky? Is there a better way to do this? Perhaps there's some language construct designed for this that I don't know of?
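
    For what it's worth, Python does have a construct aimed at exactly this pattern: the two-argument form of iter() turns a callable plus a sentinel value into an iterator, so the prompt only appears once (Python 2 syntax to match the snippets above):

        # iter(callable, sentinel) keeps calling the lambda until it returns "quit".
        for the_input in iter(lambda: raw_input("what to print?\n"), "quit"):
            print the_input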

    Read the article

  • Clean bindings with structs

    - by andyvn22
    I have a model class for which it makes quite a lot of sense to have NSSize and NSPoint instance variables. This is lovely. I'm trying to create an editing interface for this object. I'd like to bind to size.width and whatnot. This, of course, doesn't work. What's the cleanest, most Cocoa-y solution to this problem? Of course I could write separate accessors for the individual members of every struct I use, but it seems like there should be a better solution.

    Read the article

  • How can I rewrite a flat link to a dynamic link and preserve the query string?

    - by jeremysawesome
    I want to rewrite a URL like:

        http://my.project/mydomain.com/ANY_NUMBER_OF_CATEGORIES/designer/4/designer-name/page.html

    to this:

        http://my.projects/mydomain.com/ANY_NUMBER_OF_CATEGORIES/page.html?designer=4

    I would like to use mod_rewrite to accomplish this. Things to note:

    - Any number of categories can appear between 'mydomain.com/' and '/designer'. For instance the URL could be http://my.project/mydomain.com/designer/4/designer-name/page.html or it could be http://my.project/mydomain.com/tops/shirts/small/designer/4/designer-name/page.html
    - A query string may be provided in the original URL that needs to be preserved in the rewritten URL. For example, given the URL http://my.project/mydomain.com/designer/4/designer-name/page.html?color=red&type=shirt the resulting URL would need to be http://my.projects/mydomain.com/page.html?designer=4&color=red&type=shirt
    - The order of the query string does not matter. The 'designer=4' part could come before or after the rest of the query string.

    I'm new to .htaccess and rewrites, so any examples and/or explanations would be greatly appreciated. Thank you very much.
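
    A rough, untested sketch of what the .htaccess rules might look like (the page.html name and the two-rule split for the no-category case come from the examples above; adjust the paths or add a RewriteBase for the actual layout). The QSA flag appends the original query string, which covers the color/type example:

        RewriteEngine On

        # One or more category segments before /designer/<id>/<name>/page.html
        RewriteRule ^(.+)/designer/([0-9]+)/[^/]+/page\.html$ $1/page.html?designer=$2 [QSA,L]

        # No category segments at all
        RewriteRule ^designer/([0-9]+)/[^/]+/page\.html$ page.html?designer=$1 [QSA,L]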

    Read the article

  • Multiple UIWebViewNavigationTypeBackForward not only false but make inferring the actual URL impossible

    - by SG1
    I have a UIWebView and a UITextField for the URL. Naturally, I want the text field to always show the current document URL. This works fine for URLs typed directly into the field, but I also have some buttons attached to the view for reload, back, and forward. So I've added all the UIWebViewDelegate methods to my controller so it can listen to whenever the webView navigates and change the URL in the text field as needed. Here's how I'm using the shouldStartLoadWithRequest: method:

        - (BOOL)webView:(UIWebView *)webView shouldStartLoadWithRequest:(NSURLRequest *)request
                                             navigationType:(UIWebViewNavigationType)navigationType {
            NSLog(@"navigated via %d", navigationType);
            //loads the user cares about
            if ( navigationType == UIWebViewNavigationTypeLinkClicked ||
                 navigationType == UIWebViewNavigationTypeBackForward ) {
                //URL setting
                [self setUrlQuietly:request.URL];
            }
            return YES;
        }

    Now, my problem here is that an actual click will generate a single navigation of type "LinkClicked" followed by a dozen of type "Other" (redirects and ad loads, I assume), which gets handled correctly by the code, but a back/forward action will generate all of its requests as back/forward requests. In other words, a click calls setUrlQuietly: once, but a back/forward calls it multiple times. I am trying to use this method to determine whether the user actually initiated the action (and I'd like to catch page redirects too). But if the method has no way of distinguishing between an actual "back" and a "load initiated as a result of a back", how can I make this assessment? Without this, I am completely stumped as to how I can show only the actual URL and not intermediate URLs. Thank you!

    Read the article

  • Is session_destroy not enough to clean the session?

    - by Kamo
    When the user clicks a logout button, I connect to a script that simply does this:

        session_destroy();
        session_start();

    I thought this would be enough to reset all $_SESSION variables, such as $_SESSION['logged'] and $_SESSION['username'], but when I load the page again it automatically logs me in as if the session is still active.
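
    One thing to check: session_destroy() only works on a session that has already been started, and it neither clears $_SESSION for the current request nor removes the session cookie. A fuller logout, roughly following the pattern in the PHP manual (untested sketch):

        <?php
        session_start();                 // resume the existing session first
        $_SESSION = array();             // wipe the data for this request too

        // Drop the session cookie so the browser can't resume the old session id.
        if (ini_get('session.use_cookies')) {
            $p = session_get_cookie_params();
            setcookie(session_name(), '', time() - 42000,
                      $p['path'], $p['domain'], $p['secure'], $p['httponly']);
        }

        session_destroy();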

    Read the article

  • R: extracting "clean" UTF-8 text from a web page scraped with RCurl

    - by SlowLearner
    Using R, I am trying to scrape a web page and save the text, which is in Japanese, to a file. Ultimately this needs to be scaled to tackle hundreds of pages on a daily basis. I already have a workable solution in Perl, but I am trying to migrate the script to R to reduce the cognitive load of switching between multiple languages. So far I am not succeeding. Related questions seem to be this one on saving csv files and this one on writing Hebrew to a HTML file. However, I haven't been successful in cobbling together a solution based on the answers there. The pages are from Yahoo! Japan Finance, and my Perl code looks like this:

        use strict;
        use HTML::Tree;
        use LWP::Simple;
        #use Encode;
        use utf8;
        binmode STDOUT, ":utf8";
        my @arr_links = ();
        $arr_links[1] = "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7203";
        $arr_links[2] = "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7201";
        foreach my $link (@arr_links){
            $link =~ s/"//gi;
            print("$link\n");
            my $content = get($link);
            my $tree = HTML::Tree->new();
            $tree->parse($content);
            my $bar = $tree->as_text;
            open OUTFILE, ">>:utf8", join("","c:/", substr($link, -4),"_perl.txt") || die;
            print OUTFILE $bar;
        }

    This Perl script produces a CSV file that looks like the screenshot below, with proper kanji and kana that can be mined and manipulated offline. My R code, such as it is, looks like the following. The R script is not an exact duplicate of the Perl solution just given, as it doesn't strip out the HTML and leave the text (this answer suggests an approach using R but it doesn't work for me in this case) and it doesn't have the loop and so on, but the intent is the same.

        require(RCurl)
        require(XML)
        links <- list()
        links[1] <- "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7203"
        links[2] <- "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7201"
        txt <- getURL(links, .encoding = "UTF-8")
        Encoding(txt) <- "bytes"
        write.table(txt, "c:/geturl_r.txt", quote = FALSE, row.names = FALSE,
                    sep = "\t", fileEncoding = "UTF-8")

    This R script generates the output shown in the screenshot below: basically rubbish. I assume that there is some combination of HTML, text and file encoding that will allow me to generate in R a similar result to that of the Perl solution, but I cannot find it. The header of the HTML page I'm trying to scrape says the charset is utf-8, and I have set the encoding in the getURL call and in the write.table function to utf-8, but this alone isn't enough.

    The question: how can I scrape the above web page using R and save the text as CSV in "well-formed" Japanese text rather than something that looks like line noise?

    Edit: I have added a further screenshot to show what happens when I omit the Encoding step. I get what look like Unicode codes, but not the graphical representation of the characters. So it may be some kind of locale-related issue, but in the exact same locale the Perl script does provide useful output. So this is still puzzling.

    Read the article

  • Please help clean this loop

    - by Alex Angelini
    I do not code much in JavaScript, but I have the following snippet which IMHO looks horrendous, and I have to do this kind of nested iteration quite often in my code. Does anyone have a prettier/easier-to-read solution?

        function addBrowse(data) {
            var list = $('<ul></ul>')
            for(i = 0; i < data.list.length; i++) {
                var file = list.append('<li class="toLeft">' + data.list[i].name + '</li>')
                for(j = 0; j < data.list[i].children.length; j++) {
                    var db = file.append('<li>' + data.list[i].children[j].name + '</li>')
                    for(k = 0; k < data.list[i].children[j].children.length; k++)
                        db.append('<li class="toRight">' + data.list[i].children[j].children[k].name + '</li>')
                }
            }
            $('#browse').append(list).show()
        }

    Here is a sample data element:

        {"file":"","db":"","tbl":"","page":"browse","list":[
            { "name":"/home/alex/GoSource/test1.txt",
              "children":[ { "name":"go", "children":[ { "name":"validation1", "children":[] } ] } ] },
            { "name":"/home/alex/GoSource/test2.txt",
              "children":[ { "name":"go", "children":[ { "name":"validation2", "children":[] } ] } ] },
            { "name":"/home/alex/GoSource/test3.txt",
              "children":[ { "name":"go", "children":[ { "name":"validation3", "children":[] } ] } ] }
        ]}

    Thanks a lot.
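
    One way to flatten the nesting (an untested sketch that keeps the same flat <ul> output as the original, since jQuery's .append() returns the list itself): let a small recursive helper walk the name/children nodes so the depth only picks the CSS class, not another loop.

        function addBrowse(data) {
            var list = $('<ul></ul>');
            var classByDepth = ['toLeft', '', 'toRight'];   // depth 0, 1, 2

            function render(nodes, depth) {
                $.each(nodes || [], function (i, node) {
                    list.append('<li class="' + (classByDepth[depth] || '') + '">' +
                                node.name + '</li>');
                    render(node.children, depth + 1);
                });
            }

            render(data.list, 0);
            $('#browse').append(list).show();
        }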

    Read the article

  • Clean up State field with T-SQL?

    - by Pselus
    The State field in our database is a mess. There was no validation when it was filled, so we have everything from two-letter abbreviations to full state names to misspelled state names to "test" and "xxxx", etc. I am not going to try to handle everything, but I definitely want to fix the correct state names to their abbreviations. I have a list of valid state names and abbreviations, but I don't know how I can do this:

        UPDATE Table
        SET State = ('AR','AK')
        WHERE (SELECT * FROM Table WHERE State IN ('Arkansas','Alaska'))

    Basically, can I update a field to be something from a list based on where it falls in another list?
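
    One common approach (untested sketch; the table and column names come from the snippet above, and the lookup only shows two states) is to join against a name-to-abbreviation lookup rather than trying to pair up two lists positionally:

        DECLARE @States TABLE (FullName varchar(50), Abbrev char(2));
        INSERT INTO @States VALUES ('Arkansas', 'AR');
        INSERT INTO @States VALUES ('Alaska', 'AK');
        -- ...one row per state...

        UPDATE t
        SET    t.State = s.Abbrev
        FROM   [Table] AS t                -- [Table] as named in the question
        JOIN   @States AS s ON t.State = s.FullName;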

    Read the article

  • Clean way to display/hide a bunch of buttons based on a ComboBox selection

    - by John at CashCommons
    I'm writing a standalone application in VB.NET using Visual Studio 2005. I want to display/hide a bunch of Buttons based on the selected value of a ComboBox. Each selection would have a different set of Buttons to display, and I'd like to have them arranged in a nice grid. Driving a TabControl with the ComboBox value would be the kind of behavior I want, but I don't want it to look like a TabControl to the user because it might be confusing. Is there a way to do this? Basically, I'd like Selection1 of the ComboBox to show Buttons 1-4, Selection2 to show Buttons 5-11, Selection3 to show (maybe) Buttons 1, 3, 5, 6, and 8, etc., have them arranged nicely, and have the GUI show only the ComboBox and the buttons. Thanks in advance as always!

    Read the article
