Search Results

Search found 32130 results on 1286 pages for 'local search'.


  • Parsing some results returned by nokogiri in ruby, getting an error message

    - by Khat
    The following code returns an error: require 'nokogiri' require 'open-uri' @doc = Nokogiri::HTML(open("http://www.amt.qc.ca/train/deux-montagnes/deux-montagnes.aspx")) #@doc = Nokogiri::HTML(File.open("deux-montagnes.html")) stations = @doc.xpath("//area") stations.each { |station| str = station reg = /href="(.*)" title="(.*)"/ href = reg.match(str)[1] title = reg.match(str)[2] page = /.*\/(.*).aspx$/.match(href)[1] puts href puts title puts page base_url = "http://www.amt.qc.ca" complete_url = base_url + href puts complete_url } ERROR: station_names_from_map.rb:9:in `block in <main>': undefined method `[]' for nil:NilClass (NoMethodError) from /opt/local/lib/ruby1.9/gems/1.9.1/gems/nokogiri-1.4.1/lib/nokogiri/xml/node_set.rb:213:in `block in each' from /opt/local/lib/ruby1.9/gems/1.9.1/gems/nokogiri-1.4.1/lib/nokogiri/xml/node_set.rb:212:in `upto' from /opt/local/lib/ruby1.9/gems/1.9.1/gems/nokogiri-1.4.1/lib/nokogiri/xml/node_set.rb:212:in `each' from station_names_from_map.rb:7:in `<main>' shell returned 1 While this code works: str = '<area shape="poly" alt="Deux-Montagnes" coords="59,108,61,106,65,106,67,108,67,113,65,115,61,115,59,113" href="/train/deux-montagnes/deux-montagnes.aspx" title="Deux-Montagnes">' reg = /href="(.*)" title="(.*)"/ href = reg.match(str)[1] title = reg.match(str)[2] page = /.*\/(.*).aspx$/.match(href)[1] puts href puts title puts page base_url = "http://www.amt.qc.ca" complete_url = base_url + href puts complete_url Any reason why?
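
    One likely cause: each station is a Nokogiri::XML::Element, not the raw tag string, so its serialized form (attribute order, quoting) is not guaranteed to match the regex, and one of the match calls returns nil. A minimal sketch that reads the attributes directly instead of regex-matching the markup (same URL as above):

      require 'nokogiri'
      require 'open-uri'

      doc = Nokogiri::HTML(open("http://www.amt.qc.ca/train/deux-montagnes/deux-montagnes.aspx"))
      doc.xpath("//area").each do |station|
        href  = station['href']              # attribute lookup on the node itself
        title = station['title']
        next unless href && href =~ /\.aspx\z/
        page = File.basename(href, ".aspx")  # e.g. "deux-montagnes"
        puts href, title, page
        puts "http://www.amt.qc.ca" + href
      end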

    Read the article

  • How to bind a table in a dataset to a WPF datagrid in C# and XAML

    - by Jim Thomas
    I have been searching for hours for something very simple: bind a WPF datagrid to a datatable in order to see the columns at design-time. I can’t get any of the examples to work for me. Here is the C# code to populate the datatable InfoWork inside the dataset info: info = new Info(); InfoTableAdapters.InfoWorkTableAdapter adapter = new InfoTableAdapters.InfoWorkTableAdapter(); adapter.Fill(info.InfoWork); The problem is that no matter how I declare ‘info’ or ‘infoWork’, Visual Studio/XAML can’t find it. I have tried: <Window.Resources> <ObjectDataProvider x:Key="infoWork" ObjectType="{x:Type local:info}" /> </Window.Resources> I have also tried this example from wpf.codeplex, but XAML doesn’t even like the “local:” keyword! <Window.Resources> <local:info x:Key="infoWork"/> </Window.Resources> There are really two main questions here: 1) How do I declare the table InfoWork in C# so that XAML can see it? I tried declaring it Public in the window class that the XAML belongs to, with no success. 2) How do I declare the window resource in XAML, specifically the datatable inside the dataset? Out of curiosity, is there a reason that ItemsSource just doesn't show up as a property that can be set in the properties design window?
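
    For what it's worth, a minimal sketch of one way this is usually wired up (the namespace MyApp is a stand-in for your project's namespace; local: only works once that CLR namespace is mapped on the root element, and a WPF Toolkit DataGrid would need its own xmlns as well):

      <Window x:Class="MyApp.ListWindow"
              xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
              xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
              xmlns:local="clr-namespace:MyApp">
        <Window.Resources>
          <!-- instantiates the typed dataset Info via its parameterless constructor -->
          <ObjectDataProvider x:Key="infoSource" ObjectType="{x:Type local:Info}" />
        </Window.Resources>
        <DataGrid AutoGenerateColumns="True"
                  ItemsSource="{Binding Source={StaticResource infoSource}, Path=InfoWork}" />
      </Window>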

    Read the article

  • SOLR phrase query

    - by Alex
    I have a slight problem when searching with SOLR 4.0 and attempting a phrase query. I have a field called "idx_text_general_ci" which is a case insensitive (all lowercased) field made up of all fields. When I try and search for a phrase (marine fitter), my SOLR refuses to search for the phrase, instead splitting it into 2 words - /select?defType=edismax&q=idx_text_general_ci:marine%20fitter&debugQuery=true debugQuery=true output below: <lst name="debug"> <str name="rawquerystring">idx_text_general_ci:marine fitter</str> <str name="querystring">idx_text_general_ci:marine fitter</str> <str name="parsedquery"> (+(idx_text_general_ci:marine DisjunctionMaxQuery((id:fitter))))/no_coord </str> <str name="parsedquery_toString">+(idx_text_general_ci:marine (id:fitter))</str> As you can see above, it splits the query into 2 parts (idx_text_general_ci:marine then id:fitter). The problem I have is that I have an exact match for "marine fitter" that appears twice in the idx_text_general_ci field, yet it's ranked with a lower score than a document with the word "marine" appearing 3 times. I know this would not be the case if my SOLR were to search the field with the phrase as expected. If I wrap the phrase in quotes I get zero results. Any help or a nudge in the right direction would be much appreciated. Thanks in advance Alex

    Read the article

  • How to: Searchlogic and Tags

    - by bob
    I have installed searchlogic and added will_paginate etc. I currently have a product model that has tagging enabled using the acts_as_taggable_on plugin. I want to search the tags using searchlogic. Here is the taggable plugin page: http://github.com/mbleigh/acts-as-taggable-on Each product has a "tag_list" that I can access using Product.tag_list, or I can access a specific tag using Product.tags[0]. However, I can't find the scope to use when searching with Searchlogic. Here is part of my working form. <p> <%= f.label :name_or_description_like, "Name" %><br /> <%= f.text_field :name_or_description_like %> </p> I have tried :name_or_description_or_tagged_with_like and :name_or_description_or_tags_like and also :name_or_description_or_tags_list_like to try and get it to work, but I keep getting an error that says the options I have tried are not found (named scopes not found). I am wondering how I can get this working, or how to create my own named_scope that would allow me to search the tags added to each product by the taggable plugin. Thanks!
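
    Searchlogic only picks up columns and named scopes it already knows about, so one option is a hand-rolled scope on Product that joins the tags table. A sketch (Rails 2.x style; the scope name is made up, and whether it can be chained into the name_or_description_or_... form is untested):

      class Product < ActiveRecord::Base
        acts_as_taggable_on :tags

        # hypothetical scope: products whose tag names contain the term
        named_scope :tags_name_like, lambda { |term|
          { :joins => :tags, :conditions => ["tags.name LIKE ?", "%#{term}%"] }
        }
      end

      # usage sketch
      Product.tags_name_like("energy")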

    Read the article

  • Using git pull to track a remote branch without merging

    - by J Barlow
    I am using git to track content which is changed by some people and shared "read-only" with others. The "readers" may from time to time need to make a change, but mostly they will not. I want to allow for the git "writers" to rebase pushed branches** if need be, and ensure that the "readers" never accidentally get a merge. That's normally easy enough. git pull origin +master There's one case that seems to cause problems. If a reader makes a local change, the command above will merge. I want pull to be fully automatic if the reader has not made local changes, while if they have made local changes, it should stop and ask for input. I want to track any upstream changes while being careful about merging downstream changes. In a way, I don't really want to pull. I want to track the master branch exactly. ** (I know this is not a best practice, but it seems necessary in our case: we have one main branch that contains most of the work and some topic branches for specific customers with minor changes that need to be isolated. It seems easiest to frequently rebase to keep the topics up to date.)
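
    One way to express "track exactly, never merge" (assuming the readers run a small script rather than a raw git pull) is to fetch and then only fast-forward, so a reader with local commits gets stopped instead of silently producing a merge commit:

      git fetch origin
      git merge --ff-only origin/master   # fails loudly if the reader has local commits
      # or, if readers' local edits should simply be thrown away:
      # git reset --hard origin/master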

    Read the article

  • onLoad focus() event within jquerytools overlay effect

    - by tomcritchlow
    Hi, I'm using the overlay jquery from here: http://flowplayer.org/tools/overlay/index.html Within my overlay I have a search box like this: <div class="simple_overlay" id="asearch"> <div id="searchbox"> <form id="amazonsearch" style='float:left;'> <input class="title" id="amazon-terms" style="width:400px;font-size:2em;"> <button class="sexybutton sexysimple sexygreen">Search</button> </form> <div id="amazon-results"></div> </div><!--seachbox--> </div><!--Overlay--> What I want to happen is when you load the overlay the search box within the overlay gains focus so you can start typing into it. I thought that this would work: $("a[rel]").overlay({ onLoad: function() { $('#amazon-terms').focus(); } }); But that doesn't seem to do anything. I know the event is firing because this works: $("a[rel]").overlay({ onLoad: function() { alert('popup opened') } }); However, when this alert fires the overlay has not yet appeared on the screen so I wonder if that is part of the problem? According to the docs onLoad should fire "when the overlay has completely been displayed" (ref) Any help appreciated! :) Thanks Tom EDIT This code does what I want it to but I'm none the wiser as to why this works when the code above doesn't.... var triggers = $("a[rel]").overlay({ closeOnClick: false, onLoad: function() { $('input').focus(); } });

    Read the article

  • submit remote form with prototype

    - by badnaam
    I have a remote form like this with a checkbox in it. When I select or deselect the checkbox I would like to 1 - set the value of a hidden field, and 2 - Ajax-submit this form to its designated URL. I tried $('search_form').onsubmit(), but I get an error saying onsubmit is not a function. I'm using Prototype. What's the best way to do this? <form onsubmit="new Ajax.Request('/searches/search_set?stype=1', {asynchronous:true, evalScripts:true, parameters:Form.serialize(this)}); return false;" method="post" id="new_search" class="new_search" action="/searches/search_set?stype=1"><div style="margin: 0pt; padding: 0pt; display: inline;"><input type="hidden" value="3TWSyMsZXI0nltz7zHAxuj1KX=" name="authenticity_token"></div> <a onclick="setSubmit(this);" href="#" class="submit-link-button fg-button ui-state-default fg-button-icon-left ui-corner-all" id="search_submit"><span class="ui-icon ui-icon-search"></span>'Search'</a> </div> <input type="checkbox" value="Energy" onclick="refreshResults(this);" name="search[conditions][article_tag][0]" id="search_conditions_article_tag_0">
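
    With Prototype, Element#request (available since 1.6) serializes and submits a form over Ajax without going through the inline onsubmit handler; a rough sketch of refreshResults, noting that the hidden-field id here is made up:

      function refreshResults(checkbox) {
        // 1 - set the hidden field (id is hypothetical)
        $('search_hidden_flag').value = checkbox.checked ? '1' : '0';
        // 2 - Ajax-submit the form to its action URL
        $('new_search').request({
          onComplete: function(response) {
            // update the results area here
          }
        });
      }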

    Read the article

  • Symfony fk issue on insertion

    - by Daniel Hertz
    Hi, I posted a similar problem before but it could not be resolved. I am creating a relational database of users and groups, but for some reason I cannot insert test data with fixtures properly. Here is a sample of the schema: User: actAs: { Timestampable: ~ } columns: name: { type: string(255), notnull: true } email: { type: string(255), notnull: true, unique: true } nickname: { type: string(255), unique: true } password: { type: string(300), notnull: true } image: { type: string(255) } Group: actAs: { Timestampable: ~ } columns: name: { type: string(500), notnull: true } image: { type: string(255) } type: { type: string(255), notnull: true } created_by_id: { type: integer } relations: User: { onDelete: SET NULL, class: User, local: created_by_id, foreign: id, foreignAlias: groups_created } FanOf: actAs: { Timestampable: ~ } columns: user_id: { type: integer, primary: true } group_id: { type: integer, primary: true } relations: User: { onDelete: CASCADE, local: user_id, foreign: id, foreignAlias: fanhood } Group: { onDelete: CASCADE, local: group_id, foreign: id, foreignAlias: fanhood } And this is the data I try to insert: User: user1: name: Danny email: [email protected] nickname: danny password: f05050400c5e586fa6629ef497be Group: group1: name: Mets type: sports FanOf: fans1: user_id: user1 group_id: group1 I keep getting this error: SQLSTATE[23000]: Integrity constraint violation: 1452 Cannot add or update a child row: a foreign key constraint fails (`krowdd`.`fan_of`, CONSTRAINT `fan_of_user_id_user_id` FOREIGN KEY (`user_id`) REFERENCES `user` (`id`) ON DELETE CASCADE) The users and groups are clearly being created before the "fanhood" is, so why am I getting this error? Thanks!
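
    One likely culprit: in Doctrine YAML fixtures a foreign key has to be set through the relation alias rather than the raw _id column, otherwise the fixture label is inserted literally instead of being resolved to the generated id. A sketch of the FanOf fixture written that way:

      FanOf:
        fans1:
          User:  user1    # relation alias, resolved to the user1 fixture's id
          Group: group1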

    Read the article

  • Java: dangerous self-returning-recursive function by IOException?

    - by HH
    I used to have very error-prone exception handling built from if-clauses. Now an exception occurs if the path is not found, and the catch block calls the function again, so the function is recursive. Is this safe? $ javac SubDirs.java $ java SubDirs Give me an path. . HELLO com TOASHEOU google common annotations base collect internal Give me an path. IT WON'T FIND ME, SO IT RETURNS ITSELF due to Exception caught Give me an path. $ cat SubDirs.java import java.io.*; import java.util.*; public class SubDirs { private List<File> getSubdirs(File file) throws IOException { List<File> subdirs = Arrays.asList(file.listFiles(new FileFilter() { public boolean accept(File f) { return f.isDirectory(); } })); subdirs = new ArrayList<File>(subdirs); List<File> deepSubdirs = new ArrayList<File>(); for(File subdir : subdirs) { deepSubdirs.addAll(getSubdirs(subdir)); } subdirs.addAll(deepSubdirs); return subdirs; } public static void search() { try{ BufferedReader in = new BufferedReader(new InputStreamReader(System.in)); String s; System.out.println("Give me an path."); while ((s = in.readLine()) != null && s.length() != 0){ SubDirs dirs = new SubDirs(); List<File> subDirs = dirs.getSubdirs(new File(s)); for ( File f : subDirs ) { System.out.println(f.getName()); } System.out.println("Give me an path."); } }catch(Exception e){ // Simple but is it safe? search(); } } public static void main(String[] args) throws IOException { search(); } }
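
    It works, but every bad path adds another search() frame to the call stack and wraps System.in in a fresh reader. A sketch of the same prompt loop with the recursion replaced by a narrower try/catch, so a bad path just prompts again:

      public static void search() throws IOException {
          BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
          String s;
          System.out.println("Give me a path.");
          while ((s = in.readLine()) != null && s.length() != 0) {
              try {
                  List<File> subDirs = new SubDirs().getSubdirs(new File(s));
                  for (File f : subDirs) {
                      System.out.println(f.getName());
                  }
              } catch (Exception e) {
                  // bad or unreadable path: report it and keep looping
                  System.out.println("Could not read that path, try another one.");
              }
              System.out.println("Give me a path.");
          }
      }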

    Read the article

  • Javascript BBCode Parser recognizes only first list element

    - by nolandark
    I have a really simple JavaScript BBCode parser for client-side live preview (I don't want to use Ajax for that). The problem is that this parser only recognizes the first list element: function bbcode_parser(str) { search = new Array( /\[b\](.*?)\[\/b\]/, /\[i\](.*?)\[\/i\]/, /\[img\](.*?)\[\/img\]/, /\[url\="?(.*?)"?\](.*?)\[\/url\]/, /\[quote](.*?)\[\/quote\]/, /\[list\=(.*?)\](.*?)\[\/list\]/i, /\[list\]([\s\S]*?)\[\/list\]/i, /\[\*\]\s?(.*?)\n/); replace = new Array( "<strong>$1</strong>", "<em>$1</em>", "<img src=\"$1\" alt=\"An image\">", "<a href=\"$1\">$2</a>", "<blockquote>$1</blockquote>", "<ol>$2</ol>", "<ul>$1</ul>", "<li>$1</li>"); for (i = 0; i < search.length; i++) { str = str.replace(search[i], replace[i]); } return str;} [list] [*] adfasdfdf [*] asdfadsf [*] asdfadss [/list] Only the first element is converted to an HTML list element; the rest stays as BBCode: adfasdfdf [*] asdfadsf [*] asdfadss I tried playing around with "\s", "\S" and "\n", but I'm mostly used to PHP regexes and totally new to JavaScript regexes. Any suggestions?
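
    The usual suspect here is the missing global flag: String.prototype.replace with a non-global regex only rewrites the first match, so every [*] after the first is left untouched. A sketch of just the list rules with /g added:

      var search = [
          /\[list\]([\s\S]*?)\[\/list\]/gi,
          /\[\*\]\s?(.*?)(?:\n|$)/g          // global flag: replace every item, not just the first
      ];
      var replace = ["<ul>$1</ul>", "<li>$1</li>"];

      function parseLists(str) {
          for (var i = 0; i < search.length; i++) {
              str = str.replace(search[i], replace[i]);
          }
          return str;
      }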

    Read the article

  • JSF Managed Property question

    - by kidvid
    I have a search page that I'll call "Parent." The search page references a country lookup page that I'll call "Child." When the user selects a country on Child's page and clicks on OK, I set the country back into the parent page. I do this by calling a method on the Parent page called "UpdateCountryCodeWithLookupValue(Child child)". When the user clicks OK on the Child page, that method gets called in the parent, wherein it'll get the selected country code out of the Child page and set it into a text entry field. My question has to do with the proper way to set up this relationship in the faces config file. The way I have it now is that the child has a managed property for the parent. I.e., in my Child page I defined a method called "SetParent(Parent parent)". Is there any drawback to doing it this way? Would it be preferable to set the managed property so that the Child page class is a property of the parent instead of vice versa? Let's say that I could have two Parent (search) pages open at the same time, and each of these was able to open the Child page (country code lookup). What would the ramifications be in that circumstance in terms of the managed property in the faces config file? Thanks, Adrian

    Read the article

  • Mimic remote API or extend existing django model

    - by drozzy
    I am in the process of designing a client for a REST-ful web service. What is the best way to go about representing the remote resource locally in my Django application? For example, if the API exposes resources such as: List of Cars, Car Detail, Car Search, Dealership summary. So far I have thought of two different approaches to take: Try to wrangle Django's models.Model to mimic the native feel of it. So I could try to get some class called Car to have methods like Car.objects.all() and such. This kind of breaks down on Car Search resources. Implement a Data Access Layer class, with custom methods like: Car.get_all() Car.get(id) CarSearch.search("blah") So I will be creating some custom-looking classes. Has anyone encountered a similar problem, perhaps working with some external APIs (e.g. Twitter)? Any advice is welcome. PS: Please let me know if some part of the question is confusing, as I had trouble putting it in precise terms.
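
    A sketch of the second approach, a thin data-access layer over the remote API; the endpoint URLs and the requests dependency are assumptions, not part of the original question:

      import requests  # assumed to be available

      API_ROOT = "https://api.example.com"  # hypothetical service root


      class Car(object):
          def __init__(self, data):
              self.__dict__.update(data)

          @classmethod
          def get_all(cls):
              return [cls(item) for item in requests.get(API_ROOT + "/cars/").json()]

          @classmethod
          def get(cls, car_id):
              return cls(requests.get("%s/cars/%s/" % (API_ROOT, car_id)).json())

          @classmethod
          def search(cls, query):
              resp = requests.get(API_ROOT + "/cars/search/", params={"q": query})
              return [cls(item) for item in resp.json()]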

    Read the article

  • How to pull one commit at a time from a remote git repository?

    - by Norman Ramsey
    I'm trying to set up a darcs mirror of a git repository. I have something that works OK, but there's a significant problem: if I push a whole bunch of commits to the git repo, those commits get merged into a single darcs patchset. I really want to make sure each git commit gets set up as a single darcs patchset. I bet this is possible by doing some kind of git fetch followed by interrogation of the local copy of the remote branch, but my git fu is not up to the job. Here's the (ksh) code I'm using now, more or less: git pull -v # pulls all the commits from remote --- bad! # gets information about only the last commit pulled -- bad! author="$(git log HEAD^..HEAD --pretty=format:"%an <%ae>")" logfile=$(mktemp) git log HEAD^..HEAD --pretty=format:"%s%n%b%n" > $logfile # add all new files to darcs and record a patchset. this part is OK darcs add -q --umask=0002 -r . darcs record -a -A "$author" --logfile="$logfile" darcs push -a rm -f $logfile My idea is Try git fetch to get local copy of the remote branch (not sure exactly what arguments are needed) Somehow interrogate the local copy to get a hash for every commit since the last mirroring operation (I have no idea how to do this) Loop through all the hashes, pulling just that commit and recording the associated patchset (I'm pretty sure I know how to do this if I get my hands on the hash) I'd welcome either help fleshing out the scenario above or suggestions about something else I should try. Ideas?
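
    One possible shape for this (untested): fetch first, then walk the new commits oldest-to-newest with git rev-list and fast-forward to each one before recording the darcs patch:

      git fetch origin
      for rev in $(git rev-list --reverse HEAD..origin/master); do
          git merge --ff-only "$rev"                 # advance one commit at a time
          author="$(git log -1 --pretty=format:'%an <%ae>' "$rev")"
          logfile=$(mktemp)
          git log -1 --pretty=format:'%s%n%b%n' "$rev" > "$logfile"
          darcs add -q --umask=0002 -r .
          darcs record -a -A "$author" --logfile="$logfile"
          rm -f "$logfile"
      done
      darcs push -a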

    Read the article

  • Is it possible to output cache by host name? ie varybyhost or varbyhostheader?

    - by Pure.Krome
    Hi folks, I've got a website that has a number of host headers. Depending on the host header, the results are different - both visually (themed) and in data. So let's imagine I have a website called 'Foo' that returns search results (original, eh?). Now, the same code runs both sites. It is physically the same server/website (using host headers): www.foo.com www.foo.com.au Now, if I go to the '.com' site, it is themed in blue. If I go to the '.com.au' site, it's themed in red. And the data is different for the same search result, based on the host name (i.e. US results for .com, AU results for .com.au). So, if I wish to use output caching, can this be handled / varied by the host name? I don't want the first person to go to the .com site and grab the results, and then a second person to go to my .com.au site with the same search data and get the theme and results for the .com site. Possible?
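
    ASP.NET's output cache can be varied by anything you can compute in GetVaryByCustomString, including the host header; a sketch (the custom key name "host" is arbitrary):

      // Global.asax.cs
      public override string GetVaryByCustomString(HttpContext context, string custom)
      {
          if (string.Equals(custom, "host", StringComparison.OrdinalIgnoreCase))
              return context.Request.Url.Host.ToLowerInvariant();   // one cache entry per host header
          return base.GetVaryByCustomString(context, custom);
      }

    The page or user control then opts in with something like <%@ OutputCache Duration="300" VaryByParam="*" VaryByCustom="host" %>, so each host header gets its own cached copy.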

    Read the article

  • can't get a connection using MongoTor

    - by Abdelouahab Pp
    I was trying to change my code to make it asynchronous using MongoTor. Here is my simple code: class BaseHandler(tornado.web.RequestHandler): @property def db(self): if not hasattr(self,"_db"): _db = Database.connect('localhost:27017', 'essog') return _db @property def fs(self): if not hasattr(BaseHandler,"_fs"): _fs = gridfs.GridFS(self.db) return _fs class LoginHandler(BaseHandler): @tornado.web.asynchronous @tornado.gen.engine def post(self): email = self.get_argument("email") password = self.get_argument("pass1") try: search = yield tornado.gen.Task(self.db.users.find, {"prs.mail":email}) .... I got this error: Traceback (most recent call last): File "C:\Python27\lib\site-packages\tornado-2.4.post1-py2.7.egg\tornado\web.py", line 1043, in _stack_context_handle_exception raise_exc_info((type, value, traceback)) File "C:\Python27\lib\site-packages\tornado-2.4.post1-py2.7.egg\tornado\web.py", line 1162, in wrapper return method(self, *args, **kwargs) File "C:\Python27\lib\site-packages\tornado-2.4.post1-py2.7.egg\tornado\gen.py", line 122, in wrapper runner.run() File "C:\Python27\lib\site-packages\tornado-2.4.post1-py2.7.egg\tornado\gen.py", line 365, in run yielded = self.gen.send(next) File "G:\Mon projet\essog\handlers.py", line 92, in post search = yield tornado.gen.Task(self.db.users.find, {"prs.mail":email}) File "G:\Mon projet\essog\handlers.py", line 62, in db _db = Database.connect('localhost:27017', 'essog') File "build\bdist.win-amd64\egg\mongotor\database.py", line 131, in connect database.init(addresses, dbname, read_preference, **kwargs) File "build\bdist.win-amd64\egg\mongotor\database.py", line 62, in init ioloop_is_running = IOLoop.instance().running() AttributeError: 'SelectIOLoop' object has no attribute 'running' ERROR:tornado.access:500 POST /login (::1) 3.00ms And, excuse me for this other question, but how do I do a distinct in this case? Here is what worked in blocking mode: search = self.db.users.find({"prs.mail":email}).distinct("prs.mail")[0] Update: it seems that this error happens when there is no Tornado running! It's the same error raised when using only the module in the console. test = Database.connect("localhost:27017", "essog") --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) in () ---- 1 test = Database.connect("localhost:27017", "essog") C:\Python27\lib\site-packages\mongotor-0.0.10-py2.7.egg\mongotor\database.pyc in connect(cls, addresses, dbname, read_preference, **kwargs) 131 132 database = Database() -- 133 database.init(addresses, dbname, read_preference, **kwargs) 134 135 return database C:\Python27\lib\site-packages\mongotor-0.0.10-py2.7.egg\mongotor\database.pyc in init(self, addresses, dbname, read_preference, **kwargs) 60 self._nodes.append(node) 61 --- 62 ioloop_is_running = IOLoop.instance().running() 63 self._config_nodes(callback=partial(self._on_config_node, ioloop_is_running)) 64 AttributeError: 'SelectIOLoop' object has no attribute 'running'

    Read the article

  • How to Bind Data and manipulate it in a GridView with MVP

    - by DotNetDan
    I am new to the whole MVP thing and slowly getting my head around it all. The problem I am having is how to stay consistent with the MVP methodology when populating GridViews (and ddls, but we will tackle that later). Is it okay to have it connected straight to an ObjectDataSource via DataSourceID? To me this seems wrong because it bypasses all the separation of concerns MVP was made for. So, with that said, how do I do it? How do I handle sorting (do I send handler events over to the presentation layer, and if so how does that look in code)? Right now I have a GridView that has no sorting. Code below. ListCustomers.aspx.cs: public partial class ListCustomers : System.Web.UI.Page, IlistCustomer { protected void Page_Load(object sender, EventArgs e) { //On every page load, create a new presenter object with //constructor receiving the // page's IlistCustomer view ListUserPresenter ListUser_P = new ListUserPresenter(this); //Call the presenter's PopulateList to bind data to gridview ListUser_P.PopulateList(); } GridView IlistCustomer.UserGridView { get { return gvUsers; } set { gvUsers = value; } } } Interface (IlistCustomer.cs): is it bad to send in an entire GridView control? public interface IlistCustomer { GridView UserGridView { set; get; } } The Presenter (ListUserPresenter.cs): public class ListUserPresenter { private IlistCustomer view_listCustomer; private GridView gvListCustomers; private DataTable objDT; public ListUserPresenter( IlistCustomer view) { //Handle an error if an Ilistcustomer was not sent in if (view == null) throw new ArgumentNullException("ListCustomer View cannot be blank"); //Set local IlistCustomer interface view this.view_listCustomer = view; } public void PopulateList() { //Fill local Gridview with local IlistCustomer gvListCustomers = view_listCustomer.UserGridView; // Instantiate a new CustomerBusiness object to contact database CustomerBusiness CustomerBiz = new CustomerBusiness(); //Call CustomerBusiness's GetListCustomers to fill DataTable object objDT = CustomerBiz.GetListCustomers(); //Bind DataTable to gridview; gvListCustomers.DataSource = objDT; gvListCustomers.DataBind(); } }
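
    A common way to keep the presenter out of System.Web.UI is to have the view expose data rather than the control itself; a rough sketch of that variation on the code above:

      public interface IListCustomer
      {
          // the view exposes a data sink instead of the GridView
          DataTable CustomersSource { set; }
      }

      // ListCustomers.aspx.cs
      DataTable IListCustomer.CustomersSource
      {
          set { gvUsers.DataSource = value; gvUsers.DataBind(); }
      }

      // ListUserPresenter.cs
      public void PopulateList()
      {
          view_listCustomer.CustomersSource = new CustomerBusiness().GetListCustomers();
      }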

    Read the article

  • Thinking Sphinx with a date range

    - by Leddo
    Hi, I am implementing a full text search API for my Rails apps, and so far have been having great success with Thinking Sphinx. I now want to implement a date range search, and keep getting the "bad value for range" error. Here is a snippet of the controller code, and I'm a bit stuck on what to do next. @search_options = { :page => params[:page], :per_page => params[:per_page]||50 } unless params[:since].blank? # make sure date is in specified format - YYYY-MM-DD d = nil begin d = DateTime.strptime(params[:since], '%Y-%m-%d') rescue raise ArgumentError, "Value for since parameter is not a valid date - please use format YYYY-MM-DD" end @search_options.merge!(:with => {:post_date => d..Time.now.utc}) end logger.info @search_options @posts = Post.search(params[:q], @search_options) When I have a look at the log, I am seeing this bit, which seems to imply the date hasn't been converted into the same time format as Time.now.utc. withpost_date2010-05-25T00:00:00+00:00..Tue Jun 01 17:45:13 UTC 2010 Any ideas? Basically I am trying to have the API request pass in a "since" date to see all posts after a certain date. I am specifying that the date should be in the YYYY-MM-DD format. Thanks for your help. Chris EDIT: I just changed the date parameters merge statement to this @search_options.merge!(:with => {:post_date => d.to_date..DateTime.now}) and now I get this error: undefined method `to_i' for Tue, 25 May 2010:Date So obviously there is something still not set up right...
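
    Sphinx filters work on timestamps, so both ends of the range need to respond to to_i; one fix worth trying is building the range from Time objects rather than Date/DateTime values (a sketch of the same controller snippet):

      unless params[:since].blank?
        begin
          d = Date.strptime(params[:since], '%Y-%m-%d')
        rescue ArgumentError
          raise ArgumentError, "Value for since parameter is not a valid date - please use format YYYY-MM-DD"
        end
        since = Time.utc(d.year, d.month, d.day)   # a Time responds to to_i, which Sphinx needs
        @search_options.merge!(:with => { :post_date => since..Time.now.utc })
      end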

    Read the article

  • Can ElementTree be told to preserve the order of attributes?

    - by dmckee
    I've written a fairly simple filter in Python using ElementTree to munge the contents of some XML files. And it works, more or less. But it reorders the attributes of various tags, and I'd like it to not do that. Does anyone know a switch I can throw to make it keep them in the specified order? Context for this: I'm working with and on a particle physics tool that has a complex, but oddly limited, configuration system based on XML files. Among the many things set up that way are the paths to various static data files. These paths are hardcoded into the existing XML and there are no facilities for setting or varying them based on environment variables, and in our local installation they are necessarily in a different place. This isn't a disaster because the combined source- and build-control tool we're using allows us to shadow certain files with local copies. But even though the data files are static, the XML isn't, so I've written a script for fixing the paths, but with the attribute rearrangement the diffs between the local and master versions are harder to read than necessary. This is my first time taking ElementTree for a spin (and only my fifth or sixth Python project), so maybe I'm just doing it wrong. Abstracted for simplicity the code looks like this: tree = elementtree.ElementTree.parse(inputfile) i = tree.getiterator() for e in i: e.text = filter(e.text) tree.write(outputfile) Reasonable or dumb? Related links: How can I get the order of an element attribute list using Python xml.sax? Preserve order of attributes when modifying with minidom
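
    The stdlib ElementTree serializer sorts attributes on write and exposes no documented switch for it, so one workaround, assuming lxml is acceptable in your environment, is to use its ElementTree-compatible API, which keeps attributes in document order:

      from lxml import etree  # drop-in for this usage, preserves attribute order

      tree = etree.parse(inputfile)
      for e in tree.iter():
          if e.text is not None:
              e.text = filter(e.text)
      tree.write(outputfile)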

    Read the article

  • How to make a product catalog in C#?

    - by Ervin
    I need to develop a product catalog (about 4000 products) application, which would be given to clients on CD or DVD. The catalog exists in webpage format using PHP and MySQL. IMPORTANT: the application is given to clients who might have an old PC and an old system. For minimal requirements I would assume Windows XP and Internet Explorer 6 (if needed). I need the following features: 1. search option (by productID AND by keyword) 2. print option (by selecting multiple products) 3. shopping cart (making a list which will be sent to an email address if there is an internet connection on the computer) When I was asked to do it I had 2 days to realise a very basic version, so I took the whole website and exported it as HTML pages, and developed an application in C# which contains an embedded browser. So the whole website is now static and put on a CD. Everything fine so far. Now here are the problems: 1. the search option was realized by parsing the HTML files and reading the productID or looking for keywords inside of them. Put on a CD it was extremely slow (searching in 600MB of HTML files). FOR THIS I WOULD NEED A SOLUTION WITH A STATIC DATABASE (USING ACCESS OR SOMETHING) TO HAVE INDEXED ROWS, SO THE SEARCH COULD BE A VERY FAST ONE. 2. the printing option was simply a call to the embedded Internet Explorer print functions. Here are two problems: a) the user needs IE7 to print the website scaled (FIT TO PAGE), otherwise the edges of the page are cut off. b) users of this app do not have even basic PC skills, so they can't change the print settings, and the page numbers and titles will appear in the header and footer. QUESTION: can I set these settings from CSS for printing? 3. couldn't make a shopping cart as I don't use a database; I have static pages and the content is inside the HTML. QUESTION: WHICH ARE THE BEST SOLUTIONS FOR THE PROBLEMS DESCRIBED ABOVE? PLEASE ANSWER EVEN IF YOUR ANSWER IS FOR ONE QUESTION ONLY. THANKS

    Read the article

  • problem with MANIFEST.MF in jar

    - by dhananjay
    I have created my jar file in the following folder: /usr/local/bin/niidle.jar And I have one jar file which is in the following folder: /Projects/EnwelibDatedOct13/Niidle/lib/hector-0.6.0-17.jar I have to reference this file, 'hector-0.6.0-17.jar', in the jar's MANIFEST.MF. When I declare the Class-Path in MANIFEST.MF as follows: Manifest-Version: 1.0 Main-Class: com.ensarm.niidle.web.scraper.NiidleScrapeManager Class-Path: /Projects/EnwelibDatedOct13/Niidle/lib/hector-0.6.0-17.jar and run it using the command: java -jar /usr/local/bin/niidle.jar it works properly. But I don't want to give the full Class-Path; I want to give the Class-Path as follows: Manifest-Version: 1.0 Main-Class: com.ensarm.niidle.web.scraper.NiidleScrapeManager Class-Path: lib/hector-0.6.0-17.jar And when I run this using the command: java -jar /usr/local/bin/niidle.jar it shows this error message: Exception in thread "main" java.lang.NoClassDefFoundError: me/prettyprint/hector/api/Serializer at com.ensarm.niidle.web.scraper.NiidleScrapeManager.main(NiidleScrapeManager.java:21) Caused by: java.lang.ClassNotFoundException: me.prettyprint.hector.api.Serializer at java.net.URLClassLoader$1.run(URLClassLoader.java:200) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:188) at java.lang.ClassLoader.loadClass(ClassLoader.java:307) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301) at java.lang.ClassLoader.loadClass(ClassLoader.java:252) at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320) ... 1 more Please tell me the solution for that...
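
    Relative Class-Path entries in a manifest are resolved against the directory containing the jar itself, not the current directory, so lib/hector-0.6.0-17.jar has to sit next to niidle.jar. A possible layout:

      mkdir -p /usr/local/bin/lib
      cp /Projects/EnwelibDatedOct13/Niidle/lib/hector-0.6.0-17.jar /usr/local/bin/lib/
      java -jar /usr/local/bin/niidle.jar   # Class-Path: lib/hector-0.6.0-17.jar now resolves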

    Read the article

  • Is Perl's flip-flop operator bugged? It has global state, how can I reset it?

    - by Evan Carroll
    I'm dismayed. Ok, so this was probably the most fun Perl bug I've ever found. Even today I'm learning new stuff about Perl. Essentially, the flip-flop operator .., which returns false until the left-hand side returns true, and then true until the right-hand side returns false, keeps global state (or that is what I assume). My question is: can I reset it (perhaps this would be a good addition to the perl4-esque, hardly ever used reset())? Or is there no way to use this operator safely? I also don't see this (the global context bit) documented anywhere in perldoc perlop; is this a mistake? Code use feature ':5.10'; use strict; use warnings; sub search { my $arr = shift; grep { !( /start/ .. /never_exist/ ) } @$arr; } my @foo = qw/foo bar start baz end quz quz/; my @bar = qw/foo bar start baz end quz quz/; say 'first shot - foo'; say for search \@foo; say 'second shot - bar'; say for search \@bar; Spoiler $ perl test.pl first shot foo bar second shot
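
    The state belongs to that particular .. op in the compiled code, so the second call to search() sees it still flipped on. If the goal is simply a search() that behaves the same on every call, one workaround is to keep the state in an ordinary lexical instead of relying on the operator; a sketch with the same behaviour as the first call:

      sub search {
          my $arr  = shift;
          my $seen = 0;                      # fresh state on every call
          grep { $seen ||= /start/; !$seen } @$arr;
      }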

    Read the article

  • Check if files in a directory are still being written using Windows Batch Script

    - by FMFF
    Hello. Here's my batch file to parse a directory and zip files of a certain type REM Begin ------------------------ tasklist /FI "IMAGENAME eq 7za.exe" /FO CSV > search.log FOR /F %%A IN (search.log) DO IF %%~zA EQU 0 GOTO end for /f "delims=" %%A in ('dir C:\Temp\*.ps /b') do ( "C:\Program Files\7-Zip\cmdline\7za.exe" a -tzip -mx9 "C:\temp\Zip\%%A.zip" "C:\temp\%%A" Move "C:\temp\%%A" "C:\Temp\Archive" ) :end del search.log REM pause exit REM End --------------------------- This code works just fine for 90% of my needs. It will be deployed as a scheduled task. However, the *.ps files are rather large (a minimum of 1GB) in real-world cases. So the code is supposed to check whether the incoming file is completely written and is not locked by the application that is writing it. I saw another example elsewhere that suggested the following approach :TestFile ren c:\file.txt c:\file.txt if errorlevel 0 goto docopy sleep 5 goto TestFile :docopy However, this example is good for a fixed file. How can I use that many labels and GOTOs inside a for loop without causing an infinite loop? Or is this code safe to use in the for loop? Thank you for any help.
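
    One way to fold the rename test into the loop without labels (an untested sketch): renaming a file onto its own name fails while another process still has it open, so a file that fails the ren is simply skipped until the next scheduled run:

      for /f "delims=" %%A in ('dir C:\Temp\*.ps /b') do (
        rem try to rename the file onto itself; this fails if it is still being written
        ren "C:\Temp\%%A" "%%A" 2>nul
        if not errorlevel 1 (
          "C:\Program Files\7-Zip\cmdline\7za.exe" a -tzip -mx9 "C:\temp\Zip\%%A.zip" "C:\temp\%%A"
          move "C:\temp\%%A" "C:\Temp\Archive"
        )
      )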

    Read the article

  • QUnit Unit Testing: Test Mouse Click

    - by Ngu Soon Hui
    I have the following HTML code: <div id="main"> <form Id="search-form" action="/ViewRecord/AllRecord" method="post"> <div> <fieldset> <legend>Search</legend> <p> <label for="username">Staff name</label> <input id="username" name="username" type="text" value="" /> <label for="softype"> software type</label> <input type="submit" value="Search" /> </p> </fieldset> </div> </form> </div> And the following JavaScript code (with jQuery as the library): $(function() { $("#username").click(function() { $.getJSON("ViewRecord/GetSoftwareChoice", {}, function(data) { // use data to manipulate other controls }); }); }); Now, how do I test $("#username").click so that, for a given input, it calls the correct URL (in this case, ViewRecord/GetSoftwareChoice), and the callback (in this case, function(data)) behaves correctly? Any idea how to do this with QUnit? Edit: I read the QUnit examples, but they seem to be dealing with a simple scenario with no AJAX interaction. And although there are ASP.NET MVC examples, I think they are really testing the output of the server to an AJAX call, i.e., it's still testing the server response, not the AJAX response. What I want is to test the client-side response.
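
    One approach is to stub $.getJSON inside the test, so the click handler is exercised without a server and both the requested URL and the callback behaviour can be asserted; a sketch (the canned response shape is made up):

      test("clicking #username requests the software choices", function () {
        var requestedUrl, original = $.getJSON;
        $.getJSON = function (url, data, callback) {   // stub out the network call
          requestedUrl = url;
          callback({ choices: ["some choice"] });      // canned server response
        };
        $("#username").click();
        equal(requestedUrl, "ViewRecord/GetSoftwareChoice", "handler requested the right URL");
        // ...assert here that the other controls reflect the canned data...
        $.getJSON = original;                          // restore the real implementation
      });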

    Read the article

  • VEMap and a GeoRSS feed(hosted separately)

    - by Alexis Abril
    The scenario is as follows: A WCF web service exists that outputs a valid GeoRSS feed. This lives in its own domain as a number of different applications have access to it. A web page(on a different site) has been created with an instance of a VEMap(Bing/Virtual Earth map object). Now, VEMap can accept an input feed in this format via the following: var layer = new VEShapeLayer(); var veLayerSpec = new VEShapeSourceSpecification(VEDataType.GeoRSS, "someurl", layer); map.ImportShapeLayerData(veLayerSpec, onComplete, true); onComplete is a callback function I'm using to replace the default pin graphic with something custom. The question is in regards to "someurl", which is a path to a local xml file containing the geographic information(georss simple format). I've realized this feed and the map must be hosted in the same domain, so I've created a generic handler that reads the remote feed and returns it in the same format. var veLayerSpec = new VEShapeSourceSpecification(VEDataType.GeoRSS, "/somelocalhandler.ashx", layer); When I do this, I get the VEMap error("z is null"). This is the same error one would receive when trying to access a remote feed. When I copy the feed into a local xml file(ie, "feed.xml") there is no error. The order of operations is currently: remote feed - local handler - VEMap import If I'm over complicating this procedure, let me know! I'm a bit new to the Bing Maps API and might have missed something. Any assistance is appreciated.
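
    If the handler approach is otherwise sound, one thing worth checking is the response content type; a bare-bones sketch of a pass-through handler that serves the remote feed as XML from the map's own domain (the feed URL and class name are placeholders):

      // somelocalhandler.ashx.cs - hypothetical pass-through proxy
      public class GeoRssProxy : System.Web.IHttpHandler
      {
          public void ProcessRequest(System.Web.HttpContext context)
          {
              using (var client = new System.Net.WebClient())
              {
                  string feed = client.DownloadString("http://feeds.example.com/georss"); // placeholder
                  context.Response.ContentType = "text/xml";
                  context.Response.Write(feed);
              }
          }

          public bool IsReusable { get { return true; } }
      }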

    Read the article

  • Git : Failed at pushing to remote server, ' REPOSITORY_PATH ' is not a git command

    - by Judarkness
    I'm using Git with TortoiseGit on Windows XP, and I have a remote bare repository on Windows Vista 64-bit. When I tried to push my local files to the remote bare repository, I got the following error message. git.exe push "origin" master:master git: 'C:/Git_Repository/.git' is not a git command. See 'git --help'. fatal: The remote end hung up unexpectedly The remote URL is: username@serverip:C:/Git_Repository/.git The same URL worked just fine for clone/fetch/pull. Access to this bare repository from a local directory on the remote machine works fine as well, so I believe there is something wrong with my path. I can push/pull to GitHub correctly, but there I was using the URL provided by GitHub. Does anyone know what's wrong with my configuration? Here is my remote .git/config [core] repositoryformatversion = 0 filemode = false bare = true logallrefupdates = true ignorecase = true hideDotFiles = dotGitOnly Here is my local .git/config [core] repositoryformatversion = 0 filemode = false bare = false logallrefupdates = true symlinks = false ignorecase = true hideDotFiles = dotGitOnly [remote "origin"] fetch = +refs/heads url = username@serverip:C:/Git_Repository/.git [branch "master"] remote = origin merge = refs/heads/master Thanks for any pointers
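
    One thing worth trying, purely as a guess: the scp-style host:C:/path form is easy to misparse on Windows because of the drive-letter colon, so spelling the remote out as an ssh:// URL sidesteps that ambiguity:

      git remote set-url origin ssh://username@serverip/C:/Git_Repository/.git
      git push origin master:master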

    Read the article
