Search Results

Search found 3437 results on 138 pages for 'append'.

  • Rails add a param to the end of a search query

    - by bob
    Hello, I am trying to implement a sort on the returned results of a search with Rails. Filter by: 'recent' % | 'lowest_price' % | 'highest_price' % here are my links. I would like to add a filter to the end of my search query. So for example, I submit a search and I get back results and in my URL I have http://localhost:3000/junks?search=handbag&condition=&category=&main_submit=Go! I would then like the user to be able to click one of the above links and have the page come back with a new query (I have this set up in the controller like so) con_id = params[:condition] if params[:category].nil? || params[:category].empty? cat_id = params[:category] results = Junk.search params[:search], :with => { :category_id => cat_id, :condition_id => con_id } case params[:filter] when "lowest_price" @junks = results.lowest_price.paginate :page => params[:page], :per_page => 12 when "highest_price" @junks = results.highest_price.paginate :page => params[:page], :per_page => 12 else @junks = results.paginate :page => params[:page], :per_page => 12 end else I would like the user to be able to click one of the above links and append a filter param to the end of the search query so that my controller can pick it up and call the correct database queries as seen in the case statement above. I'm guessing the URL will look like this: http://localhost:3000/junks?search=handbag&condition=&category=&main_submit=Go!&filter="lowest_price" How can I do this? Is there a better way?

    Read the article

  • Need help modifying C++ application to accept continuous piped input in Linux

    - by GreeenGuru
    The goal is to mine packet headers for URLs visited using tcpdump. So far, I can save a packet header to a file using: tcpdump "dst port 80 and tcp[13] & 0x08 = 8" -A -s 300 | tee -a ./Desktop/packets.txt And I've written a program to parse through the header and extract the URL when given the following command: cat ~/Desktop/packets.txt | ./packet-parser.exe But what I want to be able to do is pipe tcpdump directly into my program, which will then log the data: tcpdump "dst port 80 and tcp[13] & 0x08 = 8" -A -s 300 | ./packet-parser.exe Here is the script as it is. The question is: how do I need to change it to support continuous input from tcpdump? #include <boost/regex.hpp> #include <fstream> #include <cstdio> // Needed to define ios::app #include <string> #include <iostream> int main() { // Make sure to open the file in append mode std::ofstream file_out("/var/local/GreeenLogger/url.log", std::ios::app); if (not file_out) std::perror("/var/local/GreeenLogger/url.log"); else { std::string text; // Get multiple lines of input -- raw std::getline(std::cin, text, '\0'); const boost::regex pattern("GET (\\S+) HTTP.*?[\\r\\n]+Host: (\\S+)"); boost::smatch match_object; bool match = boost::regex_search(text, match_object, pattern); if(match) { std::string output; output = match_object[2] + match_object[1]; file_out << output << '\n'; std::cout << output << std::endl; } file_out.close(); } } Thank you ahead of time for the help!

    Read the article

  • Python: speed up removal of every n-th element from list.

    - by ChristopheD
    I'm trying to solve this programming riddle and although the solution (see code below) works correctly, it is too slow for successful submission. Any pointers as to how to make this run faster (removal of every n-th element from a list)? Or suggestions for a better algorithm to calculate the same; I can't seem to think of anything other than brute force for now... Basically the task at hand is: GIVEN: L = [2,3,4,5,6,7,8,9,10,11,........] 1. Take the first remaining item in list L (in the general case 'n'). Move it to the 'lucky number list'. Then drop every 'n-th' item from the list. 2. Repeat 1 TASK: Calculate the n-th number from the 'lucky number list' ( 1 <= n <= 3000) My current code (it calculates the first 3000 lucky numbers in about a second on my machine - but unfortunately that is too slow): """ SPOJ Problem Set (classical) 1798. Assistance Required URL: http://www.spoj.pl/problems/ASSIST/ """ sieve = range(3, 33900, 2) luckynumbers = [2] while True: wanted_n = input() if wanted_n == 0: break while len(luckynumbers) < wanted_n: item = sieve[0] luckynumbers.append(item) items_to_delete = set(sieve[::item]) sieve = filter(lambda x: x not in items_to_delete, sieve) print luckynumbers[wanted_n-1]
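
    One way to speed this up, sketched below (not verified against the SPOJ judge): the costly part is rebuilding the list with filter() and a Python-level membership test on every pass; deleting an extended slice in place performs the same removal in a single C-level operation.

      # Sketch: same sieve, but every n-th survivor is dropped with extended-slice
      # deletion instead of filter() + set membership. The 33900 bound is kept from above.
      sieve = list(range(3, 33900, 2))
      luckynumbers = [2]

      def nth_lucky(wanted_n):
          while len(luckynumbers) < wanted_n:
              item = sieve[0]
              luckynumbers.append(item)
              # Removes sieve[0], sieve[item], sieve[2*item], ... in one pass,
              # without building a new list in interpreted code.
              del sieve[::item]
          return luckynumbers[wanted_n - 1]

      print(nth_lucky(3000))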

    Read the article

  • force delete row on django app after migration

    - by unsorted
    After a migration with south, I ended up deleting a column. Now the current data in one of my tables is screwed up and I want to delete it, but attempts to delete just result in an error: >>> d = Degree.objects.all() >>> d.delete() Traceback (most recent call last): File "<console>", line 1, in <module> File "C:\Python26\lib\site-packages\django\db\models\query.py", line 440, in delete for i, obj in izip(xrange(CHUNK_SIZE), del_itr): File "C:\Python26\lib\site-packages\django\db\models\query.py", line 106, in _result_iter self._fill_cache() File "C:\Python26\lib\site-packages\django\db\models\query.py", line 760, in _fill_cache self._result_cache.append(self._iter.next()) File "C:\Python26\lib\site-packages\django\db\models\query.py", line 269, in iterator for row in compiler.results_iter(): File "C:\Python26\lib\site-packages\django\db\models\sql\compiler.py", line 672, in results_iter for rows in self.execute_sql(MULTI): File "C:\Python26\lib\site-packages\django\db\models\sql\compiler.py", line 727, in execute_sql cursor.execute(sql, params) File "C:\Python26\lib\site-packages\django\db\backends\util.py", line 15, in execute return self.cursor.execute(sql, params) File "C:\Python26\lib\site-packages\django\db\backends\sqlite3\base.py", line 200, in execute return Database.Cursor.execute(self, query, params) DatabaseError: no such column: students_degree.abbrev >>> Is there a simple way to just force a delete? Do I drop the table and then rerun manage.py schemamigration to recreate the table in south?
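
    A sketch of one way to clear the rows without the ORM ever selecting the missing column (the table name students_degree is taken from the traceback above; adjust it to your app):

      # Sketch: delete with raw SQL so the broken model column is never referenced.
      from django.db import connection, transaction

      cursor = connection.cursor()
      cursor.execute("DELETE FROM students_degree")
      # Older Django versions (the 1.x era shown in the traceback) need an explicit
      # commit when running outside a managed transaction.
      transaction.commit_unless_managed()

    That only clears the data; the schema mismatch itself still has to be fixed with a migration.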

    Read the article

  • Unexpected Html.EditorFor behavior in ASP.NET MVC 2

    - by NickLarsen
    I am getting some unexpected behavior from Html.EditorFor(). I have this controller: [HandleError] public class HomeController : Controller { [AcceptVerbs(HttpVerbs.Get)] public ActionResult Lister() { string[] values = { "Hello", "world", "!!!" }; return View(values); } [AcceptVerbs(HttpVerbs.Post)] public ActionResult Lister(string[] values) { string[] newValues = { "Some", "other", "values" }; return View(newValues); } } And this is my view which is intended to work for both of these: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<string[]>" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> Lister </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <h2>Lister</h2> <% using (Html.BeginForm()) { %> <% foreach (string value in Model) { %> <%= value %><br /> <% } %> <%= Html.EditorForModel() %> <input type="submit" value="Append Dashes" /> <% } %> </asp:Content> And the problem is that when the post back is made from the view, it hits the correct action, but the text boxes still show the original hello world data while the foreach loop outputs the new values. It feels like something in ASP.NET is overriding my model values from updating the text boxes and they are just displaying the same old values. I found this issue while trying to learn EditorFor with an IEnumerable.

    Read the article

  • Django - provide additional information in template

    - by Ninefingers
    Hi all, I am building an app to learn Django and have started with a Contact system that currently stores Contacts and Addresses. C's are a many-to-many relationship with A's, but rather than use Django's models.ManyToManyField() I've created my own link-table providing additional information about the link, such as what the address type is for that contact (home, work etc). What I'm trying to do is pass this information out to a view, so in my full view of a contact I can do this: def contact_view_full(request, contact_id): c = get_object_or_404(Contact, id=contact_id) a = [] links = ContactAddressLink.objects.filter(ContactID=c.id) for link in links: b = Address.objects.get(id=link.AddressID_id) a.append(b) return render_to_response('contact_full.html', {'contact_item': c, 'addresses' : a }, context_instance=RequestContext(request)) And so I can do the equivalent of c.Addresses.all() or however the ManyToManyField works. What I'm interested to know is how I can pass out information about the link object along with the 'addresses' : a information, so that when my template does this: {% for address in addresses %} <!-- ... --> {% endfor %} it properly associates the correct link object data with the address. So what's the best way to achieve this? I'm thinking a union of two objects might be an idea, but I haven't enough experience with Django to know if that's considered the best way of doing it. Suggestions? Thanks in advance. Nf
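
    A sketch of one way to keep the per-link data and the address together (field names are taken from the code above; it assumes AddressID is the ForeignKey attribute on the link model): pass the link objects themselves to the template and follow the relation from there.

      # Sketch: hand the ContactAddressLink rows to the template; each row carries
      # its own extra columns (address type, etc.) plus the related Address.
      def contact_view_full(request, contact_id):
          c = get_object_or_404(Contact, id=contact_id)
          links = ContactAddressLink.objects.filter(ContactID=c.id).select_related()
          return render_to_response('contact_full.html',
                                    {'contact_item': c, 'links': links},
                                    context_instance=RequestContext(request))

    The template then loops with {% for link in links %} and can read both the link's own fields (the address type) and link.AddressID (the related Address) inside the same iteration.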

    Read the article

  • Assistance with building an inverted-index

    - by tipu
    It's part of an information retrieval thing I'm doing for school. The plan is to create a hashmap of words using the first two letters of the word as a key and any words beginning with those two letters saved as a string value. So, hashmap["ba"] = "bad barley base" Once I'm done tokenizing a line I take that hashmap, serialize it, and append it to the text file named after the key. The idea is that if I take my data and spread it over hundreds of files I'll lessen the time it takes to fulfill a search by lessening the density of each file. The problem I am running into is that when I'm making 100+ files in each run, it happens to choke on creating a few files for whatever reason and so those entries are empty. Is there any way to make this more efficient? Is it worth continuing this, or should I abandon it? I'd like to mention I'm using PHP. The two languages I know relatively intimately are PHP and Java. I chose PHP because the front end will be very simple to do and I will be able to add features like autocompletion/suggested search without a problem. I also see no benefit in using Java. Any help is appreciated, thanks.

    Read the article

  • Navigating between pages in a Facebook Platform iframe application

    - by Jimmy Cuadra
    I'm working on a Facebook Platform application that runs in iframe mode, and I'm having trouble understanding how to navigate between pages within the app. Let's say the first page that is loaded within the iframe at my canvas URL is one.html. Within that page, there is a link to two.html that just changes the source of the iframe and doesn't reload the Facebook chrome. When I do this, all the Facebook fb_sig_* query string parameters that Facebook passes to the original page aren't included, and so two.html has no awareness of the connection to Facebook and no ability to make API calls to generate the content for the page. One possible solution would be to manually extract all the Facebook parameters from one.html and append it to the link to two.html myself. This seems really ugly and I figured there had to be a cleaner way. For reference, my application is written in Perl and uses the WWW::Facebook::API module as a client library. I didn't see anything in it that I can use to easily reconstruct the Facebook parameters for use with links in iframe apps. Another possible solution would be to store all the Facebook parameters in a session on my server on the first page load, and just use the values in that session on subsequent page views. But what happens if the data I've stored no longer matches what Facebook would have sent if it were a completely new request (i.e. something in the user's Facebook session changed)? Is there something obvious I'm missing? What is the standard approach to navigating between pages within an iframe app? Facebook's documentation is atrocious and I haven't been able to find anything that clearly explains how this works. I also realize this wouldn't be an issue with an app using FBML instead of an iframe, but my understanding is that iframe apps are now encouraged over FBML apps, though again this seems ambiguous since so much of Facebook's documentation is outdated and contradictory.

    Read the article

  • Make Python Socket Server More Efficient

    - by BenMills
    I have very little experience working with sockets and multithreaded programming so to learn more I decided to see if I could hack together a little python socket server to power a chat room. I ended up getting it working pretty well but then I noticed my server's CPU usage spiked up over 100% when I had it running in the background. Here is my code in full: http://gist.github.com/332132 I know this is a pretty open ended question so besides just helping with my code are there any good articles I could read that could help me learn more about this? My full code: import select import socket import sys import threading from daemon import Daemon class Server: def __init__(self): self.host = '' self.port = 9998 self.backlog = 5 self.size = 1024 self.server = None self.threads = [] self.send_count = 0 def open_socket(self): try: self.server = socket.socket(socket.AF_INET6, socket.SOCK_STREAM) self.server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) self.server.bind((self.host,self.port)) self.server.listen(5) print "Server Started..." except socket.error, (value,message): if self.server: self.server.close() print "Could not open socket: " + message sys.exit(1) def remove_thread(self, t): t.join() def send_to_children(self, msg): self.send_count = 0 for t in self.threads: t.send_msg(msg) print 'Sent to '+str(self.send_count)+" of "+str(len(self.threads)) def run(self): self.open_socket() input = [self.server,sys.stdin] running = 1 while running: inputready,outputready,exceptready = select.select(input,[],[]) for s in inputready: if s == self.server: # handle the server socket c = Client(self.server.accept(), self) c.start() self.threads.append(c) print "Num of clients: "+str(len(self.threads)) self.server.close() for c in self.threads: c.join() class Client(threading.Thread): def __init__(self,(client,address), server): threading.Thread.__init__(self) self.client = client self.address = address self.size = 1024 self.server = server self.running = True def send_msg(self, msg): if self.running: self.client.send(msg) self.server.send_count += 1 def run(self): while self.running: data = self.client.recv(self.size) if data: print data self.server.send_to_children(data) else: self.running = False self.server.threads.remove(self) self.client.close() """ Run Server """ class DaemonServer(Daemon): def run(self): s = Server() s.run() if __name__ == "__main__": d = DaemonServer('/var/servers/fserver.pid') if len(sys.argv) == 2: if 'start' == sys.argv[1]: d.start() elif 'stop' == sys.argv[1]: d.stop() elif 'restart' == sys.argv[1]: d.restart() else: print "Unknown command" sys.exit(2) sys.exit(0) else: print "usage: %s start|stop|restart" % sys.argv[0] sys.exit(2)
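
    One likely culprit, guessed from the code shown rather than verified against the gist: once the process is daemonized, sys.stdin is closed (or at end-of-file), so select() reports it readable on every iteration and the while loop spins at full speed. A sketch of the accept loop without stdin in the select set:

      def run(self):
          self.open_socket()
          # Only watch the listening socket; a daemon has no usable stdin, and a
          # closed/EOF stdin is permanently "readable", which makes select() busy-loop.
          inputs = [self.server]
          while True:
              readable, _, _ = select.select(inputs, [], [])
              for s in readable:
                  if s is self.server:
                      client = Client(self.server.accept(), self)
                      client.start()
                      self.threads.append(client)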

    Read the article

  • elisp compile, add a regexp to error detection

    - by Gauthier
    I am starting with Emacs, and don't know much elisp. Nearly nothing, really. I want to use ack as a replacement for grep. These are the instructions I followed to use ack from within Emacs: http://www.rooijan.za.net/?q=ack_el Now I don't like the output format that is used in this el file, I would like the output to be that of ack --group. So I changed: (read-string "Ack arguments: " "-i" nil "-i" nil) to: (read-string "Ack arguments: " "-i --group" nil "-i --group" nil) So far so good. But this made me lose the ability to click-press_enter on the rows of the output buffer. In the original behaviour, compile-mode was used to be able to jump to the selected line. I figured I should add a regexp to the ack-mode. The ack-mode is defined like this: (define-compilation-mode ack-mode "Ack" "Specialization of compilation-mode for use with ack." nil) and I want to add the regexp [0-9]+: to be detected as an error too, since it is what every row of the output buffer includes (line number). I've tried to modify the define-compilation-mode above to add the regexp, but I failed miserably. How can I make the output buffer of ack let me click on its rows? --- EDIT, I tried also: --- (defvar ack-regexp-alist '(("[0-9]+:" 2 3)) "Alist that specifies how to match rows in ack output.") (setq compilation-error-regexp-alist (append compilation-error-regexp-alist ack-regexp-alist)) I stole that somewhere and tried to adapt it to my needs. No luck.

    Read the article

  • Use the Django ORM in a standalone script (again)

    - by Rishabh Manocha
    I'm trying to use the Django ORM in some standalone screen scraping scripts. I know this question has been asked before, but I'm unable to figure out a good solution for my particular problem. I have a Django project with defined models. What I would like to do is use these models and the ORM in my scraping script. My directory structure is something like this: project scrape #scraping scripts ... test.py web django_project settings.py ... #Django files I tried doing the following in project/scrape/test.py: print os.path.join(os.path.abspath('..'), 'web', 'django_project') sys.path.append(os.path.join(os.path.abspath('..'), 'web', 'django_project')) print sys.path print "-------" os.environ['DJANGO_SETTINGS_MODULE'] = 'django_project.settings' #print os.environ from django_project.myapp.models import MyModel print MyModel.objects.count() However, I get an ImportError when I try to run test.py: Traceback (most recent call last): File "test.py", line 12, in <module> from django_project.myapp.models import MyModel ImportError: No module named django_project.myapp.models One solution I found around this problem is to create a symbolic link to ../web/govcheck in the scrape folder: :scrape rmanocha$ ln -s ../web/govcheck ./govcheck With this, I can then run test.py just fine. However, this seems like a hack, and more importantly, is not very portable (I will have to create this symbolic link everywhere I run this code). So, I was wondering if anyone has any better solutions for my problem?
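
    A sketch of the path setup that usually makes this work without the symlink: sys.path needs the directory that contains the django_project package (web), not the package directory itself, and the package needs an __init__.py so it can be imported.

      import os
      import sys

      # Append project/web, the parent of the django_project package, so that
      # "import django_project.myapp.models" can be resolved.
      sys.path.append(os.path.join(os.path.abspath('..'), 'web'))
      os.environ['DJANGO_SETTINGS_MODULE'] = 'django_project.settings'
      # (On Django 1.7+ you would also call django.setup() at this point.)

      from django_project.myapp.models import MyModel
      print(MyModel.objects.count())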

    Read the article

  • javascript using 'this' in global object

    - by Marco Demaio
    What does the 'this' keyword refer to when used in a global object? Let's say for instance we have: var SomeGlobalObject = { rendered: true, show: function() { /* I should use 'SomeGlobalObject.rendered' below, otherwise it won't work when called from event scope. But it works when called from timer scope!! How can this be? */ if(this.rendered) alert("hello"); } } Now if we call in an inline script in the HTML page: SomeGlobalObject.show(); window.setTimeout("Msg.show()", 1000); everything works OK. But if we do something like AppendEvent(window, 'load', Msg.show); we get an error because this.rendered is undefined when called from the event scope. Do you know why this happens? Could you also explain whether there is a smarter way to do this without having to write "SomeGlobalObject.someProperty" every time inside the SomeGlobalObject code? Thanks! AppendEvent is just a simple cross-browser function to append an event, code below, but it does not matter in order to answer the above questions. function AppendEvent(html_element, event_name, event_function) { if(html_element.attachEvent) //IE return html_element.attachEvent("on" + event_name, event_function); else if(html_element.addEventListener) //FF html_element.addEventListener(event_name, event_function, false); }

    Read the article

  • PackageMaker install script for rxtx

    - by vinzenzweber
    I am using PackageMaker to create an installer for my application. During installation I need to run a bash script to properly install rxtx, a JNI library for serial port communication. This library needs to have the directory /var/lock in place with user "root" and group "uucp". The installation script also needs to add the current user to the group "uucp" for the lib to be able to write to /var/lock. Now when I run my application installer the preinstall script is run as root. Therefore "whoami" returns root instead of the user who is actually running the installer. The result is that rxtx is not able to create lock files in /var/lock because the actual user was not added as a member to "uucp". How can I get the user while my script is run by the installer. Or is it better to set the permissions for /var/lock to a different group maybe? Any suggestions are welcome! !/bin/sh curruser=whoami logger "Setting permissions for /var/lock for user $curruser!" if [ ! -d /var/lock ] then logger "Creating /var/lock!" sudo mkdir /var/lock fi sudo chgrp uucp /var/lock sudo chmod 775 /var/lock MacOSX 10.5 and later use dscl if [ sudo dscl . -read /Groups/uucp GroupMembership | grep $curruser | wc -l = "0" ] then logger "Add user $curruser to /Groups/uucp!" sudo dscl . -append /Groups/uucp GroupMembership $curruser # to revert use: # sudo dscl . -delete /Groups/uucp GroupMembership $curruser else logger "GroupMembership of /var/lock not changed!" fi

    Read the article

  • Parsing srt subtitles

    - by Vojtech R.
    Hi, I want to parse srt subtitles: 1 00:00:12,815 --> 00:00:14,509 Chlapi, jak to jde s tema pracovníma svetlama?. 2 00:00:14,815 --> 00:00:16,498 Trochu je zesilujeme. 3 00:00:16,934 --> 00:00:17,814 Jo, sleduj. I want each item parsed into a structure. With these regexes: A: RE_ITEM = re.compile(r'''(?P<index>\d+).(?P<start>\d{2}:\d{2}:\d{2},\d{3}) --> (?P<end>\d{2}:\d{2}:\d{2},\d{3}).(?P<text>.*?)''', re.DOTALL) B: RE_ITEM = re.compile(r'''(?P<index>\d+).(?P<start>\d{2}:\d{2}:\d{2},\d{3}) --> (?P<end>\d{2}:\d{2}:\d{2},\d{3}).(?P<text>.*)''', re.DOTALL) And this code: for i in Subtitles.RE_ITEM.finditer(text): result.append((i.group('index'), i.group('start'), i.group('end'), i.group('text'))) With code B I have only one item in the array (because of the greedy .*) and with code A I have an empty 'text' (because of the non-greedy .*?). How to cure this? Thanks
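
    A sketch of a middle ground between the two patterns: keep the lazy .*? for the text, but terminate it with a lookahead for the blank line that separates subtitle blocks (or the end of the input), so it is neither empty nor greedy. The srt_text variable below stands for the raw file contents.

      import re

      RE_ITEM = re.compile(
          r'(?P<index>\d+)\s+'
          r'(?P<start>\d{2}:\d{2}:\d{2},\d{3}) --> (?P<end>\d{2}:\d{2}:\d{2},\d{3})\s+'
          r'(?P<text>.*?)(?=\n\s*\n|\Z)',   # text runs up to the next blank line or end of input
          re.DOTALL)

      result = [(m.group('index'), m.group('start'), m.group('end'), m.group('text').strip())
                for m in RE_ITEM.finditer(srt_text)]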

    Read the article

  • jquery use return data from 2 functions in another function ---always get undefined. why ??

    - by user253530
    Function socialbookmarksTableData(data) is called by another function to generate the content of a table -- data is a JSON object. Inside the function i call 2 other functions that use getJSON and POST (with json as a return object) to get some data. The problem is: though the functions execute correctly i get undefined value for the 2 variables (bookmarkingSites, bookmarkCategories). Help with a solution please. function socialbookmarksGetBookmarkCategories(bookmarkID) { var toReturn = ''; $.post("php/socialbookmark-get-bookmark-categories.php",{ bookmarkID: bookmarkID },function(data){ $.each(data,function(i,categID){ toReturn += '<option value ="' + data[i].categID + '">' + data[i].categName + '</option>'; }) return toReturn; },"JSON"); } function socialbookmarksGetBookmarkSites() { var bookmarkingSites = ''; $.getJSON("php/socialbookmark-get-bookmarking-sites.php",function(bookmarks){ for(var i = 0; i < bookmarks.length; i++){ //alert( bookmarks[i].id); bookmarkingSites += '<option value = "' + bookmarks[i].id + '">' + bookmarks[i].title + '</option>'; } return bookmarkingSites; }); } function socialbookmarksTableData(data) { var toAppend = ''; var bookmarkingSites = socialbookmarksGetBookmarkSites(); $.each(data.results, function(i, id){ var bookmarkCategories = socialbookmarksGetBookmarkCategories(data.results[i].bookmarkID); //rest of the code is not important }); $("#searchTable tbody").append(toAppend); }

    Read the article

  • Problems reading RSS feed with jQuery.get()

    - by bbeckford
    Hi there, I've been pulling my hair out trying to use jQuery.get() to pull in my dynamically generated RSS feed and I'm having nothing but issues, is my RSS feed the wrong format? If so can I convert it to the correct format using javascript? Here's my feed: http://dev.chriscurddesign.co.uk/burns/p/rc_rss.php?rcf_id=0 Here's my code: function get_rss_feed() { $(".content").empty(); $.get("http://dev.chriscurddesign.co.uk/burns/p/rc_rss.php?rcf_id=0", function(d) { var i = 0; $(d).find('item').each(function() { var $item = $(this); var title = $item.find('title').text(); var link = $item.find('link').text(); var location = $item.find('location').text(); var pubDate = $item.find('pubDate').text(); var html = '<div class="entry"><a href="' + link + '" target="_blank">' + title + '</a></div>'; $('.content').append(html); i++; }); }); }; Any input would be greatly appreciated!! Thanks

    Read the article

  • Code for decoding/encoding a modified base64 URL

    - by Kirk Liemohn
    I want to base64 encode data to put it in a URL and then decode it within my HttpHandler. I have found that Base64 Encoding allows for a '/' character which will mess up my UriTemplate matching. Then I found that there is a concept of a "modified Base64 for URL" from wikipedia: A modified Base64 for URL variant exists, where no padding '=' will be used, and the '+' and '/' characters of standard Base64 are respectively replaced by '-' and '_', so that using URL encoders/decoders is no longer necessary and has no impact on the length of the encoded value, leaving the same encoded form intact for use in relational databases, web forms, and object identifiers in general. Using .NET I want to modify my current code from doing basic base64 encoding and decoding to using the "modified base64 for URL" method. Has anyone done this? To decode, I know it starts out with something like: string base64EncodedText = base64UrlEncodedText.Replace('-', '+').Replace('_', '/'); // Append '=' char(s) if necessary - how best to do this? // My normal base64 decoding now uses encodedText But, I need to potentially add one or two '=' chars to the end which looks a little more complex. My encoding logic should be a little simpler: // Perform normal base64 encoding byte[] encodedBytes = Encoding.UTF8.GetBytes(unencodedText); string base64EncodedText = Convert.ToBase64String(encodedBytes); // Apply URL variant string base64UrlEncodedText = base64EncodedText.Replace("=", String.Empty).Replace('+', '-').Replace('/', '_'); I have seen the Guid to Base64 for URL StackOverflow entry, but that has a known length and therefore they can hardcode the number of equal signs needed at the end.
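
    For what it's worth, the padding arithmetic is the only fiddly part, and it is the same in any language: padded Base64 is always a multiple of 4 characters, so (4 - length % 4) % 4 '=' characters have to be appended before decoding. A sketch of the round trip in Python for illustration (the C# translation is mechanical):

      import base64

      def encode_url_safe(raw_bytes):
          # Standard Base64, then strip the padding and swap '+' -> '-', '/' -> '_'.
          encoded = base64.b64encode(raw_bytes)
          return encoded.rstrip(b'=').replace(b'+', b'-').replace(b'/', b'_')

      def decode_url_safe(token):
          # Undo the character swap, then restore '=' until the length is a multiple of 4.
          padded = token.replace(b'-', b'+').replace(b'_', b'/') + b'=' * (-len(token) % 4)
          return base64.b64decode(padded)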

    Read the article

  • Does jQuery strip some html elements from a string when using .html()?

    - by Nic Hubbard
    I have a var that contains a full html page, including the head, html, body, etc. When I pass that string into the .html() function, jQuery strips out all those elements, such as body, html, head, etc, which I don't want. My data var contains: <html> <head> <title>Untitled Document</title> </head> <body> </body> </html> // data is a full html document string data = $('<div/>').html(data); // jQuery stips my document string! alert(data.find('head').html()); I am needing to manipulate a full html page string, so that I can return what is in the element. I would like to do this with jQuery, but it seems all of the methods, append(), prepend() and html() all try to convert the string to dom elements, which remove all the other parts of a full html page. Is there another way that I could do this? I would be fine using another method. My final goal is to find certain elements inside my string, so I figured jQuery would be best, since I am so used to it. But, if it is going to trim and remove parts of my string, I am going to have to look for another method. Ideas?

    Read the article

  • Increase Query Speed in PostgreSQL

    - by Anthoni Gardner
    Hello, First time posting here, but an avid reader. I am experiencing slow query times on my database (all tested locally thus far) and am not sure how to go about fixing it. The database itself has 44 tables and some of those tables have over 1 million records (mainly the movies, actresses and actors tables). The database is built via JMDB using the flat files from IMDB. The SQL query that I am about to show comes from that same program (which also experiences very slow search times). I have tried to include as much information as I can, such as the explain plan etc. "QUERY PLAN" "HashAggregate (cost=46492.52..46493.50 rows=98 width=46)" " Output: public.movies.title, public.movies.movieid, public.movies.year" " - Append (cost=39094.17..46491.79 rows=98 width=46)" " - HashAggregate (cost=39094.17..39094.87 rows=70 width=46)" " Output: public.movies.title, public.movies.movieid, public.movies.year" " - Seq Scan on movies (cost=0.00..39093.65 rows=70 width=46)" " Output: public.movies.title, public.movies.movieid, public.movies.year" " Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '""%}'::text))" " - Nested Loop (cost=0.00..7395.94 rows=28 width=46)" " Output: public.movies.title, public.movies.movieid, public.movies.year" " - Seq Scan on akatitles (cost=0.00..7159.24 rows=28 width=4)" " Output: akatitles.movieid, akatitles.language, akatitles.title, " Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '""%}'::text))" " - Index Scan using movies_pkey on movies (cost=0.00..8.44 rows=1 width=46)" " Output: public.movies.movieid, public.movies.title, public.movies.year, public.movies.imdbid" " Index Cond: (public.movies.movieid = akatitles.movieid)" SELECT * FROM ((SELECT DISTINCT title, movieid, year FROM movies WHERE title ILIKE '%Babe%' AND NOT (title ILIKE '"%}')) UNION (SELECT movies.title, movies.movieid, movies.year FROM movies INNER JOIN akatitles ON movies.movieid=akatitles.movieid WHERE akatitles.title ILIKE '%Babe%' AND NOT (akatitles.title ILIKE '"%}'))) AS union_tmp2; Returns 612 rows in 9078ms. Database backup (plain text) is 1.61GB. It's a really complex query and I am not fully cognizant of it; like I said, it was spat out by JMDB. Do you have any suggestions on how I can increase the speed? Regards Anthoni

    Read the article

  • onMouseOver not working fine with hover

    - by Lina
    Hi, i'm trying to get a popup window when hovering a div by calling the following function onMouseOver function PopUp(h) { $('#task_' + h).hover(function (evt) { var html1 = '<div id="box">'; html1 += '<h4>Taskbar ' + h + ' ännu en test - fredagstest </h4>'; //html += '<img src="Pictures/DesertMini.jpg" alt="image"/>'; html1 += '<p>"Test för task nr: ' + h + ' <br/>At vero eos et accusamus et iusto odio dignissimos ducimus qui blanditiis praesentium asperiores repellat."</p>'; html1 += '</div>'; $('#test').append(html1).children('#box').hide().fadeIn(100) $('#box') .css('top', evt.pageY) .css('left', evt.pageX + 20) .css('z-index', 1000000); }, function () { // mouse out $('#box').remove(); }); $('#task_' + h).mousemove(function (evt) { $('#box') .css('top', evt.pageY) .css('left', evt.pageX + 20) .css('z-index', 1000000); }); } } h is some number I'm sending to the function <div (some attributes) onMouseOver="PopUp('+someNumber+');"> but the onMouseOver is not working fine with the hover what do i do? thanks a trillion in advance... Lina

    Read the article

  • Understanding Javascript's difference between calling a function, and returning the function but executing it later.

    - by Squeegy
    I'm trying to understand the difference between foo.bar() and var fn = foo.bar; fn(); I've put together a little example, but I dont totally understand why the failing ones actually fail. var Dog = function() { this.bark = "Arf"; }; Dog.prototype.woof = function() { $('ul').append('<li>'+ this.bark +'</li>'); }; var dog = new Dog(); // works, obviously dog.woof(); // works (dog.woof)(); // FAILS var fnWoof = dog.woof; fnWoof(); // works setTimeout(function() { dog.woof(); }, 0); // FAILS setTimeout(dog.woof, 0); Which produces: Arf Arf undefined Arf undefined On JSFiddle: http://jsfiddle.net/D6Vdg/1/ So it appears that snapping off a function causes it to remove it's context. Ok. But why then does (dog.woof)(); work? It's all just a bit confusing figuring out whats going on here. There are obviously some core semantics I'm just not getting.

    Read the article

  • Different users get the same value in .ASPXANONYMOUS

    - by Malcolm Frexner
    My site allows anonymous users. I saw that under heavy load users sometimes get profile values from other users. This happens for anonymous users. I logged the access to profile data: /// <summary> /// /// </summary> /// <param name="controller"></param> /// <returns></returns> public static string ProfileID(this Controller controller ) { if (ApplicationConfiguration.LogProfileAccess) { StringBuilder sb = new StringBuilder(); (from header in controller.Request.Headers.ToPairs() select string.Concat(header.Key, ":", header.Value, ";")).ToList().ForEach(x => sb.Append(x)); string log = string.Format("ip:{0} url:{1} IsAuthenticated:{2} Name:{3} AnonId:{4} header:{5}", controller.Request.UserHostAddress, controller.Request.Url.ToString(), controller.Request.IsAuthenticated, controller.User.Identity.Name, controller.Request.AnonymousID, sb); _log.Debug(log); } return controller.Request.IsAuthenticated ? controller.User.Identity.Name : controller.Request.AnonymousID; } I can see in the log that users really do get the same cookie value for .ASPXANONYMOUS even if they have different IPs. Just to be safe I removed dependency injection for the FormsAuthentication. I don't use OutputCaching. My web.config has this setting for authentication: <anonymousIdentification enabled="true" cookieless="UseCookies" cookieName=".ASPXANONYMOUS" cookieTimeout="30" cookiePath="/" cookieRequireSSL="false" cookieSlidingExpiration="true" /> <authentication mode="Forms"> <forms loginUrl="~/de/Account/Login" /> </authentication> Does anybody have an idea what else I could log or what I should have a look at?

    Read the article

  • How to avoid saving a blank model which attributes can be blank

    - by auralbee
    Hello people, I have two models with a HABTM association, let's say book and author. class Book has_and_belongs_to_many :authors end class Author has_and_belongs_to_many :books end The author has a set of attributes (e.g. first name, last name, age) that can all be blank (see validation). validates_length_of :first_name, :maximum => 255, :allow_blank => true, :allow_nil => false In the books_controller, I do the following to append all authors to a book in one step: @book = Book.new(params[:book]) @book.authors.build(params[:book][:authors].values) My question: What would be the easiest way to avoid saving authors whose fields are all blank, to prevent too much "noise" in the database? At the moment, I do the following: validate :must_have_some_data def must_have_some_data empty = true hash = self.attributes hash.delete("created_at") hash.delete("updated_at") hash.each_value do |value| empty = false if value.present? end if (empty) errors.add_to_base("Fields do not contain any data.") end end Maybe there is a more elegant, Rails-like way to do that. Thanks.

    Read the article

  • Use of SAX parser in Android - OutOfMemory Issue

    - by Sephy
    Hi everybody, I have been using a SAX parser for a while now to get data from various XML files, but today I'm banging my head on a new problem with a huge XML file (compared to the previous ones; here, around 12k lines) with a lot of repetitive items in it. Most of the time, the items are part of a block: <content> <item lbl="blabla"> <item lbl="blabla"/> <item lbl="blabla"/> </item> <item lbl="blabla"> <item lbl="blabla"/> <item lbl="blabla"/> <item lbl="blabla"/> <item lbl="blabla"/> <item lbl="blabla"/> <item lbl="blabla"/> </item> </content> The blabla part is of course changing... But, I would like to keep the structure of items (they are titles and subtitles). And for that, I wrap each blabla in a starting and ending tag whose name encodes x, the position in the tree of items (1, 2, 3 or 4). The slightly problematic part is that with that, I'm creating thousands of useless objects and the garbage collector doesn't have time to clean up after the parser, and the inevitable OutOfMemory comes in my face... I have no idea how to deal with it. The best technique would be if I could take the whole content of , but I'm not sure that this is possible with a SAX parser. Any help is welcome and any solution deeply thanked...

    Read the article

  • What purpose does “using” serve when used the following way

    - by user287745
    What purpose does “using” serve when used in the following way? One example is this (an answerer, @richj, used this code to solve a problem - thanks): private Method(SqlConnection connection) { using (SqlTransaction transaction = connection.BeginTransaction()) { try { // Use the connection here .... transaction.Commit(); } catch { transaction.Rollback(); throw; } } } Another example I found while reading on the Microsoft support site: public static void ShowSqlException(string connectionString) { string queryString = "EXECUTE NonExistantStoredProcedure"; StringBuilder errorMessages = new StringBuilder(); using (SqlConnection connection = new SqlConnection(connectionString)) { SqlCommand command = new SqlCommand(queryString, connection); try { command.Connection.Open(); command.ExecuteNonQuery(); } catch (SqlException ex) { for (int i = 0; i < ex.Errors.Count; i++) { errorMessages.Append("Index #" + i + "\n" + "Message: " + ex.Errors[i].Message + "\n" + "LineNumber: " + ex.Errors[i].LineNumber + "\n" + "Source: " + ex.Errors[i].Source + "\n" + "Procedure: " + ex.Errors[i].Procedure + "\n"); } Console.WriteLine(errorMessages.ToString()); } } } I am already writing using System.Data.SqlClient etc. at the top of the page, so why this using statement in the middle of the code? What if I omit it (I know the code will still work) - what functionality will I be losing?

    Read the article
