Search Results

Search found 7240 results on 290 pages for 'natural join'.

Page 199/290 | < Previous Page | 195 196 197 198 199 200 201 202 203 204 205 206  | Next Page >

  • haskell: a data structure for storing ascending integers with a very fast lookup

    - by valya
    Hello! (This question is related to my previous question, or rather to my answer to it.) I want to store all cubes of natural numbers in a structure and look up specific integers to see if they are perfect cubes. For example, cubes = map (\x -> x*x*x) [1..] is_cube n = n == (head $ dropWhile (<n) cubes) It is much faster than calculating the cube root, but it has complexity of O(n^(1/3)) (am I right?). I think using a more complex data structure would be better. For example, in C I could store the length of an already generated array (not a list - for faster indexing) and do a binary search. It would be O(log n) with a lower coefficient than in another answer to that question. The problem is, I can't express it in Haskell (and I don't think I should). Or I can use a hash function (like mod). But I think it would be much more memory consuming to have several lists (or a list of lists), and it won't lower the complexity of lookup (still O(n^(1/3))), only the coefficient. I thought about some kind of tree, but I have no clever ideas (sadly I've never studied CS). I think the fact that all integers are ascending will make my tree ill-balanced for lookups. And I'm pretty sure this fact about ascending integers can be a great advantage for lookups, but I don't know how to use it properly (see my first solution, which I can't express in Haskell).
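
    A minimal sketch (in Python, not Haskell) of the "sorted array plus binary search" idea described above; the LIMIT bound and names are illustrative assumptions, and a real solution would grow the table on demand instead of fixing the bound up front.

        # Precompute cubes once, then answer membership queries with a binary search.
        import bisect

        LIMIT = 10**6                                    # assumed upper bound on the base
        cubes = [x * x * x for x in range(1, LIMIT + 1)]

        def is_cube(n):
            """Return True if n is a perfect cube of some x in 1..LIMIT."""
            i = bisect.bisect_left(cubes, n)             # O(log LIMIT) lookup
            return i < len(cubes) and cubes[i] == n

        print(is_cube(27), is_cube(28))                  # True False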

    Read the article

  • Error when pushing to Heroku - ...appear in group - Ruby on Rails

    - by bgadoci
    I am trying to deploy my first rails app to Heroku and seem to be having a problem. After git push heroku master, and heroku rake db:migrate I get an error saying: SELECT posts.*, count(*) as vote_total FROM "posts" INNER JOIN "votes" ON votes.post_id = posts.id GROUP BY votes.post_id ORDER BY created_at DESC LIMIT 5 OFFSET 0): I have included the full error below and also included the PostControll#index as it seems that is where I am doing the grouping. Lastly I included my routes.rb file. I am new to ruby, rails, and heroku so sorry for simple/obvious questions. Processing PostsController#index (for 99.7.50.140 at 2010-04-21 12:50:47) [GET] ActiveRecord::StatementInvalid (PGError: ERROR: column "posts.id" must appear in the GROUP BY clause or be used in an aggregate function : SELECT posts.*, count(*) as vote_total FROM "posts" INNER JOIN "votes" ON votes.post_id = posts.id GROUP BY votes.post_id ORDER BY created_at DESC LIMIT 5 OFFSET 0): vendor/gems/will_paginate-2.3.12/lib/will_paginate/finder.rb:82:in `send' vendor/gems/will_paginate-2.3.12/lib/will_paginate/finder.rb:82:in `paginate' vendor/gems/will_paginate-2.3.12/lib/will_paginate/collection.rb:87:in `create' vendor/gems/will_paginate-2.3.12/lib/will_paginate/finder.rb:76:in `paginate' app/controllers/posts_controller.rb:28:in `index' /home/heroku_rack/lib/static_assets.rb:9:in `call' /home/heroku_rack/lib/last_access.rb:25:in `call' /home/heroku_rack/lib/date_header.rb:14:in `call' thin (1.0.1) lib/thin/connection.rb:80:in `pre_process' thin (1.0.1) lib/thin/connection.rb:78:in `catch' thin (1.0.1) lib/thin/connection.rb:78:in `pre_process' thin (1.0.1) lib/thin/connection.rb:57:in `process' thin (1.0.1) lib/thin/connection.rb:42:in `receive_data' eventmachine (0.12.6) lib/eventmachine.rb:240:in `run_machine' eventmachine (0.12.6) lib/eventmachine.rb:240:in `run' thin (1.0.1) lib/thin/backends/base.rb:57:in `start' thin (1.0.1) lib/thin/server.rb:150:in `start' thin (1.0.1) lib/thin/controllers/controller.rb:80:in `start' thin (1.0.1) lib/thin/runner.rb:173:in `send' thin (1.0.1) lib/thin/runner.rb:173:in `run_command' thin (1.0.1) lib/thin/runner.rb:139:in `run!' thin (1.0.1) bin/thin:6 /usr/local/bin/thin:20:in `load' /usr/local/bin/thin:20 PostsController def index @tag_counts = Tag.count(:group => :tag_name, :order => 'count_all DESC', :limit => 20) conditions, joins = {}, :votes @ugtag_counts = Ugtag.count(:group => :ugctag_name, :order => 'count_all DESC', :limit => 20) conditions, joins = {}, :votes @vote_counts = Vote.count(:group => :post_title, :order => 'count_all DESC', :limit => 20) conditions, joins = {}, :votes unless(params[:tag_name] || "").empty? conditions = ["tags.tag_name = ? 
", params[:tag_name]] joins = [:tags, :votes] end @posts=Post.paginate( :select => "posts.*, count(*) as vote_total", :joins => joins, :conditions=> conditions, :group => "votes.post_id", :order => "created_at DESC", :page => params[:page], :per_page => 5) @popular_posts=Post.paginate( :select => "posts.*, count(*) as vote_total", :joins => joins, :conditions=> conditions, :group => "votes.post_id", :order => "vote_total DESC", :page => params[:page], :per_page => 3) respond_to do |format| format.html # index.html.erb format.xml { render :xml => @posts } format.json { render :json => @posts } format.atom end end routes.rb ActionController::Routing::Routes.draw do |map| map.resources :ugtags map.resources :wysihat_files map.resources :users map.resources :votes map.resources :votes, :belongs_to => :user map.resources :tags, :belongs_to => :user map.resources :ugtags, :belongs_to => :user map.resources :posts, :collection => {:auto_complete_for_tag_tag_name => :get } map.resources :posts, :sessions map.resources :posts, :has_many => :comments map.resources :posts, :has_many => :tags map.resources :posts, :has_many => :ugtags map.resources :posts, :has_many => :votes map.resources :posts, :belongs_to => :user map.resources :tags, :collection => {:auto_complete_for_tag_tag_name => :get } map.resources :ugtags, :collection => {:auto_complete_for_ugtag_ugctag_name => :get } map.login 'login', :controller => 'sessions', :action => 'new' map.logout 'logout', :controller => 'sessions', :action => 'destroy' map.root :controller => "posts" map.connect ':controller/:action/:id' map.connect ':controller/:action/:id.:format' end UPDATE TO SHOW MODEL AND MIGRATION FOR POST class Post < ActiveRecord::Base has_attached_file :photo validates_presence_of :body, :title has_many :comments, :dependent => :destroy has_many :tags, :dependent => :destroy has_many :ugtags, :dependent => :destroy has_many :votes, :dependent => :destroy belongs_to :user after_create :self_vote def self_vote # I am assuming you have a user_id field in `posts` and `votes` table. self.votes.create(:user => self.user) end cattr_reader :per_page @@per_page = 10 end migrations for post class CreatePosts < ActiveRecord::Migration def self.up create_table :posts do |t| t.string :title t.text :body t.timestamps end end def self.down drop_table :posts end end _ class AddUserIdToPost < ActiveRecord::Migration def self.up add_column :posts, :user_id, :string end def self.down remove_column :posts, :user_id end end

    Read the article

  • Academic question: typename

    - by Arman
    Hi, recently I encountered a "simple problem" when porting code from VC++ to gcc/intel. The code compiles without errors on VC++: #include <vector> using std::vector; template <class T> void test_vec( std::vector<T> &vec) { typedef std::vector<T> M; /*==> add here typename*/ M::iterator ib=vec.begin(),ie=vec.end(); }; int main() { vector<double> x(100, 10); test_vec<double>(x); return 0; } then with g++ we get some unclear errors: g++ t.cpp t.cpp: In function 'void test_vec(std::vector<T, std::allocator<_CharT> >&)': t.cpp:13: error: expected `;' before 'ie' t.cpp: In function 'void test_vec(std::vector<T, std::allocator<_CharT> >&) [with T = double]': t.cpp:18: instantiated from here t.cpp:12: error: dependent-name 'std::M::iterator' is parsed as a non-type, but instantiation yields a type t.cpp:12: note: say 'typename std::M::iterator' if a type is meant If we add typename before the iterator, the code compiles without problems. If it is possible to make a compiler that can understand the code written in this more "natural" way, then it is unclear to me why we should add typename. Which rules of the C++ standard (if there are such rules) would be broken if compilers were allowed to accept the code without "typename"? kind regards Arman.

    Read the article

  • Isn't INT more efficient than UNIQUEIDENTIFIER?

    - by ck
    I have a parent table and a child table where the columns that join them together are of the UNIQUEIDENTIFIER type. The child table has a clustered index on the column that joins it to the parent table (its PK, which is also clustered). I have created a copy of both of these tables but changed the relationship columns to be INTs instead, and have rebuilt the indexes so that they are essentially the same structure and can be queried in the same way. When I query for 20 known records from the parent table, pulling in all the related records from the child tables, I get identical query costs across both, i.e. a 50/50 cost split between the two batches. If this is true, then my giant project to change all of the tables like this appears to be pointless, other than speeding up inserts. Can anyone shed any light on the situation?

    Read the article

  • LINQ To Entities - Items, ItemCategories & Tags

    - by Simon
    Hi There, I have an Entity Model that has Items, which can belong to one or more categories, and which can have one or more tags. I need to write a query that finds all tags for a given ItemCategory, preferably with a single call to the database, as this is going to get called fairly often. I currently have: Dim q = (From ic In mContext.ItemCategory _ Where ic.CategoryID = forCategoryID _ Select ic).SelectMany(Function(cat) cat.Items).SelectMany(Function(i) i.Tags) _ .OrderByDescending(Function(t) t.Items.Count).ToList This is nearly there, except that it doesn't contain the items for each tag, so I'd have to iterate through, loading the item reference to find out how many items each tag is related to (for font sizing). Ideally, I want to return a List(Of TagCount), which is just a structure containing Tag as a string and Count as an integer. I've looked at Group and Join, but I'm not getting anywhere, so any help would be greatly appreciated!

    Read the article

  • Filter a LINQ query

    - by Jack Marchetti
    Here's my query. var query = from g in dc.Group join gm in dc.GroupMembers on g.ID equals gm.GroupID where gm.UserID == UserID select new { id = g.ID, name = g.Name, pools = (from pool in g.Pool // more stuff to populate pools So I have to perform some filtering, but when I attempt to filter var filter = query.Where(f => f.pools.[no access to list of columns] I can't access any of the items within "pools". Does anyone know how I'm able to access that? What I'd like to do is this: var filterbyGame = query.Where(f => f.pools.GameName == "TestGame"); Let me know if that's even possible with the way I have this setup. Thanks guys.

    Read the article

  • How to force c# binary int division to return a double?

    - by Wayne
    How to force double x = 3 / 2; to return 1.5 in x without the D suffix or casting? Is there any kind of operator overload that can be done? Or some compiler option? Amazingly, it's not so simple to add the casting or suffix for the following reason: Business users need to write and debug their own formulas. Presently C# is getting used like a DSL (domain specific language) in that these users aren't computer science engineers. So all they know is how to edit and create a few types of classes to hold their "business rules" which are generally just math formulas. But they always assume that double x = 3 / 2; will return x = 1.5 however in C# that returns 1. A. they always forget this, waste time debugging, call me for support and we fix it. B. they think it's very ugly and hurts the readability of their business rules. As you know, DSL's need to be more like natural language. Yes. We are planning to move to Boo and build a DSL based on it but that's down the road. Is there a simple solution to make double x = 3 / 2; return 1.5 by something external to the class so it's invisible to the users? Thanks! Wayne

    Read the article

  • Bash Templating: How to build configuration files from templates with Bash?

    - by FractalizeR
    Hello. I'm writing a script to automate creating configuration files for Apache and PHP for my own webserver. I don't want to use any GUIs like CPanel or ISPConfig. I have some templates of Apache and PHP configuration files. The Bash script needs to read the templates, perform variable substitution and output the parsed templates into some folder. What is the best way to do that? I can think of several ways. Which one is the best, or maybe there are better ways to do it? I want to do it in pure Bash (it's easy in PHP, for example). 1)http://stackoverflow.com/questions/415677/how-to-repace-variables-in-a-nix-text-file template.txt: the number is ${i} the word is ${word} script.sh: #!/bin/sh #set variables i=1 word="dog" #read in template one line at the time, and replace variables #(more natural (and efficient) way, thanks to Jonathan Leffler) while read line do eval echo "$line" done < "./template.txt" BTW, how do I redirect output to an external file here? Do I need to escape something if variables contain, say, quotes? 2) Using cat & sed for replacing each variable with its value: Given template.txt: The number is ${i} The word is ${word} Command: cat template.txt | sed -e "s/\${i}/1/" | sed -e "s/\${word}/dog/" Seems bad to me because of the need to escape many different symbols, and with many variables the line will be too long. Can you think of some other elegant and safe solution?

    Read the article

  • How do I find the most “Naturally" direct route using A-star (A*)

    - by Greg B
    I have implemented the A* algorithm in AS3 and it works great except for one thing. Often the resulting path does not take the most “natural” or smooth route to the target. In my environment the object can move diagonally as inexpensively as it can move horizontally or vertically. Here is a very simple example; the start point is marked by the S, and the end (or finish) point by the F. | | | | | | | | | | |S| | | | | | | | | x| | | | | | | | | | x| | | | | | | | | | x| | | | | | | | | | x| | | | | | | | | | x| | | | | | | | | | |F| | | | | | | | | | | | | | | | | | | | | | | | | | | | | As you can see, during the 1st round of finding, nodes [0,2], [1,2], [2,2] will all be added to the list of possible node as they all have a score of N. The issue I’m having comes at the next point when I’m trying to decide which node to proceed with. In the example above I am using possibleNodes[0] to choose the next node. If I change this to possibleNodes[possibleNodes.length-1] I get the following path. | | | | | | | | | | |S| | | | | | | | | | |x| | | | | | | | | | |x| | | | | | | | | | |x| | | | | | | | |x| | | | | | | | |x| | | | | | | | |F| | | | | | | | | | | | | | | | | | | | | | | | | | | | | And then with possibleNextNodes[Math.round(possibleNextNodes.length / 2)-1] | | | | | | | | | | |S| | | | | | | | | |x| | | | | | | | | x| | | | | | | | | | x| | | | | | | | | | x| | | | | | | | | | x| | | | | | | | | | |F| | | | | | | | | | | | | | | | | | | | | | | | | | | | | All these paths have the same cost as they all contain the same number of steps but, in this situation, the most sensible path would be as follows... | | | | | | | | | | |S| | | | | | | | | |x| | | | | | | | | |x| | | | | | | | | |x| | | | | | | | | |x| | | | | | | | | |x| | | | | | | | | |F| | | | | | | | | | | | | | | | | | | | | | | | | | | | | Is there a formally accepted method of making the path appear more sensible rather than just mathematically correct?
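
    A hedged illustration (in Python, with made-up coordinates) of one common remedy: break ties inside the heuristic itself, so that among equal-cost paths the nodes lying closest to the straight start-to-finish line are expanded first.

        def heuristic(node, start, goal):
            dx1, dy1 = node[0] - goal[0], node[1] - goal[1]
            dx2, dy2 = start[0] - goal[0], start[1] - goal[1]
            # Chebyshev distance suits a grid where diagonal moves cost the same
            # as horizontal or vertical ones.
            h = max(abs(dx1), abs(dy1))
            # The cross product is zero on the start-goal line and grows with the
            # perpendicular drift; the tiny factor only influences tie-breaking.
            cross = abs(dx1 * dy2 - dx2 * dy1)
            return h + cross * 0.001

        # Example usage with made-up grid coordinates.
        print(heuristic((3, 1), (1, 1), (7, 7)))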

    Read the article

  • Rails eager loading

    - by Dimitar Vouldjeff
    Hi, I have a Test model, which has_many questions, and Question, which has_many answers... When I make a query for a Test with :include => [:questions, {:questions => :answers}] ActiveRecord makes two more queries to fetch the questions and then to fetch the answers - it doesn't join them!!! When I do the query with :joins ActiveRecord makes the query, but later when I need Test.questions or Test.questions.answers ActiveRecord again makes those 2 extra queries!!! And later, when I enumerate the questions or answers, in the log I see other queries for each object, but they have a Cache tag... Is this normal?

    Read the article

  • SQL Server: avoiding hard coding of database name in cross-database views

    - by codeulike
    So, let's say you have two SQL Server Databases on the same server that reference each other's tables in their Views, Functions and Stored Procedures. You know, things like this: use database_foo create view spaghetti as select f.col1, c.col2 from fusilli f inner join database_bar.dbo.conchigli c on f.id = c.id (I know that cross-database views are not very good practice, but let's just say you're stuck with it) Are there any good techniques to avoid 'hard coding' the database names? (So that should you need to occasionally re-point to a different database - for testing perhaps - you don't need to edit loads of views, fns, sps) I'm interested in SQL 2005 or SQL 2008 solutions. Cheers.

    Read the article

  • Are keys and values of %INC platform-dependent or not?

    - by codeholic
    I'd like to get the full filename of an included module. Consider this code: package MyTest; my $path = join '/', split /::/, __PACKAGE__; $path .= ".pm"; print "$INC{$path}\n"; 1; $ perl -Ipath/to/module -MMyTest -e0 path/to/module/MyTest.pm Will it work on all platforms? From perlvar: The hash %INC contains entries for each filename included via the do, require, or use operators. The key is the filename you specified (with module names converted to pathnames), and the value is the location of the file found. Are these keys platform-dependent or not? Should I use File::Spec or what? At least ActivePerl on win32 uses / instead of \. Update: What about %INC values? Are they platform-dependent?

    Read the article

  • Create signed urls for CloudFront with Ruby

    - by wiseleyb
    History: I created a key and pem file on Amazon. I created a private bucket I created a public distribution and used origin id to connect to the private bucket: works I created a private distribution and connected it the same as #3 - now I get access denied: expected I'm having a really hard time generating a url that will work. I've been trying to follow the directions described here: http://docs.amazonwebservices.com/AmazonCloudFront/latest/DeveloperGuide/index.html?PrivateContent.html This is what I've got so far... doesn't work though - still getting access denied: def url_safe(s) s.gsub('+','-').gsub('=','_').gsub('/','~').gsub(/\n/,'').gsub(' ','') end def policy_for_resource(resource, expires = Time.now + 1.hour) %({"Statement":[{"Resource":"#{resource}","Condition":{"DateLessThan":{"AWS:EpochTime":#{expires.to_i}}}}]}) end def signature_for_resource(resource, key_id, private_key_file_name, expires = Time.now + 1.hour) policy = url_safe(policy_for_resource(resource, expires)) key = OpenSSL::PKey::RSA.new(File.readlines(private_key_file_name).join("")) url_safe(Base64.encode64(key.sign(OpenSSL::Digest::SHA1.new, (policy)))) end def expiring_url_for_private_resource(resource, key_id, private_key_file_name, expires = Time.now + 1.hour) sig = signature_for_resource(resource, key_id, private_key_file_name, expires) "#{resource}?Expires=#{expires.to_i}&Signature=#{sig}&Key-Pair-Id=#{key_id}" end resource = "http://d27ss180g8tp83.cloudfront.net/iwantu.jpeg" key_id = "APKAIS6OBYQ253QOURZA" pk_file = "doc/pk-APKAIS6OBYQ253QOURZA.pem" puts expiring_url_for_private_resource(resource, key_id, pk_file) Can anyone tell me what I'm doing wrong here?

    Read the article

  • Sending and receiving async over multiprocessing.Pipe() in Python

    - by dcolish
    I'm having some issues getting Pipe.send to work in this code. What I would ultimately like to do is send and receive messages to and from the foreign process while it's running in a fork. This is eventually going to be integrated into a pexpect loop for talking to interpreter processes. from multiprocessing import Process, Pipe def f(conn): cmd = '' if conn.poll(): cmd = conn.recv() i = 1 i += 1 conn.send([42 + i, cmd, 'hello']) if __name__ == '__main__': parent_conn, child_conn = Pipe() p = Process(target=f, args=(child_conn,)) p.start() from pdb import set_trace; set_trace() while parent_conn.poll(): print parent_conn.recv() # prints "[42, None, 'hello']" parent_conn.send('OHHAI') p.join()
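
    A minimal working sketch of what seems to be the intent (an assumption, not the original code): let the child block on recv() instead of polling, and have the parent send its command before waiting for the reply.

        from multiprocessing import Process, Pipe

        def worker(conn):
            cmd = conn.recv()                 # block until the parent sends something
            conn.send([43, cmd, 'hello'])     # reply to the parent
            conn.close()

        if __name__ == '__main__':
            parent_conn, child_conn = Pipe()
            p = Process(target=worker, args=(child_conn,))
            p.start()
            parent_conn.send('OHHAI')         # send first, so the child has data to read
            print(parent_conn.recv())         # [43, 'OHHAI', 'hello']
            p.join()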

    Read the article

  • cakephp and SQL_CALC_FOUND_ROWS

    - by Lizard
    I am trying to add SQL_CALC_FOUND_ROWS to a query (please note this isn't for pagination). Note that I am trying to add this to a CakePHP query; the code I currently have is below: return $this->find('all', array( 'conditions' => $conditions, 'fields'=>array('SQL_CALC_FOUND_ROWS','Category.*','COUNT(`Entity`.`id`) as `entity_count`'), 'joins' => array('LEFT JOIN `entities` AS Entity ON `Entity`.`category_id` = `Category`.`id`'), 'group' => '`Category`.`id`', 'order' => $sort, 'limit'=>$params['limit'], 'offset'=>$params['start'], 'contain' => array('Domain' => array('fields' => array('title'))) )); Note the 'fields'=>array('SQL_CALC_FOUND_ROWS',' - this obviously doesn't work, as it tries to apply SQL_CALC_FOUND_ROWS to the table, e.g. SELECT `Category`.`SQL_CALC_FOUND_ROWS`, Is there any way of doing this? Any help would be greatly appreciated, thanks.

    Read the article

  • Cookies with urllib

    - by CMC
    This will probably seem like a really simple question, and I am quite confused as to why this is so difficult for me. I would like to write a function that takes three inputs: [url, data, cookies] that will use urllib (not urllib2) to get the contents of the requested url. I figured it'd be simple, so I wrote the following: def fetch(url, data = None, cookies = None): if isinstance(data, dict): data = urllib.urlencode(data) if isinstance(cookies, dict): # TODO: find a better way to do this cookies = "; ".join([str(key) + "=" + str(cookies[key]) for key in cookies]) opener = urllib.FancyURLopener() opener.addheader("Cookie", cookies) obj = opener.open(url, data) result = obj.read() obj.close() return result This doesn't work, as far as I can tell (can anyone confirm that?) and I'm stumped.
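
    A hedged Python 3 sketch of the same idea using urllib.request; the helper name and defaults are illustrative, not taken from the original post.

        import urllib.parse
        import urllib.request

        def fetch(url, data=None, cookies=None):
            if isinstance(data, dict):
                data = urllib.parse.urlencode(data).encode()   # POST body must be bytes
            req = urllib.request.Request(url, data=data)
            if isinstance(cookies, dict):
                cookie_str = "; ".join("%s=%s" % (k, v) for k, v in cookies.items())
                req.add_header("Cookie", cookie_str)
            with urllib.request.urlopen(req) as resp:
                return resp.read()

        html = fetch("http://example.com", cookies={"session": "abc123"})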

    Read the article

  • Serving files inside of directories with ruby and mongrel

    - by AdamB
    I made a simple web server in Ruby using the Mongrel gem. It works great for files in a single directory, but I run into some path issues when inside a directory. Let's say we have 2 files in a directory named "dir1", "index.html" and "style.css". If we visit http://host/dir1/index.html, the file is found, but style.css isn't, because it's trying to load "style.css" from http://host/style.css instead of from inside the directory. index.html is trying to load style.css as if it's in the same directory as itself. <link href="style.css" rel="stylesheet" type="text/css" /> I can get the file if I enter the full path of style.css: /dir1/style.css but it doesn't seem to remember it's already inside a directory and that a path-less filename should be served from the current directory. This is the Ruby code I'm using to serve out files. require 'rubygems' require 'mongrel' server = Mongrel::HttpServer.new("0.0.0.0", "3000") server.register("/", Mongrel::DirHandler.new(".")) server.run.join

    Read the article

  • Proper snowball analyzer configuration when using Grails Searchable Plugin

    - by Wirsbro
    To improve stemming we want to switch from the default analyzer to snowball, however, having a lot of difficulty with the proper settings and would appreciate any help. In Environment: - Sun's Java 1.6.16 - Grails 1.2.2 - Searchable Plug-In 0.5.5 Config.groovy: Have tried both settings: compassSettings = ['compass.engine.analyzer.stemmed.type': 'snowball', 'compass.engine.analyzer.stemmed.name': 'English'] compassSettings = ['compass.engine.analyzer.snowball.type': 'snowball', 'compass.engine.analyzer.snowball.name': 'English', 'compass.engine.analyzer.search.type': 'snowball', 'compass.engine.analyzer.search.name': 'English'] Search.groovy - The Invocation: def searchResult = searchableService.search(params.q, withHighlighter: { highlighter, index, sr if (!sr.highlights) { sr.highlights = [] } try { sr.highlights[index] = highlighter.fragments("content")[0..2].join(" ") } catch (IndexOutOfBoundsException ex) { sr.highlights[index] = highlighter.fragment("content") } }) def suggestion = searchableService.suggestQuery(params.q) if (suggestion != params.q) { searchResult.suggestedQuery = suggestion }

    Read the article

  • Twisted Python getPage

    - by David Dixon II
    I tried to get support on this but I am TOTALLY confused. Here's my code: from twisted.internet import reactor from twisted.web.client import getPage from twisted.web.error import Error from twisted.internet.defer import DeferredList from sys import argv class GrabPage: def __init__(self, page): self.page = page def start(self, *args): if args == (): # We apparently don't need authentication for this d1 = getPage(self.page) else: if len(args) == 2: # We have our login information d1 = getPage(self.page, headers={"Authorization": " ".join(args)}) else: raise Exception('Missing parameters') d1.addCallback(self.pageCallback) dl = DeferredList([d1]) d1.addErrback(self.errorHandler) dl.addCallback(self.listCallback) def errorHandler(self,result): # Bad thingy! pass def pageCallback(self, result): return result def listCallback(self, result): print result a = GrabPage('http://www.google.com') data = a.start() # Not the HTML I wish to get the HTML out which is given to pageCallback when start() is called. This has been a pita for me. Ty! And sorry for my sucky coding.
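
    A hedged sketch of the usual pattern with the Twisted of that era (getPage returning a Deferred): the HTML only becomes available inside a callback once the reactor is running, so start() cannot hand it back synchronously.

        from twisted.internet import reactor
        from twisted.web.client import getPage

        def print_page(html):
            print(html[:200])         # consume the HTML here, inside the callback
            reactor.stop()

        def print_error(failure):
            print(failure)            # report the failure, then shut down
            reactor.stop()

        d = getPage('http://www.google.com')
        d.addCallbacks(print_page, print_error)
        reactor.run()                 # blocks until reactor.stop() is called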

    Read the article

  • How to get use text columns in a trigger

    - by Jeremy
    I am trying to use an update trigger in SQL Server 2000 so that when an update occurs, I insert a row into a history table, so I retain all history on a table: CREATE Trigger trUpdate_MyTable ON MyTable FOR UPDATE AS INSERT INTO [MyTableHistory] ( [AuditType] ,[MyTable_ID] ,[Inserted] ,[LastUpdated] ,[LastUpdatedBy] ,[Vendor_ID] ,[FromLocation] ,[FromUnit] ,[FromAddress] ,[FromCity] ,[FromProvince] ,[FromContactNumber] ,[Comment]) SELECT [AuditType] = 'U', D.* FROM deleted D JOIN inserted I ON I.[ID] = D.[ID] GO Of course, I get an error "Cannot use text, ntext, or image columns in the 'inserted' and 'deleted' tables." I tried joining to MyTable instead of deleted, but because the trigger fires after the update, it ends up inserting the new record into the history table, when I want the original record. How can I do this and still use text columns?

    Read the article

  • Filemaker - Getting field values from related table

    - by foobar
    I have the following setup in FileMaker Pro 10. Table1 with: id_table1, related_names Table2 with: id_table2, name, include and a joint-table with: id_table1, id_table2 Now I want to either make related_names a calculated field or write a script that sets related_names to a comma-separated list of all names which are connected through the joint-table and have Table2.include = True. So for example a data set could look like: Table1 id_table1, related_names 1, "foo,bar" 2, "foo" 3, "" joint-table id_table1, id_table2 1,1 1,2 1,3 2,1 Table2 id_table2, name, include 1, foo, True 2, bar, True 3, baz, False After searching the internet for a few hours, the closest I came was a calculated field with list(join-table::id_table2), which gives me a list of all the id_table2's. But now I would need to find the appropriate records in Table2 and check the include field. I hope the problem is clear. Any help is highly appreciated.

    Read the article

  • Map large integer to a phrase

    - by Alexander Gladysh
    I have a large and "unique" integer (actually a SHA1 hash). I want (for no other reason than to have fun) to find an algorithm to convert that SHA1 hash to a (pseudo-)English phrase. The conversion should be reversible (i.e., knowing the algorithm, one must be able to convert the phrase back to SHA1 hash.) The possible usage of the generated phrase: the human readable version of Git commit ID, like a motto for a given program version (which is built from that commit). (As I said, this is "for fun". I don't claim that this is very practical — or be much more readable than the SHA1 itself.) A better algorithm would produce shorter, more natural-looking, more unique phrases. The phrase need not make sense. I would even settle for a whole paragraph of nonsense. (Though quality — englishness — of a paragraph should probably be better than for a mere phrase.) A variation: it is OK if I will be able to work only with a part of hash. Say, first six digits is OK. Possible approach: In the past I've attempted to build a probability table (of words), and generate phrases as Markov chains, seeding the generator (picking branches from probability tree), according to the bits I read from the SHA. This was not very successful, the resulting phrases were too long and ugly. I'm not sure if this was a bug, or the general flaw in the algorithm, since I had to abandon it early enough. Now I'm thinking about attempting to solve the problem once again. Any advice on how to approach this? Do you think Markov chain approach can work here? Something else?
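
    A hedged sketch of one reversible scheme: treat the (partial) hash as a big integer and write it in base len(WORDS), one word per digit. The tiny word list is illustrative; a real one would hold thousands of words so the phrases stay short (and leading-zero padding of the hex string is ignored for brevity).

        WORDS = ["red", "lazy", "fox", "jumps", "over", "moon", "quiet", "river"]
        BASE = len(WORDS)

        def to_phrase(hex_digits):
            """Map a hex string (e.g. the first six digits of a SHA1) to words."""
            n = int(hex_digits, 16)
            words = []
            while n:
                n, r = divmod(n, BASE)
                words.append(WORDS[r])
            return " ".join(words) or WORDS[0]

        def from_phrase(phrase):
            """Invert to_phrase: rebuild the integer and render it as hex."""
            n = 0
            for w in reversed(phrase.split()):
                n = n * BASE + WORDS.index(w)
            return format(n, 'x')

        p = to_phrase("3f2a91")
        assert from_phrase(p) == "3f2a91"
        print(p)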

    Read the article

  • Why is my mysql database timestamp changing by itself?

    - by Scarface
    Hey guys, quick question: I have an entry that I put in my database, and as I echo the value, the value in the database stays the same while the data echoed keeps increasing, which is messing up my function. If anyone knows what's going on, I would appreciate any suggestions. <?php include("../includes/connection.php"); $query="SELECT * FROM points LEFT JOIN users ON points.user_id=users.id WHERE points.topic_id='82' AND users.username='gman'"; $check=mysql_query($query); while ($row=mysql_fetch_assoc($check)){ $points_id=$row['points_id']; echo $timestamp=$row['timestamp']; } ?>

    Read the article

  • SQL Server Table locks in long query - Solution: NoLock?

    - by Kovu
    A report in my application runs a query that needs between 5 and 15 seconds (depending on the count of rows that will be returned). The query has 8 joins to nearly all the main tables of my application (customers, sales, units, etc.). A little tool shows me that during this time all 8 of those tables are locked with a shared table lock. That means no update operation can run during this time. A friend's suggested solution is to add a NOLOCK hint to every join in the query where 100% correct data is not mandatory (a dirty read), so only 1 of the 8 tables will be locked completely. Is that a good solution? For a report in which 99% of the data comes from one table, is it acceptable to leave the lower-priority tables unlocked?

    Read the article

< Previous Page | 195 196 197 198 199 200 201 202 203 204 205 206  | Next Page >