Search Results

Search found 11543 results on 462 pages for 'partition wise join'.

Page 354/462

  • Will a MySQL query run slower if one of the tables involved has no index defined?

    - by lock
    There's this already populated database which came from another dev. I'm not sure what went on in that dev's mind when he created the tables, but one of our scripts has this query involving 4 tables and it runs super slow:

        SELECT a.col_1, a.col_2, a.col_3, a.col_4, a.col_5, a.col_6, a.col_7
        FROM a, b, c, d
        WHERE a.id = b.id
          AND b.c_id = c.id
          AND c.id = d.c_id
          AND a.col_8 = '$col_8'
          AND d.g_id = '$g_id'
          AND c.private = '1'

    NOTE: $col_8 and $g_id are variables from a form. It's only my theory that the slowness is due to tables b and c not having an index, although I'm guessing the dev didn't think it was necessary since those tables only describe relations between a and d: b says that the data in a belongs to a certain user, and c says that the user belongs to a group in d. As you can see, there isn't even an explicit JOIN or other expensive query feature in use, yet this query, which returns only around 100 rows, takes 2 minutes to execute. Anyway, my question is simply this post's title: will a MySQL query run slower if one of the tables involved has no index defined?
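    In short, yes: without indexes on the columns used to join, MySQL has to scan the joined tables row by row, and in a 4-table join that cost multiplies quickly. A minimal sketch of what one might check and add (the index names below are made up for illustration; which columns actually need indexing should be confirmed with EXPLAIN against the real schema):

        -- See how MySQL currently executes the query; "ALL" in the type column means a full table scan
        EXPLAIN
        SELECT a.col_1 FROM a, b, c, d
        WHERE a.id = b.id AND b.c_id = c.id AND c.id = d.c_id;

        -- Candidate indexes on the join columns of the relation tables
        ALTER TABLE b ADD INDEX idx_b_id (id), ADD INDEX idx_b_c_id (c_id);
        ALTER TABLE d ADD INDEX idx_d_c_id (c_id), ADD INDEX idx_d_g_id (g_id);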

    Read the article

  • Can't get automated release working with Hudson + Git + Maven Release Plugin

    - by Christopher Maier
    As the title says, I'm trying to get an automated release job working on Hudson. It's a Maven project, and all the code is in Git. Manually, I do the release on my personal machine like so:

        git checkout master
        mvn -B release:prepare release:perform

    This works perfectly. The Maven release plugin properly pushes the release tag to the origin repository, as well as the next commit that bumps the version to the next SNAPSHOT. However, when I run this same Maven job through Hudson (either by creating my own "release" job or by using the M2 Release Plugin) it doesn't work so well. The release tag gets pushed out to the origin repository, and the release gets pushed out to our Nexus repository, but the subsequent commit that bumps the version to the next SNAPSHOT doesn't go out. Furthermore, the "master" branch in the origin repository doesn't get changed at all. I've looked in Hudson's workspace for the job, however, and the version has been updated. After looking at the output from the Hudson job, it appears that the Git plugin does not actually check out "master", but rather its SHA1 id. That is, if the "master" branch label points to commit "f6af76f541f1a1719e9835cdb46a183095af6861", Hudson does

        git checkout -f f6af76f541f1a1719e9835cdb46a183095af6861

    instead of

        git checkout -f master

    As a result, the changes that the Maven release plugin makes are not actually on any branch (certainly not on "master"), and these changes don't make it to the origin repository. It runs on the right code, but bookkeeping-wise the changes seem to get lost because no branch label points to them. Has anybody gotten the Hudson + Git + Maven Release Plugin combo to work properly? Is there some additional configuration somewhere I can set to make this happen? Or is this a bug in the Hudson Git plugin? Thanks in advance.

    Read the article

  • hibernate order by association

    - by Gary Kephart
    I'm using Hibernate 3.2 and using Criteria to build a query. I'd like to add an "order by" on a many-to-one association, but I don't see how that can be done. The Hibernate query would end up looking like this, I guess:

        select t1.a, t1.b, t1.c, t2.dd, t2.ee
        from t1
        inner join t2 on t1.a = t2.aa
        order by t2.dd   <-- need to add this

    I've tried criteria.addOrder("assnName.propertyName"), but it doesn't work. I know it can be done for normal properties. Am I missing something?

    Read the article

  • Flexible CMS for non-programmers

    - by Bunkerbewohner
    Hello! I'm looking for a content management system that allows creating single pages out of predefined blocks flexibly. For example, I have a "product" block that is used to show products on a page, and it may appear numerous times on one page with different contents. But I might also want to use it on different pages. I also have simple generic blocks like multiple-column text blocks (1 col, 2 col, etc.) where I just want to insert that kind of structure into the page and enter any text. So I'm looking for a CMS with something like a building-block / module concept for contents. I'm already searching the web, but there are so many CMSs that I can't look into every one. So if anyone knows a solution that might be right for me, please tell me! Technology-wise it just has to run on Linux. If it's open source / free, that's great, but I might also pay for it if it offers the features I want. Thanks for any hints in advance!

    Read the article

  • LLBL Gen Predicate Filter

    - by Neil
    I am new to LLBLGen Pro and am checking for duplicates. I have the following SQL:

        select a.TopicId, atc.TopicCategoryId, a.Headline
        from article a
        inner join ArticleTopicCategory atc on atc.ArticleId = a.Id
        where a.TopicId = 'C0064FAE-093B-466E-8745-230534867D2F'
          and a.Headline = 'Test'
          and atc.TopicCategoryId in ('004D64F7-474C-48F9-9887-17B1E7532A84')

    Whenever I step through my function, it always returns 0. LLBLGen code:

        public bool CheckDuplicateArticle(Guid topicId, List<Guid> categories, string headline)
        {
            ArticleCollection articles = new ArticleCollection();
            PredicateExpression filter = new PredicateExpression();
            RelationCollection relation = new RelationCollection();
            relation.Add(ArticleEntity.Relations.ArticleTopicCategoryEntityUsingArticleId);
            filter.AddWithAnd(ArticleFields.TopicId == topicId);
            filter.AddWithAnd(ArticleTopicCategoryFields.Id == categories);
            filter.AddWithAnd(ArticleFields.Headline == headline);
            articles.GetMulti(filter, 0, null, relation);
            return articles.Count > 0;
        }

    Any help would be appreciated!

    Read the article

  • SQL: select random row from table where the ID of the row isn't in another table?

    - by johnrl
    I've been looking at fast ways to select a random row from a table and have found the following site: http://74.125.77.132/search?q=cache:http://jan.kneschke.de/projects/mysql/order-by-rand/&hl=en&strip=1 What I want to do is select a random url from my table 'urls' that I DON'T have in my other table 'urlinfo'. The query I am using now selects a random url from 'urls', but I need it modified to only return a random url that is NOT in the 'urlinfo' table. Here's the query:

        SELECT url FROM urls
        JOIN (SELECT CEIL(RAND() * (SELECT MAX(urlid) FROM urls)) AS urlid) AS r2
        USING (urlid);

    And the two tables:

        CREATE TABLE urls (
            urlid INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            url VARCHAR(255) NOT NULL
        ) ENGINE=INNODB;

        CREATE TABLE urlinfo (
            urlid INT NOT NULL PRIMARY KEY,
            urlinfo VARCHAR(10000),
            FOREIGN KEY (urlid) REFERENCES urls (urlid)
        ) ENGINE=INNODB;
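    One possible modification (a sketch, not tested against this schema, using the table names above): keep the random-pick join but reject ids that already appear in urlinfo with an anti-join. Note that when the randomly chosen id does happen to be in urlinfo, this returns an empty set, so the application may need to retry:

        SELECT u.url
        FROM urls AS u
        JOIN (SELECT CEIL(RAND() * (SELECT MAX(urlid) FROM urls)) AS urlid) AS r2
          USING (urlid)
        LEFT JOIN urlinfo AS ui ON ui.urlid = u.urlid
        WHERE ui.urlid IS NULL;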

    Read the article

  • switch between two cursors based on parameter passed into stored procedure

    - by db83
    Hi, I have two cursors in my procedure that differ only in the table name they join to. Which cursor is used is determined by a parameter passed into the procedure:

        if (param = 'A') then
          DECLARE
            CURSOR myCursor IS
              SELECT x, y, z FROM table1 a, table2 b;
          BEGIN
            FOR aRecord IN myCursor LOOP
              proc2(aRecord.x, aRecord.y, aRecord.z);
            END LOOP;
            COMMIT;
          END;
        elsif (param = 'B') then
          DECLARE
            CURSOR myCursor IS
              SELECT x, y, z FROM table1 a, table3 b;  -- different table
          BEGIN
            FOR aRecord IN myCursor LOOP
              proc2(aRecord.x, aRecord.y, aRecord.z);
            END LOOP;
            COMMIT;
          END;
        end if;

    I don't want to repeat the code for the sake of one different table. Any suggestions on how to improve this? Thanks in advance.
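    One way to collapse the two branches (a sketch in Oracle PL/SQL, untested, assuming param is the procedure parameter shown above and that the two SELECTs really differ only in the second table): drive a single cursor from a UNION ALL of both queries, each branch guarded by the parameter value. An alternative would be a REF CURSOR opened with a dynamically built query string.

        DECLARE
          CURSOR myCursor IS
            SELECT x, y, z FROM table1 a, table2 b WHERE param = 'A'
            UNION ALL
            SELECT x, y, z FROM table1 a, table3 b WHERE param = 'B';
        BEGIN
          FOR aRecord IN myCursor LOOP
            proc2(aRecord.x, aRecord.y, aRecord.z);
          END LOOP;
          COMMIT;
        END;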

    Read the article

  • Checking if MySQL Database data does not exist

    - by Ben Sinclair
    I have my songs set up in my MySQL database. Each song is either assigned multiple locations or has no locations at all. Only the songs that either have no locations assigned in the database, or have a location assigned to one of the ones specified below, should be pulled from the database. Hopefully the query below makes sense once you understand it:

        SELECT s.*
        FROM roster_songs AS s
        LEFT JOIN roster_songs_locations AS sl ON sl.song_id = s.id
        WHERE EXISTS (
            SELECT sl2.*
            FROM roster_songs_locations AS sl2
            WHERE s.id != sl2.song_id
        )
        OR (
            sl.location_id = '88fb5f94-aaa6-102c-a4fa-1f05bca0eec6'
            OR sl.location_id = '930555b0-a251-102c-a245-1559817ce81a'
        )
        GROUP BY s.id

    The query almost works, except it pulls songs out of the database that are assigned to location_ids that aren't specified in the above query. I think it has something to do with my EXISTS code picking them up... Any ideas how I can get this to work?
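    The EXISTS branch above matches whenever any other song has a location row, which is almost always true. A sketch of one way to express "no locations at all, or at least one of the two listed locations" directly (same table and column names as above, untested):

        SELECT s.*
        FROM roster_songs AS s
        WHERE NOT EXISTS (
            SELECT 1 FROM roster_songs_locations AS sl
            WHERE sl.song_id = s.id
        )
        OR EXISTS (
            SELECT 1 FROM roster_songs_locations AS sl
            WHERE sl.song_id = s.id
              AND sl.location_id IN ('88fb5f94-aaa6-102c-a4fa-1f05bca0eec6',
                                     '930555b0-a251-102c-a245-1559817ce81a')
        );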

    Read the article

  • Insert row numbers repeatedly in records in T-SQL

    - by jeff
    Hi, I want to insert a row number into records, counting rows over a repeating range. Example output:

        RowNumber  ID  Name
        1          20  a
        2          21  b
        3          22  c
        1          23  d
        2          24  e
        3          25  f
        1          26  g
        2          27  h
        3          28  i
        1          29  j
        2          30  k

    I would rather use ROW_NUMBER() OVER (PARTITION BY ... ORDER BY column name), but my real records don't contain a column that would naturally partition the count into a 1-3 row number. I already tried looping over each record to insert a row count of 1-3, but the loop hurts the performance of the query. The query will be used for an RDL report, which is why the performance of the query must be as good as possible. Any suggestions are welcome. Thanks
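    A set-based sketch (T-SQL; the table name MyTable is a placeholder): number all rows once with ROW_NUMBER() and fold that into a repeating 1-3 counter with modulo arithmetic, which avoids the row-by-row loop entirely.

        SELECT ((ROW_NUMBER() OVER (ORDER BY ID) - 1) % 3) + 1 AS RowNumber,
               ID,
               Name
        FROM MyTable;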

    Read the article

  • What is the proper way to URL encode Unicode characters?

    - by Josh Gibson
    I know of the non-standard %uxxxx scheme, but that doesn't seem like a wise choice since the scheme has been rejected by the W3C. Some interesting examples: the heart character. If I type this into my browser:

        http://www.google.com/search?q=♥

    then copy and paste it, I see this URL:

        http://www.google.com/search?q=%E2%99%A5

    which makes it seem like Firefox (or Safari) is doing this:

        urllib.quote_plus(x.encode("latin-1"))
        '%E2%99%A5'

    which makes sense, except for things that can't be encoded in Latin-1, like the triple-dot character "…". If I type the URL

        http://www.google.com/search?q=…

    into my browser, then copy and paste, I get

        http://www.google.com/search?q=%E2%80%A6

    back, which seems to be the result of doing

        urllib.quote_plus(x.encode("utf-8"))

    and that makes sense, since "…" can't be encoded with Latin-1. But then it's not clear to me how the browser knows whether to decode with UTF-8 or Latin-1, since this seems to be ambiguous:

        In [67]: u"…".encode('utf-8').decode('latin-1')
        Out[67]: u'\xc3\xa2\xc2\x80\xc2\xa6'

    works, so I don't know how the browser figures out whether to decode that with UTF-8 or Latin-1. What's the right thing to be doing with the special characters I need to deal with?

    Read the article

  • SQL for selecting only the last item from a versioned table

    - by Jeremy
    I have a table with the following structure:

        CREATE TABLE items (
            id serial not null,
            title character varying(255),
            version_id integer DEFAULT 1,
            parent_id integer,
            CONSTRAINT items_pkey PRIMARY KEY (id)
        )

    So the table contains rows that look like this:

        id: 1, title: "Version 1", version_id: 1, parent_id: 1
        id: 2, title: "Version 2", version_id: 2, parent_id: 1
        id: 3, title: "Completely different record", version_id: 1, parent_id: 3

    This is the SQL I've got for selecting all of the records with the most recent version ids:

        select * from items
        inner join (
            select parent_id, max(version_id) as version_id
            from items
            group by parent_id
        ) as vi
          on items.version_id = vi.version_id
         and items.parent_id = vi.parent_id

    Is there a way to write that SQL statement so that it doesn't use a subselect?
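    One common rewrite without a subselect (a sketch against the items table above, untested): anti-join the table to itself and keep only rows for which no newer version of the same parent_id exists.

        select i.*
        from items as i
        left join items as newer
          on newer.parent_id = i.parent_id
         and newer.version_id > i.version_id
        where newer.id is null;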

    Read the article

  • Finding the number of days between two dates to make dynamic columns

    - by Chandradyani
    Dear all, I have a select query that currently produces the following results (one column per day of the month, with an 'x' on each day the doctor was visited):

        DoctorName  Team  1  2  3  4  5  6  7  ...  31  Visited
        dr. As      A                    x     x  ...       2 times
        dr. Sc      A              x              ...       1 times
        dr. Gh      B                          x  ...       1 times
        dr. Nd      C                             ...  x    1 times

    using the following query:

        DECLARE @startDate = '1/1/2010', @enddate = '1/31/2010'

        SELECT d.doctorname,
               t.teamname,
               MAX(CASE WHEN ca.visitdate = 1 THEN 'x' ELSE NULL END) AS 1,
               MAX(CASE WHEN ca.visitdate = 2 THEN 'x' ELSE NULL END) AS 2,
               MAX(CASE WHEN ca.visitdate = 3 THEN 'x' ELSE NULL END) AS 3,
               ...
               MAX(CASE WHEN ca.visitdate = 31 THEN 'x' ELSE NULL END) AS 31,
               COUNT(*) AS visited
        FROM CACTIVITY ca
        JOIN DOCTOR d ON d.id = ca.doctorid
        JOIN TEAM t ON t.id = ca.teamid
        WHERE ca.visitdate BETWEEN @startdate AND @enddate
        GROUP BY d.doctorname, t.teamname

    The problem is that I want the date columns to be dynamic. For example, if ca.visitdate BETWEEN '2/1/2012' AND '2/29/2012', the result should have day columns 1 through 29 instead of 1 through 31. Can somebody help me work out the number of days between two dates, and help me revise the query so it can repeat MAX(CASE WHEN ca.visitdate = 1 THEN 'x' ELSE NULL END) AS 1 as many times as there are days? Please, please.
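    For the day count itself, DATEDIFF gives it directly; the repeated MAX(CASE ...) columns then have to be assembled as a string and run with dynamic SQL. A rough sketch (T-SQL; untested, assumes SQL Server 2008 or later, that visitdate is an actual date column, and illustrative variable names):

        DECLARE @startdate DATE = '2012-02-01', @enddate DATE = '2012-02-29';
        DECLARE @days INT = DATEDIFF(DAY, @startdate, @enddate) + 1;  -- 29 for February 2012

        -- Build one MAX(CASE ...) column per day, then append the rest of the query and EXEC it
        DECLARE @cols NVARCHAR(MAX) = N'';
        DECLARE @d INT = 1;
        WHILE @d <= @days
        BEGIN
            SET @cols = @cols + N', MAX(CASE WHEN DAY(ca.visitdate) = ' + CAST(@d AS NVARCHAR(10))
                              + N' THEN ''x'' END) AS [' + CAST(@d AS NVARCHAR(10)) + N']';
            SET @d = @d + 1;
        END;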

    Read the article

  • Piping SoX in Python - subprocess alternative?

    - by Cochise Ruhulessin
    I use SoX in an application. The application uses it to apply various operations to audio files, such as trimming. This works fine:

        from subprocess import Popen, PIPE

        kwargs = {'stdin': PIPE, 'stdout': PIPE, 'stderr': PIPE}
        pipe = Popen(['sox', '-t', 'mp3', '-', 'test.mp3', 'trim', '0', '15'], **kwargs)
        output, errors = pipe.communicate(input=open('test.mp3', 'rb').read())
        if errors:
            raise RuntimeError(errors)

    This will cause problems on large files however, since read() loads the complete file into memory, which is slow and may cause the pipe's buffer to overflow. A workaround exists:

        from subprocess import Popen, PIPE
        import tempfile
        import uuid
        import shutil
        import os

        kwargs = {'stdin': PIPE, 'stdout': PIPE, 'stderr': PIPE}
        tmp = os.path.join(tempfile.gettempdir(), uuid.uuid1().hex + '.mp3')
        pipe = Popen(['sox', 'test.mp3', tmp, 'trim', '0', '15'], **kwargs)
        output, errors = pipe.communicate()
        if errors:
            raise RuntimeError(errors)
        shutil.copy2(tmp, 'test.mp3')
        os.remove(tmp)

    So the question stands as follows: are there any alternatives to this approach, aside from writing a Python extension to the SoX C API?

    Read the article

  • Good way to edit a previously defined class in IPython

    - by leo
    Hi, I am wondering what a good way is to redefine the members of a previously defined class in IPython. Say I have defined a class intro like below, and later I want to redefine part of the function definition _print_api. Is there any way to do that without retyping it?

        class intro(object):

            def _print_api(self, obj):
                def _print(key):
                    if key.startswith('_'):
                        return ''
                    value = getattr(obj, key)
                    if not hasattr(value, 'im_func'):
                        doc = type(value).__name__
                    else:
                        if value.__doc__ is None:
                            doc = 'no docstring'
                        else:
                            doc = value.__doc__
                    return ' %s :%s' % (key, doc)
                res = [_print(element) for element in dir(obj)]
                return '\n'.join([element for element in res if element != ''])

            def __get__(self, instance, klass):
                if instance is not None:
                    return self._print(instance)
                else:
                    return self._print_api(klass)

    Read the article

  • Converting delimited string to multiple values in mysql

    - by epo
    I have a MySQL legacy table which contains a client identifier and a list of items, the latter as a comma-delimited string, e.g. "xyz001", "foo,bar,baz". This is legacy stuff, and the user insists on being able to edit a comma-delimited string. They now have a requirement for a report table with the above broken into separate rows, e.g.

        "xyz001", "foo"
        "xyz001", "bar"
        "xyz001", "baz"

    Breaking the string into substrings is easily doable, and I have written a procedure to do this by creating a separate table, but that requires triggers to deal with deletes, updates and inserts. This query is required rarely (say once a month) but has to be absolutely up to date when it is run, so e.g. the overhead of triggers is not warranted, and scheduled tasks to create the table might not be timely enough. Is there any way to write a function to return a table or a set so that I can join the identifier with the individual items on demand?
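    MySQL functions can't return result sets, but the same effect is often achieved on demand by joining against a small numbers table and splitting with SUBSTRING_INDEX. A sketch, assuming a hypothetical table client_items(client_id, items) and at most five comma-separated values per row (extend the numbers derived table as needed):

        SELECT ci.client_id,
               SUBSTRING_INDEX(SUBSTRING_INDEX(ci.items, ',', n.n), ',', -1) AS item
        FROM client_items AS ci
        JOIN (SELECT 1 AS n UNION ALL SELECT 2 UNION ALL SELECT 3
              UNION ALL SELECT 4 UNION ALL SELECT 5) AS n
          ON n.n <= 1 + LENGTH(ci.items) - LENGTH(REPLACE(ci.items, ',', ''));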

    Read the article

  • Can't figure out how to list all the people that don't live in same City as...

    - by AspOnMyNet
    I'd like to list all the people (Person table) that don't live in the same city as any of the cities listed in the Location table. Thus, if the Location table holds a record with City='New York' and State='Moon', but the Person table holds a record with FirstName='Someone', City='New York' and State='Mars', then Someone is listed in the resulting set, since she lives in the New York located on Mars and not the New York located on Moon; we're talking about different cities with the same name. I tried solving it with the following query, but the results are wrong:

        SELECT Person.FirstName, Person.LastName, Person.City, Person.State
        FROM Person
        INNER JOIN Location
           ON (Person.City <> Location.City AND Person.State =  Location.State)
           OR (Person.City =  Location.City AND Person.State <> Location.State)
           OR (Person.City <> Location.City AND Person.State <> Location.State)
        ORDER BY Person.LastName;

    Any ideas?
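    The inequality join matches a person against every non-matching Location row, so almost everyone qualifies. One way to say "no Location row has both the same city and the same state" is an anti-join (a sketch using the columns above, untested):

        SELECT p.FirstName, p.LastName, p.City, p.State
        FROM Person AS p
        WHERE NOT EXISTS (
            SELECT 1
            FROM Location AS l
            WHERE l.City = p.City
              AND l.State = p.State
        )
        ORDER BY p.LastName;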

    Read the article

  • Chain LINQ IQueryable, and end with Stored Procedure

    - by Alex
    I'm chaining search criteria in my application through IQueryable extension methods, e.g.:

        public static IQueryable<Fish> AtAge(this IQueryable<Fish> fish, Int32 age)
        {
            return fish.Where(f => f.Age == age);
        }

    However, I also have a full-text search stored procedure:

        CREATE PROCEDURE [dbo].[Fishes_FullTextSearch]
            @searchtext nvarchar(4000),
            @limitcount int
        AS
        SELECT Fishes.*
        FROM Fishes
        INNER JOIN CONTAINSTABLE(Fishes, *, @searchtext, @limitcount) AS KEY_TBL
            ON Fishes.Id = KEY_TBL.[KEY]
        ORDER BY KEY_TBL.[Rank]

    The stored procedure obviously doesn't return IQueryable; however, is it possible to somehow limit the result set of the stored procedure using IQueryables? I'm envisioning something like .AtAge(5).AboveWeight(100).Fishes_FulltextSearch("abc"). In this case, the full-text search should execute on a smaller subset of my Fishes table (narrowed by Age and Weight). Is something like this possible? Sample code?

    Read the article

  • Database design - one link table or multiple link tables?

    - by David
    Hi there, I'm working on a front end for a database where each table essentially has a many-to-many relationship with all other tables. I'm not a DB admin; I've just taken a few basic DB courses. The typical solution in this case, as I understand it, would be multiple link tables, one to join each pair of 'real' tables. Here's what I'm proposing instead: one link table that has foreign key dependencies on the PKs of all the other tables. Is there any reason this could turn out badly in terms of scalability, flexibility, etc. down the road?
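    To make the two options concrete, here is a sketch in generic SQL with hypothetical tables a, b and c. The conventional design uses one junction table per pair with NOT NULL keys; the proposed single link table needs nullable foreign keys, since any given row associates only some of the tables.

        -- conventional: one junction table per pair
        CREATE TABLE a_b (
            a_id INT NOT NULL REFERENCES a (id),
            b_id INT NOT NULL REFERENCES b (id),
            PRIMARY KEY (a_id, b_id)
        );

        -- proposed: one link table referencing every PK
        CREATE TABLE link (
            link_id INT PRIMARY KEY,
            a_id INT NULL REFERENCES a (id),
            b_id INT NULL REFERENCES b (id),
            c_id INT NULL REFERENCES c (id)
        );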

    Read the article

  • mySQL one-to-many query

    - by Stomped
    I've got 3 tables that look something like this (simplified here, of course):

        users
            user_id
            user_name

        info
            info_id
            user_id
            rate

        contacts
            contact_id
            user_id
            contact_data

    users has a one-to-one relationship with info, although info doesn't always have a related entry. users has a one-to-many relationship with contacts, although contacts doesn't always have related entries. I know I can grab the proper 'users' + 'info' with a left join; is there a way to get all the data I want at once? For example, one returned record might be:

        user_id: 5
        user_name: tom
        info_id: 1
        rate: 25.00
        contact_id: 7, contact_data: 555-1212
        contact_id: 8, contact_data: 555-1315
        contact_id: 9, contact_data: 555-5511

    Is this possible with a single query? Or must I use multiple?
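    A single query can do it if an extra LEFT JOIN on contacts is acceptable (it repeats the user/info columns once per contact row), or the contacts can be collapsed into one column with GROUP_CONCAT. A sketch of the latter (untested, using the columns listed above):

        SELECT u.user_id,
               u.user_name,
               i.info_id,
               i.rate,
               GROUP_CONCAT(c.contact_data) AS contact_data
        FROM users AS u
        LEFT JOIN info     AS i ON i.user_id = u.user_id
        LEFT JOIN contacts AS c ON c.user_id = u.user_id
        GROUP BY u.user_id, u.user_name, i.info_id, i.rate;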

    Read the article

  • Fragment caching

    - by red5
    I would like to fragment-cache part of a page. In the view I have:

        <% cache("saved_area") do %>
          .
        <% end -%>

    In the controller:

        def index
          read_fragment("saved_area")
        end

    In config/production:

        config.cache_store = :file_store, File.join(RAILS_ROOT, 'tmp', 'cache')

    The file was created in the tmp/cache directory, but I am not sure the cache is being used for the request, since I presume there should be a line in the log stating that the cache is being used (and there is not).

    Read the article

  • How to add an if statement in SQL?

    - by shin
    I have the following MySQL query with two parameters, $catname and $limit = 1, and it is working fine:

        SELECT P.*, C.Name AS CatName
        FROM omc_product AS P
        LEFT JOIN omc_category AS C ON C.id = P.category_id
        WHERE C.Name = '$catname'
          AND p.status = 'active'
        ORDER BY RAND()
        LIMIT 0, $limit

    Now I want to add another parameter, $order. $order should select either ORDER BY RAND() or ORDER BY product_order in the table omc_product. Could anyone tell me how to write this query please? Thanks in advance.
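    One way to keep it in a single statement (a sketch, assuming the application passes $order as a simple flag such as 'rand'): switch the sort key with a CASE expression in the ORDER BY clause. The usual alternative is to build the ORDER BY fragment in application code before sending the query.

        SELECT P.*, C.Name AS CatName
        FROM omc_product AS P
        LEFT JOIN omc_category AS C ON C.id = P.category_id
        WHERE C.Name = '$catname'
          AND P.status = 'active'
        ORDER BY CASE WHEN '$order' = 'rand' THEN RAND() ELSE P.product_order END
        LIMIT 0, $limit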

    Read the article

  • Is it possible to auto-import a module from a different subfolder into another subfolder?

    - by mamcx
    I have a kind of plugin system with this layout:

        - Python
        -- SDK
        -- Plugins
        ---- Plugin1
        ---- Plugin2

    All three have an __init__.py file. I wonder if it is possible to do import SDK from any plugin (as if SDK were in the site-packages folder). I'm in a situation where I need to deploy, update, delete, add or change SDK files or any of the plugins under non-admin accounts, and wonder if I can get SDK in a clean way (I could sys.path.append in all plugins, but I wonder if a better option exists). I imagine that using this in the Plugins __init__ could work:

        import sys
        import os

        ROOT_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
        print ROOT_DIR
        sys.path.append(ROOT_DIR)

    But clearly this code is not executed (I imagined __init__ was auto-magically executed when the module loads :( )

    Read the article

  • Custom validation works in development but not in unit test

    - by Geolev
    I want to validate that at least one of two columns has a value in my model. I found somewhere on the web that I could create a custom validator as follows:

        # Check for the presence of one or another field:
        #   validates_presence_of_at_least_one_field :last_name, :company_name
        #     - would require either last_name or company_name to be filled in
        # Also works with arrays:
        #   validates_presence_of_at_least_one_field :email, [:name, :address, :city, :state]
        #     - would require email or a mailing-type address
        module ActiveRecord
          module Validations
            module ClassMethods
              def validates_presence_of_at_least_one_field(*attr_names)
                msg = attr_names.collect {|a| a.is_a?(Array) ? " ( #{a.join(", ")} ) " : a.to_s}.join(", ") +
                      "can't all be blank. At least one field must be filled in."
                configuration = { :on => :save, :message => msg }
                configuration.update(attr_names.extract_options!)

                send(validation_method(configuration[:on]), configuration) do |record|
                  found = false
                  attr_names.each do |a|
                    a = [a] unless a.is_a?(Array)
                    found = true
                    a.each do |attr|
                      value = record.respond_to?(attr.to_s) ? record.send(attr.to_s) : record[attr.to_s]
                      found = !value.blank?
                    end
                    break if found
                  end
                  record.errors.add_to_base(configuration[:message]) unless found
                end
              end
            end
          end
        end

    I put this in a file called lib/acs_validator.rb in my project and added "require 'acs_validator'" to my environment.rb. This does exactly what I want. It works perfectly when I manually test it in the development environment, but when I write a unit test it breaks my test environment. This is my unit test:

        require 'test_helper'

        class CustomerTest < ActiveSupport::TestCase
          # Replace this with your real tests.
          test "the truth" do
            assert true
          end

          test "customer not valid" do
            puts "customer not valid"
            customer = Customer.new
            assert !customer.valid?
            assert customer.errors.invalid?(:subdomain)
            assert_equal "Company Name and Last Name can't both be blank.", customer.errors.on(:contact_lname)
          end
        end

    This is my model:

        class Customer < ActiveRecord::Base
          validates_presence_of :subdomain
          validates_presence_of_at_least_one_field :customer_company_name, :contact_lname,
            :message => "Company Name and Last Name can't both be blank."

          has_one :service_plan
        end

    When I run the unit test, I get the following error:

        DEPRECATION WARNING: Rake tasks in vendor/plugins/admin_data/tasks, vendor/plugins/admin_data/tasks, and vendor/plugins/admin_data/tasks are deprecated. Use lib/tasks instead. (called from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/tasks/rails.rb:10)
        Couldn't drop acs_test : #<ActiveRecord::StatementInvalid: PGError: ERROR: database "acs_test" is being accessed by other users
        DETAIL: There are 1 other session(s) using the database.
        : DROP DATABASE IF EXISTS "acs_test">
        acs_test already exists
        NOTICE: CREATE TABLE will create implicit sequence "customers_id_seq" for serial column "customers.id"
        NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "customers_pkey" for table "customers"
        NOTICE: CREATE TABLE will create implicit sequence "service_plans_id_seq" for serial column "service_plans.id"
        NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "service_plans_pkey" for table "service_plans"
        /usr/bin/ruby1.8 -I"lib:test" "/usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" "test/unit/customer_test.rb" "test/unit/service_plan_test.rb" "test/unit/helpers/dashboard_helper_test.rb" "test/unit/helpers/customers_helper_test.rb" "test/unit/helpers/service_plans_helper_test.rb"
        /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.8/lib/active_record/base.rb:1994:in `method_missing_without_paginate': undefined method `validates_presence_of_at_least_one_field' for #<Class:0xb7076bd0> (NoMethodError)
          from /usr/lib/ruby/gems/1.8/gems/will_paginate-2.3.12/lib/will_paginate/finder.rb:170:in `method_missing'
          from /home/george/projects/advancedcomfortcs/app/models/customer.rb:3
          from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
          from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
          from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:158:in `require'
          from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:265:in `require_or_load'
          from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:224:in `depend_on'
          from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:136:in `require_dependency'
          from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:414:in `load_application_classes'
          from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:413:in `each'
          from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:413:in `load_application_classes'
          from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:411:in `each'
          from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:411:in `load_application_classes'
          from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:197:in `process'
          from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:113:in `send'
          from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:113:in `run'
          from /home/george/projects/advancedcomfortcs/config/environment.rb:9
          from ./test/test_helper.rb:2:in `require'
          from ./test/test_helper.rb:2
          from ./test/unit/customer_test.rb:1:in `require'
          from ./test/unit/customer_test.rb:1
          from /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5:in `load'
          from /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5
          from /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5:in `each'
          from /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5
        rake aborted!
        Command failed with status (1): [/usr/bin/ruby1.8 -I"lib:test" "/usr/lib/ru...]
        (See full trace by running task with --trace)

    It seems to have stepped on will_paginate somehow. Does anyone have any suggestions? Is there another way to do the validation I'm attempting to do? Thanks, George

    Read the article

  • Writing a blocking wrapper around twisted's IRC client

    - by Andrey Fedorov
    I'm trying to write a dead-simple interface for an IRC library, like so:

        import re
        import simpleirc

        connection = simpleirc.Connect('irc.freenode.net', 6667)
        channel = connection.join('foo')
        find_command = re.compile(r'google ([a-z]+)').findall

        for msg in channel:
            for t in find_command(msg):
                channel.say("http://google.com/search?q=%s" % t)

    Working from their example, I'm running into trouble (the code is a bit lengthy, so I pasted it here). Since the call to channel.__next__ needs to return when the callback <IRCClient instance>.privmsg is called, there doesn't seem to be a clean option. Using exceptions or threads seems like the wrong thing here; is there a simpler (blocking?) way of using twisted that would make this possible?

    Read the article

  • accessing files after setup.py install

    - by Matthew
    I'm developing a Python application and have a question regarding coding it so that it still works after a user has installed it on his or her machine via setup.py install or similar. In one of my files, I use the following:

        file = "TestParser/View/MainWindow.ui"
        cwd = os.getcwd()
        argv_path = os.path.dirname(sys.argv[0])
        file_path = os.path.join(cwd, argv_path, file)

    in order to get the path to MainWindow.ui when I only know the path relative to the main script's location. This works regardless of where I call the main script from. The issue is that after a user installs the application on his or her machine, the relative path is different, so this doesn't work. I could use __file__, but according to this, py2exe doesn't have __file__. Is there a standard way of achieving this? Or a better way?

    Read the article
