Search Results

Search found 39984 results on 1600 pages for 'null test'.


  • Is there a way to undo Mocha stubbing of any_instance in Test::Unit

    - by Craig Walker
    Much like this question, I too am using Ryan Bates's nifty_scaffold. It has the desirable aspect of using Mocha's any_instance method to force an "invalid" state in model objects buried behind the controller. Unlike the question I linked to, I'm not using RSpec, but Test::Unit, which means that the two RSpec-centric solutions there won't work for me. Is there a general (i.e., works with Test::Unit) way to remove the any_instance stubbing? I believe it's causing a bug in my tests, and I'd like to verify that.
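    A minimal sketch of one possible approach, assuming a Mocha version that supports unstub (the Product model and valid? method here are just stand-ins for whatever nifty_scaffold stubs in your tests):

        require 'test_helper'

        class ProductsControllerTest < ActionController::TestCase
          def teardown
            # Remove the any_instance stub so later tests see real behavior.
            # unstub is available in recent Mocha releases; on older versions,
            # Mocha's own teardown hooks normally reset stubs between tests.
            Product.any_instance.unstub(:valid?)
          end

          def test_create_invalid
            Product.any_instance.stubs(:valid?).returns(false)
            post :create
            assert_template :new
          end
        end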

    Read the article

  • at what point in the test cycle are the test results written to a file?

    - by jcollum
    I'd like to take the entirety of the test results and publish them to a database. I've got the database, the table, and the script to publish it. The question is: at what point in the MSTest cycle are the results fully written to the file? And how can I get the path to that file? I'd especially like to grab the "TextMessages" node and put it into my database. I assumed AssemblyCleanup, but the TestContext doesn't seem to be available then.
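    One workaround, if the results only need to land in the database after the run completes, is to post-process the .trx results file once MSTest exits rather than from AssemblyCleanup. A rough C# sketch, assuming a TRX file in the standard VisualStudio TeamTest XML namespace (the file path and node names are illustrative and depend on your run configuration):

        using System;
        using System.Xml.Linq;

        class TrxReader
        {
            static void Main(string[] args)
            {
                // Path to the .trx file produced by MSTest; pass it in explicitly,
                // since the location depends on your run configuration.
                var doc = XDocument.Load(args[0]);
                XNamespace ns = "http://microsoft.com/schemas/VisualStudio/TeamTest/2010";

                // Pull every TextMessages node out of the results.
                foreach (var messages in doc.Descendants(ns + "TextMessages"))
                {
                    foreach (var msg in messages.Elements(ns + "Message"))
                    {
                        Console.WriteLine(msg.Value); // insert into the database here
                    }
                }
            }
        }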

    Read the article

  • "rake test" doesn't load fixtures?

    - by Pavel K.
    When I run rake test --trace, here's what happens:

        ** Invoke test (first_time)
        ** Execute test
        ** Invoke test:units (first_time)
        ** Invoke db:test:prepare (first_time)
        ** Invoke db:abort_if_pending_migrations (first_time)
        ** Invoke environment (first_time)
        ** Execute environment
        ** Execute db:abort_if_pending_migrations
        ** Execute db:test:prepare
        ** Invoke db:test:load (first_time)
        ** Invoke db:test:purge (first_time)
        ** Invoke environment
        ** Execute db:test:purge
        ** Execute db:test:load
        ** Invoke db:schema:load (first_time)
        ** Invoke environment
        ** Execute db:schema:load
        ** Execute test:units
        /usr/bin/ruby1.8 -I"lib:test"....

    and after that it fails because no fixtures are loaded. Why doesn't it load fixtures (I thought that was the default behaviour), and how do I make it load fixtures before executing the tests?

    P.S. My test/test_helper.rb content is:

        ENV["RAILS_ENV"] = "test"
        require File.expand_path(File.dirname(__FILE__) + "/../config/environment")
        require 'test_help'

        class ActiveSupport::TestCase
          self.use_transactional_fixtures = true
          self.use_instantiated_fixtures = false
          fixtures :all
        end

    (Rails 2.3.4)

    Read the article

  • Best way to check for null values in Java?

    - by Arty-fishL
    I need to check whether a method of an object returns true or false in Java, but that object may be null, in which case calling the method would obviously throw a NullPointerException. This means I need to check whether the object is null before calling the method. What is the best way to go about this? I've listed some approaches I considered; I just want to know which is the most sensible one, the one that is best programming practice in Java (opinion?).

        // method 1
        if (foo != null) {
            if (foo.bar()) {
                etc...
            }
        }

        // method 2
        if (foo != null ? foo.bar() : false) {
            etc...
        }

        // method 3
        try {
            if (foo.bar()) {
                etc...
            }
        } catch (NullPointerException e) {
        }

        // method 4
        // would this work all the time, would it still call foo.bar()?
        if (foo != null && foo.bar()) {
            etc...
        }
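    For what it's worth, method 4 is safe: Java's && operator short-circuits, so the right-hand operand is never evaluated when the left-hand side is false. A small self-contained demonstration (the Foo class is just a stand-in):

        public class ShortCircuitDemo {
            static class Foo {
                boolean bar() {
                    System.out.println("bar() was called");
                    return true;
                }
            }

            public static void main(String[] args) {
                Foo foo = null;
                // foo.bar() is never invoked here, so no NullPointerException:
                // && evaluates its right operand only when the left one is true.
                if (foo != null && foo.bar()) {
                    System.out.println("not reached");
                }
                System.out.println("no exception thrown");
            }
        }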

    Read the article

  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there have been several reports of TFS databases growing too fast and too big. Notably this has been observed when one starts to use more features of the testing system. Also, TFS 2010 handles test results differently from TFS 2008, which leads to more data being stored in the TFS databases. As a consequence, some tools have been released to remove unneeded data from the database, along with fixes for bugs found and corrected during this process. Further, some preventive practices and maintenance rules should be adopted.

    A lot of people have blogged about this, among them:

    - Anu's very important blog post here describes both the problem and solutions to handle it. She covers both the Test Attachment Cleaner tool and some QFE/CU releases that fix underlying bugs which prevented the tool from being fully effective.
    - Brian Harry's blog post here describes the problem too.
    - This forum thread describes the problem with some solution hints.
    - Ravi Shanker's blog post here describes best practices on solving this (TBP).
    - Grant Holliday's blog post here describes strategies for using the Test Attachment Cleaner, both to detect space problems and to rectify them.

    The problem can be divided into the following areas:

    - Publishing of test results from builds
    - Publishing of manual test results, and their attachments in particular
    - Publishing of deployment binaries for use during a test run
    - Bugs in SQL Server preventing total cleanup of data

    (All the published data above is published into the TFS database as attachments.) The test results include all data collected during the run. Some of this data can grow rather large, like IntelliTrace logs and video recordings. The pushing of binaries that happens for automated test runs also contributes a lot to the size of the attached data; for tests run during a build with code coverage, this includes all the files in the deployment folder.

    In order to handle this systematically, I have set up a 3-stage process:

    1. Find out if you have a database space issue.
    2. Set up your TFS server to minimize potential database issues.
    3. If you have the problem, clean up the database and otherwise keep it clean.

    Analyze the data

    Are your databases growing? Are unused test results growing out of proportion? To find out, you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain some more detailed information. If you don't have too many databases, you can use the SQL Server reports from within Management Studio to analyze the database and table sizes. Or you can use a set of queries. I often find queries faster, because I can tweak them the way I want. But be aware that these queries are non-documented and non-supported and may change when the product team wants to change them.

    If you have multiple project collections, find out which might have problems. (Disclaimer: the queries below work on TFS 2010. They will not work on Dev-11, since the table structure has been changed. I will try to update them for Dev-11 when it is released.)

    Open a SQL Management Studio session on the SQL Server hosting your TFS databases, and use the query below to find the project collection databases and their sizes, in descending size order.
        use master
        select DB_NAME(database_id) AS DBName, (size/128) SizeInMB
        FROM sys.master_files
        where type=0
          and substring(db_name(database_id),1,4)='Tfs_'
          and DB_NAME(database_id)<>'Tfs_Configuration'
        order by size desc

    Doing this on one of our SQL servers made it pretty easy to see which collection to start the work on.

    Find out which tables are possibly too large

    Keep a special watch on the attachment table. Use the script at the bottom of Grant's blog to find the table sizes in descending size order. In our case, from Grant's blog we learnt that tbl_Content belongs to the Version Control category, so the only big issue we had was tbl_AttachmentContent.

    Find out which team projects have possibly too large attachments

    In order to use the TAC to find, and eventually delete, attachment data, we need to find out which team projects have these attachments; the team project is a required parameter to the TAC. Use the following query, replacing the collection database name with whatever applies in your case:

        use Tfs_DefaultCollection
        select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB
        from dbo.tbl_Attachment as a
        inner join tbl_testrun as tr on a.testrunid=tr.testrunid
        inner join tbl_project as p on p.projectid=tr.projectid
        group by p.projectname
        order by sum(a.compressedlength) desc

    In our case (some names removed), out of more than 100 team projects accumulated over quite some years, it was pretty obvious that "Byggtjeneste – Projects" was the main team project to take care of, with the ones on lines 2-4 as the next candidates.

    Check which attachment types take up the most space

    It can be nice to know which attachment types take up the space, so run the following query:

        use Tfs_DefaultCollection
        select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB
        from dbo.tbl_Attachment as a
        inner join tbl_testrun as tr on a.testrunid=tr.testrunid
        inner join tbl_project as p on p.projectid=tr.projectid
        group by a.attachmenttype
        order by sum(a.compressedlength) desc

    For us the result made it pretty obvious that the problem was the binary files, as also mentioned in Anu's blog.

    Check which file types, by their extension, take up the most space

    Run the following query:

        use Tfs_DefaultCollection
        select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) as Extension,
               sum(compressedlength)/1024 as SizeInKB
        from tbl_Attachment
        group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)
        order by sum(compressedlength) desc

    Now you should have collected enough information to tell you what to do – if you need to do something at all – plus some of the information you need to set up your TAC settings file, both for a one-off cleanup and for scheduled maintenance later.

    Get your TFS server and environment properly set up

    Whether or not you already have the problem, you should ensure the TFS server is set up so that the risk of getting into it is minimized. To ensure this, install the following set of updates and components. The assumption is that your TFS server is at the SP1 level.

    Install the QFE for KB2608743 – which also contains detailed instructions on its use; download it from here. The QFE changes the default settings so that deployed binaries, which are used in automated test runs, are not uploaded.
    Binaries will still be uploaded if:

    - Code coverage is enabled in the test settings.
    - You change UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user who hasn't installed this QFE.

    The hotfix should be installed on:

    - The build servers (the build agents)
    - The machine hosting the Test Controller
    - Local development computers (Visual Studio)
    - Local test computers (MTM)

    It is not required on the TFS server, the test agents, or the build controller – it has no effect on these programs.

    If you use SQL Server 2008 R2 you should also install CU 10 (or later). This CU fixes a potential problem of hanging "ghost" files. This seems to happen only in certain trigger situations, but to make sure it doesn't bite you, it is better to install this CU. There is no such CU for SQL Server 2008 pre-R2.

    Workaround: if you suspect hanging ghost files, they can – with some mental effort – be deduced from the ghost counters using the following SQL query:

        use master
        SELECT DB_NAME(database_id) as 'database', OBJECT_NAME(object_id) as 'objectname',
               index_type_desc, ghost_record_count, version_ghost_record_count,
               record_count, avg_record_size_in_bytes
        FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL, 'DETAILED')

    The problem is a stalled ghost cleanup process. To clear it, stop all components that depend on the SQL Server – the TFS server, SPS services, and any other applications that connect to it – then restart the SQL Server, and finally start all dependent processes again. (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1; the R2 pre-SP1 and R2 SP1 lines have separate maintenance cycles and are maintained individually, each with its own set of CUs. When it comes, I will add the link to that CU here. The "hanging ghost file" issue came up after one had run the TAC and deleted enormous amounts of data; without the QFE, the SQL Server can get into this hanging state in certain cases because of this.

    And of course, install and set up the Test Attachment Cleaner command line power tool. This should be done following some guidelines from Ravi Shanker:

    "When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed."

    This rule minimizes the risk of the ghosted-hang problem occurring, and further makes it easier for the SQL Server ghosting process to work smoothly.

    "Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system"

    This is the last step in a three-step process of removing SQL Server data: first the records are logically deleted, then they are cleaned out by the ghosting process, and finally the space is reclaimed using the shrink command.

    Cleaning out the attachments

    The TAC is run from the command line, using a set of parameters and controlled by a settings file. The parameters point at a server URI, including the team project collection, and at a specific team project. So in order to run this regularly for multiple team projects, one has to set up a script that runs the TAC multiple times, once for each team project (see the script sketch at the end of this post).
    When you install the TAC there is a very useful readme file in the same directory.

    When deployment binaries are published to the TFS server, ALL items in the deployment folder are published up. That often means many more files than you would assume are necessary. This is a brute-force technique: it works, but you need to take care when cleaning up.

    Grant has shown what their settings file looks like in his blog post: it removes all attachments older than 180 days, as long as there are no active work items connected to them. This kind of setting can be useful to clean out all items, both in a one-off cleanup operation and in general maintenance.

    There are two scenarios we need to consider:

    1. Cleaning up an existing overgrown database.
    2. Maintaining a server to avoid an overgrown database, using scheduled TAC runs.

    1. Cleaning up a database which has grown too big due to these attachments

    This job is a one-off. We do it once and then move on to making sure it won't happen again, by taking the actions in 2) below. In this scenario you should only consider the large files. Your goal should be simply to reduce the size; don't bother with the smaller stuff, which can be left to a scheduled TAC cleanup (2 below). Here you can use a very general settings file and just remove the large attachments, or you can choose to remove any old items; Grant's settings file is an example of the latter. A settings file to remove only large attachments could look like this:

        <!-- Scenario : Remove large files -->
        <DeletionCriteria>
          <TestRun />
          <Attachment>
            <SizeInMB GreaterThan="10" />
          </Attachment>
        </DeletionCriteria>

    If you want to remove only dll's and pdb's above that size, add an Extensions section; without that section, attachments with any extension will be deleted:

        <!-- Scenario : Remove large files of type dll's and pdb's -->
        <DeletionCriteria>
          <TestRun />
          <Attachment>
            <SizeInMB GreaterThan="10" />
            <Extensions>
              <Include value="dll" />
              <Include value="pdb" />
            </Extensions>
          </Attachment>
        </DeletionCriteria>

    Before you start up your scheduled maintenance, you should clear out all older items.

    2. Scheduled maintenance using the TAC

    Run a schedule every night, removing old items in small batches. It is important to run this often, like every night, in order to keep the number of deleted items low; that way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let's say 180. This can be combined with a restriction to keep attachments linked to active or resolved bugs. Doing this every night ensures that only small amounts of data are deleted:

        <!-- Scenario : Remove old items except if they have active or resolved bugs -->
        <DeletionCriteria>
          <TestRun>
            <AgeInDays OlderThan="180" />
          </TestRun>
          <Attachment />
          <LinkedBugs>
            <Exclude state="Active" />
            <Exclude state="Resolved"/>
          </LinkedBugs>
        </DeletionCriteria>

    In my experience there are projects which are left with active or resolved work items although no further work is being done, so it can be wise to have a cleanup process with no restrictions on linked bugs at all (note that you then have to remove the whole LinkedBugs section). An approach that can work better here is a two-step one: use the schedule above, with no LinkedBugs section, as a sweeper task taking away all data older than you could possibly care about, and then have another scheduled TAC task that more specifically takes out attachments you are not likely to use.
    This second task can be much more specific, based on your analysis, cleaning out what you know is troublesome data:

        <!-- Scenario : Remove specific files early -->
        <DeletionCriteria>
          <TestRun>
            <AgeInDays OlderThan="30" />
          </TestRun>
          <Attachment>
            <SizeInMB GreaterThan="10" />
            <Extensions>
              <Include value="iTrace"/>
              <Include value="dll"/>
              <Include value="pdb"/>
              <Include value="wmv"/>
            </Extensions>
          </Attachment>
          <LinkedBugs>
            <Exclude state="Active" />
            <Exclude state="Resolved" />
          </LinkedBugs>
        </DeletionCriteria>

    The readme document for the TAC says that it recognizes only "internal" extensions, but it does in fact recognize any extension.

    To run the tool, use the following command:

        tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete

    Shrinking the database

    You can run a shrink database command after the TAC has run in cases where a lot of data has been deleted. In such a case you SHOULD do it, to free up all that space. But after the shrink operation you should rebuild the indexes, since the shrink operation leaves the database in a very fragmented state, which will reduce performance. Note that you need to rebuild the indexes; reorganizing is not enough.

    For smaller amounts of data you should NOT shrink the database, since the space will be reused by the SQL Server when it needs to add more records. In fact, it is regarded as bad practice to shrink the database regularly, so on a daily maintenance schedule you should NOT shrink it.

    To shrink the database, run a DBCC SHRINKDATABASE command and then follow up by rebuilding the indexes. I find the easiest way to do this is to create a SQL maintenance plan including the Shrink Database task and the Rebuild Index task, and just execute it when needed.
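    To round this off: since the TAC handles one team project per invocation, a scheduled cleanup across a whole collection needs a small wrapper script. Below is a minimal sketch as a Windows batch file; the collection URL, team project names, and file paths are placeholders to adapt to your own environment.

        @echo off
        rem Sketch: run the Test Attachment Cleaner once per team project.
        rem Assumes tcmpt.exe is on the PATH; URL, project names and paths are placeholders.
        set COLLECTION=http://your_tfs_server:8080/tfs/DefaultCollection

        for %%P in ("Project A" "Project B" "Project C") do (
            tcmpt attachmentcleanup /collection:%COLLECTION% /teamproject:"%%~P" /settingsfile:C:\tac\cleanup.xml /outputfile:"%temp%\%%~P.tcmpt.log" /mode:delete
        )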

    Read the article

  • vim does not preserve symlink over sshfs

    - by HighCommander4
    I'm having some trouble with symlinks and sshfs. I use the -o follow_symlinks option to follow symlinks on the server side, but whenever I edit a symlinked file on the client side with vim, a copy of it is made on the server side, i.e. it's no longer a symlink.

    Set up a symlink on the server side:

        me@machine1:~$ echo foo > test.txt
        me@machine1:~$ mkdir test
        me@machine1:~$ cd test
        me@machine1:~/test$ ln -s ../test.txt test.txt
        me@machine1:~/test$ ls -al test.txt
        lrwxrwxrwx 1 me me 11 Jan 5 21:13 test.txt -> ../test.txt
        me@machine1:~/test$ cat test.txt
        foo
        me@machine1:~/test$ cat ../test.txt
        foo

    So far so good. Now:

        me@machine2:~$ mkdir test
        me@machine2:~$ sshfs me@machine1:test test -o follow_symlinks
        me@machine2:~$ cd test
        me@machine2:~/test$ vim test.txt
        [in vim, add a new line "bar" to the file]
        me@machine2:~/test$ cat test.txt
        foo
        bar

    Now observe what this does to the file on the server side:

        me@machine1:~/test$ ls -al test.txt
        -rw-r--r-- 1 me me 19 Jan 5 21:24 test.txt
        me@machine1:~/test$ cat test.txt
        foo
        bar
        me@machine1:~/test$ cat ../test.txt
        foo

    As you can see, it made a copy and only edited the copy. How can I get it to actually follow the symlink when editing the file?
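    One likely culprit, independent of sshfs: by default vim may write a file by renaming the original and creating a new file in its place, which replaces a symlink with a regular file. A setting worth trying in ~/.vimrc on the client (a hedged suggestion, not a confirmed fix for this exact setup):

        " Make vim overwrite files in place instead of rename-and-recreate,
        " which preserves symlinks (and hard links) on write.
        set backupcopy=yes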

    Read the article

  • How to list all duplicated rows which may include NULL columns?

    - by Yousui
    Hi guys, I have a problem listing duplicated rows when NULL columns are involved. Let me show the problem first:

        USE [tempdb];
        GO
        IF OBJECT_ID(N'dbo.t') IS NOT NULL
        BEGIN
            DROP TABLE dbo.t
        END
        GO
        CREATE TABLE dbo.t (a NVARCHAR(8), b NVARCHAR(8));
        GO
        INSERT t VALUES ('a', 'b');
        INSERT t VALUES ('a', 'b');
        INSERT t VALUES ('a', 'b');
        INSERT t VALUES ('c', 'd');
        INSERT t VALUES ('c', 'd');
        INSERT t VALUES ('c', 'd');
        INSERT t VALUES ('c', 'd');
        INSERT t VALUES ('e', NULL);
        INSERT t VALUES (NULL, NULL);
        INSERT t VALUES (NULL, NULL);
        INSERT t VALUES (NULL, NULL);
        INSERT t VALUES (NULL, NULL);
        GO

    Now I want to show all rows that have other rows duplicated with them, so I use the following query:

        SELECT a, b
        FROM dbo.t
        GROUP BY a, b
        HAVING count(*) > 1

    which gives us the result:

        a        b
        -------- --------
        NULL     NULL
        a        b
        c        d

    Now, to list all rows that contribute to a duplication, I use this query:

        WITH duplicate (a, b) AS
        (
            SELECT a, b
            FROM dbo.t
            GROUP BY a, b
            HAVING count(*) > 1
        )
        SELECT dbo.t.a, dbo.t.b
        FROM dbo.t
        INNER JOIN duplicate ON (dbo.t.a = duplicate.a AND dbo.t.b = duplicate.b)

    which gives me:

        a        b
        -------- --------
        a        b
        a        b
        a        b
        c        d
        c        d
        c        d
        c        d

    As you can see, all rows containing NULLs are filtered out. The reason, I think, is that I use the equals sign in the join condition (dbo.t.a = duplicate.a AND dbo.t.b = duplicate.b), and NULLs cannot be compared using the equals sign. So, in order to include the rows containing NULLs in the final result, I changed the query to:

        WITH duplicate (a, b) AS
        (
            SELECT a, b
            FROM dbo.t
            GROUP BY a, b
            HAVING count(*) > 1
        )
        SELECT dbo.t.a, dbo.t.b
        FROM dbo.t
        INNER JOIN duplicate ON (dbo.t.a = duplicate.a AND dbo.t.b = duplicate.b)
            OR (dbo.t.a IS NULL AND duplicate.a IS NULL AND dbo.t.b = duplicate.b)
            OR (dbo.t.b IS NULL AND duplicate.b IS NULL AND dbo.t.a = duplicate.a)
            OR (dbo.t.a IS NULL AND duplicate.a IS NULL AND dbo.t.b IS NULL AND duplicate.b IS NULL)

    And this query gives me the answer I wanted:

        a        b
        -------- --------
        NULL     NULL
        NULL     NULL
        NULL     NULL
        NULL     NULL
        a        b
        a        b
        a        b
        c        d
        c        d
        c        d
        c        d

    Now my question is: as you can see, this query involves just two columns, yet to include the NULL rows in the final result you already need many condition-testing clauses. As the number of columns increases, the number of conditions grows astonishingly. How can I solve this problem? Great thanks.
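    A sketch of one way around the combinatorial ON clause, assuming SQL Server 2005 or later: COUNT(*) OVER (PARTITION BY ...) groups NULLs together, since PARTITION BY, like GROUP BY, treats NULLs as equal, so no explicit NULL handling is needed no matter how many columns are involved:

        SELECT a, b
        FROM (
            SELECT a, b,
                   COUNT(*) OVER (PARTITION BY a, b) AS cnt  -- NULLs fall in the same partition
            FROM dbo.t
        ) AS x
        WHERE cnt > 1;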

    Read the article

  • Eclipse JUnit Plugin Test very slow to re-execute Test Suite on Windows

    - by soundasleepful
    I'm having an odd and stressful problem with running a large JUnit plugin test suite in Eclipse. When I try to re-run a JUnit plugin suite that has just been executed, Eclipse hangs for quite some time before it eventually wakes up and launches. It can take up to 5 minutes sometimes, and the delay increases with the size of the suite. Visually, it looks like a GC cleanup, except that I have plenty of GC space available (400 MB freely allocated). The size of the workspace that it has to delete is well under 1 GB, and there are not too many files – definitely fewer than 20,000. While I was waiting for a new run to start, I decided to manually kill explorer.exe to see if it had any effect. Surprisingly, Eclipse instantly fell out of its freeze and ran as normal. This makes me think that Windows is somehow interfering with the deletion of these workspace files. They're not being put into the Recycle Bin, though. The workspace is in C:, which I think rules out any workspace/domain redirection. Any ideas?

    Read the article

  • How to feature-detect/test for specific jQuery (and Javascript) methods/functions used

    - by Zildjoms
    Good day everyone, hope you're all doing awesome. I'm very new to JavaScript and jQuery, and I (think I) have come up with a simple fade-in/out implementation on a site I'm working on (check out http://www.s5ent.com/expandjs.html – if you have the time to check it for inefficiency or whatnot, that'd be real sweet). I use the following functions/methods/collections, and I would like to do a feature test before using them. How? Or is there a better way to go about this?

    jQuery:

        $
        .fadeIn([duration])
        .fadeOut([duration])
        .attr(attributeName, value)
        .append(content)
        .each(function(index, Element))
        .css(propertyName, value)
        .hover(handlerIn(eventObject), handlerOut(eventObject))
        .stop([clearQueue], [jumpToEnd])
        .parent()
        .eq(index)

    JavaScript:

        setInterval(expression, timeout)
        clearInterval(timeoutId)
        setTimeout(expression, timeout)
        clearTimeout(timeoutId)

    I tried looking into jQuery.support for the jQuery ones, but I ran into conceptual problems with it: for fadeIn/fadeOut, I (think I) should test for $.support.opacity, but that would be false in IE, whereas IE6+ can still render the fades fairly well. Also, I'm using jQuery 1.2.6 because that's enough for what I need, and the support object only arrived in 1.3, so I'm hoping to avoid dragging in more unnecessary code if I can. I also tried browser sniffing, however frowned-upon, but that's an even bigger problem for me because of non-standard UA strings, spoofing, and everything else I'm not aware of. So how do you guys think I should go about this? Or should I even bother? Is there a better way to make sure I don't run code that will eventually break the page? I've set it up to degrade into a CSS hover when JavaScript isn't there. Expertise needed. Much appreciated, thanks guys!
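    For what it's worth, the classic way to feature-detect specific methods is simply to check that each name is a function before using it – a minimal sketch (the fallback branch is just illustrative):

        // Check the jQuery methods you rely on exist on jQuery's prototype,
        // and the timer functions exist on window, before wiring up the effect.
        var hasNeededFeatures =
            typeof jQuery === "function" &&
            typeof jQuery.fn.fadeIn === "function" &&
            typeof jQuery.fn.fadeOut === "function" &&
            typeof jQuery.fn.hover === "function" &&
            typeof window.setInterval === "function" &&
            typeof window.clearInterval === "function";

        if (hasNeededFeatures) {
            // safe to initialize the fade effect here
        } else {
            // leave the CSS :hover fallback in place
        }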

    Read the article

  • rake test not copying development postgres db with sequences

    - by Robert Crida
    I am trying to develop a Rails application on PostgreSQL, using a sequence to increment a field instead of a default Ruby approach based on validates_uniqueness_of. This has proved challenging for a number of reasons:

    1. This is a migration of an existing table, not a new table or column.
    2. Using the parameter :default => "nextval('seq')" didn't work, because it tries to set the default in parentheses.
    3. Eventually I got the migration working in two steps:

        change_column :work_commencement_orders, :wco_number_suffix, :integer, :null => false#, :options => "set default nextval('wco_number_suffix_seq')"
        execute %{
          ALTER TABLE work_commencement_orders
          ALTER COLUMN wco_number_suffix SET DEFAULT nextval('wco_number_suffix_seq');
        }

    This appears to have done the correct thing in the development database, and the schema looks like:

        wco_number_suffix | integer | not null default nextval('wco_number_suffix_seq'::regclass)

    However, the tests are failing with:

        PGError: ERROR: null value in column "wco_number_suffix" violates not-null constraint
        : INSERT INTO "work_commencement_orders" ("expense_account_id", "created_at", "process_id", "vo2_issued_on", "wco_template", "updated_at", "notes", "process_type", "vo_number", "vo_issued_on", "vo2_number", "wco_type_id", "created_by", "contractor_id", "old_wco_type", "master_wco_number", "deadline", "updated_by", "detail", "elective_id", "authorization_batch_id", "delivery_lat", "delivery_long", "operational", "state", "issued_on", "delivery_detail") VALUES(226, '2010-05-31 07:02:16.764215', 728, NULL, E'Default', '2010-05-31 07:02:16.764215', NULL, E'Procurement::Process', NULL, NULL, NULL, 226, NULL, 276, NULL, E'MWCO-213', '2010-06-14 07:02:16.756952', NULL, E'Name 4597', 220, NULL, NULL, NULL, 'f', E'pending', NULL, E'728 Test Road; Test Town; 1234; Test Land') RETURNING "id"

    The explanation can be found by inspecting the schema of the test database:

        wco_number_suffix | integer | not null

    So what happened to the default? I tried adding

        test:
          template: smmt_ops_development

    to the database.yml file, which has the effect of issuing

        create database smmt_ops_test template = "smmt_ops_development" encoding = 'utf8'

    I have verified that if I issue this statement myself, it does in fact copy the nextval default. So clearly Rails is doing something after that to suppress it again. Any suggestions as to how to fix this?

    Thanks, Robert
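    One avenue worth checking, as a hedged suggestion: by default, rake db:test:prepare rebuilds the test database from db/schema.rb, and the Ruby schema dumper cannot represent a function default like nextval(...), so the default gets dropped on the way. Switching to the SQL structure dump preserves database-specific defaults:

        # config/environment.rb (Rails 2.3), inside the Rails::Initializer.run block:
        # dump the schema as SQL instead of Ruby, so db:test:prepare clones the
        # structure (including sequence defaults) rather than replaying schema.rb.
        config.active_record.schema_format = :sql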

    Read the article

  • How to order by column with non-null values first in sql

    - by devlife
    I need to write a SQL statement to select all users ordered by lastname, firstname. This is the part I know how to do :) What I don't know how to do is to order by non-null values first. Right now I get this:

        null, null
        null, null
        p1Last, p1First
        p2Last, p2First
        etc

    I need to get:

        p1Last, p1First
        p2Last, p2First
        null, null
        null, null

    Any thoughts?
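    A common way to do this in standard SQL is to sort on a NULL flag first, then on the names (the users table name is assumed here; the column names are taken from the question):

        SELECT lastname, firstname
        FROM users
        ORDER BY
            CASE WHEN lastname IS NULL THEN 1 ELSE 0 END,  -- non-null rows first
            lastname,
            firstname;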

    Read the article

  • Problem in Saving Multi Level Models in YII

    - by Waqar
    My table structure for a user and his address detail is as follows:

        CREATE TABLE tbl_users (
          id bigint(20) unsigned NOT NULL AUTO_INCREMENT,
          loginname varchar(128) NOT NULL,
          enabled enum("True","False"),
          approved enum("True","False"),
          password varchar(128) NOT NULL,
          email varchar(128) NOT NULL,
          role_id int(20) NOT NULL DEFAULT '2',
          name varchar(70) NOT NULL,
          co_type enum("S/O","D/O","W/O") DEFAULT "S/O",
          co_name varchar(70),
          gender enum("MALE","FEMALE","OTHER") DEFAULT "MALE",
          dob date DEFAULT NULL,
          maritalstatus enum("SINGLE","MARRIED","DIVORCED","WIDOWER") DEFAULT "MARRIED",
          occupation varchar(100) DEFAULT NULL,
          occupationtype_id int(20) DEFAULT NULL,
          occupationindustry_id int(20) DEFAULT NULL,
          contact_id bigint(20) unsigned DEFAULT NULL,
          signupreason varchar(500),
          PRIMARY KEY (id),
          UNIQUE KEY loginname (loginname),
          UNIQUE KEY email (email),
          FOREIGN KEY (role_id) REFERENCES tbl_roles (id),
          FOREIGN KEY (occupationtype_id) REFERENCES tbl_occupationtypes (id),
          FOREIGN KEY (occupationindustry_id) REFERENCES tbl_occupationindustries (id),
          FOREIGN KEY (contact_id) REFERENCES tbl_contacts (id)
        ) ENGINE=InnoDB;

        CREATE TABLE tbl_contacts (
          id bigint(20) unsigned NOT NULL AUTO_INCREMENT,
          contact_type enum("cres","pres","coff"),
          address varchar(300) DEFAULT NULL,
          landmark varchar(100) DEFAULT NULL,
          district_id int(11) DEFAULT NULL,
          city_id int(20) DEFAULT NULL,
          state_id int(20) DEFAULT NULL,
          pin_id bigint(20) unsigned DEFAULT NULL,
          area_id bigint(20) unsigned DEFAULT NULL,
          po_id bigint(20) unsigned DEFAULT NULL,
          phone1 varchar(20) DEFAULT NULL,
          phone2 varchar(20) DEFAULT NULL,
          mobile1 varchar(20) DEFAULT NULL,
          mobile2 varchar(20) DEFAULT NULL,
          PRIMARY KEY (id),
          FOREIGN KEY (district_id) REFERENCES tbl_districts (id),
          FOREIGN KEY (city_id) REFERENCES tbl_cities (id),
          FOREIGN KEY (state_id) REFERENCES tbl_states (id)
        ) ENGINE=InnoDB;

        CREATE TABLE tbl_states (
          id int(20) NOT NULL AUTO_INCREMENT,
          name varchar(70) DEFAULT NULL,
          PRIMARY KEY (id)
        ) ENGINE=InnoDB;

        CREATE TABLE tbl_districts (
          id int(20) NOT NULL AUTO_INCREMENT,
          name varchar(70) DEFAULT NULL,
          state_id int(20) DEFAULT NULL,
          PRIMARY KEY (id),
          FOREIGN KEY (state_id) REFERENCES tbl_states (id)
        ) ENGINE=InnoDB;

        CREATE TABLE tbl_cities (
          id int(20) NOT NULL AUTO_INCREMENT,
          name varchar(70) DEFAULT NULL,
          district_id int(20) DEFAULT NULL,
          state_id int(20) DEFAULT NULL,
          PRIMARY KEY (id),
          FOREIGN KEY (district_id) REFERENCES tbl_districts (id),
          FOREIGN KEY (state_id) REFERENCES tbl_states (id)
        ) ENGINE=InnoDB;

    The relationship is as follows: a user has multiple contacts, i.e. a permanent address, a current address, and an office address. Each contact has a state and a city (User-Contact-State, and so on). How do I save models of this structure in one go? Please provide a reply ASAP.
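    A minimal sketch of one way to save a parent-child pair like this in Yii 1.x: save the contact first, wire its id into the user, and wrap both saves in a transaction so a failure rolls everything back (the model variable names are illustrative):

        $transaction = Yii::app()->db->beginTransaction();
        try {
            // Save the address detail first so we have its primary key.
            if (!$contact->save()) {
                throw new CException('Contact validation failed');
            }

            // Point the user at the freshly saved contact, then save the user.
            $user->contact_id = $contact->id;
            if (!$user->save()) {
                throw new CException('User validation failed');
            }

            $transaction->commit();
        } catch (Exception $e) {
            $transaction->rollback();
            throw $e;
        }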

    Read the article

  • executing null values records

    - by jjj
    I am trying to select the records that have a NULL TotalTime value from the table NewTimeAttendance (TotalTime has datatype nchar(10)):

        select * from newtimeattendance where TotalTime = 'NULL'  -- nothing
        select * from newtimeattendance where TotalTime = 'null'  -- nothing
        select * from newtimeattendance where TotalTime = 'Null'  -- nothing
        select * from newtimeattendance where TotalTime = null    -- nothing
        select * from newtimeattendance where TotalTime = Null    -- nothing
        select * from newtimeattendance where TotalTime = NULL    -- nothing

    When I select the whole table I can see that there are some NULL TotalTime values! It is a small select statement, so why doesn't it work? Is there a (special) way to match NULL with the nchar type? Thanks in advance.
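    The usual fix: NULL is not a value you can compare with =, since any comparison with NULL yields UNKNOWN; SQL provides IS NULL for this. A short sketch against the table from the question:

        -- Rows where the column really is NULL:
        SELECT * FROM newtimeattendance WHERE TotalTime IS NULL;

        -- If some rows instead store the literal text 'NULL' in the nchar(10)
        -- column, trim the padding before comparing (an assumption worth checking):
        SELECT * FROM newtimeattendance WHERE RTRIM(TotalTime) = 'NULL';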

    Read the article

  • How fast are my services? Comparing basicHttpBinding and ws2007HttpBinding using the SO-Aware Test Workbench

    - by gsusx
    When working on real-world WCF solutions, we become pretty aware of the performance implications of the binding and behavior configuration of WCF services. However, while it's a known fact that different binding and behavior configurations directly affect the performance of WCF services, developers often struggle to figure out the real performance behavior of their services. We can attribute this to the lack of tools for correctly testing the performance characteristics of WCF services... (read more)

    Read the article

  • MiniMax function throws null pointer exception

    - by Sven
    I'm working on a school project: I have to build a tic-tac-toe game with an AI based on the MiniMax algorithm. The two-player mode works like it should. I followed the code example at http://ethangunderson.com/blog/minimax-algorithm-in-c/. The only thing is that I get a NullPointerException when I run the code, and I can't put my finger on why. I placed a comment in the code where the exception is thrown. The recursive call is returning a null pointer, which is very strange because it can't. When I place a breakpoint on the null return with the help of an if statement, I see that there ARE still 2 to 3 empty places. I'm probably overlooking something. Hope someone can tell me what I'm doing wrong. Here is the MiniMax code (the tic-tac-toe code is not important):

        /*
         * To change this template, choose Tools | Templates
         * and open the template in the editor.
         */
        package MiniMax;

        import Game.Block;
        import Game.Board;
        import java.util.ArrayList;

        public class MiniMax {

            public static Place getBestMove(Board gameBoard, Block.TYPE player) {
                Place bestPlace = null;
                ArrayList<Place> emptyPlaces = gameBoard.getEmptyPlaces();
                Board newBoard;

                // loop through all the empty places
                for (Place emptyPlace : emptyPlaces) {
                    newBoard = gameBoard.clone();
                    newBoard.setBlock(emptyPlace.getRow(), emptyPlace.getCell(), player);

                    // no game won and still room to move
                    if (newBoard.getWinner() == Block.TYPE.NONE && newBoard.getEmptyPlaces().size() > 0) {
                        // is a node (has children)
                        Place tempPlace = getBestMove(newBoard, invertPlayer(player)); // ERROR is thrown here! tempPlace is null.
                        emptyPlace.setScore(tempPlace.getScore());
                    } else {
                        // is a leaf
                        if (newBoard.getWinner() == Block.TYPE.NONE) {
                            emptyPlace.setScore(0);
                        } else if (newBoard.getWinner() == Block.TYPE.X) {
                            emptyPlace.setScore(-1);
                        } else if (newBoard.getWinner() == Block.TYPE.O) {
                            emptyPlace.setScore(1);
                        }

                        // if this move is better than our prev move, take it!
                        if ((bestPlace == null)
                                || (player == Block.TYPE.X && emptyPlace.getScore() < bestPlace.getScore())
                                || (player == Block.TYPE.O && emptyPlace.getScore() > bestPlace.getScore())) {
                            bestPlace = emptyPlace;
                        }
                    }
                }

                // This should never be null, but it is..
                return bestPlace;
            }

            private static Block.TYPE invertPlayer(Block.TYPE player) {
                if (player == Block.TYPE.X) {
                    return Block.TYPE.O;
                }
                return Block.TYPE.X;
            }
        }
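    Reading the formatted code, one likely cause stands out: the bestPlace comparison sits inside the else (leaf) branch, so on a board where every empty place leads to a non-terminal position, the loop never assigns bestPlace and the recursive call returns null. A hedged sketch of the fix – this fragment would replace the comparison inside the loop, placed after the if/else so it runs for nodes and leaves alike:

        // At the end of the loop body, after emptyPlace's score has been set
        // (in both the node and the leaf case), compare unconditionally:
        if ((bestPlace == null)
                || (player == Block.TYPE.X && emptyPlace.getScore() < bestPlace.getScore())
                || (player == Block.TYPE.O && emptyPlace.getScore() > bestPlace.getScore())) {
            bestPlace = emptyPlace;
        }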

    Read the article

  • SQL: empty string vs NULL value

    - by Jacek Prucia
    I know this subject is a bit controversial, and there are a lot of articles and opinions floating around the internet. Unfortunately, most of them assume the reader doesn't know the difference between NULL and an empty string, so they tell stories about surprising results with joins/aggregates and generally give slightly more advanced SQL lessons. By doing this, they absolutely miss the whole point and are therefore useless to me. So hopefully this question and its answers will move the subject forward a bit.

    Let's suppose I have a table with personal information (name, birth, etc.) where one of the columns is an email address of varchar type. We assume that, for some reason, some people might not want to provide an email address. When inserting such data (without an email) into the table, there are two available choices: set the cell to NULL or set it to an empty string (''). Let's assume that I'm aware of all the technical implications of choosing one solution over the other, and that I can create correct SQL queries for either scenario. The problem is that even though the two values differ on the technical level, they are exactly the same on the logical level. After looking at NULL and '' I came to a single conclusion: I don't know the email address of the guy. Also, no matter how hard I tried, I was not able to send an e-mail using either NULL or an empty string, so apparently most SMTP servers out there agree with my logic. So I tend to use NULL where I don't know the value, and I consider the empty string a bad thing.

    After some intense discussions with colleagues, I came up with two questions:

    1. Am I right in assuming that using an empty string for an unknown value causes the database to "lie" about the facts? To be more precise: using SQL's idea of what is a value and what is not, I might conclude "we have an e-mail address, just by finding out it is not NULL", but then later on, when trying to send the e-mail, I'll come to the contradictory conclusion: "no, we don't have an e-mail address; that @!#$ database must have been lying!"

    2. Is there any logical scenario in which an empty string '' could be such a good carrier of important information (besides value and no value) that it would be troublesome or inefficient to store it any other way (like an additional column)? I've seen many posts claiming that sometimes it's good to use an empty string along with real values and NULLs, but so far I haven't seen a single scenario that would be logical (in terms of SQL/DB design).

    P.S. Some people will be tempted to answer that it is just a matter of personal taste. I don't agree. To me it is a design decision with important consequences. So I'd like to see answers in which opinion is backed by logical and/or technical reasons.

    Read the article

  • Should we test all our methods?

    - by Zenzen
    So today I had a talk with my teammate about unit testing. The whole thing started when he asked me "hey, where are the tests for that class, I see only one?". The whole class was a manager (or a service, if you prefer to call it that), and almost all of its methods simply delegated to a DAO, similar to:

        SomeClass getSomething(parameters) {
            return myDao.findSomethingBySomething(parameters);
        }

    A kind of boilerplate with no logic (or at least I do not consider such simple delegation logic), but a useful boilerplate in most cases (layer separation etc.). And we had a rather lengthy discussion about whether or not I should unit test it (I think it is worth mentioning that I did fully unit test the DAO). His main arguments were that it was not TDD (obviously), that someone might want to see the test to check what this method does (I do not know how it could be more obvious), and that in the future someone might want to change the implementation and add new (or more like "any") logic to it (in which case I guess someone should simply test that logic).

    This made me think, though. Should we strive for the highest test coverage percentage? Or is it simply art for art's sake? I simply do not see any reason behind testing things like:

    - getters and setters (unless they actually have some logic in them)
    - "boilerplate" code

    Obviously a test for such a method (with mocks) would take me less than a minute, but I guess that is still time wasted, and a millisecond longer for every CI run. Are there any rational, non-flammable reasons why one should test every single line of code (or as many as one can)?

    Read the article

  • RasterizerState set to null after calling DrawText in Nuclex

    - by ProgrammerAtWork
    I have the following code in XNA:

        // class members
        Text t1;
        Text t2;
        Text t3;

        // init
        // DebugFont24 is a size-24 vector font
        t1 = MM.DebugFont24.Fill("hello");
        t1 = MM.DebugFont24.Extrude("hello");
        t2 = MM.DebugFont24.Fill("hello");
        t2 = MM.DebugFont24.Extrude("hello");
        t3 = MM.DebugFont24.Fill("hello");
        t3 = MM.DebugFont24.Extrude("hello");

        // Draw
        TextBatch test = new TextBatch(MM.GD);
        test.DrawText(t1, Color.Red);
        test.DrawText(t2, Color.Red);
        test.DrawText(t3, Color.Red);
        test.End();

    After the second call to the TextBatch, the RasterizerState of the GraphicsDevice is set to null, but I don't get any runtime errors or any indication that something is wrong. Is this supposed to happen, or am I doing something wrong?

    Update: I've discovered that this happened because culling was set to None when I was rendering textures.

    Read the article

  • Microsoft Test Manager from Visual Studio 2010 Ultimate

    - by LeeHull
    Does anyone know the OS requirements for this? The official system requirements mention Windows 7, but after spending the time installing and configuring and finally getting it up and running, I went to the Testing Center and got this message: "The Test Management features are not supported on Windows Home Premium." I now have to upgrade my Windows, but I want to be sure which edition is required.

    Read the article
