Search Results

Search found 6441 results on 258 pages for 'schema compare'.

Page 57/258

  • HOWTO - Compare a date string to datetime in SQL Server?

    - by Guy
    In SQL Server I have a DATETIME column which includes a time element. Example: '14 AUG 2008 14:23:019'. What is the best method to select only the records for a particular day, ignoring the time part? Example (not safe, as it does not match the time part and returns no rows):
    DECLARE @p_date DATETIME
    SET @p_date = CONVERT( DATETIME, '14 AUG 2008', 106 )
    SELECT * FROM table1 WHERE column_datetime = @p_date
    Note: Given this site is also about jotting down notes and techniques you pick up and then forget, I'm going to post my own answer to this question, as DATETIME handling in MSSQL is probably the topic I look up most in SQLBOL. Update: Clarified the example to be more specific. Edit: Sorry, but I've had to down-mod WRONG answers (answers that return wrong results). @Jorrit: WHERE (date>'20080813' AND date<'20080815') will return the 13th and the 14th. @wearejimbo: Close, but no cigar! Badge awarded to you. You missed records written at 14/08/2008 23:59:001 to 23:59:999 (i.e. less than 1 second before midnight).
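    A minimal sketch of the usual safe pattern, reusing the table1 and column_datetime names from the question above: compare against a half-open date range so that every time of day on 14 AUG 2008 matches and an index on the column can still be used.
    DECLARE @day DATETIME
    SET @day = CONVERT(DATETIME, '14 AUG 2008', 106)

    SELECT *
    FROM table1
    WHERE column_datetime >= @day                    -- midnight at the start of the day
      AND column_datetime < DATEADD(day, 1, @day)    -- strictly before midnight of the next day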

    Read the article

  • How do Scala parser combinators compare to Haskell's Parsec?

    - by artif
    I have read that Haskell parser combinators (in Parsec) can parse context-sensitive grammars. Is this also true for Scala parser combinators? If so, is this what the "into" (a.k.a. ">>") function is for? What are some strengths/weaknesses of Scala's implementation of parser combinators versus Haskell's? Do they accept the same class of grammars? Is it easier to generate error messages or do other miscellaneous useful things with one or the other? How does packrat parsing (introduced in Scala 2.8) fit into this picture? Is there a webpage or some other resource that shows how the different operators/functions/DSL sugar from one language's implementation map onto the other's?

    Read the article

  • Free tools/libraries to compare tables with filtering for SQL Server 2008 and visualize/sync diffs t

    - by MicMit
    I am building a GUI in C# for a content manager and am looking for tools, code snippets or open libraries (code ideally in C#) which allow me the following:
    1. For table A in database X (test) and table A in database Y (production), and for a simple filter (e.g. listname = "XYZ"), I need to show additions/deletions/updates in some way, which might be side-by-side or just an HTML report, e.g. "2 records added" with an HTML table of some fields and "2 records deleted" with an HTML table of some fields. Considering that this task is very common, I guess certain components should exist? Components would either return some collections, from the parameters given, for further visualizing, or just produce the reports mentioned above.
    2. I need to push changes for the filter I mentioned in 1 and update the table in the production database for this filter only (i.e. for the particular list approved by the content person). Again, there are probably certain SQL code generators - components in addition to the diffs, or standalone.
    3. The key thing: the tools/libraries should be suitable for integration with the existing application in C#.

    Read the article

  • Microsoft Sync Framework - Local DB and Remote DB have to have the same schema?

    - by Josh
    When using MSF, is it implied in the technology that the sync tables are supposed to be 1-1? The reason I'm wondering is that if I'm synching from a SQL2005 database to a SQLCE, I might want the CE one to be a little more flattened out so I can get data out with a simpler SELECT statement (as CE does not support sprocs). For example, I might have a tblCustomer, tblOrder, and tblCustomerOrder in the central database, but in the local databases one table with all the data might be preferred. Of course I'd still want the updates to reflect back and forth between the two databases. Does MSF make this possible, or does the local DB have to have the same tables as the central?
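    For illustration only, a sketch of the flattened shape the question describes, using the tblCustomer/tblOrder/tblCustomerOrder names from the question and hypothetical column names: the central store stays normalized, while the local SQL CE database would hold one denormalized table that a plain SELECT can read without joins.
    -- Central (normalized) tables, as named in the question:
    --   tblCustomer(CustomerId, CustomerName, ...)
    --   tblOrder(OrderId, OrderDate, ...)
    --   tblCustomerOrder(CustomerId, OrderId)

    -- Flattened table the local CE database would prefer
    CREATE TABLE LocalCustomerOrder (
        CustomerId   INT           NOT NULL,
        CustomerName NVARCHAR(100) NOT NULL,
        OrderId      INT           NOT NULL,
        OrderDate    DATETIME      NOT NULL
    );

    -- The join that produces that shape from the central tables
    SELECT c.CustomerId, c.CustomerName, o.OrderId, o.OrderDate
    FROM tblCustomer c
    JOIN tblCustomerOrder co ON co.CustomerId = c.CustomerId
    JOIN tblOrder o          ON o.OrderId = co.OrderId;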

    Read the article

  • Is there an easy way to compare if 2 XDocuments are equal ignoring element/attribute order?

    - by Davy8
    Unit testing my serialization code, I found one test failed because I had attributes listed in a different order (I'm just comparing the XDocument.ToString() values), and while I could fix that, it really doesn't matter to me in what order the elements or attributes appear, as long as they're all there with the right name at the right level of the hierarchy. I could probably write a method to do this, but I'm wondering if there's an easy built-in way I'm not aware of.

    Read the article

  • What to do after a servicing fails on TFS 2010

    - by Martin Hinshelwood
    What do you do if you run a couple of hotfixes against your TFS 2010 server and start to see some odd behaviour? A customer of mine encountered that very problem, but they could not just, or at least not easily, go back a version.
    You see, around the time of the TFS 2010 launch this company decided to upgrade their entire 250+ development team from TFS 2008 to TFS 2010. They encountered a few problems, owing mainly to the size of their TFS deployment and the way they were using TFS. They were not doing anything wrong, but when you have the largest deployment of TFS outside of Microsoft you tend to run into problems that most people will never encounter. We are talking half a terabyte of source control in TFS with over 80 proxy servers. It's certainly the largest deployment I have ever heard of. When they did their upgrade way back in April, they found two major flaws in the product that meant they had to back out of the upgrade and wait for a couple of hotfixes:
    KB983504 – Hotfix
    KB983578 – Patch
    KB2401992 – Hotfix
    In the time since they got the hotfixes they have run 6 successful trial migrations, but we are not talking minutes or hours here. When you have 400+ GB of data it takes time to copy it around, it takes time to do the upgrade, and it takes time to do a backup. Well, last week it was crunch time: with their developers off for Christmas they had a window of opportunity to complete the upgrade. Now these guys are good, but they wanted Northwest Cadence to be available "just in case". They did not expect any problems, as they already had 6 successful trial upgrades.
    The problems surfaced around 20 hours in, after the first set of hotfixes had been applied. The new Team Project Collection, the only thing of importance, had disappeared from the Team Foundation Server Administration console. The collection would not reattach either. It would not even list the new collection as attachable!
    Figure: We know there is a database there, but it does not
    This was a dire situation, as 20+ hours to repeat would leave the customer over time, with 250+ developers sitting around doing nothing. We tried everything, and then we stumbled upon the command of last resort:
    TFSConfig Recover /ConfigurationDB:SQLServer\InstanceName;TFS_ConfigurationDBName /CollectionDB:SQLServer\instanceName;"Collection Name"
    (http://msdn.microsoft.com/en-us/library/ff407077.aspx)
    WARNING: Never run this command!
    Now this command does something a little nasty. It assumes that there really should not be anything wrong and sets about fixing it. It ignores any servicing levels in the Team Project Collection database and forcibly applies the latest version of the schema. I am sure you can imagine the types of problems this may cause when the schema is updated but the data is left behind. That said, as far as we could see this collection looked good, and we were even able to find and attach the team project collection to the Configuration database.
    Figure: After attaching the TPC it enters a servicing mode
    After reattaching the team project collection we found the message "Re-Attaching". Well, fair enough, that sounds like something that may need to happen, and after checking that there was disk IO we left it to it. 14+ hours later it was still not done, so the customer raised a priority support call with MSFT and an engineer helped them out.
    Figure: Everything looks good, it is just offline.
    Tip: Did you know that these logs are not represented in the ~/Logs/* folder until they are opened once?
    The engineer dug around a bit and listened to our situation. He knew that we had run the dreaded "tfsconfig recover", but was not fazed.
    Figure: This message looks suspiciously like the wrong servicing version
    As it turns out, the servicing version was slightly out of sync with the schema.
    KB           Schema   Successful
    KB983504     341      Yes
    KB983578     344      sort of
    KB2401992    360      nope
    Figure: KB/Schema table with notation of each one's success
    The Schema version above represents the final end-of-run version for that hotfix or patch.
    The only way forward
    The problem was that the version was somewhere between 341 and 344. This is not a nice place to be in, and the engineer gave us the only way forward: remove the servicing number from the database so that the re-attach process would apply the latest schema. If this sounds a little like the "tfsconfig recover" command then you are exactly right.
    Figure: Sneakily changing that 3 to a 1 should do the trick
    Figure: Changing the status and dropping the version should do it
    Now that we have done that we should be able to safely reattach and enable the Team Project Collection.
    Figure: The TPC is now all attached and running
    You may think that this is the end of the story, but it is not. After a while of mulling and seeking expert advice we came to the opinion that the database was, for want of a better term, "hosed". There could well be orphaned data in there, and the likelihood that we would have problems later down the line is pretty high. We went back to the customer and made them aware that in all likelihood the repaired database was more like a "cut and shut" than anything else, and at the first sign of trouble later down the line it was likely to split in two. So with 40+ hours invested in getting this new database ready, the customer threw it away and started again. What would you do? Would you take the "cut and shut" to production and hope for the best?

    Read the article

  • What is the fastest way to compare two lists of items?

    - by edude05
    I have two folders with approximately 10,000 files each. I'd like to write a script or program that can tell me if these folders are in sync, and if not, tell me which files are missing from each to make them in sync. Therefore, after generating a list of files, what is the fastest algorithm to sort out the unique files? What I'm thinking right now is comparing the first file on each list and, if they are different, removing one until they are the same, then removing both from the lists (because they are not unique). Is there a faster algorithm than this?

    Read the article

  • When doing a Schema Export with hbm2ddl, is there a way to specify that you DO NOT want Nullable Foreign Keys?

    - by Jon Erickson
    The DDL that is being created is putting all of my many-to-many associations into one table, but I actually want each many-to-many association in its own table (for other reasons). Right now hbm2ddl is creating this table (only Table1Key OR Table2Key OR Table3Key should be filled out for any given record, causing this table to have nullable foreign keys):
    +-----------+
    | xRef      |
    +-----------+
    | Table1Key |
    | Table2Key |
    | Table3Key |
    | RiskKey   |
    +-----------+
    I want hbm2ddl to create the following 3 tables so that there are no nullable foreign keys:
    +-----------+   +-----------+   +-----------+
    | xRef1     |   | xRef2     |   | xRef3     |
    +-----------+   +-----------+   +-----------+
    | Table1Key |   | Table2Key |   | Table3Key |
    | RiskKey   |   | RiskKey   |   | RiskKey   |
    +-----------+   +-----------+   +-----------+
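    For reference, a sketch of the DDL the question is asking hbm2ddl to produce, using the table and column names from the diagrams above (the INT key type is an assumption): three separate association tables whose foreign key columns are all NOT NULL.
    CREATE TABLE xRef1 (
        Table1Key INT NOT NULL,
        RiskKey   INT NOT NULL,
        PRIMARY KEY (Table1Key, RiskKey)
    );

    CREATE TABLE xRef2 (
        Table2Key INT NOT NULL,
        RiskKey   INT NOT NULL,
        PRIMARY KEY (Table2Key, RiskKey)
    );

    CREATE TABLE xRef3 (
        Table3Key INT NOT NULL,
        RiskKey   INT NOT NULL,
        PRIMARY KEY (Table3Key, RiskKey)
    );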

    Read the article

  • Hibernate HQL and Grails - How do I compare collections?

    - by BurtP
    Hi everyone (my first post!), I have an HQL question (in Groovy/Grails) I was hoping someone could help me with. I have a simple Asset object with a one-to-many Tags collection:
    class Asset {
        Set tags
        static hasMany = [tags: Tag]
    }
    class Tag {
        String name
    }
    What I'm trying to do in HQL: a user passes in some tags in params.tags (ex: groovy grails rocks) and wants to return Asset(s) that have those tags, and only those exact tags. Here's my HQL, which returns Assets if one or more of the tags are present in an Asset's tags:
    SELECT DISTINCT a FROM Asset a LEFT JOIN a.tags t WHERE t IN (:tags)
    assetList = Asset.executeQuery(hql, [tags: tokenizedTagListFromParams])
    The above code works perfectly, but it's really like an OR: if any of the tag(s) are found, it will return that Asset. I only want to return Assets that have those exact same tags (in number as well). Every time a new tag is created, I do new Tag(name: xxx).save(), so I can get the Tag instances and unique IDs for each tag that was asked for. I also tried converting the passed-in tags to a list of Tag instances with Tag.findByName(t1) for each tag, and also passing a list of (unique) Tag IDs into the HQL above, with no luck. I would appreciate any help/advice. Thank you for your time, Burt
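    A sketch in plain SQL of the relational pattern the question is reaching for, using hypothetical asset, tag and asset_tag tables rather than the Grails domain classes: keep only the assets where every requested tag is present and no tag falls outside the requested set, which can be done by counting inside a GROUP BY.
    -- :tagNames = the requested tag names, :tagCount = how many names were requested
    SELECT a.id
    FROM asset a
    JOIN asset_tag x ON x.asset_id = a.id
    JOIN tag t       ON t.id = x.tag_id
    GROUP BY a.id
    HAVING COUNT(CASE WHEN t.name IN (:tagNames) THEN 1 END) = :tagCount  -- every requested tag is present
       AND COUNT(*) = :tagCount;                                          -- and there are no extra tags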

    Read the article

  • Can Entity Framework be used for the purpose of entity/schema definition at application runtime?

    - by Kabeer
    Hello. Can Entity Framework be used for the purpose of entity definition at application runtime? OK, to make it simple, here is what I want to achieve: my application is a product. I should be able to define entities at runtime on the basis of inputs gathered from an 'authoring' user (in effect, this means a 'model first' approach). These entities are of course persistable. Further, after having defined the entities and their relationships, I should be able to make complex queries across them for many reasons, including reports. Is the above possible, and how? So far what I have realized is that there is a dependency on Visual Studio.

    Read the article

  • How can I compare my PHPASS-hashed stored password to my incoming POST data?

    - by Ygam
    Here's a better example, just a simple check. The stored value in the database has password: fafa (hashed with phpass at registration) and username: fafa; I am using the phpass password hashing framework.
    public function demoHash($data) // $data is the post data named password
    {
        $hash = new PasswordHash(8, false);
        $query = ORM::factory('user');
        $result = $query
            ->select('username, password')
            ->where('username', 'fafa')
            ->find();
        $hashed = $hash->HashPassword($data);
        $check = $hash->CheckPassword($hashed, $result->password);
        echo $result->username . "<br/>";
        echo $result->password . "<br/>";
        return $check;
    }
    $check is returning false.

    Read the article

  • Using Mercurial and Beyond Compare 3 (bc3) as the diff tool? Help needed

    - by mhd
    Hi, in Windows I am able to use WinMerge as the external diff tool for hg via mercurial.ini etc., using some option switches that you can find on the web (I think it's a Japanese website). Anyway, here for example: hg winmerge -r1 -r2 will list the file(s) changed between rev1 and rev2 in WinMerge, and I can just click which file to diff. But for bc3: hg bcomp -r1 -r2 will make bc3 open a dialog which states that a temp dir can't be found. The most I can do using bc3 and hg is hg bcomp -r1 -r2 myfile.cpp, which will open a diff between rev1 and rev2 of myfile.cpp. So it seems that hg+bc3 can't successfully pick up all the file changes between revisions; it is only able to diff 1 file at a time. Can anyone use bc3 + hg better?
    Edit: Problem solved! Got the solution from the Scooter support page. I have to use bcompare instead of bcomp. Here's a snippet of my mercurial.ini:
    [extensions]
    hgext.win32text =
    ;mhd adds
    hgext.extdiff =
    ;mhd adds for bc
    [extdiff]
    cmd.bc3 = bcompare
    opts.bc3 = /ro
    ;mhd adds for winmerge
    ;[extdiff]
    ;cmd.winmerge = WinMergeU
    ;opts.winmerge = /r /e /x /ub

    Read the article

  • Preventing Netbeans JAXB generation trashing classes

    - by Mac
    I'm developing a SOAP service using JAX-WS and JAXB under Netbeans 6.8, and getting a little frustrated with Netbeans trashing my work every time the XSD schema my JAXB bindings are based upon changes. To elaborate, the IDE automatically generates classes bound to the schema, which can then be (un)marshalled from/to XML using JAXB. To these classes I've added extra methods to (for example) convert to and from separate classes designed to be persisted to a database with JPA. The problem is that whenever the schema changes and I rebuild, these classes are regenerated and all my custom methods are deleted. I can manually replace them by copy-pasting from a backup file, but that is rather time-consuming and tedious. As I'm using an iterative design approach, the schema is changing rather frequently, and I'm wasting an awful lot of time whenever it does, simply to reinstate my previous code. While it is entirely reasonable for the IDE to automatically regenerate the JAXB-bound classes, and I don't mean to imply otherwise, I was wondering if anyone had any bright ideas as to how to prevent my extra work from having to be manually reinstated every time my schema changes.

    Read the article

  • Long-running Database Query

    - by JamesMLV
    I have a long-running SQL Server 2005 query that I have been hoping to optimize. When I look at the actual execution plan, it says a Clustered Index Seek has 66% of the cost. Execution plan snippet:
    <RelOp AvgRowSize="31" EstimateCPU="0.0113754" EstimateIO="0.0609028" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="10198.5" LogicalOp="Clustered Index Seek" NodeId="16" Parallel="false" PhysicalOp="Clustered Index Seek" EstimatedTotalSubtreeCost="0.0722782">
      <OutputList>
        <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="quoteDate" />
        <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="price" />
        <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" />
      </OutputList>
      <RunTimeInformation>
        <RunTimeCountersPerThread Thread="0" ActualRows="1067" ActualEndOfScans="1" ActualExecutions="1" />
      </RunTimeInformation>
      <IndexScan Ordered="true" ScanDirection="FORWARD" ForcedIndex="false" NoExpandHint="false">
        <DefinedValues>
          <DefinedValue>
            <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="quoteDate" />
          </DefinedValue>
          <DefinedValue>
            <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="price" />
          </DefinedValue>
          <DefinedValue>
            <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" />
          </DefinedValue>
        </DefinedValues>
        <Object Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Index="[_dta_index_Indices_14_320720195__K5_K2_K1_3]" Alias="[I]" />
        <SeekPredicates>
          <SeekPredicate>
            <Prefix ScanType="EQ">
              <RangeColumns>
                <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="HedgeProduct" ComputedColumn="true" />
              </RangeColumns>
              <RangeExpressions>
                <ScalarOperator ScalarString="(1)">
                  <Const ConstValue="(1)" />
                </ScalarOperator>
              </RangeExpressions>
            </Prefix>
            <StartRange ScanType="GE">
              <RangeColumns>
                <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" />
              </RangeColumns>
              <RangeExpressions>
                <ScalarOperator ScalarString="[@StartMonth]">
                  <Identifier>
                    <ColumnReference Column="@StartMonth" />
                  </Identifier>
                </ScalarOperator>
              </RangeExpressions>
            </StartRange>
            <EndRange ScanType="LE">
              <RangeColumns>
                <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" />
              </RangeColumns>
              <RangeExpressions>
                <ScalarOperator ScalarString="[@EndMonth]">
                  <Identifier>
                    <ColumnReference Column="@EndMonth" />
                  </Identifier>
                </ScalarOperator>
              </RangeExpressions>
            </EndRange>
          </SeekPredicate>
        </SeekPredicates>
      </IndexScan>
    </RelOp>
    From this, does anyone see an obvious problem that would be causing this to take so long?
    Here is the query:
    (SELECT quotedate, tenure, price, ActualVolume, HedgePortfolioValue, Price AS UnhedgedPrice,
            ((ActualVolume*Price - HedgePortfolioValue)/ActualVolume) AS HedgedPrice
     FROM (
        SELECT [quoteDate]
              ,[price]
              , tenure
              ,isnull(wf_1.[Risks].[HedgePortValueAsOfDate2](1,tenureMonth,quotedate,price),0) as HedgePortfolioValue
              ,[TotalOperatingGasVolume] as ActualVolume
        FROM [wf_1].[dbo].[Indices] I
        inner join (
            SELECT DISTINCT tenureMonth
            FROM [wf_1].[Risks].[KnowRiskTrades]
            WHERE HedgeProduct = 1 AND portfolio <> 'Natural Gas Hedge Transactions'
        ) B ON I.tenure = B.tenureMonth
        inner join (
            SELECT [Month],[TotalOperatingGasVolume]
            FROM [wf_1].[Risks].[ActualGasVolumes]
        ) C ON C.[Month] = B.tenureMonth
        WHERE HedgeProduct = 1
          AND quoteDate >= dateadd(day, -3*365, tenureMonth)
          AND quoteDate <= dateadd(day, -3, tenureMonth)
     ) A
    )

    Read the article

  • How can I partially compare two strings in C?

    - by Nazgulled
    Hi, let's say I have the following content: "Lorem Ipsum is simply dummy text of the printing and typesetting industry." How do I search for "dummy" or "dummy text" in that string using C? Is there any easy way to do it, or only with strong string manipulation? All I need is to search for it and return a boolean with the result.

    Read the article

  • How does Android emulator performance compare to real device performance?

    - by uj2
    I'm looking into writing an Android game, though I don't currently own an Android device. For those of you who own a device, how does performance on the emulator relate to real device performance? I'm especially interested in graphics-related tasks. This obviously depends on both the machine running the emulator and the specific device in question, but I'm talking rough numbers here. This question is a duplicate, but since that post is heavily outdated, I figured it's irrelevant by now.

    Read the article

  • Is it bad to explicitly compare against boolean constants e.g. if (b == false) in Java?

    - by polygenelubricants
    Is it bad to write:
    if (b == false) //...
    while (b != true) //...
    Is it always better to instead write:
    if (!b) //...
    while (!b) //...
    Presumably there is no difference in performance (or is there?), but how do you weigh the explicitness, the conciseness, the clarity, the readability, etc. between the two? Note: the variable name b is just used as an example, à la foo and bar.

    Read the article
