Search Results

Search found 9926 results on 398 pages for 'lookup tables'.


  • Questions on SQL Server 2008 Full-Text Search

    - by Eddie
    I have some questions about SQL 2K8 integrated full-text search. Say I have the following tables:

        Car:         id (int, pk), makeid (fk), description (nvarchar), year (int),
                     features (int, bitwise value, 32 features only)
        CarMake:     id (int, pk), mfgname (nvarchar)
        CarFeatures: id (int; 1, 2, 4, 8, etc.), featurename (nvarchar)

    If someone searches "red honda civic 2002 4 doors", how would I parse the input string so that I could also search in the "CarMake" and "CarFeatures" tables?
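
    One hedged sketch of a possible approach (not from the article): tokenize the input, then run each term against full-text indexes assumed to exist on Car.description, CarMake.mfgname, and CarFeatures.featurename:

        -- Sketch only: assumes full-text indexes on description, mfgname
        -- and featurename; @term would be one token from the parsed input.
        DECLARE @term nvarchar(100) = N'honda';

        SELECT c.id
        FROM Car c
        JOIN CarMake m ON m.id = c.makeid
        WHERE CONTAINS(c.description, @term)
           OR CONTAINS(m.mfgname, @term)
           OR EXISTS (SELECT 1 FROM CarFeatures f
                      WHERE CONTAINS(f.featurename, @term)
                        AND c.features & f.id = f.id);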

  • Compare 2 databases

    - by shantanuo
    I have 2 identical databases, abc15 and abc18, but one of them has one extra table and I need to find it. I thought the following query should return it, but it is not showing the record that I expect:

        select *
        from information_schema.tables as a
        left join information_schema.tables as b
               on a.TABLE_SCHEMA = b.TABLE_SCHEMA
              and a.TABLE_NAME = b.TABLE_NAME
        where a.TABLE_SCHEMA = 'abc15'
          and b.TABLE_SCHEMA = 'abc18'
          and b.TABLE_NAME IS NULL
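
    A hedged guess at the fix (not from the article): the join condition a.TABLE_SCHEMA = b.TABLE_SCHEMA can never hold when a is filtered to 'abc15' and b to 'abc18', so no rows survive. Moving b's filters into the join keeps the LEFT JOIN semantics:

        select a.TABLE_NAME
        from information_schema.tables as a
        left join information_schema.tables as b
               on b.TABLE_SCHEMA = 'abc18'
              and b.TABLE_NAME = a.TABLE_NAME
        where a.TABLE_SCHEMA = 'abc15'
          and b.TABLE_NAME IS NULL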

  • PHP MySQL join table

    - by Jordan Pagaduan
    $sql = "SELECT logs.full_name, logout.status FROM logs, logout WHERE logs.employee_id = logout.employee_id"; tables -- logs logout I'm having error on this. I search join tables in google. And that's what I got. What is wrong with this code?

  • Accessing Visio Database Reverse engineering schemas through an addin

    - by zeocrash
    I'm trying to build a Visio add-in that gives me a list of tables when a database reverse engineer is run. I'm using the Microsoft.Office.Interop.Visio library. I can find a list of shapes on the page, but this gives me every single shape on the page and every single piece of text. I would like to have a list of just the database tables (their physical names) in the document.
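
    A hedged sketch of one way to filter (the master name "Entity" and the page variable are assumptions; inspect shape.Master.Name in an actual reverse-engineered drawing to confirm what the wizard uses):

        using System.Collections.Generic;
        using Visio = Microsoft.Office.Interop.Visio;

        // 'page' is the Visio.Page being inspected; collect the text of
        // shapes dropped from the table/entity master only.
        List<string> tableNames = new List<string>();
        foreach (Visio.Shape shape in page.Shapes)
        {
            if (shape.Master != null && shape.Master.Name == "Entity")
                tableNames.Add(shape.Text); // the physical name shown on the shape
        }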

  • What is the best way to archive data in a relational database?

    - by GenericTypeTea
    I have a bit of an issue with a particular aspect of a program I'm working on. I need the ability to archive (fix) a table so that a change anywhere in the system will not affect the results it returns. This is the basic structure of what I need to fix:

        Recipe --> Recipe (as sub recipe)
        Recipe --> Ingredients

    So, if I fix a Recipe, I need to ensure all the sub recipes (including all the sub recipes' sub recipes and so forth) are fixed, and all its ingredients are fixed. The problem is that the sub recipes and ingredients still need to be modifiable, as they are used by other recipes that are not fixed.

    I came up with a solution whereby I serialize (with protobuf-net) a master object that deals with the recipe and all the sub recipes and ingredients, and save the archive data to a table like the following:

        Archive {
            ReferenceId,     (i.e. RecipeId)
            ReferenceTypeId, (i.e. Recipe)
            ArchiveData varbinary(max)
        }

    Now, this works great and is almost perfect... however I totally forgot (I'd love to blame the agile development mentality, however this was just short-sighted) that this information needs to be reported on. As far as I'm aware, I can't think how I could inflate the serialized data back into my Recipe object and use it in a report. I'm using the standard SQL 2005 report services at the moment. Alternatively, I guess I could do the following:

    1. Duplicate every table and tag the word "Archive" onto the end of the table name. This would then give me an area of specific archive data... but ignoring my simplified example, there'd actually be about 15 tables duplicated.
    2. Add a nullable, non-foreign-key property called "CopiedFromId" to every table that contains fixed data, and duplicate every record that the recipe (and all its sub recipes and all their sub recipes) touches.
    3. Create some sort of denormalised structure that could be restored from at a later date to the original, unfixed recipe. Although I think this would be like option 1 and involve a lot of extra tables.

    Anyway, I'm at a total loss and do not like any of the ideas particularly. Can anyone please advise the best course of action?

    EDIT: Or

    4. Create tables specific to what the report requires and populate them with the data when the user clicks the report button? This would cause about 4 extra tables for the report in question.
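
    On the reporting point, one hedged sketch (not from the post): Reporting Services can consume an object data source in local mode, so the blob could be inflated in application code first. Assuming the Recipe type carries protobuf-net's [ProtoContract] attributes:

        // 'archiveData' is assumed to hold the varbinary(max) bytes
        using (var ms = new System.IO.MemoryStream(archiveData))
        {
            Recipe recipe = ProtoBuf.Serializer.Deserialize<Recipe>(ms);
            // flatten 'recipe' into a DataTable or list and hand it to the report
        }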

  • Change the precision of all decimal columns in every table in the database

    - by Jon
    Hi all, I have a rather large database that has a lot of decimal columns in a lot of tables. The customer has now changed their mind and wants all the numbers (decimals) to have a precision of 3 d.p. instead of the original two. Is there any quick way of going through all the tables in a database and changing any decimal column in those tables to have 3 d.p. instead of 2 d.p.? The db is on SQL 2005. Any help would be great.
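
    A hedged sketch that generates the ALTER statements from the catalog rather than hand-writing them (review the output before running it; the precision is widened by one digit so no integer digits are lost, and note ALTER COLUMN can still fail on indexed or constrained columns):

        SELECT 'ALTER TABLE [' + TABLE_SCHEMA + '].[' + TABLE_NAME + ']'
             + ' ALTER COLUMN [' + COLUMN_NAME + '] decimal('
             + CAST(NUMERIC_PRECISION + 1 AS varchar(5)) + ', 3)'
             + CASE IS_NULLABLE WHEN 'NO' THEN ' NOT NULL' ELSE ' NULL' END
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE DATA_TYPE = 'decimal' AND NUMERIC_SCALE = 2;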

  • How to use OUTPUT statement inside a SQL trigger?

    - by Jeff Meatball Yang
    From MSDN: "If a statement that includes an OUTPUT clause is used inside the body of a trigger, table aliases must be used to reference the trigger inserted and deleted tables to avoid duplicating column references with the INSERTED and DELETED tables associated with OUTPUT."

    Can someone give me an example of this? I'm not sure if this is correct:

        create trigger MyInterestingTrigger
        on TransactionalTable123
        AFTER insert, update, delete
        AS
        declare @batchId table(batchId int)

        insert SomeBatch (batchDate, employeeId)
        output SomeBatch.INSERTED.batchId into @batchId

        insert HistoryTable
        select b.batchId, i.col1, i.col2, i.col3
        from TransactionalTable123.inserted i
        cross join @batchId b
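
    A hedged reading of the MSDN note (table and column names illustrative, and the INSERT is given placeholder values so it parses): alias the trigger's own inserted table, and let the OUTPUT clause's inserted refer to the statement it is attached to:

        CREATE TRIGGER MyInterestingTrigger
        ON TransactionalTable123
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            DECLARE @batchId TABLE (batchId int);

            -- 'inserted' here belongs to this INSERT's OUTPUT clause
            INSERT INTO SomeBatch (batchDate, employeeId)
            OUTPUT inserted.batchId INTO @batchId
            SELECT GETDATE(), 1;  -- placeholder values

            -- 'i' aliases the trigger's inserted table, keeping the two apart
            INSERT INTO HistoryTable
            SELECT b.batchId, i.col1, i.col2, i.col3
            FROM inserted AS i
            CROSS JOIN @batchId AS b;
        END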

  • Using NHibernate to insert/update using a SQL server-side DEFAULT value

    - by Joseph Daigle
    Several of our database tables contain LastModifiedDate columns. We would like these to stay synchronized based on a single time-source. Our best time-source, in this case, is the SQL Server itself, since there is only one database server but multiple application servers which could potentially be out of sync. I would like to be able to use NHibernate, but have it use either GETUTCDATE() or DEFAULT for the column value when updating or inserting rows on these tables. Thoughts?
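
    One hedged possibility: NHibernate's generated property setting makes it read the value back after insert/update instead of supplying one, which pairs with a server-side DEFAULT of GETUTCDATE() (plus a trigger for updates). A mapping sketch, names assumed:

        <!-- NHibernate never writes this column; the database produces it. -->
        <property name="LastModifiedDate"
                  generated="always"
                  insert="false"
                  update="false" />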

  • SQL - How to join on similar (not exact) columns

    - by BlueRaja
    I have two tables which get updated at almost the exact same time, and I need to join on the datetime column. I've tried this:

        SELECT *
        FROM A, B
        WHERE ABS(DATEDIFF(second, A.Date_Time, B.Date_Time) = (
            SELECT MIN(ABS(DATEDIFF(second, A.Date_Time, B2.Date_Time)))
            FROM B AS B2
        )

    But it tells me:

        Multiple columns are specified in an aggregated expression containing
        an outer reference. If an expression being aggregated contains an outer
        reference, then that outer reference must be the only column referenced
        in the expression.

    How can I join these tables?
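
    A hedged alternative sketch that sidesteps the aggregate restriction: for each row of A, pick the closest row of B with CROSS APPLY and TOP 1 (SQL Server 2005+):

        SELECT A.*, nearest.*
        FROM A
        CROSS APPLY (
            SELECT TOP 1 B.*
            FROM B
            ORDER BY ABS(DATEDIFF(second, A.Date_Time, B.Date_Time))
        ) AS nearest;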

  • Do a database query on Textbox onblur event

    - by user279521
    I am using ASP.NET 3.5 with C#. I need to do a database lookup when a user enters a ProductID in txtProductID. I guess doing it purely in JavaScript is out of the question, since this will have to be a server-side call. I wrote this code in the Page_Load event of the web page:

        protected void Page_Load(object sender, EventArgs e)
        {
            txtProductID.Attributes.Add("onblur", "LookupProduct()");
        }

        protected void LookupUser()
        {
            // Lookup Product information on onBlur event;
        }

    I get an error message: "Microsoft JScript runtime error: Object expected". How can I resolve this?
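
    A hedged diagnosis and sketch (not from the article): onblur runs in the browser, so "LookupProduct()" must exist as a client-side JavaScript function, while LookupUser is server-side C# and is never wired to anything. One server-side route is AutoPostBack, which posts back when the user leaves the field after changing it:

        // Markup assumption:
        // <asp:TextBox ID="txtProductID" runat="server"
        //              AutoPostBack="true"
        //              OnTextChanged="txtProductID_TextChanged" />

        protected void txtProductID_TextChanged(object sender, EventArgs e)
        {
            // do the database lookup here with txtProductID.Text
        }

    Note that TextChanged only fires when the text actually changed before the blur.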

  • Executing a .NET Managed Assembly from SQL Server 2008 - Pros, Cons & Recommendations

    - by RPM1984
    Hi guys, looking for opinions/recommendations/links for the following scenario I'm currently facing.

    The platform: a .NET 4.0 web application and SQL Server 2008.

    The task: overhaul a component of the system that performs (fairly) complex mathematical operations based on a specific user activity, and updates numerous tables in the database. A common user activity might be "Bob" decides to post a forum topic. This results in (the end solution) needing to look at various factors (about the post he did), then, after doing some math based on lookup values/ratios as well as other data in the database, inserting some other data as a result of these operations.

    The options: OK, so here's what I'm thinking. Although it would be much easier to do this in C# (LINQ to SQL), it doesn't make much sense, as the majority of the computations are based on values in the db, and it will get difficult to control/optimize/debug the LINQ over time. Hence, I'm leaning towards creating a managed assembly (C# class library) that contains the lookup values (constants) as well as leveraging the math classes in the existing .NET BCL. Basically I'd expose a few methods that can be called by the T-SQL stored procedures. This to me has the following advantages:

    - Simplicity of math. Do complex math in .NET vs complex math in T-SQL. No brainer. =)
    - Abstraction of computations, configurable "lookup" values and business logic from raw T-SQL. T-SQL only needs to care about the data, simplifying the stored procedures and making them easier to maintain. When it needs to do math it delegates off to the managed assembly.

    So, having said that, I've never done this before (call a .NET assembly from T-SQL), and after some googling the best site I could come up with is here, which is useful but outdated.

    So, what am I asking? Well, firstly, I need some better references on how to actually do this. "This" being how to call a C# .NET 4 assembly from within T-SQL stored procedures in SQL Server 2008. Secondly, who out there has done this, and what problems (if any) did you face? I realize this may be difficult to provide a "correct answer" for, so I'll try to give it to whoever gives me the answer with a combination of good links and a list of pros/cons/problems with this implementation. Cheers!
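
    A hedged sketch of the SQLCLR plumbing involved (one caveat worth checking: SQL Server 2008 hosts the CLR 2.0 runtime, so the class library would need to target .NET 3.5 or earlier rather than .NET 4; .NET 4 hosting arrived with SQL Server 2012):

        -- Assembly path, names and the function signature are illustrative.
        sp_configure 'clr enabled', 1;
        RECONFIGURE;
        GO
        CREATE ASSEMBLY MathHelpers
            FROM 'C:\libs\MathHelpers.dll'
            WITH PERMISSION_SET = SAFE;
        GO
        CREATE FUNCTION dbo.ComputeScore(@x float, @y float)
        RETURNS float
        AS EXTERNAL NAME MathHelpers.[MyCompany.MathHelpers.Calculations].ComputeScore;
        GO

    On the C# side the method must be public static, typically marked [Microsoft.SqlServer.Server.SqlFunction], for the binding to work.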

  • sendmail and MX records when mail server is not on web host

    - by Jim Nelson
    This is a problem I'm sure is easy to fix, but I've been banging my head on it all day. I'm developing a new web site for a client. The web site resides at (this is an example) website.com. I have a PHP form script to email visitors' requests to [email protected]. When I coded this on a staging server on a different domain, all worked fine. When I moved it to website.com, the mail messages never arrived. The web server is on a virtual host with a major ISP.

    Here's what I've learned since then: My client's mail server is Microsoft Exchange on a box physically in their office. Whenever someone in the outside world emails [email protected], the mail arrives. But if the web server sends to the same email address, it fails every time.

    This is not a PHP problem. I secure shell in to the web server and have tested this both with sendmail and the UNIX mail application. I've also tested it by emailing various email accounts from the shell. I can email myself, for example, just nobody at the website.com domain. In short, when I'm logged in to website.com, mail to [email protected], [email protected], and [email protected] all fail. All other addresses work fine. What I've discovered is those dropped emails are routed to the web server's "catchall" account, where they sit in its inbox.

    I've done an MX lookup on website.com. The MX record points to mailsec.website.com. I can telnet to mailsec.website.com port 25 and see the SMTP server. It appears to me that website.com isn't doing an MX lookup when it's sending mail to [email protected]. My theory is that it recognizes the domain as local, sees that there's no "requests" user account to deliver it to, and drops the mail into the catchall account.

    What I want is to force sendmail to do the MX lookup and send the message on to the Exchange server. I'm at wit's end here. I can't figure out how to do this. For that matter, I may be way off base here and have misdiagnosed this entirely. Internet mail and MX has always seemed a black art to me, and my ignorance is certainly showing in this question.
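
    If the local-delivery theory is right, a hedged sketch of the usual sendmail-side fix: stop the web server from treating website.com as a local domain, or explicitly relay it to its MX host:

        # /etc/mail/local-host-names -- remove/comment the line claiming
        # the domain as local:
        #   website.com

        # Or route it explicitly via /etc/mail/mailertable (assuming the
        # mailertable feature is enabled in sendmail.mc):
        #   website.com    smtp:mailsec.website.com
        # then rebuild the map and restart sendmail:
        makemap hash /etc/mail/mailertable < /etc/mail/mailertable

    On a managed virtual host, the ISP's control panel often exposes the same choice as "local vs. remote mail delivery" for the domain.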

  • Need help in displaying data inside a marquee

    - by user59637
    Hi all, I want to display news inside the marquee markup in my banking application, but it's not working. Could somebody please tell me what the error in my code is? Here is my code:

        <marquee bgcolor="silver" direction="left" id="marq1" runat="server"
                 behavior="scroll" scrolldelay="80" style="height: 19px" width="565">
            <% String se = Session["countnews"].ToString();
               for (int i = 0; i < int.Parse("" + se); i++) { %>
                <strong><% Response.Write("&nbsp;&nbsp;" + Session["news" + i] + "&nbsp;&nbsp;"); %></strong>
            <% } %>
        </marquee>

        public class News
        {
            DataSet ds = new DataSet("Bank");
            SqlConnection conn;
            String check;
            SqlDataAdapter sda;
            int i;
            public string News_Name;
            public int Count_News;

            public int newsticker()
            {
                conn = new SqlConnection(ConfigurationManager.ConnectionStrings["BankingTransaction"].ConnectionString.ToString());
                check = "Select NewsTitle from News where NewsStatus = 'A'";
                sda = new SqlDataAdapter(check, conn);
                sda.Fill(ds, "News");
                if (ds.Tables[0].Rows.Count > 0)
                {
                    for (i = 0; i < ds.Tables[0].Rows.Count; i++)
                    {
                        News_Name = i + ds.Tables[0].Rows[i].ItemArray[0].ToString();
                    }
                    Count_News = ds.Tables[0].Rows.Count;
                }
                else
                {
                    News_Name = 0 + "Welcome to WestSide Bank Online Web site!";
                    Count_News = 1;
                }
                return int.Parse(Count_News.ToString());
            }
        }

        protected void Page_Load(object sender, EventArgs e)
        {
            News obj = new News();
            try
            {
                obj.newsticker();
                Session["news"] = obj.News_Name.ToString();
                Session["countnews"] = obj.Count_News.ToString();
            }
            catch (SqlException ex)
            {
                Response.Write("Error in login" + ex.Message);
                Response.Redirect("Default.aspx");
            }
            finally
            {
                obj = null;
            }
        }
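
    A hedged observation (not from the article): the markup reads Session["news0"], Session["news1"], ... but Page_Load only ever sets Session["news"], and newsticker overwrites News_Name on every loop pass, keeping only the last headline. A sketch that lines the two up by storing each headline under its own key inside the loop:

        for (int i = 0; i < ds.Tables[0].Rows.Count; i++)
        {
            HttpContext.Current.Session["news" + i] =
                ds.Tables[0].Rows[i].ItemArray[0].ToString();
        }
        Count_News = ds.Tables[0].Rows.Count;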

  • hibernate for dynamic table creation

    - by user369316
    I am a Hibernate beginner. Since I need to create dynamic tables with dynamic fields in them, I chose to use Hibernate. As far as I understand, creating tables requires a class with the fields defined in the class. How do I generate the classes dynamically, based on the table, with the required fields?
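
    A hedged alternative worth knowing about: Hibernate's dynamic-map entity mode maps an entity to a java.util.Map, so no concrete class is needed at all; the mapping XML can be assembled as a string at runtime and fed to the Configuration. A sketch (entity and column names assumed):

        <hibernate-mapping>
            <!-- entity-name instead of class: instances are Maps -->
            <class entity-name="DynamicTable" table="dynamic_table">
                <id name="id" type="long" column="id">
                    <generator class="native"/>
                </id>
                <property name="someField" type="string"/>
            </class>
        </hibernate-mapping>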

  • Correct way to make datasources/resources a deploy-time setting

    - by Draemon
    I have a web-app that requires two settings: a JDBC datasource and a string token. I desperately want to be able to deploy one .war to various different containers (jetty, tomcat, gf3 minimum) and configure these settings at application level within the container. My code does this:

        InitialContext ctx = new InitialContext();
        Context envCtx = (javax.naming.Context) ctx.lookup("java:comp/env");
        token = (String) envCtx.lookup("token");
        ds = (DataSource) envCtx.lookup("jdbc/datasource");

    Let's assume I've used the glassfish management interface to create two jdbc resources, jdbc/test-datasource and jdbc/live-datasource, which connect to different copies of the same schema, on different servers, different credentials etc. Say I want to deploy this to glassfish and point it at the test datasource; I might have this in my sun-web.xml:

        ...
        <resource-ref>
            <res-ref-name>jdbc/datasource</res-ref-name>
            <jndi-name>jdbc/test-datasource</jndi-name>
        </resource-ref>
        ...

    but sun-web.xml goes inside my war, right? Surely there must be a way to do this through the management interface. Am I even trying to do the right thing? Do other containers make this any easier? I'd be particularly interested in how jetty 7 handles this, since I use it for development.

    EDIT: Tomcat has a reasonable way to do this. Create $TOMCAT_HOME/conf/Catalina/localhost/webapp.xml with:

        <?xml version="1.0" encoding="UTF-8"?>
        <Context antiResourceLocking="false" privileged="true">

            <!-- String resource -->
            <Environment name="token" value="value of token"
                         type="java.lang.String" override="false" />

            <!-- Linking to a global resource -->
            <ResourceLink name="jdbc/datasource1" global="jdbc/test"
                          type="javax.sql.DataSource" />

            <!-- Derby -->
            <Resource name="jdbc/datasource2" type="javax.sql.DataSource"
                      auth="Container"
                      driverClassName="org.apache.derby.jdbc.EmbeddedDataSource"
                      url="jdbc:derby:test;create=true" />

            <!-- H2 -->
            <Resource name="jdbc/datasource3" type="javax.sql.DataSource"
                      auth="Container"
                      driverClassName="org.h2.jdbcx.JdbcDataSource"
                      url="jdbc:h2:~/test" username="sa" password="" />

        </Context>

    Note that override="false" means the opposite: it means that this setting can't be overridden by web.xml. I like this approach because the file is part of the container configuration, not the war, but it's not part of the global configuration; it's webapp-specific. I guess I expect a bit more from glassfish, since it is supposed to have a full web admin interface, but I would be happy enough with something equivalent to the above.
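
    For the jetty 7 part, a hedged sketch of the usual equivalent: a WEB-INF/jetty-env.xml (or a context XML outside the war) can bind the same names; the resource classes and values here are illustrative:

        <Configure class="org.eclipse.jetty.webapp.WebAppContext">
            <New class="org.eclipse.jetty.plus.jndi.EnvEntry">
                <Arg>token</Arg>
                <Arg type="java.lang.String">value of token</Arg>
                <Arg type="boolean">true</Arg>
            </New>
            <New class="org.eclipse.jetty.plus.jndi.Resource">
                <Arg>jdbc/datasource</Arg>
                <Arg>
                    <New class="org.h2.jdbcx.JdbcDataSource">
                        <Set name="URL">jdbc:h2:~/test</Set>
                    </New>
                </Arg>
            </New>
        </Configure>

    Keeping the file outside the war (a context XML in $JETTY_HOME/contexts) preserves the "configuration lives in the container" property of the Tomcat approach.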

  • jquery delegate table rows selector

    - by Jackob
    I have a table which may contain other inner tables (it's not possible to edit the generated markup). I want to create a delegate function for row mouseenter and mouseleave which only triggers for the associated main table rows (and not inner table rows), as follows:

        $("#tableid").delegate("tr", "mouseenter mouseleave", function(e) {
            //do stuff here
        });

    But with this selector it also selects the inner table rows, so how can I modify the selector to avoid selecting inner table rows?
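
    A hedged sketch of one workaround: rather than fighting the selector, bail out inside the handler whenever the row's nearest enclosing table isn't the main one:

        $("#tableid").delegate("tr", "mouseenter mouseleave", function (e) {
            // ignore rows belonging to nested tables
            if ($(this).closest("table").attr("id") !== "tableid") return;
            // do stuff here
        });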

  • Openlayers - LayerRedraw() / Feature rotation

    - by Ozaki
    TLDR: I have an Openlayers map with a layer called 'track'. I want to remove 'track' and add it back in. I have an image 'imageFeature' on a layer that rotates on load to the direction being set. I want it to update this rotation that is set in 'styleMap' on a layer called 'tracking'.

    - I set the var 'styleMap' to apply the external image & rotation.
    - The 'imageFeature' is added to the layer at the coords specified.
    - 'imageFeature' is removed.
    - 'imageFeature' is added again in its new location.
    - Rotation is not applied.

    As the 'styleMap' applies to the layer, I think that I have to remove the layer and add it again rather than just the 'imageFeature'.

    Layer:

        var tracking = new OpenLayers.Layer.GML("Tracking", "coordinates.json", {
            format: OpenLayers.Format.GeoJSON,
            styleMap: styleMap
        });

    styleMap:

        var styleMap = new OpenLayers.StyleMap({
            fillOpacity: 1,
            pointRadius: 10,
            rotation: heading,
        });

    Now, wrapped in a timed function, the imageFeature:

        map.layers[3].addFeatures(new OpenLayers.Feature.Vector(
            new OpenLayers.Geometry.Point(longitude, latitude),
            {rotation: heading, type: parseInt(Math.random() * 3)}
        ));

    Type refers to a lookup of 1 of 3 images:

        styleMap.addUniqueValueRules("default", "type", lookup);

        var lookup = {
            0: {externalGraphic: "Image1.png", rotation: heading},
            1: {externalGraphic: "Image2.png", rotation: heading},
            2: {externalGraphic: "Image3.png", rotation: heading}
        }

    I have tried the 'redraw()' function, but it returns "tracking is undefined" or "map.layers[2] is undefined":

        tracking.redraw(true);
        map.layers[2].redraw(true);

    Heading is a variable from a JSON feed:

        var heading = 13.542;

    But so far I can't get anything to work; it will only rotate the image on load. The image will move in coordinates as it should, though. So what am I doing wrong with the redraw function, or how can I get this image to rotate live? Thanks in advance. -Ozaki

    Add: I managed to get map.layers[2].redraw(true); to successfully redraw layer 2, but it still does not update the rotation. I am thinking it's because the styleMap is not updating. It runs through the styleMap every n sec, but there are no updates to the rotation, and the heading variable is updating correctly if I put a watch on it in Firebug.
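
    A hedged guess at the cause, with a sketch: the symbolizer captured the value of 'heading' once, when the styleMap was built, and never sees later changes. Resolving the rotation at draw time via a style context function means each redraw picks up the current value:

        // Sketch only: "${getHeading}" is evaluated on every draw,
        // so redraw(true) should then apply the latest heading.
        var styleMap = new OpenLayers.StyleMap(new OpenLayers.Style({
            fillOpacity: 1,
            pointRadius: 10,
            rotation: "${getHeading}"
        }, {
            context: {
                getHeading: function (feature) { return heading; }
            }
        }));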

  • Zend Framework table Relationships

    - by Uffo
    I have a table with over 4 million rows, and I want to split it into several smaller tables, i.e. tables of 50k rows each. I also need to perform searches across these tables. What's the best way to do it? With a JOIN, or? Do you have any better ideas? Best Regards,
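
    A hedged sketch of one way to keep searches simple if the split goes ahead (table names assumed, and assuming MySQL given the Zend Framework context): a UNION ALL view over the chunk tables lets queries keep hitting a single name:

        CREATE VIEW all_rows AS
              SELECT * FROM rows_part1
        UNION ALL SELECT * FROM rows_part2
        UNION ALL SELECT * FROM rows_part3;

        SELECT * FROM all_rows WHERE some_column = 'some value';

    Native MySQL partitioning may serve the same goal without splitting by hand.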

  • Optimising speeds in HDF5 using Pytables

    - by Sree Aurovindh
    The problem concerns the writing speed of the computers (10 * 32-bit machines) and PostgreSQL query performance. I will explain the scenario in detail.

    I have about 80 GB of data (along with appropriate database indexes in place). I am trying to read it from the PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in one HDF5 file. The implementation of HDF5 is not multithreaded or enabled for symmetric multi-processing. I have rented about 10 computers for a day and am trying to write the data in order to speed up my data handling.

    As far as the PostgreSQL table is concerned, the overall record count is 140 million, and I have 5 primary/foreign key referring tables. I am not using joins, as they are not scalable. So for a single lookup I do 6 lookups without joins and write them into HDF5 format. For each lookup I do 6 inserts into each of the table and its corresponding arrays.

    The queries are really simple:

        select * from x.train where tr_id=1    (primary key & indexed)
        select q_t from x.qt where q_id=2      (non-primary key but indexed)

    (and similarly, five such queries)

    Each computer writes two HDF5 files, and hence the total count comes to around 20 files.

    Some calculations and statistics:

        Total number of records:                   14,37,00,000
        Total number of records per file:          143700000/20 = 71,85,000
        The total number of records in each file:  71,85,000 * 5 = 3,59,25,000

    Current PostgreSQL database config: my current machine has 8 GB RAM with an i7 2nd-generation processor. I made the following changes to the PostgreSQL configuration file: shared_buffers = 2 GB, effective_cache_size = 4 GB.

    Note on current performance: I have run it for about ten hours, and the total number of records written for each file so far is about 6,21,000 * 5 = 31,05,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and at this speed it will take about 11 days, which is too long for my experiments.

    Please suggest how I can improve. Questions:

    1. Should I use symmetric multiprocessing on those desktops (they have 2 cores with about 2 GB of RAM)? In that case, what is suggested or preferable?
    2. If I change my PostgreSQL configuration file and increase the RAM, will it enhance my process?
    3. Should I use multithreading? In that case, any links or pointers would be of great help.

    Thanks, Sree Aurovindh V
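
    A hedged observation with a sketch (not from the article): per-row SELECTs usually dominate this kind of workload, so batching the key lookups and buffering the PyTables writes tends to help more than server tuning. Assuming psycopg2 and an already-open PyTables table 'h5table' whose description matches the rows:

        import psycopg2

        conn = psycopg2.connect("dbname=x")
        cur = conn.cursor()

        # fetch many tr_ids per round trip instead of one at a time
        batch = list(range(1, 10001))
        cur.execute("SELECT * FROM x.train WHERE tr_id = ANY(%s)", (batch,))
        rows = cur.fetchall()

        # append in bulk, then flush once per batch rather than per row
        h5table.append(rows)
        h5table.flush()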

  • Is copying /var/lib/mysql a good alternative to mysqldump?

    - by kemp
    Since I'm making a full backup of my entire Debian system, I was wondering whether having a copy of the /var/lib/mysql directory is a viable alternative to dumping tables with mysqldump.

    - Is all the information needed contained in that directory?
    - Can single tables be imported into another MySQL?
    - Can there be problems while restoring those files on a (probably slightly) different MySQL server version?
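
    One hedged caveat worth adding as a sketch: a raw copy of /var/lib/mysql is only guaranteed consistent if mysqld is stopped (or the tables are flushed and locked) while copying:

        # stop, copy, restart (FLUSH TABLES WITH READ LOCK can cover a hot
        # copy of MyISAM tables; InnoDB generally wants a clean shutdown)
        /etc/init.d/mysql stop
        cp -a /var/lib/mysql /backup/mysql-$(date +%F)
        /etc/init.d/mysql start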

  • Perl DBI : How to get schemas

    - by karthi-27
    Hi all, I am using Perl DBI. With it, $dbase->tables() will return all the tables in the corresponding database. In the same way, I want to know the schemas available in the database. Is there any function available for that?
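
    A hedged pointer: DBI's table_info method has a documented special case where passing '%' as the schema (with empty catalog and table) returns one row per available schema; support depends on the DBD driver. A sketch, connection details assumed:

        use strict;
        use warnings;
        use DBI;

        my $dbh = DBI->connect($dsn, $user, $pass, { RaiseError => 1 });

        # catalog='', schema='%', table='' => list of schemas
        my $sth = $dbh->table_info('', '%', '');
        while (my $row = $sth->fetchrow_hashref) {
            print $row->{TABLE_SCHEM}, "\n";
        }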

  • What's the best practice to "look up" Java Enums?

    - by Marcus
    We have a REST API where clients can supply parameters representing values defined on the server in Java Enums. So we can provide a descriptive error, we add this lookup method to each Enum. Seems like we're just copying code (bad). Is there a better practice?

        public enum MyEnum {
            A, B, C, D;

            public static MyEnum lookup(String id) {
                try {
                    return MyEnum.valueOf(id);
                } catch (IllegalArgumentException e) {
                    throw new RuntimeException("Invalid value for my enum blah blah: " + id);
                }
            }
        }

    Update: The default error message provided by valueOf(..) would be "No enum const class a.b.c.MyEnum.BadValue". I would like to provide a more descriptive error from the API.
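
    A hedged sketch of one common de-duplication: a single generic helper keyed on the enum class, so each enum can drop its own lookup method:

        public final class Enums {
            private Enums() {}

            public static <T extends Enum<T>> T lookup(Class<T> type, String id) {
                try {
                    return Enum.valueOf(type, id);
                } catch (IllegalArgumentException e) {
                    throw new IllegalArgumentException(
                        "Invalid value '" + id + "' for enum " + type.getSimpleName(), e);
                }
            }
        }

        // usage: MyEnum value = Enums.lookup(MyEnum.class, "A");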
