Search Results

Search found 27530 results on 1102 pages for 'sql truncate'.


  • SQLite UPSERT - ON DUPLICATE KEY UPDATE

    - by Alix Axel
    MySQL has something like this:

        INSERT INTO visits (ip, hits) VALUES ('127.0.0.1', 1)
        ON DUPLICATE KEY UPDATE hits = hits + 1;

    As far as I know this feature doesn't exist in SQLite. What I want to know is whether there is any way to achieve the same effect without having to execute two queries. Also, if this is not possible, which do you prefer: SELECT + (INSERT or UPDATE), or UPDATE (+ INSERT if the UPDATE fails)?
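
    Newer SQLite releases (3.24.0 and later, which postdate this question) have a native UPSERT clause; on older versions INSERT OR REPLACE is the usual single-statement stand-in. A sketch against the visits table from the question, assuming ip carries a UNIQUE or PRIMARY KEY constraint:

        -- SQLite 3.24+: closest equivalent of MySQL's ON DUPLICATE KEY UPDATE
        INSERT INTO visits (ip, hits) VALUES ('127.0.0.1', 1)
        ON CONFLICT(ip) DO UPDATE SET hits = hits + 1;

        -- Older SQLite, still one statement; note that REPLACE deletes and
        -- re-inserts the row, so any other columns fall back to their defaults.
        INSERT OR REPLACE INTO visits (ip, hits)
        VALUES ('127.0.0.1',
                COALESCE((SELECT hits FROM visits WHERE ip = '127.0.0.1'), 0) + 1);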

    Read the article

  • Unique Keys not recognized by Entity Framework

    - by David Pfeffer
    I have two tables, Reports and Visualizations. Reports has a field, VisualizationID, which points to Visualization's field of the same name via a foreign key. It also has a unique key declared on the field. VisualizationID is not nullable. This means the relationship has to be 0..1 to 1, because every Reports record must have a unique, not null Visualizations record associated with it. The Entity Framework doesn't see it this way. I'm getting the following error: Error 113: Multiplicity is not valid in Role 'Report' in relationship 'FK_Reports_Visualizations'. Because the Dependent Role properties are not the key properties, the upper bound of the multiplicity of the Dependent Role must be *. What's the problem here? How can I make the EF recognize the proper relationship multiplicity?

    Read the article

  • How to create a dynamic Linq Join extension method

    - by Royd Brayshay
    There was a library of dynamic LINQ extension methods released as a sample with VS2008. I'd like to extend it with a Join method. The code below fails with a parameter mismatch exception at run time. Can anyone find the problem?

        public static IQueryable Join(this IQueryable outer, IEnumerable inner,
            string outerSelector, string innerSelector, string resultsSelector,
            params object[] values)
        {
            if (inner == null) throw new ArgumentNullException("inner");
            if (outerSelector == null) throw new ArgumentNullException("outerSelector");
            if (innerSelector == null) throw new ArgumentNullException("innerSelector");
            if (resultsSelector == null) throw new ArgumentNullException("resultsSelector");

            LambdaExpression outerSelectorLambda = DynamicExpression.ParseLambda(outer.ElementType, null, outerSelector, values);
            LambdaExpression innerSelectorLambda = DynamicExpression.ParseLambda(inner.AsQueryable().ElementType, null, innerSelector, values);

            ParameterExpression[] parameters = new ParameterExpression[]
            {
                Expression.Parameter(outer.ElementType, "outer"),
                Expression.Parameter(inner.AsQueryable().ElementType, "inner")
            };
            LambdaExpression resultsSelectorLambda = DynamicExpression.ParseLambda(parameters, null, resultsSelector, values);

            return outer.Provider.CreateQuery(
                Expression.Call(
                    typeof(Queryable), "Join",
                    new Type[]
                    {
                        outer.ElementType,
                        inner.AsQueryable().ElementType,
                        outerSelectorLambda.Body.Type,
                        innerSelectorLambda.Body.Type,
                        resultsSelectorLambda.Body.Type
                    },
                    outer.Expression,
                    inner.AsQueryable().Expression,
                    Expression.Quote(outerSelectorLambda),
                    Expression.Quote(innerSelectorLambda),
                    Expression.Quote(resultsSelectorLambda)));
        }

    I've now fixed it myself; here's the answer. Please vote it up or add a better one.

    Read the article

  • Postgres user/role drop

    - by Grasper
    I am trying to drop a user: drop user testUser; I want to force this to work in a simple manner (Not a million calls)... How can I do this easily? I get this output: ERROR: role "testUser" cannot be dropped because some objects depend on it DETAIL: access to table main.tap_db_version access to table main.user_instance access to table main.target_type access to table main.status_code access to table main.state_space_profile access to table main.service_subscription access to table main.service_instance access to table main.sa_ordnance_weapon_type access to table main.operation access to table main.mission_class access to table main.map_symbol access to table main.ada_weapon_type access to table main.active_process access to table main.acft_type_00_only access to table main.abp_create_params access to table main.exercise access to table main.decl access to table main.data_set access to table main.cancellation_notice access to table main.ato_family_tree access to table main.apportionment_cat_cd access to table main.abp access to table main.alert_settings access to table main.alert_log access to table main.airspace_usage_category access to schema main access to view testUser.top_priority access to view testUser.target_ssm_msn_count access to view testUser.target_air_msn_count access to view testUser.sortie_sum access to view testUser.ref_info access to view testUser.preview_rmk_count access to view testUser.preview_pgm_las_count access to view testUser.preview_pgm_desi_count access to view testUser.preview_objective_count access to view testUser.preview_gfriend_count access to view testUser.preview_escort_msn_req access to view testUser.preview_chaff_data access to view testUser.preview_airmove_seg access to view testUser.preview_aircraft_total access to view testUser.offload_total access to view testUser.objective_count access to view testUser.fuel_planned access to view testUser.ew_data access to view testUser.dual access to view testUser.current_base_inventory access to view testUser.cell_total access to view testUser.asgn_sortie_sum access to view testUser.appor_sorties_planned access to view testUser.airmove_seg access to view testUser.aircraft_total access to view testUser.abp access to table testUser.req_msn_task access to table testUser.req_task_source_req access to table testUser.req_ssm_msn access to table testUser.req_ssm_source access to table testUser.req_msn access to table testUser.req_msn_warnings access to table testUser.req_air_msn access to table testUser.req_src_header access to table testUser.req_msn_ids access to table testUser.req_msn_comment access to table testUser.req_c2_msn access to table testUser.req_c2_source access to table testUser.req_ada_msn access to table testUser.req_ada_vertex access to table testUser.weather_forecast access to table testUser.weather_coords access to table testUser.weather_area access to table testUser.weapon_option access to table testUser.wag_activity access to table testUser.unit_remark access to table testUser.unit_location_turn access to table testUser.unit_iff access to table testUser.unit_coordination access to table testUser.unit_code access to table testUser.trace_point access to table testUser.tasking_agency access to table testUser.task_unit access to table testUser.target_type access to table testUser.tap_db_version access to table testUser.status_code access to table testUser.state_space_threat access to table testUser.state_space_profile access to table testUser.state_space access to table testUser.ssm_mission access to 
table testUser.spins_section_id access to table testUser.spins_codes access to table testUser.spins access to table testUser.unit_location access to table testUser.ship_target_request access to table testUser.service_subscription access to table testUser.service_instance access to table testUser.sa_ordnance_weapon_type access to table testUser.runway access to table testUser.restricted_codes access to table testUser.response_entity access to table testUser.residual_mission access to table testUser.request_objective access to table testUser.request and 194 other objects (see server log for list)
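
    Postgres refuses to drop a role while it still owns objects or holds privileges on them. A sketch of the usual short route (run it in each database that holds the dependencies; REASSIGN is only needed if you want to keep the objects, and DROP OWNED also revokes the role's remaining privileges):

        REASSIGN OWNED BY "testUser" TO postgres;  -- hand testUser's objects to another role
        DROP OWNED BY "testUser";                  -- drop what's left and revoke privileges
        DROP USER "testUser";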

    Read the article

  • Making flexible C# code in MVC2 for Stored Procedures

    - by cc0
    Thanks to Darin Dimitrov's suggestion I got a big step further in understanding good MVC code, but I'm having some problems making it flexible. I implemented Darin's suggested solution, and it works perfectly for single controllers. However, I'm having some trouble implementing it with some flexibility. What I'm looking for is this (a sketch of one possible approach follows below):

    1. To be able to use dynamic column names in the JSON. Instead of using "Column1: 'value', ..." and "Column2: 'value', ..." inside the JSON, I'd like to use, for example, "id: 'value', ..." and "place: 'value', ..." for one stored procedure, and "animal" and "type" in another.

    2. To be able to return a dynamic number of columns depending on which stored procedure is called. Some stored procedures I'll want to read more than 2 rows from; is there a smart way of accomplishing that?

    3. To have numeric (float and integer) values from the database presented inside the JSON without quotes, like this (name and age): { Column1: "John", Column2: 53 }

    I would be very grateful for any feedback and suggestions / code examples I can get here. Even imperfect ones.
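
    A minimal sketch of one way to get all three at once: read the stored procedure into dictionary-backed rows and let the built-in Json() helper serialize them. This is not Darin's original code; "MyDb" and "dbo.GetPlaces" are placeholder names, and System.Web.Mvc, System.Data, System.Data.SqlClient and System.Configuration are assumed to be referenced:

        public ActionResult Places()
        {
            var rows = new List<Dictionary<string, object>>();
            var cs = ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString;

            using (var conn = new SqlConnection(cs))
            using (var cmd = new SqlCommand("dbo.GetPlaces", conn) { CommandType = CommandType.StoredProcedure })
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        var row = new Dictionary<string, object>();
                        for (int i = 0; i < reader.FieldCount; i++)
                        {
                            // Keys come straight from the result set, so each procedure
                            // contributes its own column names and column count; ints and
                            // floats stay numeric, so they are emitted without quotes.
                            row[reader.GetName(i)] = reader.IsDBNull(i) ? null : reader.GetValue(i);
                        }
                        rows.Add(row);
                    }
                }
            }
            return Json(rows, JsonRequestBehavior.AllowGet);
        }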

    Read the article

  • Update primary key value using entity framework

    - by lemkepf
    I'm trying to update one value of a compound primary key from within the Entity Framework and I'm getting this error: "The property 'CustomerID' is part of the object's key information and cannot be modified." Here is my code:

        Dim customer As Customer = (From c In db.Customer Where c.CustomerID = "xxx" AndAlso c.SiteKey = siteKey).FirstOrDefault
        customer.CustomerID = "fasdfasdf"
        db.SaveChanges()

    It seems too simple. Is it true you can't update a primary key within the Entity Framework? I can't find any documentation on the topic. Thanks!
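
    Entity Framework (at least through EF4) treats key properties as fixed once an entity is tracked, so the usual workarounds are to delete the old row and insert a new one, or to change the key outside the context with plain SQL. A sketch of the latter; the table and column names are taken from the question, and any foreign keys referencing the key would need to cascade or be updated as well:

        UPDATE Customer
        SET CustomerID = 'new-id'
        WHERE CustomerID = 'xxx'
          AND SiteKey = @siteKey;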

    Read the article

  • to_date function pl/sql

    - by christine33990
    undefine dates
        declare
            v_dateInput VARCHAR(10);
            v_dates     DATE;
        begin
            v_dateInput := &&dates;
            v_dates := to_date(v_dateInput, 'dd-mm-yyyy');
            DBMS_OUTPUT.put_line(v_dates);
        end;

    Not sure why, whenever I run this code with, for example, an input of 03-03-1990, this error shows up:

        Error report:
        ORA-01847: day of month must be between 1 and last day of month
        ORA-06512: at line 6
        01847. 00000 - "day of month must be between 1 and last day of month"
        *Cause:
        *Action:
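
    A likely cause: the substitution variable is pasted in unquoted, so v_dateInput := &&dates; becomes v_dateInput := 03-03-1990;, which Oracle evaluates as arithmetic (-1990) before TO_DATE ever sees a date string. Quoting the substitution is a minimal fix (a sketch for SQL*Plus / SQL Developer):

        undefine dates
        declare
            v_dateInput VARCHAR2(10);
            v_dates     DATE;
        begin
            v_dateInput := '&&dates';  -- quotes keep 03-03-1990 as the string '03-03-1990'
            v_dates := TO_DATE(v_dateInput, 'dd-mm-yyyy');
            DBMS_OUTPUT.put_line(v_dates);
        end;
        /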

    Read the article

  • Common Table Expressions slow when using a table variable

    - by Phil Haselden
    I have been experimenting with the following (simplified) CTE. When using a table variable (@rootTableVar) the query runs for minutes before I cancel it. Any of the other commented-out methods return in less than a second. If I replace the whole WHERE clause with an INNER JOIN it is fast as well. Any ideas why using a table variable would run so slowly? FWIW: the database contains 2.5 million records and the inner query returns 2 records.

        CREATE TABLE #rootTempTable (RootID int PRIMARY KEY)
        INSERT INTO #rootTempTable VALUES (1360);

        DECLARE @rootTableVar TABLE (RootID int PRIMARY KEY);
        INSERT INTO @rootTableVar VALUES (1360);

        WITH My_CTE AS
        (
            SELECT ROW_NUMBER() OVER(ORDER BY d.DocumentID) rownum, d.DocumentID, d.Title
            FROM [Document] d
            WHERE d.LocationID IN
            (
                SELECT LocationID
                FROM Location
                JOIN @rootTableVar rtv ON Location.RootID = rtv.RootID -- VERY SLOW!
                --JOIN #rootTempTable tt ON Location.RootID = tt.RootID -- Fast
                --JOIN (SELECT 1360 as RootID) AS rt ON Location.RootID = rt.RootID -- Fast
                --WHERE RootID = 1360 -- Fast
            )
        )
        SELECT *
        FROM My_CTE
        WHERE (rownum > 0) AND (rownum <= 100)
        ORDER BY rownum
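
    One thing worth trying (a sketch, not a guaranteed fix): table variables carry no statistics, so the optimizer typically assumes they hold a single row and can pick a poor plan for the IN subquery. On SQL Server 2008 and later, OPTION (RECOMPILE) lets the actual row count be used at execution time:

        SELECT *
        FROM My_CTE
        WHERE (rownum > 0) AND (rownum <= 100)
        ORDER BY rownum
        OPTION (RECOMPILE);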

    Read the article

  • How can I extend DynamicQuery.cs to implement a .Single method?

    - by Yoenhofen
    I need to write some dynamic queries for a project I'm working on. I'm finding that a significant amount of time is being spent by my program in the Count and First methods, so I started to change to .Single, only to find out that there is no such method. The code below was my first attempt at creating one (mostly copied from the Where method), but it's not working. Help?

        public static object Single(this IQueryable source, string predicate, params object[] values)
        {
            if (source == null) throw new ArgumentNullException("source");
            if (predicate == null) throw new ArgumentNullException("predicate");

            LambdaExpression lambda = DynamicExpression.ParseLambda(source.ElementType, typeof(bool), predicate, values);

            return source.Provider.CreateQuery(
                Expression.Call(
                    typeof(Queryable), "Single",
                    new Type[] { source.ElementType },
                    source.Expression,
                    Expression.Quote(lambda)));
        }
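
    Queryable.Single returns a single element rather than a sequence, so CreateQuery (which expects to build another IQueryable) is the likely sticking point. A sketch of one possible fix under that assumption, executing the call expression instead (same DynamicExpression helper from the sample library):

        public static object Single(this IQueryable source, string predicate, params object[] values)
        {
            if (source == null) throw new ArgumentNullException("source");
            if (predicate == null) throw new ArgumentNullException("predicate");

            LambdaExpression lambda = DynamicExpression.ParseLambda(source.ElementType, typeof(bool), predicate, values);

            // Single produces one element, not an IQueryable, so the expression
            // has to be executed rather than wrapped in a new query.
            return source.Provider.Execute(
                Expression.Call(
                    typeof(Queryable), "Single",
                    new Type[] { source.ElementType },
                    source.Expression,
                    Expression.Quote(lambda)));
        }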

    Read the article

  • Sharepoint/WSS Reporting Services Integration woes

    - by mhollers
    After a number of failed attempts I seem to have successfully installed the Reporting Services add-in to my WSS farm. However, I seem to be missing most of the enhanced functionality, e.g. no Report Library template and no Report Center site template; the only additional functionality available is the Report Viewer web part.

    Background: a 2-server WSS 3.0 farm, with CA (Central Administration), the WFE (web front end) and the Reporting Services add-in installed on one box, and SQL Server 2005 SP2 with Reporting Services (RS) and all databases installed on the other. I have a VM environment set up and have rolled this back and repeated the process a number of times. I have configured RS within CA and activated the 'Report Server Integration Feature'. Within 'Site Settings' I have a 'Reporting Services' heading with a 'Manage shared schedules' item/link; I'm not sure if there should be other options.

    I was of the understanding that to view reports within SharePoint I could either create a new site using the 'Report Center' template or add a Report Library to an existing site, neither of which seems available. I am at a loss as to what to do, as all the online information seems to deal with installation issues/errors, which I seem to have eventually got past.

    Read the article

  • Dealing with huge SQL resultset

    - by Dave McClelland
    I am working with a rather large MySQL database (several million rows) with a column storing blob images. The application attempts to grab a subset of the images and runs some processing algorithms on them. The problem I'm running into is that, due to the rather large dataset I have, the resultset my query returns is too large to store in memory. For the time being, I have changed the query to not return the images. While iterating over the resultset, I run another select which grabs the individual image that relates to the current record. This works, but the tens of thousands of extra queries have resulted in a performance decrease that is unacceptable. My next idea is to limit the original query to 10,000 results or so, and then keep querying over spans of 10,000 rows. This seems like a middle-of-the-road compromise between the two approaches. I feel that there is probably a better solution that I am not aware of. Is there another way to have only portions of a gigantic resultset in memory at a time? Cheers, Dave McClelland
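
    One alternative to OFFSET-style batches is keyset paging, sketched below; the table and column names (images, id, image_blob) are assumptions, and it relies on an indexed, monotonically increasing key. Each batch picks up where the last one ended, so only ~10,000 blobs are in memory at a time and MySQL never has to scan past a growing offset:

        SELECT id, image_blob
        FROM images
        WHERE id > @last_seen_id          -- 0 on the first pass
        ORDER BY id
        LIMIT 10000;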

    Read the article

  • ODBC Connection String Problem

    - by Brett
    Hi there, I am having major trouble connecting to my database via ODBC. The db is local (but I have a mirror on a virtual machine), so I am trying to use the connection string:

        Dsn=MonetDB;host=TARBELL

    where TARBELL is the name of my computer. However, it doesn't connect. But this string does:

        Dsn=MonetDB;host=localhost

    as does:

        Dsn=MonetDB

    Can anyone explain this? I am at a complete loss. I have taken down my firewalls (at least until I get this figured out), so that can't be the problem. I eventually want to point the connection at the mirrored virtual machine running another instance of the database. Many thanks, Brett

    Read the article

  • How to get the millisecond value from a Timestamp field in firebird with Delphi 2007

    - by Re0sless
    I have a Firebird database (server version 2.1.3) and am connecting to it from Delphi 2007 using the dbExpress objects (with the InterBase driver). One of the tables in the database looks something like this:

        CREATE TABLE MYTABLE (
            MYDATE Timestamp NOT NULL,
            MYINDEX Integer NOT NULL,
            ... Snip ...
            PRIMARY KEY (MYDATE, MYINDEX)
        );

    I can add to the table OK, and in FlameRobin the timestamp field shows a millisecond value. But when I do a select all (select * from MYTABLE) on the table I cannot get the millisecond value; it is always returned as 000. This causes major problems because it is part of the primary key (unfortunately I didn't design the table and don't have authority to change it). I have tried the following to get the millisecond value:

        sql1.FieldByName('MYDATE').AsDateTime;
        sql1.FieldByName('MYDATE').AsSQLTimeStamp;
        sql1.FieldByName('MYDATE').AsString;
        sql1.FieldByName('MYDATE').AsFloat;

    But they all return 14/09/2009 14:25:06.000 when formatted. How do I retrieve the milliseconds from a timestamp?

    UPDATE: In case this helps anyone in the future, here are the drivers I tried for dbExpress and the results:

        Embarcadero - dbExpress Driver for Firebird (Delphi 2010 trial version) - milliseconds not supported in timestamps.
        Chau Chee Yang's - dbExpress Driver for Firebird (Delphi 2007) - milliseconds not supported in timestamps.
        UpScene - InterXpress for Firebird (Delphi 2007) - milliseconds are supported in timestamps.
        DevArt - dbExpress Driver for InterBase (Delphi 2007) - milliseconds are supported in timestamps.
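
    If the driver keeps truncating the fractional seconds, one workaround is to ask the server for them as a separate numeric column; Firebird 2.1 added MILLISECOND to EXTRACT, so this sketch should work against a 2.1.3 server:

        SELECT MYDATE,
               MYINDEX,
               EXTRACT(MILLISECOND FROM MYDATE) AS MYDATE_MS
        FROM MYTABLE;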

    Read the article

  • RDLC: How to print multiple tables in one report

    - by Eduardo Molteni
    I'm generating the RDLC XML schema and showing the report in the ReportViewer control. No problems there. Now I want a report with two tables, with two different datasets. Something like this gets generated:

        <Body>
          <ReportItems>
            <Table Name="Table1">
              ....
            </Table>
            <Table Name="Table2">
              ....
            </Table>
          </ReportItems>
        </Body>

    But when printed, both tables start from the top, printing one table over the other (not nice). Is there a way to tell that Table2 should start after Table1?

    Update: I've tried with List with a fake DataSource, but it does not work.
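
    Report items in RDL are positioned absolutely inside the Body, and both tables here default to Top = 0, which is why they print on top of each other. Giving the second table an explicit Top below the first (or wrapping both in a Rectangle so they stack) is one way out; a hedged sketch with placeholder sizes:

        <Body>
          <ReportItems>
            <Table Name="Table1">
              <Top>0in</Top>
              <Height>2in</Height>
              ....
            </Table>
            <Table Name="Table2">
              <!-- starts below Table1; if Table1 grows at run time,
                   items positioned below it are pushed further down -->
              <Top>2.25in</Top>
              ....
            </Table>
          </ReportItems>
        </Body>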

    Read the article

  • SSIS Send Mail task limited to 255 characters in address?

    - by CodeByMoonlight
    Is this a bug, or some hidden limit I can't find any documentation about? When creating a Send Mail Task in SSIS 2008, the TO, CC and BCC fields seem to have a hidden limit of 255 characters. I'm aware this is the standard limit for individual email addresses, but all three are commonly used for multiple addresses and the comment for the To field even says "separate the recipients with semicolons". But nevertheless, it truncates the address to a maximum of 255 characters. Bug, non-obvious standard, or something I'm missing? Any way around this? We were trying to build a CC list dynamically, but this has caused a rethink.
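
    One common way around the limit is to skip the Send Mail Task and send the message from a Script Task with System.Net.Mail, which imposes no 255-character cap on the address lists. A minimal sketch assuming an SSIS 2008 C# Script Task; the two SSIS variables (User::CcList, User::SmtpServer) and the addresses are placeholders, and "using System.Net.Mail;" goes at the top of the generated ScriptMain:

        public void Main()
        {
            // Prebuilt, arbitrarily long semicolon-separated CC list and SMTP server name.
            string ccList = (string)Dts.Variables["User::CcList"].Value;
            string server = (string)Dts.Variables["User::SmtpServer"].Value;

            using (var message = new MailMessage())
            {
                message.From = new MailAddress("etl@example.com");
                message.To.Add(new MailAddress("ops@example.com"));
                foreach (string cc in ccList.Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries))
                {
                    message.CC.Add(new MailAddress(cc.Trim()));
                }
                message.Subject = "Nightly load complete";
                message.Body = "CC list built dynamically; no 255-character limit applies here.";

                new SmtpClient(server).Send(message);
            }

            Dts.TaskResult = (int)ScriptResults.Success;
        }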

    Read the article

  • Database design for summarized data

    - by holden
    I have a new table I'm going to add to a bunch of other summarized data, basically to take some of the load off by calculating weekly averages. My question is whether I would be better off with one model over the other: one model with the day of the week as a column plus an additional column for price, or another model with a series of fields for the days of the week, each taking a price. I'd like to know which would save me in speed and/or headaches, or at least what the trade-off is. I.e.:

        ID  OBJECT_ID  MON  TUE  WED  THU  FRI  SAT  SUN  SOURCE

    or

        ID  OBJECT_ID  DAYOFWEEK  PRICE  SOURCE
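
    For what it's worth, a MySQL-flavored sketch of the narrow (one row per object per day) model; the types and the UNIQUE key are assumptions. It keeps "average price per day of week" queries simple and makes adding new sources cheap, at the cost of up to seven rows per object instead of one:

        CREATE TABLE weekly_price (
            id         INT UNSIGNED  NOT NULL AUTO_INCREMENT PRIMARY KEY,
            object_id  INT UNSIGNED  NOT NULL,
            dayofweek  TINYINT       NOT NULL,   -- 1 = Monday ... 7 = Sunday
            price      DECIMAL(10,2) NOT NULL,
            source     VARCHAR(50)   NOT NULL,
            UNIQUE KEY uq_object_day_source (object_id, dayofweek, source)
        );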

    Read the article

  • ibatis isNotEmpty with multiple variables

    - by hibernate
    Suppose I have a massive table called inactiveUsers and a search form. I want to conditionally join the inactiveUsers table if any user-related characteristic is chosen (address, name, phoneNumber, etc.). Is there any way to do this without the following?

        <isNotEmpty property="address">JOIN inactiveUsers</isNotEmpty>
        <isNotEmpty property="phoneNumber">JOIN inactiveUsers</isNotEmpty>
        <isNotEmpty property="name">JOIN inactiveUsers</isNotEmpty>

    and so on for another 10-20 isNotEmpty clauses. I would like to do something like:

        <isAnyNotEmpty properties="address, phoneNumber, name, ....">JOIN inactiveUsers</isAnyNotEmpty>

    Is this possible with iBATIS? If so, how?
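
    iBATIS 2.x has no isAnyNotEmpty tag, but one way to collapse the 10-20 checks into a single condition is to precompute a flag on the parameter object in Java and test only that in the map (e.g. a single <isEqual property="userCriteriaPresent" compareValue="true">JOIN inactiveUsers</isEqual>). A sketch; the property and field names are hypothetical:

        // On the search-criteria bean handed to queryForList(): one derived
        // flag instead of many separate isNotEmpty checks in the SQL map.
        public boolean isUserCriteriaPresent() {
            // ...extend with the other user-related fields as needed
            return isNotBlank(address)
                || isNotBlank(phoneNumber)
                || isNotBlank(name);
        }

        private static boolean isNotBlank(String s) {
            return s != null && s.trim().length() > 0;
        }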

    Read the article

  • Silverlight: After adding table no datacontext classes are available for the new DomainService class

    - by Gabriel
    I'm using a Silverlight 4 WCF RIA Services demo application that uses LINQ to SQL, and that works well. I add a new database table, drag the new table onto the LINQ to SQL designer and build the project. I then add a new DomainService class, but I get a dialog whose only option is to create an empty domain service class, and no DataContext classes are available. What have I missed? Thanks in advance, Gabriel

    Read the article

  • Check username and password in LINQ query

    - by b0x0rz
    This LINQ query:

        var users = from u in context.Users
                    where u.UserEMailAdresses.Any(e1 => e1.EMailAddress == userEMail)
                       && u.UserPasswords.Any(e2 => e2.PasswordSaltedHash == passwordSaltedHash)
                    select u;
        return users.Count();

    returns 1 even when there is nothing in the password table. How come? What I am trying to do is get the values of email and passwordHash from two separate tables (UserEMailAddresses and UserPasswords) linked via foreign keys to the third table (Users). It should be simple - checking whether the email and password from the form match the database - but it is not working for me. I get 1 (for the count) even when there are NO entries in the UserPasswords table. Is the LINQ query above completely wrong, or...?

    Read the article

  • C#: Fill DataGridView From Anonymous Linq Query

    - by mdvaldosta
    // From my form
        BindingSource bs = new BindingSource();

        private void fillStudentGrid()
        {
            bs.DataSource = Admin.GetStudents();
            dgViewStudents.DataSource = bs;
        }

        // From the Admin class
        public static List<Student> GetStudents()
        {
            DojoDBDataContext conn = new DojoDBDataContext();
            var query = (from s in conn.Students
                         select new Student
                         {
                             ID = s.ID,
                             FirstName = s.FirstName,
                             LastName = s.LastName,
                             Belt = s.Belt
                         }).ToList();
            return query;
        }

    I'm trying to fill a DataGridView control in WinForms, and I only want a few of the values. The code compiles, but throws a runtime error: "Explicit construction of entity type 'DojoManagement.Student' in query is not allowed." Is there a way to get it working in this manner?
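
    LINQ to SQL refuses to construct a mapped entity type inside a query projection, which is what the error is complaining about. Projecting into a separate display-only class (or an anonymous type) avoids it; a sketch, where StudentRow is a hypothetical DTO and the property types are guesses:

        // Hypothetical display-only class; not mapped to any table.
        public class StudentRow
        {
            public int ID { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public string Belt { get; set; }
        }

        public static List<StudentRow> GetStudents()
        {
            using (var conn = new DojoDBDataContext())
            {
                return (from s in conn.Students
                        select new StudentRow
                        {
                            ID = s.ID,
                            FirstName = s.FirstName,
                            LastName = s.LastName,
                            Belt = s.Belt
                        }).ToList();
            }
        }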

    Read the article

  • Wordpress SQL Select Multiple Meta Values / Meta Keys / Custom Fields

    - by Wes
    I am trying to modify a WordPress / MySQL function to display a little more information. I'm currently running the following query, which selects the post, joins the 'postmeta' table and gets the info where meta_key = '_liked':

        function most_liked_posts($numberOf, $before, $after, $show_count) {
            global $wpdb;
            $request = "SELECT ID, post_title, meta_value FROM $wpdb->posts, $wpdb->postmeta";
            $request .= " WHERE $wpdb->posts.ID = $wpdb->postmeta.post_id";
            $request .= " AND post_status='publish' AND post_type='post' AND meta_key='_liked' ";
            $request .= " ORDER BY $wpdb->postmeta.meta_value+0 DESC LIMIT $numberOf";
            $posts = $wpdb->get_results($request);
            foreach ($posts as $post) {
                $post_title = stripslashes($post->post_title);
                $permalink = get_permalink($post->ID);
                $post_count = $post->meta_value;
                echo $before.'<a href="' . $permalink . '" title="' . $post_title.'" rel="nofollow">' . $post_title . '</a>';
                echo $show_count == '1' ? ' ('.$post_count.')' : '';
                echo $after;
            }
        }

    The important part being:

        $post_count = $post->meta_value;

    But now I want to also grab a value that is attached to each post called wbphoto. How do I specify that $post_count = _liked and $photo = wbphoto? Here is a screen cap of my phpMyAdmin.
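
    One way to pull both meta values in a single query (a sketch, not the exact plugin code) is to join the postmeta table twice under different aliases, one per meta key; the loop can then read $post->liked_count and $post->wbphoto. The wp_ prefixes are the defaults - inside the function you would keep using $wpdb->posts and $wpdb->postmeta:

        SELECT p.ID,
               p.post_title,
               liked.meta_value AS liked_count,
               photo.meta_value AS wbphoto
        FROM wp_posts p
        JOIN wp_postmeta liked
             ON liked.post_id = p.ID AND liked.meta_key = '_liked'
        LEFT JOIN wp_postmeta photo
             ON photo.post_id = p.ID AND photo.meta_key = 'wbphoto'
        WHERE p.post_status = 'publish'
          AND p.post_type = 'post'
        ORDER BY liked.meta_value+0 DESC
        LIMIT 10;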

    Read the article

  • Error using Dynamic Data Filtering: missing datasource

    - by sebastiaan
    I am trying to use the ASP.NET Dynamic Data Filtering project, but I'm running into a problem during the configuration. I'm following the instructions on the author's blog, and everything works as described until it tells me to change the data source using the designer view. I am told to select the "GridDataSource" in the "Configure data source" wizard, but this option is not there. I get all of the classes in my project, including the DataContext that was generated by LINQ. When I choose "Show only DataContext objects", the dropdown ("Choose your context object:") is completely empty. When I turn off the checkbox and choose my DataContext class, I get asked which table I want and all that. But, as the whole purpose of a Dynamic Data site is NOT to use one single table, that's not much help. So I've looked at the instructions again and copied the resulting data source from the example:

        <asp:DynamicLinqDataSource ID="GridDataSource" runat="server"
            EnableDelete="True" EnableUpdate="True"></asp:DynamicLinqDataSource>

    which is exactly what I had, without the "WhereParameters" nodes in there. Now, when I run the list page, I get an exception about a missing data source from the filtering component. Of course, when I remove the DynamicFilterRepeater, it works again. This is the meat of the exception:

        [InvalidOperationException: Missing DataSource]
        Catalyst.Web.DynamicData.DynamicFilterRepeater.GetTable() in D:\Catalyst\Projects\DynamicData\Project\Trunk\DynamicData\DynamicData\DynamicFilterRepeater.cs:74
        Catalyst.Web.DynamicData.DynamicFilterRepeater.GetFilters() in D:\Catalyst\Projects\DynamicData\Project\Trunk\DynamicData\DynamicData\DynamicFilterRepeater.cs:81
        Catalyst.Web.DynamicData.DynamicFilterRepeater.OnInit(EventArgs e) in D:\Catalyst\Projects\DynamicData\Project\Trunk\DynamicData\DynamicData\DynamicFilterRepeater.cs:106

    How do I make the DynamicFilterRepeater recognize my data source? I'm using VS2010 Pro on a Win7 machine.

    Read the article
