Search Results

Search found 1619 results on 65 pages for 'itai alter'.


  • How to get a SQL function to run a different query and return the value from either query?

    - by RoguePlanetoid
    I need a function but cannot seem to get it quite right; I have looked at examples here and elsewhere and still cannot get this just right. I need an optional item to be included in my query. I have this query (which works): SELECT TOP 100 PERCENT SKU, Description, LEN(CONVERT(VARCHAR (1000),Description)) AS LenDesc FROM tblItem WHERE Title = @Title AND Manufacturer = @Manufacturer ORDER BY LenDesc DESC This works within a function; however, the Manufacturer is optional for this search - which is meant to find the description of a similar item. If no Manufacturer is present, the other query is: SELECT TOP 100 PERCENT SKU, Description, LEN(CONVERT(VARCHAR (1000),Description)) AS LenDesc FROM tblItem WHERE Title = @Title ORDER BY LenDesc DESC which is missing the Manufacturer. How do I get my function to use either query based on whether the Manufacturer value is present or not? The reason is that I will have a function which first checks an SKU for a Description; if it is not present, it uses this method to get a Description from a similar product, then updates the product being added with the similar product's description. Here is the function so far: ALTER FUNCTION [dbo].[GetDescriptionByTitleManufacturer] ( @Title varchar(400), @Manufacturer varchar(160) ) RETURNS TABLE AS RETURN ( SELECT TOP 100 PERCENT SKU, Description, LEN(CONVERT(VARCHAR (1000),Description)) AS LenDesc FROM tblItem WHERE Title = @Title AND Manufacturer = @Manufacturer ORDER BY LenDesc DESC ) I've tried adding BEGINs and IF...ELSEs but get errors or syntax problems each way I try it. I want to be able to do something like this pseudo-function (which does not work): ALTER FUNCTION [dbo].[GetDescriptionByTitleManufacturer] ( @Title varchar(400), @Manufacturer varchar(160) ) RETURNS TABLE AS BEGIN IF (@Manufacturer = Null) RETURN ( SELECT TOP 100 PERCENT SKU, Description, LEN(CONVERT(VARCHAR (1000),Description)) AS LenDesc FROM tblItem WHERE Title = @Title ORDER BY LenDesc DESC ) ELSE RETURN ( SELECT TOP 100 PERCENT SKU, Description, LEN(CONVERT(VARCHAR (1000),Description)) AS LenDesc FROM tblItem WHERE Title = @Title AND Manufacturer = @Manufacturer ORDER BY LenDesc DESC ) END
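    One way to avoid the IF...ELSE entirely is to keep a single inline table-valued function and make the Manufacturer filter conditional inside the WHERE clause. A minimal sketch, untested and assuming the tblItem columns from the question (pass NULL for @Manufacturer when there is no manufacturer):

        ALTER FUNCTION [dbo].[GetDescriptionByTitleManufacturer]
        (
            @Title varchar(400),
            @Manufacturer varchar(160)  -- pass NULL to skip this filter
        )
        RETURNS TABLE
        AS
        RETURN
        (
            SELECT TOP 100 PERCENT SKU, Description,
                   LEN(CONVERT(VARCHAR(1000), Description)) AS LenDesc
            FROM tblItem
            WHERE Title = @Title
              AND (@Manufacturer IS NULL OR Manufacturer = @Manufacturer)
            ORDER BY LenDesc DESC
        )

    Note that a comparison such as @Manufacturer = NULL never evaluates to true, which is why the optional parameter is tested with IS NULL rather than with =.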

    Read the article

  • SQL Server insert performance

    - by Jose
    I have an insert query that gets generated like this: INSERT INTO InvoiceDetail (LegacyId,InvoiceId,DetailTypeId,Fee,FeeTax,Investigatorid,SalespersonId,CreateDate,CreatedById,IsChargeBack,Expense,RepoAgentId,PayeeName,ExpensePaymentId,AdjustDetailId) VALUES(1,1,2,1500.0000,0.0000,163,1002,'11/30/2001 12:00:00 AM',1116,0,550.0000,850,NULL,@ExpensePay1,NULL); DECLARE @InvDetail1 INT; SET @InvDetail1 = (SELECT @@IDENTITY); This query is generated for only 110K rows, yet it takes 30 minutes for all of these queries to execute. I checked the query plan and the largest % nodes are a Clustered Index Insert at 57% query cost, which has a long XML that I don't want to post, and a Table Spool at 38% query cost: <RelOp AvgRowSize="35" EstimateCPU="5.01038E-05" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="1" LogicalOp="Eager Spool" NodeId="80" Parallel="false" PhysicalOp="Table Spool" EstimatedTotalSubtreeCost="0.0466109"> <OutputList> <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvoiceId" /> <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvestigatorId" /> <ColumnReference Column="Expr1054" /> <ColumnReference Column="Expr1055" /> </OutputList> <Spool PrimaryNodeId="3" /> </RelOp> So my question is: what can I do to improve the speed of this thing? I already run ALTER TABLE TABLENAME NOCHECK CONSTRAINTS ALL before the queries and then ALTER TABLE TABLENAME NOCHECK CONSTRAINTS ALL after the queries, and that hardly shaved anything off of the time. Now, I am running these queries in a .NET application that uses a SqlCommand object to send the query. I then tried to output the SQL commands to a file and execute it using sqlcmd, but I wasn't getting any updates on how it was doing, so I gave up on that. Any ideas or hints or help?
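    Two generic things often help with this row-by-row pattern; the snippet below is only an illustrative sketch (it assumes each statement is currently sent with autocommit on, and it is not tuned to the real schema): batch the inserts inside explicit transactions so every row does not pay for its own log flush, and use SCOPE_IDENTITY() instead of @@IDENTITY, which can also pick up identity values generated by triggers.

        SET NOCOUNT ON;
        BEGIN TRANSACTION;   -- commit every few thousand rows rather than once per row

        INSERT INTO InvoiceDetail (LegacyId, InvoiceId, DetailTypeId, Fee, FeeTax,
                                   Investigatorid, SalespersonId, CreateDate, CreatedById,
                                   IsChargeBack, Expense, RepoAgentId, PayeeName,
                                   ExpensePaymentId, AdjustDetailId)
        VALUES (1, 1, 2, 1500.0000, 0.0000, 163, 1002, '11/30/2001 12:00:00 AM', 1116, 0,
                550.0000, 850, NULL, @ExpensePay1, NULL);

        DECLARE @InvDetail1 INT;
        SET @InvDetail1 = SCOPE_IDENTITY();  -- identity from this scope only

        -- ... more inserts ...
        COMMIT TRANSACTION;

    For 110K rows, a set-based load (SqlBulkCopy from .NET, or BULK INSERT into a staging table) is usually faster still than any row-by-row approach.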

    Read the article

  • rake test not copying development postgres db with sequences

    - by Robert Crida
    I am trying to develop a Rails application on PostgreSQL, using a sequence to increment a field instead of a default Ruby approach based on validates_uniqueness_of. This has proved challenging for a number of reasons: 1. this is a migration of an existing table, not a new table or column; 2. using the parameter :default => "nextval('seq')" didn't work because it tries to set it in parentheses; 3. eventually I got the migration working in 2 steps: change_column :work_commencement_orders, :wco_number_suffix, :integer, :null => false#, :options => "set default nextval('wco_number_suffix_seq')" execute %{ ALTER TABLE work_commencement_orders ALTER COLUMN wco_number_suffix SET DEFAULT nextval('wco_number_suffix_seq'); } Now this would appear to have done the correct thing in the development database, and the schema looks like: wco_number_suffix | integer | not null default nextval('wco_number_suffix_seq'::regclass) However, the tests are failing with PGError: ERROR: null value in column "wco_number_suffix" violates not-null constraint : INSERT INTO "work_commencement_orders" ("expense_account_id", "created_at", "process_id", "vo2_issued_on", "wco_template", "updated_at", "notes", "process_type", "vo_number", "vo_issued_on", "vo2_number", "wco_type_id", "created_by", "contractor_id", "old_wco_type", "master_wco_number", "deadline", "updated_by", "detail", "elective_id", "authorization_batch_id", "delivery_lat", "delivery_long", "operational", "state", "issued_on", "delivery_detail") VALUES(226, '2010-05-31 07:02:16.764215', 728, NULL, E'Default', '2010-05-31 07:02:16.764215', NULL, E'Procurement::Process', NULL, NULL, NULL, 226, NULL, 276, NULL, E'MWCO-213', '2010-06-14 07:02:16.756952', NULL, E'Name 4597', 220, NULL, NULL, NULL, 'f', E'pending', NULL, E'728 Test Road; Test Town; 1234; Test Land') RETURNING "id" The explanation can be found when you inspect the schema of the test database: wco_number_suffix | integer | not null So what happened to the default? I tried adding task: template: smmt_ops_development to the database.yml file, which has the effect of issuing create database smmt_ops_test template = "smmt_ops_development" encoding = 'utf8' I have verified that if I issue this then it does in fact copy the default nextval. So clearly Rails is doing something after that to suppress it again. Any suggestions as to how to fix this? Thanks, Robert
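    A generic PostgreSQL check (not Rails-specific) to confirm whether the default actually survived into the test database is to compare the column metadata in both databases:

        SELECT table_name, column_name, column_default
        FROM information_schema.columns
        WHERE table_name = 'work_commencement_orders'
          AND column_name = 'wco_number_suffix';

    If the default is present in development but missing in test, one likely explanation is that the test schema is rebuilt from schema.rb rather than copied from development, so a default added via a raw execute is lost in the dump/load round trip; setting config.active_record.schema_format to :sql is a commonly suggested workaround, though that is an assumption about this particular setup rather than a confirmed diagnosis.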

    Read the article

  • oracle sequence init

    - by gospodin
    I wanted to export 3 tables from db1 into db2. Before the export starts, I will create the sequences for those 3 tables: CREATE SEQUENCE TEST_SEQ START WITH 1 INCREMENT BY 1; After the export, I will reinitialize the sequence values to match the max(id) + 1 from the table: CREATE OR REPLACE PROCEDURE "TEST_SEQUENCE" AUTHID CURRENT_USER is v_num number; begin select max(ID) into v_num from TABLE_1; EXECUTE IMMEDIATE 'ALTER SEQUENCE TEST_SEQ INCREMENT BY ' || v_num; EXECUTE IMMEDIATE 'ALTER SEQUENCE 1TEST_SEQ INCREMENT BY 1'; end; / show errors; execute TEST_SEQ; This procedure compiles and executes without problems, but when I want to check the last value of the sequence, like select TEST_SEQ.nextval from dual; I still get "1". Can someone tell me why my procedure did not impact my sequence? PS: I am using Oracle SQL Developer to pass the SQL. Thanks
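    For what it's worth, ALTER SEQUENCE ... INCREMENT BY only changes the step size; the sequence does not actually advance until NEXTVAL is called, and the second ALTER here resets the increment back to 1 before any NEXTVAL happens (the name 1TEST_SEQ and the call execute TEST_SEQ instead of the procedure name TEST_SEQUENCE also look like typos). A common resync pattern, sketched here as an untested illustration against the TABLE_1/TEST_SEQ names from the question:

        DECLARE
          v_max NUMBER;
          v_cur NUMBER;
        BEGIN
          SELECT NVL(MAX(id), 0) INTO v_max FROM table_1;
          SELECT test_seq.NEXTVAL INTO v_cur FROM dual;      -- where the sequence is now
          IF v_max > v_cur THEN
            EXECUTE IMMEDIATE 'ALTER SEQUENCE test_seq INCREMENT BY ' || (v_max - v_cur);
            SELECT test_seq.NEXTVAL INTO v_cur FROM dual;    -- jump to max(id)
            EXECUTE IMMEDIATE 'ALTER SEQUENCE test_seq INCREMENT BY 1';
          END IF;
        END;
        /

    After this block the next TEST_SEQ.NEXTVAL returns max(id) + 1.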

    Read the article

  • Anonymous union definition/declaration in a macro GNU vs VS2008

    - by Alan_m
    I am attempting to alter an IAR specific header file for a lpc2138 so it can compile with Visual Studio 2008 (to enable compatible unit testing). My problem involves converting register definitions to be hardware independent (not at a memory address) The "IAR-safe macro" is: #define __IO_REG32_BIT(NAME, ADDRESS, ATTRIBUTE, BIT_STRUCT) \ volatile __no_init ATTRIBUTE union \ { \ unsigned long NAME; \ BIT_STRUCT NAME ## _bit; \ } @ ADDRESS //declaration //(where __gpio0_bits is a structure that names //each of the 32 bits as P0_0, P0_1, etc) __IO_REG32_BIT(IO0PIN,0xE0028000,__READ_WRITE,__gpio0_bits); //usage IO0PIN = 0x0xAA55AA55; IO0PIN_bit.P0_5 = 0; This is my comparable "hardware independent" code: #define __IO_REG32_BIT(NAME, BIT_STRUCT)\ volatile union \ { \ unsigned long NAME; \ BIT_STRUCT NAME##_bit; \ } NAME; //declaration __IO_REG32_BIT(IO0PIN,__gpio0_bits); //usage IO0PIN.IO0PIN = 0xAA55AA55; IO0PIN.IO0PIN_bit.P0_5 = 1; This compiles and works but quite obviously my "hardware independent" usage does not match the "IAR-safe" usage. How do I alter my macro so I can use IO0PIN the same way I do in IAR? I feel this is a simple anonymous union matter but multiple attempts and variants have proven unsuccessful. Maybe the IAR GNU compiler supports anonymous unions and vs2008 does not. Thank you.

    Read the article

  • Automated Oracle Schema Migration Tool

    - by Dave Jarvis
    What are some tools (commercial or OSS) that provide a GUI-based mechanism for creating schema upgrade scripts? To be clear, here are the tool responsibilities: Obtain connection to recent schema version (called "source"). Obtain connection to previous schema version (called "target"). Compare all schema objects between source and target. Create a script to make the target schema equivalent to the source schema ("upgrade script"). Create a rollback script to revert the source schema, used if the upgrade script fails (at any point). Create individual files for schema objects. The software must: Use ALTER TABLE instead of DROP and CREATE for renamed columns. Work with Oracle 10g or greater. Create scripts that can be batch executed (via command-line). Trivial installation process. (Bonus) Create scripts that can be executed with SQL*Plus. Here are some examples (from StackOverflow, ServerFault, and Google searches): Change Manager Oracle SQL Developer Software that does not meet the criteria, or cannot be evaluated, includes: TOAD PL/SQL Developer - Invalid SQL*Plus statements. Does not produce ALTER statements. SQL Fairy - No installer. Complex installation process. Poorly documented. DBDiff - Crippled data set evaluation, poor customer support. OrbitDB - Crippled data set evaluation. SchemaCrawler - No easily identifiable download version for Oracle databases. SQL Compare - SQL Server, not Oracle. LiquiBase - Requires changing the development process. No installer. Manually edit config files. Does not recognize its own baseUrl parameter. The only acceptable crippling of the evaluation version is by time. Crippling by restricting the number of tables and views hides possible bugs that are only visible in the software during the attempt to migrate hundreds of tables and views.

    Read the article

  • Writing/Reading struct w/ dynamic array through pipe in C

    - by anrui
    I have a struct with a dynamic array inside of it: struct mystruct{ int count; int *arr; }mystruct_t; and I want to pass this struct down a pipe in C and around a ring of processes. When I alter the value of count in each process, it is changed correctly. My problem is with the dynamic array. I am allocating the array as such: mystruct_t x; x.arr = malloc( howManyItemsDoINeedToStore * sizeof( int ) ); Each process should read from the pipe, do something to that array, and then write it to another pipe. The ring is set up correctly; there's no problem there. My problem is that all of the processes, except the first one, are not getting a correct copy of the array. I initialize all of the values to, say, 10 in the first process; however, they all show up as 0 in the subsequent ones. for( j = 0; j < howManyItemsDoINeedToStore; j++ ){ x.arr[j] = 10; } Initially: 10 10 10 10 10 After Proc 1: 9 10 10 10 15 After Proc 2: 0 0 0 0 0 After Proc 3: 0 0 0 0 0 After Proc 4: 0 0 0 0 0 After Proc 5: 0 0 0 0 0 After Proc 1: 9 10 10 10 15 After Proc 2: 0 0 0 0 0 After Proc 3: 0 0 0 0 0 After Proc 4: 0 0 0 0 0 After Proc 5: 0 0 0 0 0 Now, if I alter my code to, say, struct mystruct{ int count; int arr[10]; }mystruct_t; everything is passed correctly down the pipe, no problem. I am using read and write in C: write( STDOUT_FILENO, &x, sizeof( mystruct_t ) ); read( STDIN_FILENO, &x, sizeof( mystruct_t ) ); Any help would be appreciated. Thanks in advance!

    Read the article

  • Java: How to work around the lack of an Equatable interface?

    - by java.is.for.desktop
    Hello, everyone! As far as I know, things such as SortedMap or SortedSet use compareTo (rather than equals) on Comparable<?> types for checking equality (contains, containsKey). But what if certain types are equatable by concept, but not comparable? I have to declare a Comparator<?> and override the method int compare(T o1, T o2). OK, I can return 0 for instances which are considered equal. But, for unequal instances, what do I return when an order is not evident? Is the approach of using SortedMap or SortedSet on equatable but (by concept) not comparable types good anyway? Thank you! EDIT: I don't want to store things sorted, but if I used the "usual" Map and Set, I couldn't "override" the equality behavior. EDIT 2: Why I can't just override equals(...): I need to alter the equality behavior of a foreign class, which I can't edit. EDIT 3: Just think of .NET: they have an IEquatable interface which can alter the equality behavior without touching the comparable behavior.

    Read the article

  • How do I debug into an ILMerged assembly?

    - by Rory Becker
    Summary I want to alter the build process of a 2-assembly solution, such that a call to ILMerge is invoked, and the build results in a single assembly. Further I would like to be able to debug into the resultant assembly. Preparation - A simple example New Solution - ClassLibrary1 Create a static function 'GetMessage' in Class1 which returns the string "Hello world" Create new console app which references the ClassLibrary. Output GetMessage from main() via the console. You now have a 2 assembly app which outputs "Hello World" to the console. So what next..? I would like to alter the Console app build process, to include a post build step which uses ILMerge, to merge the ClassLibrary assembly into the Console assembly After this step I should be able to: Run the Console app directly with no ClassLibrary1.dll present Run the Console app via F5 (or F11) in VS and be able to debug into each of the 2 projects. Limited Success I read this blogpost and managed to achieve the merge I was after with a post-build command of... "$(ProjectDir)ILMerge.bat" "$(TargetDir)" $(ProjectName) ...and an ILMerge.bat file which read... CD %1 Copy %2.exe temp.exe ILMerge.exe /out:%2.exe temp.exe ClassLibrary1.dll Del temp.exe Del ClassLibrary1.* This works fairly well, and does in fact produce an exe which runs outside the VS environment as required. However it does not appear to produce symbols (.pdb file) which VS is able to use in order to debug into the code. I think this is the last piece of the puzzle. Does anyone know how I can make this work? FWIW I am running VS2010 on an x64 Win7 x64 machine.

    Read the article

  • Converting ntext to nvarchar(max) - Getting around size limitation

    - by Overflew
    Hi all, I'm trying to change an existing SQL ntext column to nvarchar(max), and encountering an error on the size limit. There's a large amount of existing data, some of which is more than the 8k limit, I believe. We're looking to convert this, so that the field is searchable in LINQ. The two SQL statements I've tried are: update Table set dataNVarChar = convert(nvarchar(max), dataNtext) where dataNtext is not null update Table set dataNVarChar = cast(dataNtext as nvarchar(max)) where dataNtext is not null And the error I get is: Cannot create a row of size 8086 which is greater than the allowable maximum row size of 8060. This is using SQL Server 2008. Any help appreciated, thanks. Update / Solution: The marked answer below is correct, and SQL 2008 can change the column to the correct data type in my situation, and there are no dramas with the LINQ-utilising application we use on top of it: alter table [TBL] alter column [COL] nvarchar(max) I've also been advised to follow it up with: update [TBL] set [COL] = [COL] which completes the conversion by moving the data from the LOB structure to the table (if the length is less than 8k), which improves performance / keeps things proper.

    Read the article

  • SQL Server problems reading columns with a foreign key

    - by illdev
    I have a weird situation where simple queries seem to never finish. For instance: SELECT top 100 ArticleID FROM Article WHERE ProductGroupID=379114 returns immediately; SELECT top 1000 ArticleID FROM Article WHERE ProductGroupID=379114 never returns; SELECT ArticleID FROM Article WHERE ProductGroupID=379114 never returns; SELECT top 1000 ArticleID FROM Article returns immediately. By 'returning' I mean 'in Query Analyzer the green check mark appears and it says "Query executed successfully"'. I sometimes get the rows painted to the grid in QA, but still the query goes on waiting for my client to time out - 'sometimes': SELECT ProductGroupID AS Product23_1_, ArticleID AS ArticleID1_, ArticleID AS ArticleID18_0_, Inventory_Name AS Inventory3_18_0_, Inventory_UnitOfMeasure AS Inventory4_18_0_, BusinessKey AS Business5_18_0_, Name AS Name18_0_, ServesPeople AS ServesPe7_18_0_, InStock AS InStock18_0_, Description AS Descript9_18_0_, Description2 AS Descrip10_18_0_, TechnicalData AS Technic11_18_0_, IsDiscontinued AS IsDisco12_18_0_, Release AS Release18_0_, Classifications AS Classif14_18_0_, DistributorName AS Distrib15_18_0_, DistributorProductCode AS Distrib16_18_0_, Options AS Options18_0_, IsPromoted AS IsPromoted18_0_, IsBulkyFreight AS IsBulky19_18_0_, IsBackOrderOnly AS IsBackO20_18_0_, Price AS Price18_0_, Weight AS Weight18_0_, ProductGroupID AS Product23_18_0_, ConversationID AS Convers24_18_0_, DistributorID AS Distrib25_18_0_, type AS Type18_0_ FROM Article AS articles0_ WHERE (IsDiscontinued = '0') AND (ProductGroupID = 379121) shows this behavior. I have no idea what is going on. Probably SELECT is broken ;) I have a foreign key on ProductGroups: ALTER TABLE [dbo].[Article] WITH CHECK ADD CONSTRAINT [FK_ProductGroup_Articles] FOREIGN KEY([ProductGroupID]) REFERENCES [dbo].[ProductGroup] ([ProductGroupID]) GO ALTER TABLE [dbo].[Article] CHECK CONSTRAINT [FK_ProductGroup_Articles] GO There are some 6000 rows, and IsDiscontinued is a bit, not null; leaving this condition out does not change the outcome. Can anyone tell me how to handle such a situation? More info, anyone? Additional info: this does not seem to be restricted to this foreign key, but applies to all/some keys referencing this entity.
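    The symptoms (a small TOP returns, anything that has to touch more rows hangs) often point to blocking by another open transaction rather than to the foreign key itself. A generic diagnostic sketch for SQL Server, not specific to this schema:

        EXEC sp_who2;        -- check the BlkBy column for whichever SPID is stuck
        DBCC OPENTRAN;       -- shows the oldest active transaction in the current database

        -- If this returns instantly, uncommitted locks are the likely culprit (dirty read, for diagnosis only):
        SELECT ArticleID FROM Article WITH (NOLOCK) WHERE ProductGroupID = 379114;

    If a blocker shows up, the fix is to commit or kill that session, not to change the query.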

    Read the article

  • django + south + python: strange behavior when using a text string received as a parameter in a func

    - by carlosescri
    Hello, this is my first question. I'm trying to execute a SQL query in django (south migration): from django.db import connection # ... class Migration(SchemaMigration): # ... def transform_id_to_pk(self, table): try: db.delete_primary_key(table) except: pass finally: cursor = connection.cursor() # This does not work cursor.execute('SELECT MAX("id") FROM "%s"', [table]) # I don't know if this works. try: minvalue = cursor.fetchone()[0] except: minvalue = 1 seq_name = table + '_id_seq' db.execute('CREATE SEQUENCE "%s" START WITH %s OWNED BY "%s"."id"', [seq_name, minvalue, table]) db.execute('ALTER TABLE "%s" ALTER COLUMN id SET DEFAULT nextval("%s")', [table, seq_name + '::regclass']) db.create_primary_key(table, ['id']) # ... I use this function like this: self.transform_id_to_pk('my_table_name') So it should: Find the biggest existent ID or 0 (it crashes) Create a sequence name Create the sequence Update the ID field to use sequence Update the ID as PK But it crashes and the error says: File "../apps/accounting/migrations/0003_setup_tables.py", line 45, in forwards self.delegation_table_setup(orm) File "../apps/accounting/migrations/0003_setup_tables.py", line 478, in delegation_table_setup self.transform_id_to_pk('accounting_delegation') File "../apps/accounting/migrations/0003_setup_tables.py", line 20, in transform_id_to_pk cursor.execute(u'SELECT MAX("id") FROM "%s"', [table.encode('utf-8')]) File "/Library/Python/2.6/site-packages/django/db/backends/util.py", line 19, in execute return self.cursor.execute(sql, params) psycopg2.ProgrammingError: relation "E'accounting_delegation'" does not exist LINE 1: SELECT MAX("id") FROM "E'accounting_delegation'" ^ I have shortened the file paths for convenience. What does that "E'accounting_delegation'" mean? How could I get rid of it? Thank you! Carlos.

    Read the article

  • A potentially dangerous Request.Form value was detected: Dealing with these errors proactively, or a

    - by Albert
    I'm noticing this error more and more in my error logs. I've read through the questions here talking about this error, but they don't address what I would like to do (see below). I'm considering three options, in the order of preference: 1) When submitting a form (I use formviews almost exclusively, if that helps), if potentially dangerous characters are detected, automatically strip them out and submit. 2) When submitting a form, if potentially dangerous characters are detected, alert the user and let them fix it before trying again. 3) After the exception is generated, deal with it and alert the user. I'm hoping one of the first two options might be able to do somewhat globally...I know for the 3rd I'd have to alter a TON of Try-Catch blocks I already have in place. Doable, but labor intensive. I'd rather be proactive about it if at all possible and avoid the exception all together. Perhaps one approach to #1 would be to write a block of code that could loop through all text entry fields in a formview, during the insert/update event, and strip the characters out. I'm ok with that, but I'd rather not have to heavily alter all my Insert/Update events to accomplish this. Or maybe I just create a different class to do the text checking/deleting, and only insert 1 line of code in each Insert/Update event. If anyone can come up with some example code of any of these approaches that would be a help. Thanks for any ideas or information. I'm definitely open to other solutions too; these are only the 3 that came to mind. I can say that I don't want to turn request validation off though.

    Read the article

  • MSSQL: Views that use SELECT * need to be recreated if the underlying table changes

    - by cbp
    Is there a way to make views that use SELECT * stay in sync with the underlying table? What I have discovered is that if changes are made to the underlying table, from which all columns are to be selected, the view needs to be 'recreated'. This can be achieved simply by running an ALTER VIEW statement. However, this can lead to some pretty dangerous situations. If you forget to recreate the view, it will not be returning the correct data. In fact it can be returning seriously messed up data - with the names of the columns all wrong and out of order. Nothing will pick up that the view is wrong unless you happen to have it covered by a test, or a data integrity check fails. For example, Red Gate SQL Compare doesn't pick up the fact that the view needs to be recreated. To replicate the problem, try these statements: CREATE TABLE Foobar (Bar varchar(20)) CREATE VIEW v_Foobar AS SELECT * FROM Foobar INSERT INTO Foobar (Bar) VALUES ('Hi there') SELECT * FROM v_Foobar ALTER TABLE Foobar ADD Baz varchar(20) SELECT * FROM v_Foobar DROP VIEW v_Foobar DROP TABLE Foobar I am tempted to stop using SELECT * in views, which will be a PITA. Is there a setting somewhere perhaps that could fix this behaviour?
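    There is no setting that keeps a SELECT * view in sync automatically, but two standard tools are worth knowing; treat the snippet below as a sketch against the Foobar example above rather than a drop-in fix:

        -- Re-bind the view's column metadata after the table changes
        EXEC sp_refreshview 'dbo.v_Foobar';

        -- Or create the view WITH SCHEMABINDING, which disallows SELECT * and blocks
        -- table changes that would silently break the view
        CREATE VIEW dbo.v_Foobar WITH SCHEMABINDING AS
        SELECT Bar FROM dbo.Foobar;

    A DDL trigger that calls sp_refreshview for dependent views after ALTER TABLE is another option if forgetting to recreate views is the real worry.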

    Read the article

  • A way to edit content by altering one file?

    - by Chris
    Hi, I have a contact CSS tab on the left side of my website. I have more than 30 pages, and I don't want to manually alter all those pages later when the data has changed. Does anyone know a solution so I only have to alter 1 file to have all pages edited? Perhaps in JavaScript? The code below is for the tab: <div class="slide-out-div"> <a class="handle" href="http://link-for-non-js-users">Content</a> <h3>Onze contact gegevens</h3> <p>Adres: van Ostadestraat 55<br /> Postcode: 8932 JZ<br /> Plaats: Leeuwarden<br /> Tel: 058 844 66 28<br /> Mob: 0629594595 <br /> E-mail: <a href="mailto:[email protected]">[email protected]</a><br /><br /> </p> <p>Mocht u vragen hebben dan kunt u gerust bij ons terecht voor meer informatie.</p>

    Read the article

  • Symfony generating database from model

    - by Sergej Jevsejev
    Hello, I am having trouble generating a simple database from the model. I am using: Doctrine on Symfony 1.4.4, MySQL Workbench 5.2.16 with Doctrine Export 0.4.2dev. So my EER model is: http://img708.imageshack.us/img708/1716/tmg.png Generated YAML file: --- detect_relations: true options: collate: utf8_unicode_ci charset: utf8 type: InnoDB Course: columns: id: type: integer(4) primary: true notnull: true autoincrement: true name: type: string(255) notnull: true keywords: type: string(255) notnull: true summary: type: clob(65535) notnull: true Lecture: columns: id: type: integer(4) primary: true notnull: true autoincrement: true course_id: type: integer(4) primary: true notnull: true name: type: string(255) notnull: true description: type: string(255) notnull: true url: type: string(255) relations: Course: class: Course local: course_id foreign: id foreignAlias: Lectures foreignType: many owningSide: true User: columns: id: type: integer(4) primary: true unique: true notnull: true autoincrement: true firstName: type: string(255) notnull: true lastName: type: string(255) notnull: true email: type: string(255) unique: true notnull: true designation: type: string(1024) personalHeadline: type: string(1024) shortBio: type: clob(65535) UserCourse: tableName: user_has_course columns: user_id: type: integer(4) primary: true notnull: true course_id: type: integer(4) primary: true notnull: true relations: User: class: User local: user_id foreign: id foreignAlias: UserCourses foreignType: many owningSide: true Course: class: Course local: course_id foreign: id foreignAlias: UserCourses foreignType: many owningSide: true And no matter what I try, this error occurs after: symfony doctrine:build --all --no-confirmation SQLSTATE[42000]: Syntax error or access violation: 1072 Key column 'user_userid' doesn't exist in table. Failing Query: "ALTER TABLE user_has_course ADD CONSTRAINT user_has_course_user_userid_user_id FOREIGN KEY (user_userid) REFERENCES user(id)". Failing Query: ALTER TABLE user_has_course ADD CONSTRAINT user_has_course_user_userid_user_id FOREIGN KEY (user_userid) REFERENCES user(id) Currently I am studying Symfony and am stuck with this error. Please help.

    Read the article

  • How to write a JOIN statement to combine data from disparate tables

    - by Amarundo
    I have the following 2 procedures that I use as my source for a report. As of now, I'm presenting 2 different tables in my SQL Server Reporting Services 2008 R2 report, because it doesn't let me put them together as they belong to 2 different data sets. I want to present them in a single table, but I have not been successful trying to use JOIN here. How do I do that? NOTE: cName in IAgentQueueStats corresponds to UserId in AgentActivityLog. /*** Aggregate values for Call Center Agents for calls, talk and hold time ***/ /*** The detail/row values is per 30-minute interval ***/ ALTER PROCEDURE [dbo].[sp_IAgentQueueStats_OnlyCalls_Grouped] @p_StartDate datetime, @p_EndDate datetime, @p_Agents varchar(8000) AS SELECT [cName] ,sum([nAnswered]) SumNAnswered ,sum([nAnsweredAcd]) SumNAnsweredAcd ,sum([tTalkAcd]) SumTTalkAcd ,sum([nHoldAcd]) SumNHoldAcd ,sum([tHoldAcd]) SumTHoldAcd ,sum([tAcw]) SumTAcw FROM [I3_IC].[dbo].[IAgentQueueStats] WHERE dIntervalStart between @p_StartDate and DATEADD(s, 86400-1, @p_EndDate) AND CHARINDEX ( cName ,@p_Agents)> 0 AND cReportGroup <> '*' AND cHKey3 = '*' and cHKey4 ='*' AND nEnteredAcd > 0 AND cReportGroup <> 'CCFax Email' GROUP BY cName And here is the second one: /*** Aggregate values for Call Center Agents for status/activity time ***/ /*** The detail/row values is per start-time/end-time ***/ ALTER PROCEDURE [dbo].[sp_AgentActivity_Grouped] @p_StartDate datetime, @p_EndDate datetime, @p_Agents varchar(8000) AS SELECT [UserId],[StatusCategory],SUM([StateDuration]) [StatusDuration] FROM ( SELECT [UserId] ,[StatusGroup] ,[StatusKey] , CASE [StatusKey] WHEN 'Available' THEN 'Productive' WHEN 'Follow Up' THEN 'Productive' WHEN 'Campaign Call' THEN 'Productive' WHEN 'Awaiting Callback' THEN 'Productive' WHEN 'In a Meeting' THEN 'Not Your Fault' WHEN 'Project Work' THEN 'Not Your Fault' WHEN 'At a Training Session'THEN 'Not Your Fault' WHEN 'System Issues' THEN 'Not Your Fault' WHEN 'Test' THEN 'Not Your Fault' WHEN 'At Lunch' THEN 'Non Productive' WHEN 'Available, Forward' THEN 'Non Productive' WHEN 'Available, Follow-Me' THEN 'Non Productive' WHEN 'At Play' THEN 'Non Productive' WHEN 'AcdAgentNotAnswering' THEN 'Non Productive' WHEN 'Do Not Disturb' THEN 'Non Productive' WHEN 'Available, No ACD' THEN 'Non Productive' WHEN 'Away from desk' THEN 'Non Productive' ELSE [StatusKey] END StatusCategory ,stateduration FROM [I3_IC].[dbo].[AgentActivityLog] WHERE [StatusDateTime] between @p_StartDate and DATEADD(s, 86400-1, @p_EndDate) AND CHARINDEX ( [UserId] ,@p_Agents)> 0 AND [StatusKey] not in ('Gone Home','Out of the Office','On Vacation','Out of Town') ) a GROUP BY [UserId],[StatusCategory] ORDER BY [UserId], [StatusCategory] desc BTW, if I take some time to comment/reply on your posts, it's not lack of interest, but of understanding...
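    One way to hand SSRS a single dataset is a small wrapper procedure that captures both result sets into temp tables and joins them on the agent identifier. A sketch only: the column types below are guesses for illustration, not taken from the real tables, the three parameters are assumed to be the wrapper's own parameters, and because the status procedure returns one row per category per agent, the join fans out the call totals (a PIVOT of the categories is one way to flatten that):

        CREATE TABLE #calls  (cName varchar(100), SumNAnswered int, SumNAnsweredAcd int,
                              SumTTalkAcd int, SumNHoldAcd int, SumTHoldAcd int, SumTAcw int);
        CREATE TABLE #status (UserId varchar(100), StatusCategory varchar(50), StatusDuration int);

        INSERT INTO #calls  EXEC dbo.sp_IAgentQueueStats_OnlyCalls_Grouped @p_StartDate, @p_EndDate, @p_Agents;
        INSERT INTO #status EXEC dbo.sp_AgentActivity_Grouped             @p_StartDate, @p_EndDate, @p_Agents;

        SELECT c.*, s.StatusCategory, s.StatusDuration
        FROM #calls c
        LEFT JOIN #status s ON s.UserId = c.cName
        ORDER BY c.cName, s.StatusCategory DESC;

    INSERT INTO ... EXEC requires the temp table's columns to match the procedure's result set in number and order, which is why the column lists above mirror the two SELECTs.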

    Read the article

  • Entity Framework Decorator Pattern

    - by Anthony Compton
    In my line of business we have Products. These products can be modified by a user by adding Modifications to them. Modifications can do things such as alter the price and alter properties of the Product. This, to me, seems to fit the Decorator pattern perfectly. Now, envision a database in which Products exist in one table and Modifications exist in another table and the database is hooked up to my app through the Entity Framework. How would I go about getting the Product objects and the Modification objects to implement the same interface so that I could use them interchangeably? For instance, the kind of things I would like to be able to do: Given a Modification object, call .GetNumThings(), which would then return the number of things in the original object, plus or minus the number of things added by the modification. This question may be stemming from a pretty serious lack of exposure to the nitty-gritty of EF (all of my experience so far has been pretty straight-forward LOB Silverlight apps), and if that's the case, please feel free to tell me to RTFM. Thanks in advance!

    Read the article

  • Sql Server 2005 Check Constraint not being applied in execution when using variables

    - by DarylS
    Here is some SQL sample code: --Create 2 Sales tables with constraints based on the saledate create table Sales1(SaleDate datetime, Amount money) ALTER TABLE dbo.Sales1 ADD CONSTRAINT CK_Sales1 CHECK (([SaleDate]>='01 May 2010')) GO create table Sales2(SaleDate datetime, Amount money) ALTER TABLE dbo.Sales2 ADD CONSTRAINT CK_Sales2 CHECK (([SaleDate]<'01 May 2010')) GO --Insert some data into Sales1 insert into Sales1 (SaleDate, Amount) values ('02 May 2010', 50) insert into Sales1 (SaleDate, Amount) values ('03 May 2010', 60) GO --Insert some data into Sales2 insert into Sales2 (SaleDate, Amount) values ('30 Mar 2010', 10) insert into Sales2 (SaleDate, Amount) values ('31 Mar 2010', 20) GO --Create a view that combines these 2 tables create VIEW [dbo].[Sales] AS SELECT SaleDate, Amount FROM Sales1 UNION ALL SELECT SaleDate, Amount FROM Sales2 GO --Get the results --Query 1 select * from Sales where SaleDate < '31 Mar 2010' -- if you look at the execution plan this query only looks at Sales2 (Which is good) --Query 2 DECLARE @SaleDate datetime SET @SaleDate = '31 Mar 2010' select * from Sales where SaleDate < @SaleDate -- if you look at the execution plan this query looks at Sales1 and Sales2 (Which is NOT good) Looking at the execution plan you will see that the two queries are different. For Query 1 the only table that is accessed is Sales2 (which is good). For Query 2 both tables are accessed (which is bad). Why are these execution plans different, and how do I get Query 2 to only access the relevant table when variables are used? I have tried to add indexes for the SaleDate column and that does not seem to help.
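    For what it's worth, with a variable the optimizer has to build a plan that is valid for any @SaleDate, so at compile time it cannot rule either member of the view out the way it can with a literal; the plan usually still contains startup-filter predicates that skip the irrelevant table at run time, so both tables appearing in the plan is not always as costly as it looks. Two common workarounds, sketched here as things to test on SQL Server 2005 rather than guaranteed fixes:

        DECLARE @SaleDate datetime
        SET @SaleDate = '31 Mar 2010'

        -- 1) Ask for a statement-level recompile that can take the current variable value into account
        SELECT * FROM Sales WHERE SaleDate < @SaleDate OPTION (RECOMPILE)

        -- 2) Build the literal into the statement so the CHECK constraints can be used at compile time
        DECLARE @sql nvarchar(200)
        SET @sql = N'SELECT * FROM Sales WHERE SaleDate < ''' + CONVERT(nvarchar(20), @SaleDate, 112) + N''''
        EXEC sp_executesql @sql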

    Read the article

  • UpdateModelFromDatabaseException when trying to add a table to Entity Framework model

    - by Agent_9191
    I'm running into a weird issue with Entity Framework in .NET 3.5 SP1 within Visual Studio 2008. I created a database with a few tables in SQL Server and then created the associated .edmx Entity Framework model and had no issues. I then created a new table in the database that has a foreign key to an existing table and needed to be added to the .edmx. So I opened the .edmx in Visual Studio and in the models right-clicked and chose "Update Model From Database...". I saw the new table in the "add" tab, so I checked it and clicked finish. However I get an error message with the following text: --------------------------- Microsoft Visual Studio --------------------------- An exception of type 'Microsoft.Data.Entity.Design.Model.Commands.UpdateModelFromDatabaseException' occurred while attempting to update from the database. The exception message is: 'Cannot update from the database. Cannot resolve the Name Target for ScalarProperty 'ID <==> CustomerID'.'. --------------------------- OK --------------------------- For reference, here's the tables seem to be the most pertinent to the error. CustomerPreferences already exists in the .edmx. Diets is the table that was added afterwards and trying to add to the .edmx. CREATE TABLE [dbo].[CustomerPreferences]( [ID] [uniqueidentifier] NOT NULL, [LastUpdatedTime] [datetime] NOT NULL, [LastUpdatedBy] [uniqueidentifier] NOT NULL, PRIMARY KEY CLUSTERED ( [ID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] CREATE TABLE [dbo].[Diets]( [ID] [uniqueidentifier] NOT NULL, [CustomerID] [uniqueidentifier] NOT NULL, [Description] [nvarchar](50) NOT NULL, [LastUpdatedTime] [datetime] NOT NULL, [LastUpdatedBy] [uniqueidentifier] NOT NULL, PRIMARY KEY CLUSTERED ( [ID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO ALTER TABLE [dbo].[Diets] WITH CHECK ADD CONSTRAINT [FK_Diets_CustomerPreferences] FOREIGN KEY([CustomerID]) REFERENCES [dbo].[CustomerPreferences] ([ID]) GO ALTER TABLE [dbo].[Diets] CHECK CONSTRAINT [FK_Diets_CustomerPreferences] GO This seems like a fairly common use case, so I'm not sure where I'm going wrong.

    Read the article

  • Setting iPhone to vibrate and setting iPhone back to sound via app

    - by Cadu
    Folks, I need your knowledge here. Think about the following situation - my app needs to set my iPhone to vibrate mode at a certain time and set it back to sound mode (for call receiving, SMS, email, all common sound notifications) some minutes later. I've already googled this and didn't find a good, Apple-acceptable way of doing it: http://stackoverflow.com/questions/736047/possible-to-programmatically-open-settings-app-from-iphone http://stackoverflow.com/questions/702319/is-it-possible-to-dynamically-alter-an-iphone-apps-settings-page-in-the-settings http://stackoverflow.com/questions/1141391/display-iphone-application-settings-within-your-application http://stackoverflow.com/questions/335965/how-do-i-launch-my-settings-bundle-from-my-application [This one here is interesting, as far as I can find a way to know which key corresponds to the settings I'm interested in] http://stackoverflow.com/questions/335965/how-do-i-launch-my-settings-bundle-from-my-application [It mentions I can do that, but does not give an idea of how =(] http://stackoverflow.com/questions/702319/is-it-possible-to-dynamically-alter-an-iphone-apps-settings-page-in-the-settings [If this is true, I wouldn't be able to do what I want...] Does anyone have an idea of how I can do that via an app? Many thanks in advance.

    Read the article

  • List all foreign key constraints that refer to a particular column in a specific table

    - by Sid
    I would like to see a list of all the tables and columns that refer (either directly or indirectly) to a specific column in the 'main' table via a foreign key constraint that has the ON DELETE=CASCADE setting missing. The tricky part is that there would be indirect relationships buried up to 5 levels deep (example: ... great-grandchild - FK3 = grandchild = FK2 = child = FK1 = main table). We need to dig up the leaf tables-columns, not just the very 1st level. The 'good' part about this is that execution speed isn't a concern; it'll be run on a backup copy of the production db to fix any relational issues for the future. I did SELECT * FROM sys.foreign_keys but that gives me the name of the constraint - not the names of the child-parent tables and the columns in the relationship (the juicy bits). Plus the previous designer used short, non-descriptive/random names for the FK constraints, unlike our practice below. The way we're adding constraints into SQL Server: ALTER TABLE [dbo].[UserEmailPrefs] WITH CHECK ADD CONSTRAINT [FK_UserEmailPrefs_UserMasterTable_UserId] FOREIGN KEY([UserId]) REFERENCES [dbo].[UserMasterTable] ([UserId]) ON DELETE CASCADE GO ALTER TABLE [dbo].[UserEmailPrefs] CHECK CONSTRAINT [FK_UserEmailPrefs_UserMasterTable_UserId] GO The comments in this SO question inspired this question.
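    A sketch of the direct (one-level) version using the SQL Server catalog views - the views and columns named here are standard, but the query is untested against this particular database:

        SELECT fk.name                                                      AS constraint_name,
               OBJECT_NAME(fkc.parent_object_id)                            AS child_table,
               COL_NAME(fkc.parent_object_id, fkc.parent_column_id)         AS child_column,
               OBJECT_NAME(fkc.referenced_object_id)                        AS parent_table,
               COL_NAME(fkc.referenced_object_id, fkc.referenced_column_id) AS parent_column,
               fk.delete_referential_action_desc
        FROM sys.foreign_keys fk
        JOIN sys.foreign_key_columns fkc ON fkc.constraint_object_id = fk.object_id
        WHERE fk.delete_referential_action_desc <> 'CASCADE';

    Filter on parent_table/parent_column for the 'main' table's column, then feed each child table back in (or wrap the same join in a recursive CTE) to walk the indirect relationships down the five levels.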

    Read the article

  • How do I get the earliest DateTime of a set, where there are a few conditions

    - by radbyx
    Create script for Product SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO SET ANSI_PADDING ON GO CREATE TABLE [dbo].[Product]( [ProductID] [int] IDENTITY(1,1) NOT NULL, [ProductName] [varchar](50) NOT NULL, CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED ( [ProductID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO SET ANSI_PADDING OFF GO Create script for StateLog SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [dbo].[StateLog]( [StateLogID] [int] IDENTITY(1,1) NOT NULL, [ProductID] [int] NOT NULL, [Status] [bit] NOT NULL, [TimeStamp] [datetime] NOT NULL, CONSTRAINT [PK_Uptime] PRIMARY KEY CLUSTERED ( [StateLogID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO ALTER TABLE [dbo].[StateLog] WITH CHECK ADD CONSTRAINT [FK_Uptime_Products] FOREIGN KEY([ProductID]) REFERENCES [dbo].[Product] ([ProductID]) GO ALTER TABLE [dbo].[StateLog] CHECK CONSTRAINT [FK_Uptime_Products] GO I have this and it's not enough: select top 5 [ProductName], [TimeStamp] from [Product] inner join StateLog on [Product].ProductID = [StateLog].ProductID where [Status] = 0 order by TimeStamp desc; (My query gives the 5 latest TimeStamps where Status is 0 (false).) But I need one thing more: where there is a set of latest TimeStamps for a product where Status is 0, I only want the earliest of them (not the latest). Example: let's say for Product X I have: TimeStamp1 (status = 0), TimeStamp2 (status = 1), TimeStamp3 (status = 0), TimeStamp4 (status = 0), TimeStamp5 (status = 1), TimeStamp6 (status = 0), TimeStamp7 (status = 0), TimeStamp8 (status = 0). The correct answer would then be: TimeStamp6, because it's the first of the latest timestamps.
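    One way to express 'the first TimeStamp of the most recent run of zeroes' is to take, per product, the earliest Status = 0 row that comes after that product's last Status = 1 row (products that never had a 1 fall back to their earliest 0). A sketch against the tables above - untested, and the TOP 5 / ordering mirrors the original query rather than being required by the logic:

        SELECT TOP 5 p.ProductName, MIN(s.[TimeStamp]) AS DownSince
        FROM [Product] p
        INNER JOIN StateLog s ON s.ProductID = p.ProductID
        WHERE s.[Status] = 0
          AND s.[TimeStamp] > ISNULL((SELECT MAX(s2.[TimeStamp])
                                      FROM StateLog s2
                                      WHERE s2.ProductID = p.ProductID
                                        AND s2.[Status] = 1), '19000101')
        GROUP BY p.ProductName
        ORDER BY DownSince DESC;

    For Product X in the example this returns TimeStamp6, because TimeStamp5 is the last Status = 1 row. Products whose most recent row has Status = 1 drop out entirely, since no 0 row follows their last 1.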

    Read the article

  • MYSQL variables - SET @var

    - by Lizard
    I am attempting to create a MySQL snippet that will analyse a table and remove duplicate entries (duplicates are based on two fields, not the entire record). I have the following code that works when I hard-code the variables in the queries, but when I take them out and use variables I get MySQL errors. Below is the script: SET @tblname = 'mytable'; SET @fieldname = 'myfield'; SET @concat1 = 'checkfield1'; SET @concat2 = 'checkfield2'; ALTER TABLE @tblname ADD `tmpcheck` VARCHAR( 255 ) NOT NULL; UPDATE @tblname SET `tmpcheck` = CONCAT(@concat1,'-',@concat2); CREATE TEMPORARY TABLE `tmp_table` ( `tmpfield` VARCHAR( 100 ) NOT NULL ) ENGINE = MYISAM ; INSERT INTO `tmp_table` (`tmpfield`) SELECT @fieldname FROM @tblname GROUP BY `tmpcheck` HAVING ( COUNT(`tmpcheck`) > 1 ); DELETE FROM @tblname WHERE @fieldname IN (SELECT `tmpfield` FROM `tmp_table`); ALTER TABLE @tblname DROP `tmpcheck`; I am getting the following error: #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '@tblname ADD `tmpcheck` VARCHAR( 255 ) NOT NULL' at line 1 Is this because I can't use a variable for a table name? What else could be wrong, or how would I get around this issue? Thanks in advance
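    MySQL does not allow a user variable to stand in for an identifier (a table or column name) in an ordinary statement, which is what the #1064 error is complaining about; the usual workaround is to build each statement as a string and run it through a prepared statement. A sketch of the pattern for the first statement only (the same CONCAT/PREPARE treatment would be needed everywhere @tblname or @fieldname is embedded):

        SET @tblname = 'mytable';
        SET @sql = CONCAT('ALTER TABLE `', @tblname, '` ADD `tmpcheck` VARCHAR(255) NOT NULL');
        PREPARE stmt FROM @sql;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;

    Inside a stored procedure this is usually less painful than in a plain script, since the boilerplate can be wrapped once.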

    Read the article

  • Python "callable" attribute (pseudo-property)

    - by mgilson
    In python, I can alter the state of an instance by directly assigning to attributes, or by making method calls which alter the state of the attributes: foo.thing = 'baz' or: foo.thing('baz') Is there a nice way to create a class which would accept both of the above forms which scales to large numbers of attributes that behave this way? (Shortly, I'll show an example of an implementation that I don't particularly like.) If you're thinking that this is a stupid API, let me know, but perhaps a more concrete example is in order. Say I have a Document class. Document could have an attribute title. However, title may want to have some state as well (font,fontsize,justification,...), but the average user might be happy enough just setting the title to a string and being done with it ... One way to accomplish this would be to: class Title(object): def __init__(self,text,font='times',size=12): self.text = text self.font = font self.size = size def __call__(self,*text,**kwargs): if(text): self.text = text[0] for k,v in kwargs.items(): setattr(self,k,v) def __str__(self): return '<title font={font}, size={size}>{text}</title>'.format(text=self.text,size=self.size,font=self.font) class Document(object): _special_attr = set(['title']) def __setattr__(self,k,v): if k in self._special_attr and hasattr(self,k): getattr(self,k)(v) else: object.__setattr__(self,k,v) def __init__(self,text="",title=""): self.title = Title(title) self.text = text def __str__(self): return str(self.title)+'<body>'+self.text+'</body>' Now I can use this as follows: doc = Document() doc.title = "Hello World" print (str(doc)) doc.title("Goodbye World",font="Helvetica") print (str(doc)) This implementation seems a little messy though (with __special_attr). Maybe that's because this is a messed up API. I'm not sure. Is there a better way to do this? Or did I leave the beaten path a little too far on this one? I realize I could use @property for this as well, but that wouldn't scale well at all if I had more than just one attribute which is to behave this way -- I'd need to write a getter and setter for each, yuck.

    Read the article
