Search Results

Search found 10060 results on 403 pages for 'column'.

Page 195/403

  • Multi-dimensional array edge/border conditions

    - by kirbuchi
    Hi, I'm iterating over a 3-dimensional array (an image with 3 values per pixel) to apply a 3x3 filter to each pixel, as follows:

        // For each value in the image
        for (i = 0; i < 3 * width * height; i++) {
            // For each filter value
            for (j = 0; j < 9; j++) {
                if (notOutsideEdgesCondition) {
                    *(**(outArray) + i) += *(**(pixelArray) + i - 1 + (j % 3)) * (*(filter + j));
                }
            }
        }

    I'm using pointer arithmetic because with array notation I'd have four loops, and I'm trying to keep the loop count as low as possible. My problem is that notOutsideEdgesCondition is getting out of hand, because I have to consider 8 border cases. I have the following conditions handled:

        Left column:  ((i % width) == 0) && (j % 3 == 0)
        Right column: ((i - 1) % width == 0) && (i > 1) && (j % 3 == 2)
        Upper row:    (i < width) && (j < 2)
        Lower row:    (i > (width * height - width)) && (j > 5)

    and I still have to consider the 4 corner cases, which will have even longer expressions. At this point I've stopped and asked myself whether this is the best way to go, because a 5-line conditional will not only be truly painful to debug, it will also slow down the inner loop. So I come to you to ask: is there a known algorithm to handle these cases, or a better approach to my problem? Thanks a lot.
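
    A common way to make the border conditionals disappear entirely is to clamp each neighbour index to the valid range, which is equivalent to filtering a replicate-padded image; the other standard option is to run an unchecked fast loop over the interior and handle the four edges in small separate loops. A minimal sketch of the clamping idea in Python, for a single-channel row-major image (names are illustrative, not taken from the question):

        def clamp(v, lo, hi):
            # Out-of-range neighbours reuse the nearest edge pixel
            # (replicate padding), so no border cases remain.
            return max(lo, min(v, hi))

        def apply_3x3(image, width, height, kernel):
            out = [0.0] * (width * height)
            for y in range(height):
                for x in range(width):
                    acc = 0.0
                    for j in range(9):
                        ny = clamp(y + j // 3 - 1, 0, height - 1)
                        nx = clamp(x + j % 3 - 1, 0, width - 1)
                        acc += image[ny * width + nx] * kernel[j]
                    out[y * width + x] = acc
            return out

    The same clamp works in the C version as a pair of ternary expressions on the neighbour's row and column, replacing notOutsideEdgesCondition altogether.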


  • How to represent and insert into an ordered list in SQL?

    - by Travis
    I want to represent the list "hi", "hello", "goodbye", "good day", "howdy" (in that order) in a SQL table:

        pk | i | val
        ---+---+---------
         1 | 0 | hi
         0 | 2 | hello
         2 | 3 | goodbye
         3 | 4 | good day
         5 | 6 | howdy

    'pk' is the primary key column; disregard its values. 'i' is the "index" that defines the order of the values in the 'val' column. It is only used to establish the order, and its values are otherwise unimportant. The problem I'm having is with inserting values into the list while maintaining the order. For example, if I want to insert "hey" so that it appears between "hello" and "goodbye", I have to shift the 'i' values of "goodbye" and "good day" (but preferably not "howdy") to make room for the new entry. So, is there a standard SQL pattern for the shift operation that shifts only the elements that are necessary? (Note that a simple "UPDATE table SET i=i+1 WHERE i>=3" doesn't work, because it violates the uniqueness constraint on 'i', and it also updates the "howdy" row unnecessarily.) Or is there a better way to represent the ordered list? I suppose I could make 'i' a floating-point value and choose values in between, but then I need a separate rebalancing operation for when no such value exists. Or is there some standard algorithm for generating string values between arbitrary other strings, if I were to make 'i' a varchar? Or should I just represent it as a linked list? I was avoiding that because I'd also like to be able to do a SELECT .. ORDER BY to get all the elements in order.
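
    One standard pattern is to leave gaps in 'i' (say, multiples of 1024) so that almost every insert just takes the midpoint between its neighbours and touches no other row, renumbering only when a gap closes. A sketch with Python's sqlite3 (illustrative schema, not the poster's):

        import sqlite3

        GAP = 1024
        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE items (pk INTEGER PRIMARY KEY, i INTEGER UNIQUE, val TEXT)")
        for n, val in enumerate(["hi", "hello", "goodbye", "good day", "howdy"]):
            con.execute("INSERT INTO items (i, val) VALUES (?, ?)", (n * GAP, val))

        def insert_after(val, prev_val):
            # Midpoint between the two neighbours' indexes; no other row moves.
            lo, = con.execute("SELECT i FROM items WHERE val = ?", (prev_val,)).fetchone()
            nxt = con.execute("SELECT MIN(i) FROM items WHERE i > ?", (lo,)).fetchone()[0]
            mid = (lo + nxt) // 2 if nxt is not None else lo + GAP
            if mid == lo:
                raise RuntimeError("gap exhausted: renumber all rows to n * GAP")
            con.execute("INSERT INTO items (i, val) VALUES (?, ?)", (mid, val))

        insert_after("hey", "hello")
        print([v for (v,) in con.execute("SELECT val FROM items ORDER BY i")])

    If you prefer dense indexes, MySQL can do the shift without tripping the UNIQUE constraint by ordering the update: UPDATE items SET i = i + 1 WHERE i >= 3 ORDER BY i DESC.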


  • Optimizing MySQL for ALTER TABLE of InnoDB

    - by schuilr
    Sometime soon we will need to make schema changes to our production database. We need to minimize downtime for this effort; however, the ALTER TABLE statements are going to run for quite a while. Our largest tables have 150 million records, and the largest table file is 50G. All tables are InnoDB, set up as one big data file (instead of file-per-table). We're running MySQL 5.0.46 on an 8-core machine with 16G of memory and a RAID10 config. I have some experience with MySQL tuning, but it usually focuses on reads or writes from multiple clients. There is lots of information on the Internet about tuning in general; however, there seems to be very little on best practices for (temporarily) tuning a MySQL server to speed up ALTER TABLE on InnoDB tables, or for INSERT INTO .. SELECT FROM (we will probably use the latter instead of ALTER TABLE, since it gives us more opportunities to speed things up). The schema change we are planning is to add an integer column to all tables and make it the primary key, in place of the current primary key. We need to keep the 'old' column as well, so overwriting the existing values is not an option. What would be the ideal settings to get this task done as quickly as possible?
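
    For the INSERT INTO .. SELECT route, the documented InnoDB bulk-load levers are session-scoped: turn off unique and foreign-key checking while copying, and do the copy in one transaction instead of autocommitting per row. A sketch of the sequence, assuming the MySQLdb driver and hypothetical table and column names:

        import MySQLdb

        con = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="prod")
        cur = con.cursor()
        # Documented bulk-load settings; only safe when the source data
        # is already known to satisfy the constraints.
        cur.execute("SET SESSION unique_checks = 0")
        cur.execute("SET SESSION foreign_key_checks = 0")
        cur.execute("""
            INSERT INTO customers_new (new_id, old_id, name)
            SELECT NULL, id, name FROM customers
        """)
        con.commit()
        cur.execute("SET SESSION unique_checks = 1")
        cur.execute("SET SESSION foreign_key_checks = 1")

    A larger innodb_buffer_pool_size also helps, but on 5.0 changing it requires a server restart, so it has to be planned into the maintenance window.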


  • How can I create a SQL table using Excel columns?

    - by Phsika
    I need help generating column names from an Excel file automatically. I know I can run DDL like this via C#:

        CREATE TABLE [dbo].[Addresses_Temp] (
            [FirstName] VARCHAR(20),
            [LastName]  VARCHAR(20),
            [Address]   VARCHAR(50),
            [City]      VARCHAR(30),
            [State]     VARCHAR(2),
            [ZIP]       VARCHAR(10)
        )

    But how can I learn the column names from Excel?

        private void Form1_Load(object sender, EventArgs e)
        {
            ExcelToSql();
        }

        void ExcelToSql()
        {
            string connectionString = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Source\MPD.xlsm;Extended Properties=""Excel 12.0;HDR=YES;""";
            // if you don't want to treat the first row as a header,
            // use 'HDR=NO' in the string
            string strSQL = "SELECT * FROM [Sheet1$]";
            OleDbConnection excelConnection = new OleDbConnection(connectionString);
            excelConnection.Open(); // open the Excel file

            OleDbCommand dbCommand = new OleDbCommand(strSQL, excelConnection);
            OleDbDataAdapter dataAdapter = new OleDbDataAdapter(dbCommand);

            // create data table
            DataTable dTable = new DataTable();
            dataAdapter.Fill(dTable);

            // bind the datasource
            // dataBingingSrc.DataSource = dTable;
            // assign the dataBindingSrc to the DataGridView
            // dgvExcelList.DataSource = dataBingingSrc;

            // dispose used objects
            if (dTable.Rows.Count > 0)
                MessageBox.Show("Count: " + dTable.Rows.Count.ToString());
            dTable.Dispose();
            dataAdapter.Dispose();
            dbCommand.Dispose();
            excelConnection.Close();
            excelConnection.Dispose();
        }
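
    With HDR=YES the sheet's header row becomes the column names of dTable, so after dataAdapter.Fill(dTable) they are available in dTable.Columns and can be concatenated into the CREATE TABLE string. The same derive-DDL-from-headers idea sketched in Python with openpyxl (the VARCHAR(255) type mapping and file path are illustrative assumptions):

        import openpyxl

        def create_table_sql(xlsx_path, table_name):
            # Read the header row of the first worksheet and emit one
            # bracketed VARCHAR column per heading.
            ws = openpyxl.load_workbook(xlsx_path, read_only=True).active
            headers = [c.value for c in next(ws.iter_rows(min_row=1, max_row=1))]
            cols = ",\n    ".join("[%s] VARCHAR(255)" % h for h in headers)
            return "CREATE TABLE [dbo].[%s] (\n    %s\n)" % (table_name, cols)

        print(create_table_sql(r"C:\Source\MPD.xlsm", "Addresses_Temp"))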


  • Core Data, Bindings, value transformers: crash when saving

    - by Gael
    Hi, I am trying to store a PNG image in a Core Data store backed by an SQLite database. Since I intend to use this database on an iPhone, I can't store NSImage objects directly. I wanted to use bindings and an NSValueTransformer subclass to handle the transcoding from the NSImage (obtained from an image well on my GUI) to an NSData containing the PNG binary representation of the image. I wrote the following code for the value transformer:

        + (Class)transformedValueClass
        {
            return [NSImage class];
        }

        + (BOOL)allowsReverseTransformation
        {
            return YES;
        }

        - (id)transformedValue:(id)value
        {
            if (value == nil) return nil;
            return [[[NSImage alloc] initWithData:value] autorelease];
        }

        - (id)reverseTransformedValue:(id)value
        {
            if (value == nil) return nil;
            if (![value isKindOfClass:[NSImage class]]) {
                NSLog(@"Type mismatch. Expecting NSImage");
            }
            NSBitmapImageRep *bits = [[value representations] objectAtIndex:0];
            NSData *data = [bits representationUsingType:NSPNGFileType properties:nil];
            return data;
        }

    The model has a transformable property configured with this NSValueTransformer. In Interface Builder, a table column and an image well are both bound to this property, and both have the proper value transformer name (an image dropped in the image well shows up in the table column). The problem arises when I try to save the managed objects. The console output shows the error message:

        [NSImage length]: unrecognized selector sent to instance 0x1004933a0

    It seems as if Core Data is using the value transformer to obtain the NSImage back from the NSData, and then tries to save the NSImage instead of the NSData. There are probably workarounds, such as the one presented in this post, but I would really like to understand why my approach is flawed. Thanks in advance for your ideas and explanations.


  • Help on understanding multiple columns on an index?

    - by Xaisoft
    Assume I have a table called "table" with 3 columns: a, b, and c. What does it mean to have a non-clustered index on columns (a, b)? Is a nonclustered index on (a, b) the same as a nonclustered index on (b, a)? (Note the order.) Also, is a nonclustered index on column a alone the same as a nonclustered index on (a, c)? I was looking at the SQL Server Performance website, which has DMV scripts that tell you whether you have overlapping indexes, and I believe it was saying that an index on a is covered by an index on (a, b), so the single-column index is redundant. Is this true of indexes? One last question: why is the clustered index put on the primary key? Most of the time the primary key is not queried against, so shouldn't the clustered index be on the most-queried column instead? I am probably missing something here, like having it on the primary key speeding up joins. Great explanations. Should I turn this into a wiki and change the title to "index explanation"?
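
    A useful mental model: a composite index stores the rows sorted by (a, b), so it supports seeks on a alone or on a-then-b, but not on b alone. That is why (a, b) and (b, a) are different indexes, why (a, b) makes a separate index on just a redundant, and why an index on a is not interchangeable with one on (a, c). A purely conceptual illustration with Python's bisect:

        from bisect import bisect_left

        # Entries the way a nonclustered index on (a, b) stores them: sorted tuples.
        index_ab = sorted([(3, 'x'), (1, 'y'), (1, 'a'), (2, 'm')])

        # Seek on the leading column: first entry with a == 1, via binary search.
        pos = bisect_left(index_ab, (1,))
        print(index_ab[pos])                         # (1, 'a')

        # A predicate on b alone cannot seek; it has to scan every entry.
        print([t for t in index_ab if t[1] == 'm'])

    As for the last question: the clustered index defines the physical order of the table, and every nonclustered index uses the clustering key as its row locator, so a small, stable, ever-increasing primary key keeps the nonclustered indexes compact and avoids page splits; that, more than query frequency, is why it usually lands on the primary key.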


  • Silverlight DataGrid fails to display data

    - by Jekke
    I have a DataGrid defined in my project's XAML:

        <data:DataGrid IsReadOnly="True" Grid.Row="1" Grid.Column="1" x:Name="gridOfferings"
                       Margin="10,10,10,10" AutoGenerateColumns="False">
            <data:DataGrid.Columns>
                <data:DataGridTextColumn Binding="{Binding Trader}" DisplayIndex="0"
                                         Header="Trader" Width="Auto" FontSize="11"/>
                <data:DataGridTextColumn Binding="{Binding Product}" DisplayIndex="1"
                                         Header="Product" Width="Auto" FontSize="11"/>
            </data:DataGrid.Columns>
        </data:DataGrid>

    I bind it to a List<OfferingRowData> of custom objects:

        public MainPage()
        {
            InitializeComponent();
            _Rows = new List<OfferingRowData>();
            _Rows.Add(new OfferingRowData()
            {
                Trader = "Kameilya Loenstein",
                Product = "American Consolidated AAA",
                Price = 24.95,
                OfferingMade = DateTime.Now
            });
            _Rows.Add(new OfferingRowData()
            {
                Trader = "Bill Foobar",
                Product = "IBM Mid-Atlantic Exotic",
                Price = 204.90,
                OfferingMade = DateTime.Now.AddMinutes(-3)
            });
            gridOfferings.ItemsSource = _Rows;
        }

    When it shows up on the page, the column headers appear, but none of the data does. What am I doing wrong?


  • Delete throws "deleted object would be re-saved by cascade"

    - by Greg
    I have the following model:

        <class name="Person" table="Person" optimistic-lock="version">
            <id name="Id" type="Int32" unsaved-value="0">
                <generator class="native" />
            </id>
            <!-- plus some properties here -->
        </class>

        <class name="Event" table="Event" optimistic-lock="version">
            <id name="Id" type="Int32" unsaved-value="0">
                <generator class="native" />
            </id>
            <!-- plus some properties here -->
        </class>

        <class name="PersonEventRegistration" table="PersonEventRegistration" optimistic-lock="version">
            <id name="Id" type="Int32" unsaved-value="0">
                <generator class="native" />
            </id>
            <property name="IsComplete" type="Boolean" not-null="true" />
            <property name="RegistrationDate" type="DateTime" not-null="true" />
            <many-to-one name="Person" class="Person" column="PersonId"
                         foreign-key="FK_PersonEvent_PersonId" cascade="all-delete-orphan" />
            <many-to-one name="Event" class="Event" column="EventId"
                         foreign-key="FK_PersonEvent_EventId" cascade="all-delete-orphan" />
        </class>

    There are no properties pointing to PersonEventRegistration in either Person or Event. When I try to delete an entry from PersonEventRegistration, I get the following error: "deleted object would be re-saved by cascade". The problem is, I don't store this object in any other collection; the delete code looks like this:

        public bool UnregisterFromEvent(Person person, Event entry)
        {
            var registrationEntry = this.session
                .CreateCriteria<PersonEventRegistration>()
                .Add(Restrictions.Eq("Person", person))
                .Add(Restrictions.Eq("Event", entry))
                .Add(Restrictions.Eq("IsComplete", false))
                .UniqueResult<PersonEventRegistration>();

            bool result = false;
            if (null != registrationEntry)
            {
                using (ITransaction tx = this.session.BeginTransaction())
                {
                    this.session.Delete(registrationEntry);
                    tx.Commit();
                    result = true;
                }
            }
            return result;
        }

    What am I doing wrong here?


  • Is the order of params important in NHibernate?

    - by Blake Blackwell
    If I have an int parameter followed by a string parameter in a sproc, I get the following error:

        Input string was not in the correct format

    However, if I switch those parameters in the sproc, I get the result set I expect. Are params matched by data type, or do I have to do anything special in my config file? I've included my code for reference.

    Config file:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                           assembly="NHibernateDemo" namespace="NHibernateDemo.Domain">
            <class name="Blake_Test" table="Blake_Test">
                <id name="TestId" column="TESTID"></id>
                <property name="TestName" column="TESTNAME" />
                <loader query-ref="GetBlakeTest"/>
            </class>
            <sql-query name="GetBlakeTest" callable="true">
                <return class="Blake_Test" />
                call procedure AREA51.NHIBERNATE_TEST.GetBlakeTest(:int_TestId, :vch_TestName)
            </sql-query>
        </hibernate-mapping>

    Sproc code:

        PROCEDURE GetBlakeTest
        (
            ret_cursor OUT SYS_REFCURSOR,
            int_testid integer,
            vch_testname varchar2
        )
        AS
        BEGIN
            OPEN ret_cursor FOR
                SELECT TestId, TestName
                FROM blake_test
                WHERE testid = int_testid
                ORDER BY TestName DESC;
        END GetBlakeTest;
        END NHIBERNATE_TEST;

    Executing code:

        IQuery query1 = session.GetNamedQuery("GetBlakeTest");
        query1.SetParameter("int_TestId", 1);
        query1.SetParameter("vch_TestName", "TEST");
        IList<Blake_Test> mystuff = query1.List<Blake_Test>();


  • Referencing object's identity before submitting changes in LINQ

    - by Axarydax
    Hi, is there a way of knowing the ID of an identity column for a record inserted via InsertOnSubmit beforehand, i.e. before calling the data context's SubmitChanges? Imagine I'm populating some kind of hierarchy in the database, but I don't want to submit changes on each recursive call for each child node (e.g. if I had a Directories table and a Files table and were recreating my filesystem structure in the database). I'd like to do it this way: I create a Directory object, set its name and attributes, InsertOnSubmit it into the DataContext.Directories collection, and then reference Directory.ID in its child Files. Currently I need to call SubmitChanges to insert the 'directory' into the database, so that the database mapping fills its ID column. But this creates a lot of transactions and accesses to the database, and I imagine that if I did the inserting in a batch, the performance would be better. What I'd like is to somehow use Directory.ID before committing changes, create all my File and Directory objects in advance, and then do one big submit that puts all the stuff into the database. I'm also open to solving this problem via a stored procedure; I assume the performance would be even better if all operations were done directly in the database.


  • No JSON object could be decoded - RPC POST call

    - by user1307067
    Client side:

        var body = JSON.stringify(params);
        // Create an XMLHttpRequest 'POST' request w/ an optional callback handler
        req.open('POST', '/rpc', async);
        req.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
        req.setRequestHeader("Content-length", body.length);
        req.setRequestHeader("Connection", "close");

        if (async) {
            req.onreadystatechange = function() {
                if (req.readyState == 4 && req.status == 200) {
                    var response = null;
                    try {
                        response = JSON.parse(req.responseText);
                    } catch (e) {
                        response = req.responseText;
                    }
                    callback(response);
                }
            };
        }

        // Make the actual request
        req.send(body);

    On the server side:

        class RPCHandler(BaseHandler):
            '''@user_required'''
            def post(self):
                RPCmethods = ("UpdateScenario", "DeleteScenario")
                logging.info(u'body ' + self.request.body)
                args = simplejson.loads(self.request.body)

    I get the following error in the server logs:

        body %5B%22UpdateScenario%22%2C%22c%22%2C%224.5%22%2C%2230frm%22%2C%22Refinance%22%2C%22100000%22%2C%22740%22%2C%2294538%22%2C%2250000%22%2C%22owner%22%2C%22sfr%22%2C%22Fremont%22%2C%22CA%22%5D=
        No JSON object could be decoded: line 1 column 0 (char 0):
        Traceback (most recent call last):
          File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/_webapp25.py", line 703, in __call__
            handler.post(*groups)
          File "/base/data/home/apps/s~mortgageratealert-staging/1.357912751535215625/main.py", line 418, in post
            args = json.loads(self.request.body)
          File "/base/python_runtime/python_lib/versions/1/simplejson/__init__.py", line 388, in loads
            return _default_decoder.decode(s)
          File "/base/python_runtime/python_lib/versions/1/simplejson/decoder.py", line 402, in decode
            obj, end = self.raw_decode(s, idx=_w(s, 0).end())
          File "/base/python_runtime/python_lib/versions/1/simplejson/decoder.py", line 420, in raw_decode
            raise JSONDecodeError("No JSON object could be decoded", s, idx)
        JSONDecodeError: No JSON object could be decoded: line 1 column 0 (char 0)

    Firebug shows the following:

        Parameters    application/x-www-form-urlencoded
                      ["UpdateScenario","c","4....
        Source        ["UpdateScenario","c","4.5","30frm","Refinance","100000","740","94538","50000","owner","sfr","Fremont","CA"]

    Based on the Firebug report and the logs, self.request.body looks like what I anticipated; however, simplejson.loads doesn't like it. Please help!
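
    The logged body hints at what went wrong: it arrived percent-encoded with a trailing '=', the shape of form data where the entire JSON string was treated as a field name, so simplejson never sees valid JSON. A small demonstration of the symptom and the unwrap, with illustrative data:

        import json
        from urllib.parse import quote_plus, unquote_plus

        payload = json.dumps(["UpdateScenario", "c", "4.5", "30frm"],
                             separators=(",", ":"))
        wire = quote_plus(payload) + "="      # shape of the logged body
        print(wire)                           # %5B%22UpdateScenario%22%2C...%5D=

        # json.loads(wire) raises "No JSON object could be decoded";
        # stripping the form-encoding layer first recovers the payload.
        print(json.loads(unquote_plus(wire.rstrip("="))))

    The cleaner fix is on the client: send Content-type: application/json so the body is treated as an opaque payload (and drop the manual Content-length header, which XMLHttpRequest sets itself).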


  • How to reuse results with a schema for end-of-day stock data

    - by Vishalrix
    I am creating a database schema to be used for technical analysis: top volume gainers, top price gainers, etc. I have checked answers to questions here, like the design question. Having taken the hint from boe100's answer there, I have a schema modeled pretty much on it:

        Symbol - char(6)        // primary
        Date   - date           // primary
        Open   - decimal(18, 4)
        High   - decimal(18, 4)
        Low    - decimal(18, 4)
        Close  - decimal(18, 4)
        Volume - int

    Right now this table of end-of-day (EOD) data has about 3 million rows for 3 years. Later, when I get/need more data, it could be 20 million rows. The front end will send requests like "give me the top price gainers on date X over Y days". That request is one of the simpler ones, and as such is not too costly, time-wise, I assume. But a request like "give me the top volume gainers for the last 10 days, with the previous 100 days acting as baseline" could prove 10-100 times costlier. The result of such a request would be a float that signifies how many times the volume has grown, etc. One option I have is adding a column for each such result; and if the user asks for volume gain over 10 days versus 20 days, that would require yet another one. The total number of such columns could easily cross 100, especially if I start storing other results, like MACD-10 and MACD-100, each of which would require its own column. Is this a feasible solution? Another option is to keep the results in cached HTML files and present those to the user. I don't have much experience in web development, so to me it looks messy; but I could be wrong (of course!). Is that an option too? Let me add that I am/will be using mod_perl to present the response to the user, with much of the work on the MySQL database done using Perl. I would like to have a response time of 1-2 seconds.
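
    Since every one of these derived numbers is a windowed aggregate over (Symbol, Date), a third option is to compute them on demand from the base table and cache only the popular ones, rather than materializing a column per metric. A sketch of the 10-day-vs-100-day volume gain with pandas, assuming the EOD rows are already loaded and sorted by date within each symbol:

        import pandas as pd

        def volume_gain(eod: pd.DataFrame) -> pd.Series:
            # eod has one row per (symbol, date) with a 'volume' column.
            g = eod.groupby("symbol")["volume"]
            recent = g.transform(lambda v: v.rolling(10).mean())
            baseline = g.transform(lambda v: v.rolling(100).mean())
            return recent / baseline    # > 1 means volume grew against the baseline

    The same rolling averages can be expressed in SQL with a self-join over a date-offset range, which keeps the whole computation inside MySQL for the mod_perl front end.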


  • MySQL database question about large columns

    - by murat
    Hi, I have a table that has 100,000 rows, and soon it will be doubled. The size of the database is currently 5 GB, and most of it goes to one particular column, which is a text column holding PDF files. We expect to have a 20-30 GB, or maybe 50 GB, database after a couple of months, and this system will be used frequently. I have a couple of questions regarding this setup:

    1) We are using InnoDB on every table, including the users table etc. Is it better to use MyISAM on the table where we store the text version of the PDF files (from a memory-usage/performance perspective)?

    2) We use Sphinx for searching; however, the data must be retrieved for highlighting. Highlighting is done via the Sphinx API, but we still need to retrieve 10 rows in order to send them to Sphinx again. These 10 rows may allocate 50 MB of memory, which is quite a lot. So I am planning to split the PDF files into chunks of 5 pages in the database, so the current 100,000 rows will become around 3-4 million rows, and a couple of months later, instead of 300,000-350,000 rows, we'll have 10 million rows storing the text version of these PDF files. However, we will retrieve fewer pages: instead of retrieving 400 pages to send to Sphinx for highlighting, we can retrieve 5 pages, and it will have a big impact on performance. Currently, when we search a term and retrieve PDF files that have more than 100 pages, the execution time is 0.3-0.35 seconds; however, if we retrieve PDF files that have fewer than 5 pages, the execution time drops to 0.06 seconds, and it also uses less memory.

    Do you think this is a good trade-off? We will have millions of rows instead of 100k-200k, but it will save memory and improve performance. Is this a good approach to the problem, and do you have any other ideas for overcoming it? The text version of the data is used only for indexing and highlighting, so we are very flexible. Thanks,


  • Word-wrap grid cells in Ext JS

    - by richardtallent
    (This is not a question per se; I'm documenting a solution I found using Ext JS 3.1.0. But feel free to answer if you know of a better solution!)

    The Column config for an Ext JS Grid object has no native way to allow word-wrapped text, but there is a css property for overriding the inline CSS of the TD elements created by the grid. Unfortunately, the TD elements contain a DIV element wrapping the content, and that DIV is set to white-space:nowrap by Ext JS's stylesheet, so overriding the TD's CSS does no good. I added the following to my main CSS file, a simple fix that appears not to break any grid functionality, but allows any white-space setting I apply to the TD to pass through to the DIV:

        .x-grid3-cell {
            /* TD is defaulted to word-wrap. Turn it off so it can be
               turned on for specific columns. */
            white-space: nowrap;
        }

        .x-grid3-cell-inner {
            /* Inherit the DIV's white-space from the TD parent, since the DIV's
               inline style is not accessible in the column definition. */
            white-space: inherit;
        }

    YMMV, but it works for me; I wanted to get it out there as a solution since I couldn't find a working one by searching the Interwebs.


  • SQL select row into a string variable without knowing columns

    - by Brandi
    Hello, I am new to writing SQL and would greatly appreciate help on this problem. :) I am trying to select an entire row into a string, preferably separated by spaces or commas. I would like to accomplish this in a generic way, without having to know specifics about the columns in the tables. What I would love to do is this (pseudocode):

        DECLARE @MyStringVar NVARCHAR(MAX) = ''
        @MyStringVar = SELECT * FROM MyTable WHERE ID = @ID AS STRING

    But what I ended up doing was this:

        DECLARE @MyStringVar NVARCHAR(MAX) = ''
        DECLARE @SpecificField1 INT
        DECLARE @SpecificField2 NVARCHAR(255)
        DECLARE @SpecificField3 NVARCHAR(1000)
        ...
        SELECT @SpecificField1 = Field1,
               @SpecificField2 = Field2,
               @SpecificField3 = Field3
        FROM MyTable WHERE ID = @ID

        SELECT @MyStringVar = CONVERT(NVARCHAR(10), @SpecificField1) + ' '
                            + @SpecificField2 + ' ' + @SpecificField3

    Yuck. :( I have seen some people post about the COALESCE function, but I haven't seen anyone use it without specific column names. I was also thinking there might be a way to use the column names dynamically by getting them with:

        SELECT [COLUMN_NAME] FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'MyTable'

    It really doesn't seem like this should be so complicated. :( What I did works for now, but thanks ahead of time to anyone who can point me to a better solution. :)

    EDIT: Got it fixed; thanks to everyone who answered. :)
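
    Outside T-SQL this is easy, because every database driver reports column names alongside the result set; in Python's sqlite3, for instance, cursor.description carries them, so a row can be stringified without knowing the schema in advance (illustrative table):

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE t (id INTEGER, first TEXT, city TEXT)")
        con.execute("INSERT INTO t VALUES (1, 'Ada', 'London')")

        cur = con.execute("SELECT * FROM t WHERE id = ?", (1,))
        row = cur.fetchone()
        print(" ".join(str(v) for v in row))              # values only
        cols = [d[0] for d in cur.description]            # column names from the driver
        print(", ".join("%s=%s" % p for p in zip(cols, row)))

    Inside T-SQL itself, the usual route is exactly the INFORMATION_SCHEMA.COLUMNS query above, concatenated into a dynamic SQL statement and run with EXEC.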


  • Trying to insert a row using a stored procedure with a parameter bound to an expression

    - by Arvind Singh
    Environment: ASP.NET 3.5 (C# and VB), MS SQL Server 2005 Express.

    Tables:

        Table: tableUser
            ID (primary key)
            username

        Table: userSchedule
            ID (primary key)
            thecreator (foreign key = tableUser.ID)
            other fields

    I have created a procedure that accepts a username parameter, gets the user ID, and inserts a row into userSchedule.

    The problem: using the stored procedure with a DataList control to only fetch data from the database, passing the current username with the statement below, works fine:

        protected void SqlDataSourceGetUserID_Selecting(object sender, SqlDataSourceSelectingEventArgs e)
        {
            e.Command.Parameters["@CurrentUserName"].Value = Context.User.Identity.Name;
        }

    But when inserting using a DetailsView, it shows the error:

        Procedure or function OASNewSchedule has too many arguments specified.

    I did use:

        protected void SqlDataSourceCreateNewSchedule_Selecting(object sender, SqlDataSourceSelectingEventArgs e)
        {
            e.Command.Parameters["@CreatedBy"].Value = Context.User.Identity.Name;
        }

    DetailsView properties: auto-generated fields off, default mode Insert. It shows all the fields, including ones the procedure may not expect, like ID (the primary key, not required by the procedure) and the CreatedBy (user ID) field. So I tried removing the two fields from the DetailsView, and it shows this error instead:

        Cannot insert the value NULL into column 'CreatedBy', table 'D:\OAS\OAS\APP_DATA\ASPNETDB.MDF.dbo.OASTest'; column does not allow nulls. INSERT fails. The statement has been terminated.

    For some reason the parameter's value is not being set. Can anybody help me understand this?


  • Doctrine Default Primary Key Problem (Again)

    - by 01010011
    Hi, should I change all of my uniquely named MySQL primary keys to 'id' to avoid errors related to Doctrine's default primary key, which is set in the plugin doctrine_pi.php? To elaborate: I am getting the following recurring error, this time after trying to log in to my login page:

        SQLSTATE[42S22]: Column not found: 1054 Unknown column 'u.book_id' in 'field list' in...

    I suspect the problem lies in the MySQL table used for my login, which has a primary key called id. Marc B originally solved an identical problem for me in this post http://stackoverflow.com/questions/2702229/doctrine-codeigniter-mysql-crud-errors when I had the same problem with a different table in the same database. Following his suggestion, I changed the default primary key, located at system/application/plugins/doctrine_pi.php, from 'id' to 'book_id':

        <?php
        // system/application/plugins/doctrine_pi.php
        ...
        // set the default primary key to be named 'book_id', integer, 4 bytes
        Doctrine_Manager::getInstance()->setAttribute(
            Doctrine::ATTR_DEFAULT_IDENTIFIER_OPTIONS,
            array('name' => 'book_id', 'type' => 'integer', 'length' => 4));

    That solved my previous problem; however, my login page stopped working. So what is the safe thing to do? Change all of my primary keys to 'id' (and will that solve the problem without causing some other problem I am not aware of)? Or should I add some lines of code in doctrine_pi.php?


  • Common one-to-many table for multiple entities

    - by Ben V
    Suppose I have two tables, Customer and Vendor. I want a common address table for customer and vendor addresses. Customers and Vendors can both have one or more addresses.

    Option 1: add columns for the address IDs to the Customer and Vendor tables. This just doesn't seem like a clean solution to me.

        Customer      Vendor        Address
        --------      ---------     ---------
        CustomerID    VendorID      AddressID
        AddressID1    AddressID1    Street
        AddressID2    AddressID2    City...

    Option 2: move the foreign key to the Address table. For a Customer, Address.CustomerID would be populated; for a Vendor, Address.VendorID would be populated. I don't like this either: I shouldn't need to modify the Address table every time I want to use it for another entity.

        Customer      Vendor        Address
        --------      ---------     ---------
        CustomerID    VendorID      AddressID
                                    CustomerID
                                    VendorID

    Option 3: I've also seen this: a single foreign key column on the Address table, with another column identifying which table the foreign key belongs to. I don't like this one because it requires all the referenced tables to have the same type of ID. It also seems messy once you start coding against it.

        Customer      Vendor        Address
        --------      ---------     ---------
        CustomerID    VendorID      AddressID
                                    FKTable
                                    FKID

    So, am I just too picky, or is there something I haven't thought of?
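
    A fourth option worth weighing: keep Address free of foreign keys and record ownership in one link table per entity. It avoids both the nullable FK columns of option 2 and the untyped FKTable/FKID pair of option 3, at the cost of one extra join. A sketch of the DDL (SQLite syntax, run from Python; names illustrative):

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE Address  (AddressID INTEGER PRIMARY KEY, Street TEXT, City TEXT);
            CREATE TABLE Customer (CustomerID INTEGER PRIMARY KEY, Name TEXT);
            CREATE TABLE Vendor   (VendorID INTEGER PRIMARY KEY, Name TEXT);

            -- One row per (owner, address): one-to-many without touching Address.
            CREATE TABLE CustomerAddress (
                CustomerID INTEGER REFERENCES Customer,
                AddressID  INTEGER REFERENCES Address,
                PRIMARY KEY (CustomerID, AddressID)
            );
            CREATE TABLE VendorAddress (
                VendorID  INTEGER REFERENCES Vendor,
                AddressID INTEGER REFERENCES Address,
                PRIMARY KEY (VendorID, AddressID)
            );
        """)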


  • Parsing Strings (.crt files)

    - by user1661521
    Background: I have a .crt file (a certification authority file). It is composed of many fields, but the one line that sums up this question is:

        Certificate:
            ...(a lot of stuff before)...
            Subject: C=US, ST=Maryland, L=Pasadena, O=Brent Baccala, OU=FreeSoft, CN=www.freesoft.org/[email protected]
            Subject Public Key Info:
            ...(a lot of stuff after)

    I need to parse the file to populate a .csv file, and I have that done. The part I need help with: I need to get the field

        CN=www.freesoft.org

    but when the CN value contains slashes, my parsing breaks. With a raw string like

        CN=foo/bar/the/hell/emailAddress=blablabla

    I need only: foo/bar/the/hell. For a moment I had that in the correct column, but when there is no emailAddress, something fails in my parsing, and in my CN column of the .csv, instead of |CN| foo/bar/the/hell I get |CN| OU=FreeSoft, foo/bar/the/hell. This is my code doing the CN parsing:

        #!/bin/bash
        subject_line=$(echo $cert | grep -o "Subject:.*Subject Public Key Info")
        cn=$(echo $subject_line | grep -o "CN=.*")
        if [ $(echo $cn | grep -c ".*email.*") -gt 0 ]; then
            end_cn=$(echo $cn | grep -b -o emailAddress)
            end_cn_idx=$(echo $end_cn | grep -o .*:)
            final_end_cn=${end_cn_idx:0:-1}
            common_name=${cn:3:$final_end_cn-4}
            echo $common_name
        else
            end_cn=$(echo $cn | grep -b -o "Subject Public Key Info")
            end_cn_idx=$(echo $end_cn | grep -o .*:)
            final_end_cn=${end_cn_idx:0:-1}
            common_name=${cn:3:$final_end_cn-5}
            echo $common_name
        fi
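
    The byte-offset arithmetic is what makes this fragile; a single pattern that captures everything between CN= and either /emailAddress= or the end of the subject handles both cases at once. A sketch of that in Python with sample subjects (the same idea works in one grep -oP or sed expression); the email strings below are made up for illustration:

        import re

        def common_name(subject_line):
            # Non-greedy capture after 'CN=' stops at '/emailAddress='
            # when present, otherwise at the end of the line.
            m = re.search(r'CN=(.*?)(?:/emailAddress=|$)', subject_line)
            return m.group(1) if m else None

        print(common_name("O=Foo, CN=www.freesoft.org/emailAddress=web@freesoft.org"))
        print(common_name("O=Foo, CN=foo/bar/the/hell/emailAddress=blablabla"))
        print(common_name("O=Foo, CN=foo/bar/the/hell"))   # no email field

    Output: www.freesoft.org, then foo/bar/the/hell twice.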


  • Displaying tree path of record in SQL Server 2005

    - by jskiles1
    An example of my tree table ([id] is an identity):

        [id], [parent_id], [path]
        1,    NULL,        1
        2,    1,           1-2
        3,    1,           1-3
        4,    3,           1-3-4

    My goal is to query this table quickly, for multiple rows, and view the full path of each node: from its root, through its ancestors, down to itself. The ultimate question is: should I generate this path on insert and maintain it in its own column, or generate it at query time to save disk space? I guess it depends on whether the table is write-heavy or read-heavy. The "path" is simply for display purposes and serves absolutely no purpose other than that. Here is what I have done to implement it:

    1. AFTER INSERT trigger: requires passing a NULL path to the insert and updating the path for the record at the inserted row's identity.
    2. INSTEAD OF INSERT trigger: does not require the insert to pass a NULL path, but does require the trigger to insert with a NULL path and update the path for the record at SCOPE_IDENTITY().
    3. Stored procedure: requires all inserts into this table to go through the procedure, which implements the trigger logic.
    4. View: requires building the path in the view.

    1 and 2 seem annoying if massive amounts of data are entered at once. 3 seems annoying because all inserts must go through the procedure in order to have a valid path populated. 1, 2, and 3 all require maintaining a path column on the table. 4 removes all of those limitations, but requires the view to perform the path logic, and the view must be used whenever a path is to be displayed. I have successfully implemented all of the above approaches, and I'm mainly looking for some advice. Am I way off the mark here, or are any of the above acceptable? Each has its advantages and disadvantages.
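
    For option 4, the path does not have to be assembled procedurally: a recursive common table expression, which SQL Server 2005 supports, derives it from id/parent_id at query time, so nothing has to be maintained on insert. The same query sketched against SQLite from Python (the WITH RECURSIVE syntax is close to the T-SQL equivalent):

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE tree (id INTEGER PRIMARY KEY, parent_id INTEGER);
            INSERT INTO tree VALUES (1, NULL), (2, 1), (3, 1), (4, 3);
        """)
        rows = con.execute("""
            WITH RECURSIVE paths(id, path) AS (
                SELECT id, CAST(id AS TEXT) FROM tree WHERE parent_id IS NULL
                UNION ALL
                SELECT t.id, p.path || '-' || t.id
                FROM tree t JOIN paths p ON t.parent_id = p.id
            )
            SELECT id, path FROM paths ORDER BY id
        """).fetchall()
        print(rows)   # [(1, '1'), (2, '1-2'), (3, '1-3'), (4, '1-3-4')]

    In T-SQL the concatenation becomes p.path + '-' + CAST(t.id AS VARCHAR(10)); wrapping the CTE in a view gives exactly the display-only path of option 4.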


  • Strange behaviour in the Silverlight 3 DataGrid control

    - by Asim Sajjad
    Here is my XAML code, in which I change the Foreground of a cell depending on the person's current age:

        <data:DataGridTemplateColumn Header="First Name" Width="150" MinWidth="150"
                                     CanUserReorder="False" SortMemberPath="FirstName">
            <data:DataGridTemplateColumn.CellTemplate>
                <DataTemplate>
                    <TextBlock Foreground="{Binding Path=DateOfBirth, Mode=OneWay,
                                            Converter={StaticResource CellColor}}"
                               Text="{Binding FirstName}"
                               ToolTipService.ToolTip="{Binding FirstName}"
                               FontFamily="Arial" FontSize="11"
                               VerticalAlignment="Center" Margin="5,0,0,0" />
                </DataTemplate>
            </data:DataGridTemplateColumn.CellTemplate>
        </data:DataGridTemplateColumn>

        <data:DataGridTextColumn Foreground="{Binding Path=DateOfBirth, Mode=OneWay,
                                              Converter={StaticResource CellColor}}"
                                 Header="Last Name" Width="150" MinWidth="150"
                                 Binding="{Binding LastName}" CanUserSort="True"
                                 IsReadOnly="True" CanUserReorder="False"/>

    When I run the above code, it throws the following exception:

        AG_E_PARSER_BAD_PROPERTY_VALUE

    When I remove the Foreground converter from the DataGridTextColumn, it runs fine. The same converter applied inside the DataGridTemplateColumn does not throw an exception, but on the DataGridTextColumn it does. Does anyone know why this happens? Thanks in advance.


  • Trying to convert existing production database table columns from enum to VARCHAR (Rails)

    - by dchua
    Hi everyone, I have a problem that requires me to convert the column types of my existing live production database (I've duplicated the schema on my local development box, don't worry :)) from enums to strings. Background: a previous developer left my codebase in absolute shit; migration versions are extremely out of date, and apparently he stopped using them at a certain point of development. Now that I'm tasked with migrating a Rails 1.2.6 app to 2.3.5, I can't get the tests to run properly on 2.3.5, because my table columns have ENUM column types, which get converted to :string, :limit => 0 in my schema.rb. That creates the problem of an invalid default value when doing rake db:test:prepare, as in:

        Mysql::Error: Invalid default value for 'own_vehicle': CREATE TABLE `lifestyles`
        (`id` int(11) DEFAULT NULL auto_increment PRIMARY KEY,
        `member_id` int(11) DEFAULT 0 NOT NULL,
        `own_vehicle` varchar(0) DEFAULT 'Y' NOT NULL,
        `hobbies` text, `sports` text, `AStar_activities` text,
        `how_know_IRC` varchar(100), `IRC_referral` varchar(200),
        `IRC_others` varchar(100), `IRC_rdrive` varchar(30)) ENGINE=InnoDB

    I'm thinking of writing a migration task that looks through all the database tables for columns with ENUM types and replaces them with VARCHAR, and I'm wondering if this is the right way to approach the problem. I'm also not very sure how to write it so that it loops through the database tables and replaces all ENUM column types with VARCHAR.

    References:
    [1] https://rails.lighthouseapp.com/projects/8994/tickets/997-dbschemadump-saves-enum-columns-as-varchar0-on-mysql
    [2] http://dev.rubyonrails.org/ticket/2832


  • PHP - Tricky... array into columns, but in a specific order.

    - by Joe
        <?php
        $combinedArray = array("apple","banana","watermelon","lemon","orange","mango");
        $num_cols = 3;

        $i = 0;
        foreach ($combinedArray as $r) {
            /*** use modulo to check if the row should end ***/
            echo $i++ % $num_cols == 0 ? '<div style="clear:both;"></div>' : '';
            /*** output the array item ***/
            ?>
            <div style="float:left; width:33%;">
                <?php echo $r; ?>
            </div>
            <?php
        }
        ?>
        <div style="clear:both;"></div>

    The above code prints the array out like this:

        apple --- banana --- watermelon
        lemon --- orange --- mango

    However, I need it like this:

        apple  --- watermelon --- orange
        banana --- lemon      --- mango

    Do you know how to convert this? Basically, each value in the array needs to be placed underneath the one above, but it must keep the same structure of 3 columns, with an equal number of fruits per column (unless there were, say, 7 fruits, in which case one column would have 3 and the other columns 2). Sorry, I know it's confusing lol
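
    The underlying job is a row-major to column-major reorder: split the array into per-column runs (the first n % cols columns get one extra item, giving the 3/2/2 split mentioned above), then emit it row by row. A sketch of the reordering in Python; transliterating it to PHP and feeding the result to the existing foreach loop is straightforward:

        def columnize(items, cols):
            n = len(items)
            base, extra = divmod(n, cols)
            # First `extra` columns are one item taller, e.g. 7 items -> 3/2/2.
            heights = [base + 1 if k < extra else base for k in range(cols)]
            starts = [sum(heights[:k]) for k in range(cols)]
            out = []
            for r in range(heights[0] if heights else 0):
                for k in range(cols):
                    if r < heights[k]:
                        out.append(items[starts[k] + r])
            return out

        fruits = ["apple", "banana", "watermelon", "lemon", "orange", "mango"]
        print(columnize(fruits, 3))
        # ['apple', 'watermelon', 'orange', 'banana', 'lemon', 'mango']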


  • Database error handling: what if you have to call an outside service and the transaction fails?

    - by Ngu Soon Hui
    We all know that we can wrap a database call in a transaction (with or without a proper ORM), in a form like this:

        $con = Propel::getConnection(EventPeer::DATABASE_NAME);
        try {
            $con->begin();
            // do your update, save, delete or whatever here.
            $con->commit();
        } catch (PropelException $e) {
            $con->rollback();
            throw $e;
        }

    This guarantees that if the transaction fails, the database is restored to the correct state. But the problem is this: when I do a transaction, suppose I also need to update another database (for example, when I update an entry in a column in databaseA, an entry in a column in databaseB must be updated as well). How do I handle that case? Say this is my code, and I have three databases that need to be updated (dbA, dbB, dbC):

        $con = Propel::getConnection("dbA");
        try {
            $con->begin();
            // update to dbA
            // update to dbB
            // update to dbC
            $con->commit();
        } catch (PropelException $e) {
            $con->rollback();
            throw $e;
        }

    If dbC fails, I can roll back dbA, but I can't roll back dbB. I think this problem should be database-independent, and since I am using an ORM, the solution should be ORM-independent as well.

    Update: some of the database transactions are wrapped in the ORM, and some use naked PDO or OleDb (or whatever bare-minimum language-provided database calls), so my solution has to take care of that. Any ideas?
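
    Without a real distributed transaction coordinator (XA, MSDTC), the usual best-effort pattern is: open a transaction on every database first, do all the work, and only then commit them one after another, so that any failure before the first commit rolls everything back cleanly. The window where one commit succeeds and a later one fails is narrowed, not eliminated; closing it fully needs two-phase commit support from the databases. A sketch of the pattern with two SQLite connections standing in for dbA and dbB:

        import sqlite3

        # isolation_level=None gives autocommit mode, so BEGIN/COMMIT/ROLLBACK
        # on each connection are under our control.
        dbA = sqlite3.connect(":memory:", isolation_level=None)
        dbB = sqlite3.connect(":memory:", isolation_level=None)
        for c in (dbA, dbB):
            c.execute("CREATE TABLE t (x INTEGER)")

        conns = (dbA, dbB)
        try:
            for c in conns:
                c.execute("BEGIN")
            dbA.execute("INSERT INTO t VALUES (1)")
            dbB.execute("INSERT INTO t VALUES (1)")
            for c in conns:          # commit last, after all work succeeded
                c.execute("COMMIT")
        except Exception:
            for c in conns:
                try:
                    c.execute("ROLLBACK")
                except sqlite3.OperationalError:
                    pass             # this one had already committed
            raise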


  • Ruby on Rails - Primary and Foreign key

    - by Eef
    Hey, I am creating a site in Ruby on Rails. I have two models, a User model and a Transaction model. These models both belong to an account, so they both have a field called account_id. I am trying to set up an association between them like so:

        class User < ActiveRecord::Base
          belongs_to :account
          has_many :transactions
        end

        class Transaction < ActiveRecord::Base
          belongs_to :account
          belongs_to :user
        end

    I am using these associations like so:

        user = User.find(1)
        transactions = user.transactions

    At the moment the application is trying to find the transactions via user_id; here is the SQL it generates:

        Mysql::Error: Unknown column 'transactions.user_id' in 'where clause':
        SELECT * FROM `transactions` WHERE (`transactions`.user_id = 1)

    This is incorrect, as I would like to find the transactions via account_id, so I tried setting up the associations like so:

        class User < ActiveRecord::Base
          belongs_to :account
          has_many :transactions, :primary_key => :account_id, :class_name => "Transaction"
        end

        class Transaction < ActiveRecord::Base
          belongs_to :account
          belongs_to :user, :foreign_key => :account_id, :class_name => "User"
        end

    This almost achieves what I want, and generates the following SQL:

        Mysql::Error: Unknown column 'transactions.user_id' in 'where clause':
        SELECT * FROM `transactions` WHERE (`transactions`.user_id = 104)

    The number 104 is the correct account_id, but it is still querying the transactions table for a user_id field. Could someone give me some advice on how to set up the associations so that the transactions table is queried by account_id instead of user_id, resulting in SQL like this:

        SELECT * FROM `transactions` WHERE (`transactions`.account_id = 104)

    Cheers, Eef

