Search Results

Search found 12215 results on 489 pages for 'identity column'.

Page 245 of 489

  • Optimizing MySQL for ALTER TABLE of InnoDB

    - by schuilr
    Sometime soon we will need to make schema changes to our production database. We need to minimize downtime for this effort; however, the ALTER TABLE statements are going to run for quite a while. Our largest tables have 150 million records, and the largest table file is 50G. All tables are InnoDB, and the server was set up with one big data file (instead of file-per-table). We're running MySQL 5.0.46 on an 8-core machine with 16G of memory and a RAID10 config.

    I have some experience with MySQL tuning, but it usually focuses on reads or writes from multiple clients. There is lots of information on the Internet about that; however, there seems to be very little on best practices for (temporarily) tuning a MySQL server to speed up ALTER TABLE on InnoDB tables, or INSERT INTO .. SELECT FROM (we will probably use the latter instead of ALTER TABLE, since it gives us more opportunities to speed things up a bit).

    The schema change we are planning is to add an integer column to every table and make it the primary key, replacing the current primary key. We need to keep the 'old' column as well, so overwriting the existing values is not an option. What would be the ideal settings to get this task done as quickly as possible?
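    A commonly cited pattern for this kind of change, shown here only as a rough sketch (table and column names are invented, and anything like this should be rehearsed against a copy of the data first): create the new table empty, apply the schema change while it is empty, relax per-session checks, bulk-copy, then swap.

        CREATE TABLE mytable_new LIKE mytable;

        -- Do the schema change on the empty copy, where it is instant.
        ALTER TABLE mytable_new
            DROP PRIMARY KEY,
            ADD COLUMN new_id INT NOT NULL,
            ADD PRIMARY KEY (new_id);

        -- Relax checks for this session only while bulk-copying.
        -- (A larger innodb_buffer_pool_size / redo log also helps, but needs a restart.)
        SET SESSION unique_checks = 0;
        SET SESSION foreign_key_checks = 0;

        SET @n := 0;
        INSERT INTO mytable_new (new_id, old_key, payload)
        SELECT (@n := @n + 1), old_key, payload
        FROM mytable;

        SET SESSION unique_checks = 1;
        SET SESSION foreign_key_checks = 1;

        -- Atomic swap once the copy has been verified.
        RENAME TABLE mytable TO mytable_old, mytable_new TO mytable;

    In this naive form, writes that hit the original table during the copy are lost, so it still needs a maintenance window or some way to replay changes made during the copy.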

    Read the article

  • How can I create a SQL table using excel columns?

    - by Phsika
    I need help generating column names from Excel automatically. I think we can run a statement like the one below via C#:

        CREATE TABLE [dbo].[Addresses_Temp] (
            [FirstName] VARCHAR(20),
            [LastName]  VARCHAR(20),
            [Address]   VARCHAR(50),
            [City]      VARCHAR(30),
            [State]     VARCHAR(2),
            [ZIP]       VARCHAR(10)
        )

    How can I get the column names from Excel? This is what I have so far:

        private void Form1_Load(object sender, EventArgs e)
        {
            ExcelToSql();
        }

        void ExcelToSql()
        {
            string connectionString = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Source\MPD.xlsm;Extended Properties=""Excel 12.0;HDR=YES;""";
            // if you don't want to show the header row (first row)
            // use 'HDR=NO' in the string
            string strSQL = "SELECT * FROM [Sheet1$]";
            OleDbConnection excelConnection = new OleDbConnection(connectionString);
            excelConnection.Open(); // This code will open excel file.
            OleDbCommand dbCommand = new OleDbCommand(strSQL, excelConnection);
            OleDbDataAdapter dataAdapter = new OleDbDataAdapter(dbCommand);
            // create data table
            DataTable dTable = new DataTable();
            dataAdapter.Fill(dTable);
            // bind the datasource
            // dataBingingSrc.DataSource = dTable;
            // assign the dataBindingSrc to the DataGridView
            // dgvExcelList.DataSource = dataBingingSrc;
            // dispose used objects
            if (dTable.Rows.Count > 0)
                MessageBox.Show("Count:" + dTable.Rows.Count.ToString());
            dTable.Dispose();
            dataAdapter.Dispose();
            dbCommand.Dispose();
            excelConnection.Close();
            excelConnection.Dispose();
        }

    Read the article

  • Is the order of params important in NHibernate?

    - by Blake Blackwell
    If I have an int parameter followed by a string parameter in a sproc, I get the following error:

        Input string was not in the correct format

    However, if I switch those parameters in the sproc, then I get the result set I expect. Are params sorted by data type, or do I have to do anything special in my config file? I've included my code for reference.

    Config file:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="NHibernateDemo" namespace="NHibernateDemo.Domain">
          <class name="Blake_Test" table="Blake_Test">
            <id name="TestId" column="TESTID"></id>
            <property name="TestName" column="TESTNAME" />
            <loader query-ref="GetBlakeTest"/>
          </class>
          <sql-query name="GetBlakeTest" callable="true">
            <return class="Blake_Test" />
            call procedure AREA51.NHIBERNATE_TEST.GetBlakeTest(:int_TestId, :vch_TestName)
          </sql-query>
        </hibernate-mapping>

    Sproc code:

        PROCEDURE GetBlakeTest
        (
            ret_cursor OUT SYS_REFCURSOR,
            int_testid integer,
            vch_testname varchar2
        )
        AS
        BEGIN
            OPEN ret_cursor FOR
                SELECT TestId, TestName
                FROM blake_test
                WHERE testid = int_testid
                ORDER BY TestName DESC;
        END GetBlakeTest;
        END NHIBERNATE_TEST;

    Executing code:

        IQuery query1 = session.GetNamedQuery( "GetBlakeTest" );
        query1.SetParameter( "int_TestId", 1 );
        query1.SetParameter( "vch_TestName", "TEST" );
        IList<Blake_Test> mystuff = query1.List<Blake_Test>();

    Read the article

  • Mapping composite foreign keys in a many-many relationship, with overlapping components.

    - by Kirk Broadhurst
    I have a Page table and a View table. There is a many-many relationship between these two via a PageView table. Unfortunately, all of these tables need to have composite keys (for business reasons). Page has a primary key of (PageCode, Version), and View has a primary key of (ViewCode, Version). PageView, obviously enough, has PageCode, ViewCode, and Version; the FK to Page is (PageCode, Version) and the FK to View is (ViewCode, Version). This makes sense and works, but when I try to map it in Entity Framework I get:

        Error 3021: Problem in mapping fragments...:
        Each of the following columns in table PageView is mapped to multiple conceptual side properties:
        PageView.Version is mapped to (PageView_Association.View.Version, PageView_Association.Page.Version)

    So, clearly enough, EF is complaining about the Version column being a common component of the two foreign keys. Obviously I could create a PageVersion and a ViewVersion column in the join table, but that kind of defeats the point of the constraint, i.e. the Page and View must have the same Version value.

    Has anyone encountered this, and is there anything I can do to get around it? Thanks!
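    For reference, the schema being described looks roughly like this in T-SQL (column types are assumed); the shared Version column in PageView is the part EF objects to:

        CREATE TABLE Page (
            PageCode  VARCHAR(20) NOT NULL,
            Version   INT         NOT NULL,
            PRIMARY KEY (PageCode, Version)
        );

        CREATE TABLE [View] (            -- "View" is a reserved word, bracketed here
            ViewCode  VARCHAR(20) NOT NULL,
            Version   INT         NOT NULL,
            PRIMARY KEY (ViewCode, Version)
        );

        CREATE TABLE PageView (
            PageCode  VARCHAR(20) NOT NULL,
            ViewCode  VARCHAR(20) NOT NULL,
            Version   INT         NOT NULL,
            PRIMARY KEY (PageCode, ViewCode, Version),
            FOREIGN KEY (PageCode, Version) REFERENCES Page (PageCode, Version),
            FOREIGN KEY (ViewCode, Version) REFERENCES [View] (ViewCode, Version)
        );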

    Read the article

  • Help on understanding multiple columns on an index?

    - by Xaisoft
    Assume I have a table called "table" with 3 columns: a, b, and c. What does it mean to have a non-clustered index on columns a,b? Is a non-clustered index on columns a,b the same as a non-clustered index on columns b,a? (Note the order.) Also, is a non-clustered index on column a the same as a non-clustered index on a,c?

    I was looking at the SQL Server Performance website, and they had DMV scripts that would tell you if you had overlapping indexes; I believe they were saying that having an index on a is the same as having one on a,b, so it is redundant. Is this true of indexes?

    One last question: why is the clustered index put on the primary key? Most of the time the primary key is not queried against, so shouldn't the clustered index be on the most queried column? I am probably missing something here, like having it on the primary key speeding up joins?

    Great explanations. Should I turn this into a wiki and change the title to "index explanation"?
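    A small illustration of why column order matters (table and index names are invented for the example):

        CREATE TABLE demo (a INT, b INT, c INT);

        CREATE NONCLUSTERED INDEX ix_a_b ON demo (a, b);

        -- ix_a_b can be used to seek on these, because the leading column a is constrained:
        SELECT c FROM demo WHERE a = 1;
        SELECT c FROM demo WHERE a = 1 AND b = 2;

        -- It cannot seek for this one; an index with b as the leading column could:
        SELECT c FROM demo WHERE b = 2;

    This is also why an index on (a) alone is generally considered redundant next to an existing index on (a, b), while an index on (a) and one on (a, c) are not equivalent for queries that also filter or cover column c.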

    Read the article

  • MySQL Database Question about Large Columns

    - by murat
    Hi, I have a table that has 100,000 rows, and soon it will be doubled. The size of the database is currently 5 GB, and most of that goes to one particular column, which is a text column holding the text of PDF files. We expect to have a 20-30 GB, or maybe 50 GB, database after a couple of months, and this system will be used frequently. I have a couple of questions regarding this setup:

    1) We are using InnoDB on every table, including the users table etc. Is it better to use MyISAM on this table, where we store the text version of the PDF files (from a memory usage / performance perspective)?

    2) We use Sphinx for searching; however, the data must be retrieved for highlighting. Highlighting is done via the Sphinx API, but we still need to retrieve 10 rows in order to send them to Sphinx again. These 10 rows may allocate 50 MB of memory, which is quite large. So I am planning to split these PDF files into chunks of 5 pages in the database, so these 100,000 rows will become around 3-4 million rows, and a couple of months later, instead of having 300,000-350,000 rows, we'll have 10 million rows storing the text version of these PDF files. However, we will retrieve fewer pages; so instead of retrieving 400 pages to send to Sphinx for highlighting, we can retrieve 5 pages, and it will have a big impact on performance. Currently, when we search a term and retrieve PDF files that have more than 100 pages, the execution time is 0.3-0.35 seconds; however, if we retrieve PDF files that have fewer than 5 pages, the execution time drops to 0.06 seconds, and it also uses less memory.

    Do you think this is a good trade-off? We will have millions of rows instead of 100k-200k rows, but it will save memory and improve performance. Is this a good approach to the problem, and do you have any other ideas for how to overcome it? The text version of the data is used only for indexing and highlighting, so we are very flexible.

    Thanks,
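    If it helps, the chunked layout being described could look something like this (a sketch only; names, types and the 5-page grouping are assumptions):

        -- One row per 5-page chunk instead of one row per document.
        CREATE TABLE pdf_chunks (
            chunk_id     INT UNSIGNED NOT NULL AUTO_INCREMENT,
            document_id  INT UNSIGNED NOT NULL,
            first_page   SMALLINT UNSIGNED NOT NULL,
            body_text    MEDIUMTEXT NOT NULL,
            PRIMARY KEY (chunk_id),
            KEY idx_doc_page (document_id, first_page)
        ) ENGINE=InnoDB;

        -- Highlighting then only has to pull the handful of chunks Sphinx matched:
        SELECT body_text
        FROM pdf_chunks
        WHERE chunk_id IN (101, 205, 1337);   -- placeholder IDs returned by the Sphinx search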

    Read the article

  • Word-wrap grid cells in Ext JS

    - by richardtallent
    (This is not a question per se, I'm documenting a solution I found using Ext JS 3.1.0. But, feel free to answer if you know of a better solution!) The Column config for an Ext JS Grid object does not have a native way to allow word-wrapped text, but there is a css property to override the inline CSS of the TD elements created by the grid. Unfortunately, the TD elements contain a DIV element wrapping the content, and that DIV is set to white-space:nowrap by Ext JS's stylesheet, so overriding the TD CSS does no good. I added the following to my main CSS file, a simple fix that appears to not break any grid functionality, but allows any white-space setting I apply to the TD to pass through to the DIV. .x-grid3-cell { /* TD is defaulted to word-wrap. Turn it off so it can be turned on for specific columns. */ white-space:nowrap; } .x-grid3-cell-inner { /* Inherit DIV's white-space from TD parent, since DIV's inline style is not accessible in the column definition. */ white-space:inherit; } YMMV, but it works for me, wanted to get it out there as a solution since I couldn't find a working solution by searching the Interwebs.

    Read the article

  • How to reuse results with a schema for end of day stock-data

    - by Vishalrix
    I am creating a database schema to be used for technical analysis like top-volume gainers, top-price gainers, etc. I have checked answers to questions here, like the design question. Having taken the hint from boe100's answer there, I have a schema modeled pretty much on it:

        Symbol - char 6         // primary
        Date   - date           // primary
        Open   - decimal 18, 4
        High   - decimal 18, 4
        Low    - decimal 18, 4
        Close  - decimal 18, 4
        Volume - int

    Right now this table containing end-of-day (EOD) data will be about 3 million rows for 3 years. Later, when I get/need more data, it could be 20 million rows.

    The front end will be making requests like "give me the top price gainers on date X over Y days". That request is one of the simpler ones, and as such is not too costly, time-wise, I assume. But a request like "give me the top volume gainers for the last 10 days, with the previous 100 days acting as baseline" could prove 10-100 times costlier. The result of such a request would be a float which signifies how many times the volume has grown, etc.

    One option I have is adding a column for each such result. And if the user asks for volume gain in 10 days over 20 days, that would require another table. The total number of such tables could easily cross 100, especially if I start using other results as tables, like MACD-10, MACD-100, each of which will require its own column. Is this a feasible solution?

    Another option is to keep the results in cached HTML files and present them to the user. I don't have much experience in web development, so to me it looks messy; but I could be wrong (of course!). Is that an option too?

    Let me add that I am/will be using mod_perl to present the response to the user, with much of the work on the MySQL database being done using Perl. I would like to have a response time of 1-2 seconds.
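    For what it's worth, the "top volume gainers over a baseline" number can also be computed on the fly from the raw EOD table rather than precomputed into extra columns. A sketch of that kind of query (table name, dates and the LIMIT are placeholders; with the (Symbol, Date) primary key this scan may well be fast enough at 3-20 million rows, but it needs measuring):

        -- Average volume of the last 10 days divided by the average of the
        -- 100 days before that; the two dates mark the window boundaries.
        SELECT e.Symbol,
               AVG(CASE WHEN e.Date >  '2010-03-20' THEN e.Volume END) /
               NULLIF(AVG(CASE WHEN e.Date <= '2010-03-20' THEN e.Volume END), 0) AS volume_ratio
        FROM eod AS e
        WHERE e.Date > '2009-12-10'      -- start of the 100-day baseline
        GROUP BY e.Symbol
        ORDER BY volume_ratio DESC
        LIMIT 20;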

    Read the article

  • No JSON object could be decoded - RPC POST call

    - by user1307067
    Client-side code:

        var body = JSON.stringify(params);
        // Create an XMLHttpRequest 'POST' request w/ an optional callback handler
        req.open('POST', '/rpc', async);
        req.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
        req.setRequestHeader("Content-length", body.length);
        req.setRequestHeader("Connection", "close");
        if (async) {
            req.onreadystatechange = function() {
                if (req.readyState == 4 && req.status == 200) {
                    var response = null;
                    try {
                        response = JSON.parse(req.responseText);
                    } catch (e) {
                        response = req.responseText;
                    }
                    callback(response);
                }
            };
        }
        // Make the actual request
        req.send(body);

    On the server side:

        class RPCHandler(BaseHandler):
            '''@user_required'''
            def post(self):
                RPCmethods = ("UpdateScenario", "DeleteScenario")
                logging.info(u'body ' + self.request.body)
                args = simplejson.loads(self.request.body)

    I get the following error in the server logs:

        body %5B%22UpdateScenario%22%2C%22c%22%2C%224.5%22%2C%2230frm%22%2C%22Refinance%22%2C%22100000%22%2C%22740%22%2C%2294538%22%2C%2250000%22%2C%22owner%22%2C%22sfr%22%2C%22Fremont%22%2C%22CA%22%5D=
        No JSON object could be decoded: line 1 column 0 (char 0): Traceback (most recent call last):
          File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/_webapp25.py", line 703, in __call__
            handler.post(*groups)
          File "/base/data/home/apps/s~mortgageratealert-staging/1.357912751535215625/main.py", line 418, in post
            args = json.loads(self.request.body)
          File "/base/python_runtime/python_lib/versions/1/simplejson/__init__.py", line 388, in loads
            return _default_decoder.decode(s)
          File "/base/python_runtime/python_lib/versions/1/simplejson/decoder.py", line 402, in decode
            obj, end = self.raw_decode(s, idx=_w(s, 0).end())
          File "/base/python_runtime/python_lib/versions/1/simplejson/decoder.py", line 420, in raw_decode
            raise JSONDecodeError("No JSON object could be decoded", s, idx)
        JSONDecodeError: No JSON object could be decoded: line 1 column 0 (char 0)

    Firebug shows the following:

        Parameters    application/x-www-form-urlencoded
        ["UpdateScenario","c","4....
        Source
        ["UpdateScenario","c","4.5","30frm","Refinance","100000","740","94538","50000","owner","sfr","Fremont","CA"]

    Based on the Firebug report and also the logs, self.request.body looks as anticipated. However, simplejson.loads doesn't like it. Please help!

    Read the article

  • Parsing Strings ( .crt files )

    - by user1661521
    Background: I have a .crt file (a certification authority file). It is composed of many fields, but the one line that sums up this question is this:

        Certificate:
            ...(a lot of stuff before)...
            Subject: C=US, ST=Maryland, L=Pasadena, O=Brent Baccala, OU=FreeSoft, CN=www.freesoft.org/[email protected]
            Subject Public Key Info:
            ...(a lot of stuff after)

    I need to parse the file to populate a .csv file, and I have that mostly done. The problem I need help with is extracting the field CN=www.freesoft.org. When the CN value contains a lot of slashes, I get an error in the parsing. For example, if the raw string is:

        CN=foo/bar/the/hell/emailAddress=blablabla

    I need only:

        foo/bar/the/hell

    For a moment I had that going into the correct column, but when there is no emailAddress, something fails in my parsing and the CN column of my .csv ends up wrong: instead of |CN| foo/bar/the/hell I get |CN| OU=FreeSoft, foo/bar/the/hell. This is the code doing the CN parsing:

        #!/bin/bash
        subject_line=$(echo $cert | grep -o "Subject:.*Subject Public Key Info")
        cn=$(echo $subject_line | grep -o "CN=.*")
        if [ $(echo $cn | grep -c ".*email.*") -gt 0 ]; then
            end_cn=$(echo $cn | grep -b -o emailAddress)
            end_cn_idx=$(echo $end_cn | grep -o .*:)
            final_end_cn=${end_cn_idx:0:-1}
            common_name=${cn:3:$final_end_cn-4}
            echo $common_name
        else
            end_cn=$(echo $cn | grep -b -o "Subject Public Key Info")
            end_cn_idx=$(echo $end_cn | grep -o .*:)
            final_end_cn=${end_cn_idx:0:-1}
            common_name=${cn:3:$final_end_cn-5}
            echo $common_name
        fi

    Read the article

  • Doctrine Default Primary Key Problem (Again)

    - by 01010011
    Hi,

    Should I change all of my uniquely-named MySQL database primary keys to 'id' to avoid getting errors related to Doctrine's default primary key set in the plugin 'doctrine_pi.php'?

    To elaborate, I am getting the following recurring error, this time after trying to log in to my login page:

        SQLSTATE[42S22]: Column not found: 1054 Unknown column 'u.book_id' in 'field list'' in...

    I suspect the problem lies in a MySQL table used for my login, which has a primary key called id. Marc B originally solved an identical problem for me in this post http://stackoverflow.com/questions/2702229/doctrine-codeigniter-mysql-crud-errors when I had the same problem with a different table within the same database. Following his suggestion, I changed the default primary key located at system/application/plugins/doctrine_pi.php from 'id' to 'book_id':

        <?php
        // system/application/plugins/doctrine_pi.php
        ...
        // set the default primary key to be named 'id', integer, 4 bytes
        Doctrine_Manager::getInstance()->setAttribute(
            Doctrine::ATTR_DEFAULT_IDENTIFIER_OPTIONS,
            array('name' => 'book_id', 'type' => 'integer', 'length' => 4));

    That solved my previous problem; however, my login page stopped working. So what is the safe thing to do? Change all of my primary keys to 'id' (and will that solve the problem without causing some other problem I am not aware of)? Or should I add some lines of code to doctrine_pi.php?

    Read the article

  • PHP - Tricky... array into columns, but in a specific order.

    - by Joe
    <?php $combinedArray = array("apple","banana","watermelon","lemon","orange","mango"); $num_cols = 3; $i = 0; foreach ($combinedArray as $r ){ /*** use modulo to check if the row should end ***/ echo $i++%$num_cols==0 ? '<div style="clear:both;"></div>' : ''; /*** output the array item ***/ ?> <div style="float:left; width:33%;"> <?php echo $r; ?> </div> <?php } ?> <div style="clear:both;"></div> The above code will print out the array like this: apple --- banana --- watermelon lemon --- orange --- mango However, I need it like this: apple --- watermelon --- orange banana --- lemon --- mango Do you know how to convert this? Basically, each value in the array needs to be placed underneath the one above, but it must be based on this same structure of 3 columns, and also an equal amount of fruits per column/row (unless there was like 7 fruits there would be 3 in one column and 2 in the other columns. Sorry I know it's confusing lol

    Read the article

  • SQL select row into a string variable without knowing columns

    - by Brandi
    Hello, I am new to writing SQL and would greatly appreciate help on this problem. :) I am trying to select an entire row into a string, preferably separated by a space or a comma. I would like to accomplish this in a generic way, without having to know specifics about the columns in the tables. What I would love to do is this:

        DECLARE @MyStringVar NVARCHAR(MAX) = ''
        @MyStringVar = SELECT * FROM MyTable WHERE ID = @ID AS STRING

    But what I ended up doing was this:

        DECLARE @MyStringVar = ''
        DECLARE @SpecificField1 INT
        DECLARE @SpecificField2 NVARCHAR(255)
        DECLARE @SpecificField3 NVARCHAR(1000)
        ...
        SELECT @SpecificField1 = Field1, @SpecificField2 = Field2, @SpecificField3 = Field3
        FROM MyTable WHERE ID = @ID
        SELECT @StringBuilder = @StringBuilder + CONVERT(nvarchar(10), @Field1) + ' ' + @Field2 + ' ' + @Field3

    Yuck. :( I have seen some people post about the COALESCE function, but again, I haven't seen anyone use it without specific column names. Also, I was thinking, perhaps there is a way to get the column names dynamically with:

        SELECT [COLUMN_NAME] FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'MyTable'

    It really doesn't seem like this should be so complicated. :( What I did works for now, but thanks ahead of time to anyone who can point me to a better solution. :)

    EDIT: Got it fixed, thanks to everyone who answered. :)
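    One generic approach, shown as a sketch only (it assumes SQL Server 2008+ and a dbo.MyTable with an ID column; building a string by assigning a variable inside an ordered SELECT is a common but not formally guaranteed T-SQL idiom): build the concatenation expression from INFORMATION_SCHEMA, then run it with sp_executesql.

        DECLARE @cols NVARCHAR(MAX) = N'';
        SELECT @cols = @cols
                     + CASE WHEN @cols = N'' THEN N'' ELSE N' + '' '' + ' END
                     + N'ISNULL(CONVERT(nvarchar(max), ' + QUOTENAME(COLUMN_NAME) + N'), '''')'
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = N'MyTable'
        ORDER BY ORDINAL_POSITION;

        DECLARE @sql NVARCHAR(MAX) =
            N'SELECT @out = ' + @cols + N' FROM dbo.MyTable WHERE ID = @ID';

        DECLARE @result NVARCHAR(MAX);
        EXEC sp_executesql @sql,
             N'@ID int, @out nvarchar(max) OUTPUT',
             @ID = 42, @out = @result OUTPUT;   -- 42 is a placeholder ID

        SELECT @result;   -- all columns of the row, space-separated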

    Read the article

  • Trying to convert existing production database table columns from enum to VARCHAR (Rails)

    - by dchua
    Hi everyone,

    I have a problem that requires me to convert my existing live production table column types from enums to strings (I've duplicated the schema on my local development box, don't worry :)).

    Background: basically, a previous developer left my codebase in absolute shit, the migration versions are extremely out of date, and apparently he never used migrations after a certain point in development. Now that I'm tasked with migrating a Rails 1.2.6 app to 2.3.5, I can't get the tests to run properly on 2.3.5 because my table columns have ENUM column types, and they are dumped as :string, :limit => 0 in my schema.rb. That creates an invalid default value when doing a rake db:test:prepare, as in this case:

        Mysql::Error: Invalid default value for 'own_vehicle': CREATE TABLE `lifestyles` (
            `id` int(11) DEFAULT NULL auto_increment PRIMARY KEY,
            `member_id` int(11) DEFAULT 0 NOT NULL,
            `own_vehicle` varchar(0) DEFAULT 'Y' NOT NULL,
            `hobbies` text,
            `sports` text,
            `AStar_activities` text,
            `how_know_IRC` varchar(100),
            `IRC_referral` varchar(200),
            `IRC_others` varchar(100),
            `IRC_rdrive` varchar(30)
        ) ENGINE=InnoDB

    I'm thinking of writing a migration task that looks through all the database tables for columns of type ENUM and replaces them with VARCHAR, and I'm wondering if this is the right way to approach the problem. I'm also not very sure how to write it so that it loops through my database tables and replaces all ENUM column types with a VARCHAR.

    References
    [1] https://rails.lighthouseapp.com/projects/8994/tickets/997-dbschemadump-saves-enum-columns-as-varchar0-on-mysql
    [2] http://dev.rubyonrails.org/ticket/2832
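    For what it's worth, a sketch of the SQL side of such a migration (the schema name, the VARCHAR length, and the nullability/default handling are assumptions; a Rails migration could read the first query's results and issue the ALTERs via execute):

        -- Find every ENUM column in the application schema.
        SELECT TABLE_NAME, COLUMN_NAME, COLUMN_TYPE, IS_NULLABLE, COLUMN_DEFAULT
        FROM information_schema.COLUMNS
        WHERE TABLE_SCHEMA = 'myapp_production'
          AND DATA_TYPE = 'enum';

        -- Then, for each row returned, issue a matching ALTER; e.g. for the
        -- column from the error above:
        ALTER TABLE lifestyles
            MODIFY COLUMN own_vehicle VARCHAR(1) NOT NULL DEFAULT 'Y';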

    Read the article

  • Restart list numbering in word for each new list created

    - by Feena
    Hi,

    I am exporting content from a JSP page into MS Word using JavaScript. When the user is in Word, there is a table with 10 rows and 2 columns, A and B. The user creates an ordered list in row 1, column A like this:

        1 dog
        2 cat
        3 mouse

    If the user then creates a second list in row 1, column B, it turns out like this:

        4 car
        5 truck
        6 bike

    instead of:

        1 car
        2 truck
        3 bike

    Word is set up to continue list numbering from prior lists automatically. I know this can be reset easily, but the users don't want to have to do that. They want the numbering of any lists they create to restart at 1 when the document is exported into Word and opened in front of them. So this must be set up in the JavaScript code, or using a style, or something else prior to getting into Word. This is what I don't know how to do. Any help is much appreciated.

    Thanks, Feena.

    Read the article

  • Database Error Handling: What if You have to Call Outside service and the Transaction Fails?

    - by Ngu Soon Hui
    We all know that we can wrap a database call in a transaction (with or without a proper ORM), in a form like this:

        $con = Propel::getConnection(EventPeer::DATABASE_NAME);
        try {
            $con->begin();
            // do your update, save, delete or whatever here.
            $con->commit();
        } catch (PropelException $e) {
            $con->rollback();
            throw $e;
        }

    This guarantees that if the transaction fails, the database is restored to the correct state. But the problem is this: when I do a transaction, in addition to that transaction I need to update another database (for example, when I update an entry in a column in databaseA, another entry in a column in databaseB must be updated). How do I handle this case? Let's say this is my code, and I have three databases that need to be updated (dbA, dbB, dbC):

        $con = Propel::getConnection("dbA");
        try {
            $con->begin();
            // update to dbA
            // update to dbB
            // update to dbC
            $con->commit();
        } catch (PropelException $e) {
            $con->rollback();
            throw $e;
        }

    If dbC fails, I can roll back dbA, but I can't roll back dbB. I think this problem should be database independent, and since I am using an ORM, it should be ORM independent as well.

    Update: some of the database transactions are wrapped in the ORM, some use naked PDO, OLE DB (or whatever bare-minimum database calls the language provides). So my solution has to take care of this. Any idea?
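    As an aside, if the databases involved all happen to be MySQL/InnoDB, one building block for this is XA (two-phase commit) at the SQL level. This is only a sketch of the idea, not database independent: the application still has to act as the coordinator across the separate connections, and a real setup also needs recovery handling (XA RECOVER) for crashes between prepare and commit.

        -- On the connection to dbA:
        XA START 'order-42';
        UPDATE dbA.some_table SET col = col + 1 WHERE id = 7;   -- placeholder work
        XA END 'order-42';
        XA PREPARE 'order-42';

        -- Repeat the same START / work / END / PREPARE on the dbB and dbC connections.

        -- Only if every participant prepared successfully, on each connection:
        XA COMMIT 'order-42';

        -- If any participant failed to prepare, on each prepared connection instead:
        -- XA ROLLBACK 'order-42';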

    Read the article

  • SWT - Table Row - Changing font color

    - by jkteater
    Is it possible to change the font color for a row based on a value in one of the columns? My table has a column that displays a status. The value of the column is going to either be Failed or Success. If it is Success I would like for that rows font be green. If the status equals Failed, I want that rows font be red. Is this possible, if so where would I put the logic. EDIT Here is my Table Viewer code, I am not going to show all the columns, just a couple private void createColumns() { String[] titles = { "ItemId", "RevId", "PRL", "Dataset Name", "Printer/Profile" , "Success/Fail" }; int[] bounds = { 100, 75, 75, 150, 200, 100 }; TableViewerColumn col = createTableViewerColumn(titles[0], bounds[0], 0); col.setLabelProvider(new ColumnLabelProvider() { public String getText(Object element) { if(element instanceof AplotResultsDataModel.ResultsData) { return ((AplotResultsDataModel.ResultsData)element).getItemId(); } return super.getText(element); } }); col = createTableViewerColumn(titles[1], bounds[1], 1); col.setLabelProvider(new ColumnLabelProvider() { public String getText(Object element) { if(element instanceof AplotResultsDataModel.ResultsData) { return ((AplotResultsDataModel.ResultsData)element).getRevId(); } return super.getText(element); } }); --ETC

    Read the article

  • Common one-to-many table for multiple entities

    - by Ben V
    Suppose I have two tables, Customer and Vendor. I want to have a common address table for customer and vendor addresses. Customers and Vendors can both have one to many addresses.

    Option 1: add columns for the AddressID to the Customer and Vendor tables. This just doesn't seem like a clean solution to me.

        Customer        Vendor          Address
        --------        ---------       ---------
        CustomerID      VendorID        AddressID
        AddressID1      AddressID1      Street
        AddressID2      AddressID2      City...

    Option 2: move the foreign key to the Address table. For a Customer, Address.CustomerID will be populated. For a Vendor, Address.VendorID will be populated. I don't like this either - I shouldn't need to modify the address table every time I want to use it for another entity.

        Customer        Vendor          Address
        --------        ---------       ---------
        CustomerID      VendorID        AddressID
                                        CustomerID
                                        VendorID

    Option 3: I've also seen this - only one foreign key column on the Address table, with another column to identify which table the foreign key belongs to. I don't like this one because it requires all the foreign key tables to have the same type of ID. It also seems messy once you start coding against it.

        Customer        Vendor          Address
        --------        ---------       ---------
        CustomerID      VendorID        AddressID
                                        FKTable
                                        FKID

    So, am I just too picky, or is there something I haven't thought of?
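    One more option sometimes used in this situation (shown as a sketch; column types and the IDENTITY choice are assumptions) is to keep Address free of foreign keys and add one narrow link table per owning entity, which also supports the one-to-many requirement without touching Address when a new entity needs addresses:

        CREATE TABLE Address (
            AddressID INT IDENTITY(1,1) PRIMARY KEY,
            Street    VARCHAR(100),
            City      VARCHAR(50)
        );

        CREATE TABLE CustomerAddress (
            CustomerID INT NOT NULL REFERENCES Customer (CustomerID),
            AddressID  INT NOT NULL REFERENCES Address (AddressID),
            PRIMARY KEY (CustomerID, AddressID)
        );

        CREATE TABLE VendorAddress (
            VendorID  INT NOT NULL REFERENCES Vendor (VendorID),
            AddressID INT NOT NULL REFERENCES Address (AddressID),
            PRIMARY KEY (VendorID, AddressID)
        );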

    Read the article

  • Strange Behaviour in DataGrid SilverLight 3 Control

    - by Asim Sajjad
    Here is the my Xaml Code. Here I have changed the Foreground of the cell depending on the current age of the person. <data:DataGridTemplateColumn Header="First Name" Width="150" MinWidth="150" CanUserReorder="False" SortMemberPath="FirstName"> <data:DataGridTemplateColumn.CellTemplate> <DataTemplate > <TextBlock Foreground ="{Binding Path=DateOfBirth,Mode=OneWay,Converter={StaticResource CellColor}}" Text="{Binding FirstName}" ToolTipService.ToolTip="{Binding FirstName}" FontFamily="Arial" FontSize="11" VerticalAlignment="Center" Margin="5,0,0,0" /> </DataTemplate> </data:DataGridTemplateColumn.CellTemplate> </data:DataGridTemplateColumn> <data:DataGridTextColumn Foreground ="{Binding Path=DateOfBirth,Mode=OneWay,Converter={StaticResource CellColor}}" Header="Last Name" Width="150" MinWidth="150" Binding="{Binding LastName}" CanUserSort="True" IsReadOnly="True" CanUserReorder="False"/> When I run the above code it return following Exception AG_E_PARSER_BAD_PROPERTY_VALU My question is that when I remove the Foreground converter from the DataGridTextColumn Column it runs fine As the Foreground converter is applied in the DataGridTemplateColumn Column which will not through excepction. But when I used same Converter to the DataGridTextColumn it throw execption why, can anyone know why is th is behaviour thanks in advance.

    Read the article

  • ul sortables get serialized data

    - by russp
    Hi folks Any ideas how to get the serilized data from this function? the html is first with the JQuery function last, I cannot get the serialization to appear in the alert. When I can do that I can finish this by sending via ajax etc... <div class="column" id="col1"> <div class="portlet"> <div class="portlet-header">Feeds</div> <div class="portlet-content">Lorem ipsum dolor sit amet, consectetuer adipiscing elit</div> </div> <div class="portlet"> <div class="portlet-header">News</div> <div class="portlet-content">Lorem ipsum dolor sit amet, consectetuer adipiscing elit</div> </div> Shopping Lorem ipsum dolor sit amet, consectetuer adipiscing elit Links Lorem ipsum dolor sit amet, consectetuer adipiscing elit Images Lorem ipsum dolor sit amet, consectetuer adipiscing elit $(function() { $("#col1, #col2, #col3").sortable({ connectWith: '.column', receive : function () { serial = $('#col1').sortable('serialize'); //serial2 = $('#col2').sortable('serialize'); //serial3 = $('#col3').sortable('serialize'); alert(serial); } }); });

    Read the article

  • Ruby on Rails - Primary and Foreign key

    - by Eef
    Hey, I am creating a site in Ruby on Rails, I have two models a User model and a Transaction model. These models both belong to an account so they both have a field called account_id I am trying to setup a association between them like so: class User < ActiveRecord::Base belongs_to :account has_many :transactions end class Transaction < ActiveRecord::Base belongs_to :account belongs_to :user end I am using these associations like so: user = User.find(1) transactions = user.transactions At the moment the application is trying to find the transactions with the user_id, here is the SQL it generates: Mysql::Error: Unknown column 'transactions.user_id' in 'where clause': SELECT * FROM `transactions` WHERE (`transactions`.user_id = 1) This is incorrect as I would like the find the transactions via the account_id, I have tried setting the associations like so: class User < ActiveRecord::Base belongs_to :account has_many :transactions, :primary_key => :account_id, :class_name => "Transaction" end class Transaction < ActiveRecord::Base belongs_to :account belongs_to :user, :foreign_key => :account_id, :class_name => "User" end This almost achieves what I am looking to do and generates the following SQL: Mysql::Error: Unknown column 'transactions.user_id' in 'where clause': SELECT * FROM `transactions` WHERE (`transactions`.user_id = 104) The number 104 is the correct account_id but it is still trying to query the transaction table for a user_id field. Could someone give me some advice on how I setup the associations to query the transaction table for the account_id instead of the user_id resulting in a SQL query like so: SELECT * FROM `transactions` WHERE (`transactions`.account_id = 104) Cheers Eef

    Read the article

  • Query to MySQL from c# returns System.Byte[]

    - by karthik
    I am using the stored procedure below to return a generated INSERT statement, and it works fine when executed in Query Browser. When I try to get the value from C#, it gives me "System.Byte[]" as the return value. When I try to get the value from MySQL Query Browser, it gives me the return value:

        'insert into admindb.accounts values("54321","2","karthik2","karthik2","1");'

    I guess the problem is with the single quotes around the returned value. Is that so?

        DELIMITER $$

        DROP PROCEDURE IF EXISTS `admindb`.`InsGen` $$
        CREATE DEFINER=`root`@`localhost` PROCEDURE `InsGen`(
            in_db varchar(20),
            in_table varchar(20),
            in_ColumnName varchar(20),
            in_ColumnValue varchar(20)
        )
        BEGIN
            declare Whrs varchar(500);
            declare Sels varchar(500);
            declare Inserts varchar(2000);
            declare tablename varchar(20);
            declare ColName varchar(20);

            set tablename=in_table;

            # Comma separated column names - used for Select
            select group_concat(concat('concat(\'"\',','ifnull(',column_name,','''')',',\'"\')'))
            INTO @Sels
            from information_schema.columns
            where table_schema=in_db and table_name=tablename;

            # Comma separated column names - used for Group By
            select group_concat('`',column_name,'`')
            INTO @Whrs
            from information_schema.columns
            where table_schema=in_db and table_name=tablename;

            # Main Select statement for fetching comma separated table values
            set @Inserts=concat("select concat('insert into ", in_db,".",tablename,
                " values(',concat_ws(',',",@Sels,"),');') as MyColumn from ",
                in_db,".",tablename, " where ", in_ColumnName, " = " , in_ColumnValue,
                " group by ",@Whrs, ";");

            PREPARE Inserts FROM @Inserts;
            EXECUTE Inserts;
        END $$

        DELIMITER ;
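    A workaround often suggested for this symptom is to CAST the final expression to CHAR inside the dynamic statement, the guess (not a confirmed diagnosis) being that CONCAT/GROUP_CONCAT over mixed types makes MySQL return the column as a BLOB, which the .NET connector surfaces as byte[]. Roughly:

        set @Inserts=concat("select cast(concat('insert into ", in_db,".",tablename,
            " values(',concat_ws(',',",@Sels,"),');') as char) as MyColumn from ",
            in_db,".",tablename, " where ", in_ColumnName, " = " , in_ColumnValue,
            " group by ",@Whrs, ";");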

    Read the article

  • Weird splitter behaviour when moving it

    - by tomo
    My demo app displays two rectangles which should fill whole browser's screen. There is a vertical splitter between them. This looks like a basic scenario but I have no idea how to implement this in xaml. I cannot force this to fill whole screen and when moving splitter then whole screen grows. Can anybody help? <UserControl xmlns:controls="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls" x:Class="SilverlightApplication1.MainPage" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480"> <Grid x:Name="LayoutRoot" VerticalAlignment="Stretch" HorizontalAlignment="Stretch"> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="Auto"/> </Grid.ColumnDefinitions> <Border BorderBrush="Black" BorderThickness="1" VerticalAlignment="Stretch" HorizontalAlignment="Stretch" MinWidth="50"> </Border> <controls:GridSplitter Grid.Column="1" VerticalAlignment="Stretch" Width="Auto" ></controls:GridSplitter> <Border BorderBrush="Blue" BorderThickness="1" VerticalAlignment="Stretch" HorizontalAlignment="Stretch" Grid.Column="2" MinWidth="50"></Border> </Grid> </UserControl>

    Read the article

  • Delete element from Set

    - by Blitzkr1eg
    Hi. I have two classes, Tema (Homework) and Disciplina (Course), where a Course has a Set of homeworks. In Hibernate I have mapped this as a one-to-many association like this:

        <class name="model.Disciplina" table="devgar_scoala.discipline" >
          <id name="id" >
            <generator class="increment"/>
          </id>
          <set name="listaTeme" table="devgar_scoala.teme">
            <key column="Discipline_id" not-null="true" ></key>
            <one-to-many class="model.Tema" ></one-to-many>
          </set>
        </class>

        <class name="model.Tema" table="devgar_scoala.teme" >
          <id name="id">
            <generator class="increment" />
          </id>
          <property name="titlu" type="string" />
          <property name="cerinta" type="binary">
            <column name="cerinta" sql-type="blob" />
          </property>
        </class>

    The problem is that it will add rows (insert rows into the 'teme' table), but it won't delete any rows, and no exceptions are thrown. I'm using the merge() method.

    Read the article

  • Is it possible to use SqlGeography with Linq to Sql?

    - by cofiem
    I've been having quite a few problems trying to use Microsoft.SqlServer.Types.SqlGeography. I know full well that support for this in LINQ to SQL is not great. I've tried numerous ways, beginning with what would be the expected way (database type of geography, CLR type of SqlGeography). This produces the NotSupportedException, which is widely discussed on blogs.

    I've then gone down the path of treating the geography column as a varbinary(max), since geography is a UDT stored as binary. This seems to work fine (with some binary reading and writing extension methods). However, I'm now running into a rather obscure issue, which does not seem to have happened to many other people:

        System.InvalidCastException: Unable to cast object of type 'Microsoft.SqlServer.Types.SqlGeography' to type 'System.Byte[]'.

    This error is thrown from an ObjectMaterializer when iterating through a query. It seems to only occur when the tables containing geography columns are included in a query implicitly (i.e. using the EntityRef<> properties to do joins):

        System.Data.Linq.SqlClient.ObjectReaderCompiler.ObjectReader`2.MoveNext()

    My question: if I'm retrieving the geography column as varbinary(max), I might expect the reverse error, that byte[] can't be cast to SqlGeography. That I would understand; this I don't. I do have some properties on the partial LINQ to SQL classes that hide the binary conversion... could those be the issue? Any help appreciated, and I know there's probably not enough information.

    Read the article
