Search Results

Search found 31606 results on 1265 pages for 'generate table'.

Page 526/1265

  • SSRS: Report label position dynamic

    - by Nauman
    I have a report which displays customer addresses in multiple labels. My customers use windowed envelopes for mailing, so I need the address label positions to be configurable. Something like: I'll have a database table which stores the Top/Left position of each label per customer, and based on this table I position the address labels on my report. I thought this would be doable with expressions, but the Location property doesn't provide the ability to set an expression, so I can't make a label's top and left dynamic. Anybody have any ideas on how to achieve this?
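
    A minimal sketch of the kind of lookup table the question describes, with all names hypothetical:

        CREATE TABLE CustomerLabelPosition (
            CustomerId   INT          NOT NULL,
            LabelName    VARCHAR(50)  NOT NULL,   -- which address label this row positions
            TopPosition  DECIMAL(5,2) NOT NULL,   -- offset from the top of the report body
            LeftPosition DECIMAL(5,2) NOT NULL,   -- offset from the left edge
            PRIMARY KEY (CustomerId, LabelName)
        );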

    Read the article

  • MySQL: Transactions across multiple threads

    - by Zombies
    Preliminary: I have an application which maintains a thread pool of about 100 threads. Each thread can last about 1-30 seconds before a new task replaces it. When a thread ends, it almost always results in inserting 1-3 records into a table, and this table is used by all of the threads. Right now no transactional support exists, but I am trying to add it now. So... Goal: I want to implement a transaction for this. The rules for whether this transaction commits or rolls back reside in the main thread; basically there is a simple function that will return a boolean. Can I implement a transaction across multiple connections? If not, can multiple threads share the same connection? (Note: there are a LOT of inserts going on here, and that is a requirement.)
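
    For reference, a transaction in MySQL is scoped to a single connection, so whichever connection issues the inserts also has to issue the commit or rollback. A minimal per-connection sketch, with the table and column names purely hypothetical:

        START TRANSACTION;                                   -- opens a transaction on this connection only
        INSERT INTO task_results (task_id, value) VALUES (42, 'a');
        INSERT INTO task_results (task_id, value) VALUES (42, 'b');
        -- the decision below would come from the main thread's boolean check
        COMMIT;                                              -- or ROLLBACK;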

    Read the article

  • dump csv from sqlalchemy

    - by afilatun
    For some reason, I want to dump a table from a database (sqlite3) in the form of a csv file. I'm using a python script with elixir (based on sqlalchemy) to modify the database. I was wondering if there is any way to dump the table I use to csv. I've seen sqlalchemy serializer but it doesn't seem to be what I want. Am I doing it wrong? Should I call the sqlite3 python module after closing my sqlalchemy session to dump to a file instead? Or should I use something homemade?

    Read the article

  • How do you update the aspnetdb membership IsApproved value?

    - by Matt
    I need to update an existing user's IsApproved status in the aspnet_Membership table. I have the code below, which does not seem to be working. The user.IsApproved property is updated, but the change is not saved to the database table. Are there any additional calls I need to make? Any suggestions? Thanks.

        /// <summary>
        /// Updates a user's approval status to the specified value
        /// </summary>
        /// <param name="userName">The user to update</param>
        /// <param name="isApproved">The updated approval status</param>
        public static void UpdateApprovalStatus(string userName, bool isApproved)
        {
            MembershipUser user = Membership.GetUser(userName);
            if (user != null)
                user.IsApproved = isApproved;
        }
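
    If the provider call keeps refusing to persist the change, a direct SQL sketch against the membership tables is another route. This assumes the default aspnet_Users/aspnet_Membership schema and bypasses the Membership API entirely, so treat it with care:

        UPDATE m
        SET    m.IsApproved = 1                 -- or 0 to un-approve
        FROM   dbo.aspnet_Membership AS m
        JOIN   dbo.aspnet_Users      AS u ON u.UserId = m.UserId
        WHERE  u.UserName = 'someUser';         -- hypothetical user name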

    Read the article

  • SQL count NULL cells

    - by Giuseppe
    Dear All, I have the following problem. I have a table in a db with many columns. I can do different kinds of select queries to show, for example, for each record that satisfies a condition: all cells from columns with names ending in _t0, all cells from columns with names ending in _t1, ... To get the column lists to form the queries I use the information schema. Now, the problem: each query returns a record with a subset of the columns of the big table. This means that I can get a row of (all!) NULLs. How can I ask my query to reject such rows without having to type in the column names explicitly (i.e. by saying where col_1 is not null, col_2 is not null, ...)? Is it possible? Thanks in advance!!! Sep
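
    One common pattern is to wrap the selected columns in COALESCE: the row is rejected only when every listed column is NULL, and the column list can be generated from the same information schema query used to build the select list. A sketch with hypothetical column names, assuming the columns have compatible types (cast them otherwise):

        SELECT col_a_t0, col_b_t0, col_c_t0
        FROM   big_table
        WHERE  some_condition
          AND  COALESCE(col_a_t0, col_b_t0, col_c_t0) IS NOT NULL;  -- drops rows where all three are NULL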

    Read the article

  • MarkupBuilder using list

    - by tathamr
    I am currently using sql.row("statement") and storing the result in a list. I am then trying to set up my XML file using MarkupBuilder. Is there a better way than iterating over the list, popping off an item, and then parsing it to add my different column names and values? What is stored per list entry is ID='X' Period='Yearly' Length='test'. So the XML would be something similar to:

        <table name='test'>
          <row>
            <column name='ID'>X</column>
            <column name='Period'>Yearly</column>
            <column name='Length'>test</column>
          </row>
        </table>

    Read the article

  • Ajax AsyncFileUpload contains Filename Every time

    - by Kartik Patel
    I have used the Ajax AsyncFileUpload. I have three fields and a Save button: 1. Name, 2. Asynchronous file upload, 3. Description, 4. Save button. When I click Save, a new record is created. The problem: after creating a new record, I enter all of the above details except selecting a file in the asynchronous file upload. However, when I click the Save button, the AsyncFileUpload still contains the file name from the previous upload, in spite of the fact that I didn't select a file. How is this possible? I'm getting confused. My code is like this (I have used a master page):

        <asp:Content ID="Content2" ContentPlaceHolderID="body" runat="server">
            <script type="text/javascript" language="javascript">
                function UploadComplete() {
                    document.getElementById('<%=lblmsg.ClientID %>').innerHTML = "Image Uploaded Successfully.";
                }
                function UploadError() {
                    document.getElementById('<%=lblmsg.ClientID %>').innerHTML = "Image Upload Failed.";
                }
            </script>
            <table>
                <tr>
                    <td colspan="2">
                        <h1 style="color: #008000">Add Project Details</h1>
                    </td>
                </tr>
                <tr>
                    <td align="left">
                        <asp:Label ID="lblProjectName" runat="server" Text="Project Name" Font-Bold="true"></asp:Label>
                    </td>
                    <td align="left">
                        <asp:TextBox ID="txtProjectName" runat="server" MaxLength="50" Width="150px" ValidationGroup="Save"></asp:TextBox>
                        <asp:RequiredFieldValidator ID="rfvprojectname" runat="server" Text="Project Name is Required."
                            ErrorMessage="Project Name is Required." ControlToValidate="txtProjectName" ForeColor="Red"
                            ValidationGroup="Save"></asp:RequiredFieldValidator>
                    </td>
                </tr>
                <tr>
                    <td colspan="2"></td>
                </tr>
                <tr>
                    <td align="left">
                        <asp:Label ID="lblselectimage" runat="server" Text="Select Image" Font-Bold="true"></asp:Label>
                    </td>
                    <td align="left">
                        <table>
                            <tr>
                                <td>
                                    <cc1:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server"></cc1:ToolkitScriptManager>
                                    <cc1:AsyncFileUpload ID="AsyncFileUpload1" runat="server" OnClientUploadComplete="UploadComplete"
                                        OnClientUploadError="UploadError" CompleteBackColor="White" Width="350px"
                                        UploaderStyle="Traditional" UploadingBackColor="#CCFFFF" ThrobberID="imgLoad"
                                        OnUploadedComplete="fileuploadComplete" ClientIDMode="AutoID" EnableViewState="true"/>
                                </td>
                                <td>
                                    <asp:Image ID="imgUpload" runat="server" Width="50px" Height="50px" />
                                </td>
                            </tr>
                        </table>
                    </td>
                </tr>
                <tr>
                    <td></td>
                    <td>
                        <asp:Image ID="imgLoad" runat="server" ImageUrl="~/Images/loading-gif-animation.gif" Width="50px" Height="50px" />
                        <asp:Label ID="lblmsg" runat="server" ForeColor="Blue" Font-Bold="true"></asp:Label>
                    </td>
                </tr>
                <tr>
                    <td align="left">
                        <asp:Label ID="lblDescription" runat="server" Text="Description" Font-Bold="true"></asp:Label>
                    </td>
                    <td align="left">
                        <asp:TextBox ID="txtDescription" runat="server" MaxLength="1000" Width="300" TextMode="MultiLine"
                            ValidationGroup="Save" Height="100px"></asp:TextBox>
                        <asp:RequiredFieldValidator ID="RfvtxtDescription" runat="server" Text="Project Description is Required."
                            ErrorMessage="Project Description is Required." ControlToValidate="txtDescription" ForeColor="Red"
                            ValidationGroup="Save"></asp:RequiredFieldValidator>
                    </td>
                </tr>
                <tr>
                    <td></td>
                    <td align="left">
                        <asp:ImageButton ID="btnsave" runat="server" ImageUrl="~/Images/Save.jpg" OnClick="btnSave_Click"
                            Height="37px" ValidationGroup="Save" />
                    </td>
                </tr>
            </table>

    Read the article

  • What does SQL Server execution plan show?

    - by tim
    There is the following code:

        declare @XmlData xml = '<Locations> <Location rid="1"/> </Locations>'

        declare @LocationList table (RID char(32));

        insert into @LocationList(RID)
        select Location.RID.value('@rid','CHAR(32)')
        from @XmlData.nodes('/Locations/Location') Location(RID)

        insert into @LocationList(RID)
        select A2RID from tblCdbA2

    Table tblCdbA2 has 172810 rows. I executed the batch in SSMS with "Include Actual Execution Plan" enabled and Profiler running. The plan shows that the first query's cost is 88% relative to the batch and the second's is 12%, but Profiler says that the durations of the first and second query are 17 ms and 210 ms respectively, with an overall time of 229 ms, which does not match the 88/12 split. What is going on? Is there a way I can determine from the execution plan which is the slowest part of the query?

    Read the article

  • MYSQL query to get all entries with specific time, from PHP?

    - by meds
    I'm trying to query a mysql table which stores its date in the following format: yyyy-mm-dd hh:mm:ss. So it's date and time, and that's all in a single field. Now, from PHP, I want to get the time and query the table to only return entries where the date field is less than 24 hours old. I'm having issues because PHP's time functions seem to return the values separately, and I'm struggling to figure out how to make them work with mysql queries. This seems fairly simple, but I'm quite new to php, so sorry if I'm completely missing something.
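
    A sketch of doing the comparison entirely inside MySQL, so no PHP date handling is needed; the table and column names here are hypothetical:

        SELECT *
        FROM   entries
        WHERE  created_at >= NOW() - INTERVAL 1 DAY;   -- only rows less than 24 hours old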

    Read the article

  • Query to find all the nodes that are two steps away from a particular node.

    - by iecut
    Suppose I have two columns in a table that represents a graph; the first column is FROMNODE and the second one is TONODE. What I would like to know is how to find all the nodes that are two steps away from a particular node. Let's suppose I have a node numbered '1' and I would like to know all the nodes that are two steps away from it. I have tried (assuming the table name is GRAPH):

        SELECT FROMNODE FROM GRAPH WHERE TONODE=1

    This selects all the nodes that are connected to node 1, but I couldn't figure out how I would find all the nodes that are two steps away from node 1.
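
    A self-join is one way to chain two hops: a second copy of the table extends each one-step neighbour by one more edge. A sketch that follows the same edge direction as the query above (swap FROMNODE/TONODE if the edges point the other way):

        SELECT DISTINCT g2.FROMNODE                    -- nodes two steps away from node 1
        FROM   GRAPH g1                                -- edges that arrive at node 1
        JOIN   GRAPH g2 ON g2.TONODE = g1.FROMNODE     -- edges that arrive at those neighbours
        WHERE  g1.TONODE = 1;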

    Read the article

  • LRU caches in C

    - by lazyconfabulator
    I need to cache a large (but variable) number of smallish (1 kilobyte to 10 megabytes) files in memory, for a C application (in a *nix environment). Since I don't want to eat all my memory, I'd like to set a hard memory limit (say, 64 megabytes), push files into a hash table with the file name as the key, and dispose of the entries with the least use. What I believe I need is an LRU cache. Really, I'd rather not roll my own, so if someone knows where I can find a workable library, please point the way. Failing that, can someone provide a simple example of an LRU cache in C? Related posts indicated using a hash table with a doubly-linked list, but I'm not even clear on how a doubly-linked list keeps track of LRU order. Side note: I realize this is almost exactly the function of memcache, but it's not an option for me. I also took a look at the source hoping to enlighten myself on LRU caching, with no success.

    Read the article

  • UITableViewController's TableView becomes NULL

    - by Travis
    I have a UITableViewController (created by the Navigation-based app project template). I am overriding loadView and putting up an alternative view (with a UILabel and a UIActivityIndicator) to display while the table's contents are loading. When the loading is done, I remove the loading view and try to display the table view, but I see that it's NULL. So in the simulator I see my loading view, and then when the loading's done that view disappears but my table view never appears. I'm confused about what the difference is between self.view and self.tableView in my UITableViewController and how I can exchagn

    Read the article

  • How can I turn a bunch of rows into aggregated columns WITHOUT using pivot in SQL Server 2005?

    - by cdeszaq
    Here is the scenario: I have a table that records the user_id, the module_id, and the date/time the module was viewed. eg.

        Table: Log
        ------------------------------
        User_ID   Module_ID   Date
        ------------------------------
        1         red         2001-01-01
        1         green       2001-01-02
        1         blue        2001-01-03
        2         green       2001-01-04
        2         blue        2001-01-05
        1         red         2001-01-06
        1         blue        2001-01-07
        3         blue        2001-01-08
        3         green       2001-01-09
        3         red         2001-01-10
        3         green       2001-01-11
        4         white       2001-01-12

    I need to get a result set that has the user_id as the 1st column, and then a column for each module. The row data is then the user_id and the count of the number of times that user viewed each module. eg.

        ---------------------------------
        User_ID   red   green   blue   white
        ---------------------------------
        1         2     1       2      0
        2         0     1       1      0
        3         1     2       1      0
        4         0     0       0      1

    I was initially thinking that I could do this with PIVOT, but no dice; the database is a converted SQL Server 2000 DB that is running in SQL Server 2005. I'm not able to change the compatibility level, so pivot is out. How can I accomplish this?
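
    Before PIVOT existed, the usual idiom was conditional aggregation, which runs fine at the SQL Server 2000 compatibility level. A sketch against the Log table above:

        SELECT  User_ID,
                SUM(CASE WHEN Module_ID = 'red'   THEN 1 ELSE 0 END) AS red,
                SUM(CASE WHEN Module_ID = 'green' THEN 1 ELSE 0 END) AS green,
                SUM(CASE WHEN Module_ID = 'blue'  THEN 1 ELSE 0 END) AS blue,
                SUM(CASE WHEN Module_ID = 'white' THEN 1 ELSE 0 END) AS white
        FROM    Log
        GROUP BY User_ID;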

    Read the article

  • deselectRowAtIndexPath on an ABPeoplePickerNavigationController

    - by Josh Wright
    I'm showing an ABPeoplePickerNavigationController as a tab in my app. The user clicks a name, then an email address, then I do something with the email address. Afterwards, I'd like the person and property that they selected to fade out (not stay highlighted). In a normal table, I'd call deselectRowAtIndexPath. But with the ABPeoplePickerNavigationController I don't seem to have access to its table, nor do I know which indexPath is selected, nor is there an API for deselecting the row. In most apps, the people picker is used modally, so it doesn't matter that the row is still highlighted because the whole thing gets dismissed. But in my app it does not get dismissed (just like the contacts tab in the Phone app). Any ideas?

    Read the article

  • SQL Server Multiple Joins Are Taxing The CPU

    - by durilai
    I have a stored procedure on SQL Server 2005. It pulls from a table function and has two joins. When the query is run under a load test it pegs the CPU at 100% across all 16 cores! I have determined that removing one of the joins makes the query run fine, but with both it taxes the CPU.

        Select SKey
        From dbo.tfnGetLatest(@ID) a
        left join [STAGING].dbo.RefSrvc b on a.LID = b.ESIID
        left join [STAGING].dbo.RefSrvc c on a.EID = c.ESIID

    Any help is appreciated; note that the joins happen against the same table in a different database on the same server.

    Read the article

  • PHP & MySQL deleting multiple rows script problem.

    - by oReiLLy
    I'm trying to delete rows from two different tables at once when a user clicks the delete button, but for some reason I can't get the table rows to delete. Can someone help me figure out what is wrong with my script? Thanks.

    Here are the MySQL tables:

        CREATE TABLE cases (
            id INT UNSIGNED NOT NULL AUTO_INCREMENT,
            file VARCHAR(255) NOT NULL,
            case VARCHAR(255) NOT NULL,
            name VARCHAR(255) NOT NULL,
            PRIMARY KEY (id)
        );

        CREATE TABLE users_cases (
            id INT UNSIGNED NOT NULL AUTO_INCREMENT,
            cases_id INT UNSIGNED NOT NULL,
            user_id INT UNSIGNED NOT NULL,
            PRIMARY KEY (id)
        );

    Here is the PHP & MySQL script:

        if(isset($_POST['delete_case'])) {
            $cases_ids = array();
            $mysqli = mysqli_connect("localhost", "root", "", "sitename");
            $dbc = mysqli_query($mysqli,"SELECT cases.*, users_cases.* FROM cases INNER JOIN users_cases ON users_cases.cases_id = cases.id WHERE users_cases.user_id='$user_id'");
            if (!$dbc) {
                print mysqli_error($mysqli);
            } else {
                while($row = mysqli_fetch_array($dbc)){
                    $cases_ids[] = $row["cases_id"];
                }
            }
            foreach($_POST['delete_id'] as $di) {
                if(in_array($di, $cases_ids)) {
                    $mysqli = mysqli_connect("localhost", "root", "", "sitename");
                    $dbc = mysqli_query($mysqli,"DELETE FROM users_cases WHERE cases_id = '$delete_id'");
                    $dbc2 = mysqli_query($mysqli,"DELETE FROM cases WHERE id = '$delete_id'");
                }
            }
        }

    Here is the XHTML:

        <li>
            <input type="text" name="file[]" size="25" />
            <input type="text" name="case[]" size="25" />
            <input type="text" name="name[]" size="25" />
            <input type="hidden" name="delete_id" value="' . $row['cases_id'] . '" />
        </li>
        <li>
            <input type="text" name="file[]" size="25" />
            <input type="text" name="case[]" size="25" />
            <input type="text" name="name[]" size="25" />
            <input type="hidden" name="delete_id" value="' . $row['cases_id'] . '" />
        </li>
        <li>
            <input type="text" name="file[]" size="25" />
            <input type="text" name="case[]" size="25" />
            <input type="text" name="name[]" size="25" />
            <input type="hidden" name="delete_id" value="' . $row['cases_id'] . '" />
        </li>
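
    For the SQL side of this, MySQL also supports a multi-table DELETE that removes the matching rows from both tables in one statement; a sketch using the table names above, where 123 stands in for a submitted id:

        DELETE cases, users_cases
        FROM   cases
        JOIN   users_cases ON users_cases.cases_id = cases.id
        WHERE  cases.id = 123;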

    Read the article

  • Strange: Planner takes the decision with lower cost, but (very) long query runtime

    - by S38
    Facts: PGSQL 8.4.2, Linux I make use of table inheritance Each Table contains 3 million rows Indexes on joining columns are set Table statistics (analyze, vacuum analyze) are up-to-date Only used table is "node" with varios partitioned sub-tables Recursive query (pg = 8.4) Now here is the explained query: WITH RECURSIVE rows AS ( SELECT * FROM ( SELECT r.id, r.set, r.parent, r.masterid FROM d_storage.node_dataset r WHERE masterid = 3533933 ) q UNION ALL SELECT * FROM ( SELECT c.id, c.set, c.parent, r.masterid FROM rows r JOIN a_storage.node c ON c.parent = r.id ) q ) SELECT r.masterid, r.id AS nodeid FROM rows r QUERY PLAN ----------------------------------------------------------------------------------------------------------------------------------------------------------------- CTE Scan on rows r (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1) CTE rows -> Recursive Union (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1) -> Index Scan using node_dataset_masterid on node_dataset r (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1) Index Cond: (masterid = 3533933) -> Hash Join (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3) Hash Cond: (c.parent = r.id) -> Append (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3) -> Seq Scan on node c (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3) -> Seq Scan on node_dataset c (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3) -> Seq Scan on node_stammdaten c (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3) -> Seq Scan on node_stammdaten_adresse c (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3) -> Seq Scan on node_testdaten c (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3) -> Hash (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3) -> WorkTable Scan on rows r (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3) Total runtime: 172111.371 ms (16 rows) (END) So far so bad, the planner decides to choose hash joins (good) but no indexes (bad). 
Now after doing the following: SET enable_hashjoins TO false; The explained query looks like that: QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- CTE Scan on rows r (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1) CTE rows -> Recursive Union (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1) -> Index Scan using node_dataset_masterid on node_dataset r (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1) Index Cond: (masterid = 3533933) -> Nested Loop (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3) Join Filter: (r.id = c.parent) -> WorkTable Scan on rows r (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3) -> Append (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4) -> Seq Scan on node c (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4) -> Bitmap Heap Scan on node_dataset c (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4) Recheck Cond: (c.parent = r.id) -> Bitmap Index Scan on node_dataset_parent (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4) Index Cond: (c.parent = r.id) -> Index Scan using node_stammdaten_parent on node_stammdaten c (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4) Index Cond: (c.parent = r.id) -> Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4) Index Cond: (c.parent = r.id) -> Index Scan using node_testdaten_parent on node_testdaten c (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4) Index Cond: (c.parent = r.id) Total runtime: 49.349 ms (21 rows) (END) - incredibly faster, because indexes were used. Notice: Cost of the second query ist somewhat higher than for the first query. So the main question is: Why does the planner make the first decision, instead of the second? Also interesing: Via SET enable_seqscan TO false; i temp. disabled seq scans. Than the planner used indexes and hash joins, and the query still was slow. So the problem seems to be the hash join. Maybe someone can help in this confusing situation? thx, R.

    Read the article

  • How to combine two sql queries?

    - by plasmuska
    Hi Guys, I have a stock table and I would like to create a report that will show how often items were ordered.

    "stock" table:

        item_id | pcs  | operation
        apples  | 100  | order
        oranges | 50   | order
        apples  | -100 | delivery
        pears   | 100  | order
        oranges | -40  | delivery
        apples  | 50   | order
        apples  | 50   | delivery

    Basically I need to join these two queries together. A query which prints stock balances:

        SELECT stock.item_id, Sum(stock.pcs) AS stock_balance
        FROM stock
        GROUP BY stock.item_id;

    A query which prints sales statistics:

        SELECT stock.item_id, Sum(stock.pcs) AS pcs_ordered, Count(stock.item_id) AS number_of_orders
        FROM stock
        GROUP BY stock.item_id, stock.operation
        HAVING stock.operation="order";

    I think that some sort of JOIN would do the job, but I have no idea how to glue the queries together. Desired output:

        item_id | stock_balance | pcs_ordered | number_of_orders
        apples  | 0             | 150         | 2
        oranges | 10            | 50          | 1
        pears   | 100           | 100         | 1

    This is just an example. Maybe I will need to add more conditions, because there are more columns. Is there a universal technique for combining multiple queries together?
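
    One way to glue them together is to skip the join entirely and compute all three figures in a single pass with conditional aggregation; a sketch against the stock table above:

        SELECT  item_id,
                SUM(pcs)                                               AS stock_balance,
                SUM(CASE WHEN operation = 'order' THEN pcs ELSE 0 END) AS pcs_ordered,
                SUM(CASE WHEN operation = 'order' THEN 1   ELSE 0 END) AS number_of_orders
        FROM    stock
        GROUP BY item_id;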

    Read the article

  • derby + hibernate ConstraintViolationException using manytomany relationships

    - by user364470
    Hi, I'm new to Hibernate+Derby... I've seen this issue mentioned throughout the google, but have not seen a proper resolution. This following code works fine with mysql, but when I try this on derby i get exceptions: ( each Tag has two sets of files and vise-versa - manytomany) Tags.java @Entity @Table(name="TAGS") public class Tags implements Serializable { @Id @GeneratedValue(strategy=GenerationType.AUTO) public long getId() { return id; } @ManyToMany(targetEntity=Files.class ) @ForeignKey(name="USER_TAGS_FILES",inverseName="USER_FILES_TAGS") @JoinTable(name="USERTAGS_FILES", joinColumns=@JoinColumn(name="TAGS_ID"), inverseJoinColumns=@JoinColumn(name="FILES_ID")) public Set<data.Files> getUserFiles() { return userFiles; } @ManyToMany(mappedBy="autoTags", targetEntity=data.Files.class) public Set<data.Files> getAutoFiles() { return autoFiles; } Files.java @Entity @Table(name="FILES") public class Files implements Serializable { @Id @GeneratedValue(strategy=GenerationType.AUTO) public long getId() { return id; } @ManyToMany(mappedBy="userFiles", targetEntity=data.Tags.class) public Set getUserTags() { return userTags; } @ManyToMany(targetEntity=Tags.class ) @ForeignKey(name="AUTO_FILES_TAGS",inverseName="AUTO_TAGS_FILES") @JoinTable(name="AUTOTAGS_FILES", joinColumns=@JoinColumn(name="FILES_ID"), inverseJoinColumns=@JoinColumn(name="TAGS_ID")) public Set getAutoTags() { return autoTags; } I add some data to the DB, but when running over Derby these exception turn up (the don't using mysql) Exceptions SEVERE: DELETE on table 'FILES' caused a violation of foreign key constraint 'USER_FILES_TAGS' for key (3). The statement has been rolled back. Jun 10, 2010 9:49:52 AM org.hibernate.event.def.AbstractFlushingEventListener performExecutions SEVERE: Could not synchronize database state with session org.hibernate.exception.ConstraintViolationException: could not delete: [data.Files#3] at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:96) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) at org.hibernate.persister.entity.AbstractEntityPersister.delete(AbstractEntityPersister.java:2712) at org.hibernate.persister.entity.AbstractEntityPersister.delete(AbstractEntityPersister.java:2895) at org.hibernate.action.EntityDeleteAction.execute(EntityDeleteAction.java:97) at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:268) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:260) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:184) at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321) at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51) at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1206) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:613) at org.hibernate.context.ThreadLocalSessionContext$TransactionProtectionWrapper.invoke(ThreadLocalSessionContext.java:344) at $Proxy13.flush(Unknown Source) at data.HibernateORM.removeFile(HibernateORM.java:285) at data.DataImp.removeFile(DataImp.java:195) at booting.DemoBootForTestUntilTestClassesExist.main(DemoBootForTestUntilTestClassesExist.java:62) I have never used derby before so maybe there is something crutal 
that I'm missing. 1) What am I doing wrong? 2) Is there any way of cascading properly when I have two many-to-many relationships between two classes? Thanks!

    Read the article

  • DataGridRow Cells property

    - by Michal Krawiec
    I would like to get to the DataGridRow's Cells property. It's a table of the cells in the current DataGrid, but I cannot get access to it directly from code, nor by reflection:

        var x = dataGridRow.GetType().GetProperty("Cells") // returns null

    Is there any way to get this table? And a related question: in the Watch window (VS2008), regular properties have an icon of a hand pointing at a sheet of paper, but DataGridRow.Cells has an icon of a hand pointing at a sheet of paper with a little yellow envelope in the bottom-left corner - what does that mean? Thanks for replies.

    Read the article

  • How does void QTableWidget::setItemPrototype ( const QTableWidgetItem * item ) clones objects?

    - by chappar
    QTableWidget::setItemPrototype says the following: "The table widget will use the item prototype clone function when it needs to create a new table item. For example when the user is editing in an empty cell. This is useful when you have a QTableWidgetItem subclass and want to make sure that QTableWidget creates instances of your subclass." How does this actually work, given that you can pass a pointer to any QTableWidgetItem subclass to setItemPrototype, and at run time there is no way to get the size of an object when you have just a pointer to it?

    Read the article

  • LOAD DATA INFILE not working in mariadb

    - by Haseena
    I am trying to migrate from MySQL to MariaDB, and this time I am facing an issue with MariaDB. When I try to load a data file into a table, it shows an error like:

        SQL Error (29): File 'C:/Documents and Settings/Administrator/Local Settings/Temp/SAMPLE/DATA_TEMP1351761841668/SampleFile0' not found (Errcode: 2)

    But the file already exists at that path. Another point is that the same command works successfully with MySQL. Does MariaDB have some permission issue? I am logged in as Administrator. See my query below:

        load data infile "'C:/Documents and Settings/Administrator/Local Settings/Temp/SAMPLE/DATA_TEMP1351761841668/SampleFile0" into table SAMPLETABLE;

    When changing the path to something like "C:/SampleFile0", it works properly; from the Administrator folder it doesn't. Can anyone help me in this regard??? I am new to MariaDB.
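
    One thing worth trying, sketched below: with the LOCAL keyword the file is read by the client and sent to the server, which sidesteps server-side file-access restrictions on directories like Documents and Settings (this assumes the same path and table as in the question, and that LOCAL INFILE is enabled on both client and server):

        LOAD DATA LOCAL INFILE 'C:/Documents and Settings/Administrator/Local Settings/Temp/SAMPLE/DATA_TEMP1351761841668/SampleFile0'
        INTO TABLE SAMPLETABLE;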

    Read the article

  • how to convert Database Hierarchical Data to XML using ASP.net 3.5 and LINQ

    - by mahdiahmadirad
    Hello guys! I have a table with a hierarchical structure (the table design and sample data were shown as images in the original post). This strategy gives me the ability to have unbounded categories and sub-categories. I use ASP.NET 3.5 SP1, LINQ, and MSSQL Server 2005. How do I convert the data to XML? I can do it with a DataSet object and the .GetXml() method, but how do I implement it with LINQ to SQL or LINQ to XML? Or is there another, simpler way to do it? What is your suggestion for the best way? I searched the web but found nothing for the .NET 3.5 features.
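
    If generating the XML on the database side counts as a "simpler way", SQL Server 2005 can emit it directly with FOR XML. This is only a sketch for a single level of parent/child nesting, with hypothetical table and column names; a tree of unbounded depth would need a recursive approach instead:

        SELECT  c.CategoryId  AS '@id',
                c.Name        AS '@name',
                (SELECT s.CategoryId AS '@id', s.Name AS '@name'
                 FROM   Categories s
                 WHERE  s.ParentId = c.CategoryId
                 FOR XML PATH('Category'), TYPE)
        FROM    Categories c
        WHERE   c.ParentId IS NULL
        FOR XML PATH('Category'), ROOT('Categories');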

    Read the article

  • Using PHP variables inside SQL statements?

    - by Homer
    For some reason I can't pass a var inside a mysql statement. I have a function that can be used for multiple tables, so instead of repeating the code I want to change the table that is selected from, like so:

        function show_all_records($table_name) {
            mysql_query("SELECT * FROM $table_name");
            // etc, etc...
        }

    To call the function I use show_all_records("some_table") or show_all_records("some_other_table"), depending on which table I want to select from at the moment. But it's not working - is this because variables can't be passed through mysql statements?

    Read the article

  • Help needed with MySQL query to join data spanning multiple tables with data used as column names

    - by gurun8
    I need a little help putting together a SQL query that will give me two result sets (shown as images in the original post, along with the data model and the table data). The tricky part for me is that the columns to the right of "Product" in the result set aren't really columns in the database, but rather key/value pairs spanned across the data model. My apologies in advance for the image-heavy question and the image quality; this just seemed like the easiest way to convey the information. It'll probably take someone less time to write the query statement to achieve the results than it did for me to assemble this question. By the way, the "product_option" table image is truncated, but it illustrates the general idea of the data structure. The MySQL server version is 5.1.45.
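
    With the actual schema only available in the images, all that fits here is a generic sketch: in MySQL 5.1 the usual way to turn key/value rows into columns is conditional aggregation over the join, one MAX(CASE ...) per key. Every table and column name below is hypothetical:

        SELECT  p.product_id,
                p.name                                                          AS Product,
                MAX(CASE WHEN o.option_key = 'color' THEN o.option_value END)  AS color,
                MAX(CASE WHEN o.option_key = 'size'  THEN o.option_value END)  AS size
        FROM    product p
        LEFT JOIN product_option o ON o.product_id = p.product_id
        GROUP BY p.product_id, p.name;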

    Read the article
