Search Results

Search found 10101 results on 405 pages for 'temporary tables'.

  • Transferring data from Excel to DataGridView

    - by Panecillo
    I have a problem when I want to transfer data from Excel to a DataGridView in C#. My Excel column has both numeric and alphanumeric values, but if the column has, for example, 3 numbers and 2 alphanumeric values, then only the numbers are shown in the DataGridView, and vice versa. Why aren't all the values shown? This is what happens:

        Excel's Column:    DataGridView's Column:
        45654              45654
        P745K              31233
        31233              23111
        23111
        45X2Y

    Here is my code to load the DataGridView:

        string connectionString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=D:\test.xls;Extended Properties=""Excel 8.0;HDR=YES;""";
        DbProviderFactory factory = DbProviderFactories.GetFactory("System.Data.OleDb");
        DbDataAdapter adapter = factory.CreateDataAdapter();
        DbCommand selectCommand = factory.CreateCommand();
        selectCommand.CommandText = "SELECT * FROM [sheet1$]";
        DbConnection connection = factory.CreateConnection();
        connection.ConnectionString = connectionString;
        selectCommand.Connection = connection;
        adapter.SelectCommand = selectCommand;
        data = new DataSet();
        adapter.Fill(data);
        dataGridView1.DataSource = data.Tables[0].DefaultView;

    I hope I explained it well. Sorry for my bad English. Thanks.

  • Should Service Depend on Many Repositories, or Break Them Up?

    - by Josh Pollard
    I'm using a repository pattern for my data access, so I basically have a repository per table/class. My UI currently uses service classes to actually get things done, and these service classes wrap, and therefore depend on, repositories. In many cases my services are only dependent upon one or two repositories, so things aren't too crazy. Unfortunately, one of my forms in the UI expects the user to enter data that will span five different tables. For this form I made a single service class that depends upon five repositories. Then the methods within the service for saving and loading the data call the appropriate methods on all of the corresponding repositories. As you can imagine, the save and load methods in this service are really big. Also, unit testing this service is getting really difficult because I have to set up so many fake repositories. Would it have been a better choice to break this single service apart into a few smaller services? It would put more code at the UI layer, but would make the services smaller and more testable.

  • Stored Procedure, 'incorrect syntax error'

    - by jacksonSD
    Attempting to figure out stored procedures, and I'm getting this error:

        Msg 156, Level 15, State 1, Line 5
        Incorrect syntax near the keyword 'Procedure'.

    The error seems to be on the if, but I can drop other existing tables with stored procedures the exact same way, so I'm not clear on why this isn't working. Can anyone shed some light?

        Begin
            Set nocount on
            Begin Try
                Create Procedure uspRecycle
                as
                if OBJECT_ID('Recycle') is not null
                    Drop Table Recycle
                create table Recycle
                    (RecycleID integer constraint PK_integer primary key,
                     RecycleType nchar(10) not null,
                     RecycleDescription nvarchar(100) null)
                insert into Recycle (RecycleID,RecycleType,RecycleDescription)
                    values ('1','Compost','Product is compostable, instructions included in packaging')
                insert into Recycle (RecycleID,RecycleType,RecycleDescription)
                    values ('2','Return','Product is returnable to company for 100% reuse')
                insert into Recycle (RecycleID,RecycleType,RecycleDescription)
                    values ('3','Scrap','Product is returnable and will be reclaimed and reprocessed')
                insert into Recycle (RecycleID,RecycleType,RecycleDescription)
                    values ('4','None','Product is not recycleable')
            End Try
            Begin Catch
                DECLARE @ErrMsg nvarchar(4000);
                SELECT @ErrMsg = ERROR_MESSAGE();
                Throw 50001, @ErrMsg, 1;
            End Catch
            -- checking to see if table exists and is loaded:
            If (Select count(*) from Recycle) > 1
            begin
                Print 'Recycle table created and loaded ';
                Print getdate()
            End
            set nocount off
        End
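
    A note on the likely cause: in T-SQL, CREATE PROCEDURE must be the first statement in its batch, so it cannot sit inside a BEGIN ... END block or follow SET NOCOUNT ON. A minimal sketch of the separation, assuming GO batch separators as in SSMS (the procedure body is abbreviated):

        -- Batch 1: the CREATE PROCEDURE statement stands alone
        CREATE PROCEDURE uspRecycle
        AS
        BEGIN
            SET NOCOUNT ON;
            IF OBJECT_ID('Recycle') IS NOT NULL
                DROP TABLE Recycle;
            CREATE TABLE Recycle
                (RecycleID integer CONSTRAINT PK_Recycle PRIMARY KEY,
                 RecycleType nchar(10) NOT NULL,
                 RecycleDescription nvarchar(100) NULL);
            -- the INSERT statements go here, unchanged
            SET NOCOUNT OFF;
        END
        GO

        -- Batch 2: execute it
        EXEC uspRecycle;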

  • Programming Practice

    - by deepti
        public DataTable UserUpdateTempSettings(int install_id, int install_map_id,
                                                string Setting_value, string LogFile)
        {
            SqlConnection oConnection = new SqlConnection(sConnectionString);
            DataSet oDataset = new DataSet();
            DataTable oDatatable = new DataTable();
            SqlDataAdapter MyDataAdapter = new SqlDataAdapter();
            try
            {
                oConnection.Open();
                cmd = new SqlCommand("SP_HOTDOC_PRINTTEMPLATE_PERMISSION", oConnection);
                cmd.Parameters.Add(new SqlParameter("@INSTALL_ID", install_id));
                cmd.Parameters.Add(new SqlParameter("@INSTALL_MAP_ID", install_map_id));
                cmd.Parameters.Add(new SqlParameter("@SETTING_VALUE", Setting_value));
                if (LogFile != "")
                {
                    cmd.Parameters.Add(new SqlParameter("@LOGFILE", LogFile));
                }
                cmd.CommandType = CommandType.StoredProcedure;
                MyDataAdapter.SelectCommand = cmd;
                cmd.ExecuteNonQuery();
                MyDataAdapter.Fill(oDataset);
                oDatatable = oDataset.Tables[0];
                return oDatatable;
            }
            catch (Exception ex)
            {
                Utils.ShowError(ex.Message);
                return oDatatable;
            }
            finally
            {
                if ((oConnection.State != ConnectionState.Closed) ||
                    (oConnection.State != ConnectionState.Broken))
                {
                    oConnection.Close();
                }
                oDataset = null;
                oDatatable = null;
                oConnection.Dispose();
                oConnection = null;
            }
        }

    I have used ExecuteNonQuery, which normally isn't used with a data adapter, but if I don't use it I get an error. Is it bad programming practice to use ExecuteNonQuery together with a data adapter?

  • Left Join only returning one row

    - by Adam
    I am trying to join two tables. I would like all the columns from the product_category table (there are a total of 6 now) and a count of the products, CatCount, that are in each category from the products_has_product_category table. My query result is 1 row with the first category and a total count of 68, when I am looking for 6 rows, each with the individual category's count.

        <?php
        $result = mysql_query("
            SELECT a.*, COUNT(b.category_id) AS CatCount
            FROM `product_category` a
            LEFT JOIN `products_has_product_category` b
                ON a.product_category_id = b.category_id
        ");

        while ($row = mysql_fetch_array($result)) {
            echo '<li class="ui-shadow" data-count-theme="d">
                      <a href="' . $row['product_category_ref_page'] . '.php" data-icon="arrow-r" data-iconpos="right">'
                      . $row['product_category_name'] . '</a>
                      <span class="ui-li-count">' . $row['CatCount'] . '</span></li>';
        }
        ?>

    I have been working on this for a couple of hours and would really appreciate any help on what I am doing wrong.
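
    The single row is the usual symptom of an aggregate without a GROUP BY: COUNT() collapses the whole join into one group. A sketch of the grouped form, reusing the tables from the question (older MySQL accepts a.* here because every selected column is functionally dependent on the grouped key):

        SELECT a.*, COUNT(b.category_id) AS CatCount
        FROM product_category a
        LEFT JOIN products_has_product_category b
            ON a.product_category_id = b.category_id
        GROUP BY a.product_category_id;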

  • How should I solve this MySQL problem (PHP)? (Beginner)

    - by Camran
    I have several tables in a MySQL database. I have a classifieds website, and at the bottom I display the user's last visited classifieds. I do this by storing the IDs of the ads in an array in a cookie. Now, my DB is made up roughly like this:

        Main Table:  // stores global information; these fields have to be filled out in every record, never blank
            ID
            Price
            Category
            Seller

        Item Table:  // stores descriptive info about what's for sale
            ID
            AD_ID (FK)  // this is the same as ID in the main table
            Color
            Size
            Mileage
            etc.

    My problem is that I need to know what category the ad is in, in order to query MySQL for the right information, I think. So I need two variables, but the cookie only has one (ID) stored. Of course I could make two queries: the first one just matching the ID against the main table and fetching the category from it, then a second query fetching all the other info from the right table. Here is an example if the category was Vehicles:

        SELECT * FROM main_table, vehicles_table
        WHERE main_table.id = $id_from_cookie
          AND main_table.ad_id = vehicles_table.ad_id

    As you can see above, I need the category to know which table to check, right? But I think there must be a smarter way, like fetching everything in one single query using only the one variable (the ID from the cookie). How should I do this? Let me know if you need more input... Thanks
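
    One hedged approach, assuming every category has its own detail table keyed by ad_id as described: LEFT JOIN each detail table on the ad ID in a single query, and let the tables that don't match return NULLs. The table and column names below are illustrative:

        SELECT m.*, v.Mileage, c.Size
        FROM main_table m
        LEFT JOIN vehicles_table v ON v.ad_id = m.id
        LEFT JOIN clothes_table  c ON c.ad_id = m.id
        WHERE m.id = ?;  -- the single ID from the cookie; only the matching detail table fills in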

  • Zend Metadata Cache in file

    - by Matthieu
    I set up a metadata cache in Zend Framework because a lot of DESCRIBE queries were executed and it affected performance.

        $frontendOptions = array('automatic_serialization' => true);
        $backendOptions  = array('cache_dir' => CACHE_PATH . '/db-tables-metadata');
        $cache = Zend_Cache::factory(
            'Core',
            'File',
            $frontendOptions,
            $backendOptions
        );
        Zend_Db_Table::setDefaultMetadataCache($cache);

    I can indeed see the cache files created, and the website works great. However, when I launch unit tests, or a script of the same application that performs DB queries, I end up with an error because Zend couldn't read the cache files. This is because on the website the cache files are created by the www user, and when I run phpunit or a script, it tries to read them with my user and fails. Do you see any solution to that? I have some quickfix ideas but I'm looking for a good/stable solution. And I'd rather avoid running phpunit or the scripts as www if possible (for practical reasons).

  • Assigning values from list to list excluding nulls

    - by GutierrezDev
    Hi. Sorry for the title, but I don't know another way of asking. I have two Dictionary<string, string>[] arrays. One of them has a length of 606 items, including null values. The other one has a length of 285 items. I determined the length by doing this:

        int count = 0;
        for (int i = 0; i < temp.Length; i++)
        {
            if (temp[i] != null)
                count++;
        }

    Now I want to assign every value in the temp variable to another variable, excluding the null values. Any idea? Remember that the variables have different sizes.

    Edited:

        Dictionary<string, string>[] info;
        Dictionary<string, string>[] temp = new Dictionary<string, string>[ds.Tables[0].Rows.Count];

        // .... Here I added some data to the temp variable ....

        int count = 0;
        for (int i = 0; i < temp.Length; i++)
        {
            if (temp[i] != null)
                count++;
        }
        info = new Dictionary<string, string>[count];

    Hope you understand now.

  • CakePHP pagination with HABTM models

    - by nickf
    I'm having some problems with creating pagination with a HABTM relationship. First, the tables and relationships:

        requests (id, to_location_id, from_location_id)
        locations (id, name)
        items_locations (id, item_id, location_id)
        items (id, name)

    So, a Request has a Location the request is coming from and a Location the Request is going to. For this question, I'm only concerned about the "to" location.

        Request --belongsTo--> Location* --hasAndBelongsToMany--> Item    (* as "ToLocation")

    In my RequestsController, I want to paginate all the Items in a Request's ToLocation.

        // RequestsController
        var $paginate = array(
            'Item' => array(
                'limit' => 5,
                'contain' => array("Location")
            )
        );

        // RequestController::add()
        $locationId = 21;
        $items = $this->paginate('Item', array("Location.id" => $locationId));

    And this is failing, because it is generating this SQL:

        SELECT COUNT(*) AS count FROM items Item WHERE Location.id = 21

    I can't figure out how to make it actually use the "contain" argument of $paginate... Any ideas?

  • Use a table as a container or not?

    - by Camran
    I have created my entire website using a main table, with the content inside that table. Some people suggest this is wrong, and I would like to know what to do specifically in my situation. On my index.html I have a <table align="center"> with all content in it. The content is in the form of divs, and they all have relative positioning, so they are relative to the table. For example:

        <table align="center">
            <tr>
                <td>
                    <div style="position:relative; left:14px; top:50px;">
                        CONTENT HERE... (Form, divs, images)
                    </div>
                </td>
            </tr>
        </table>

    I have just gotten to the stage of browser compatibility. Without making any changes whatsoever, my website works perfectly in FF, Safari, Chrome and Opera - only trouble with IE. So for my question: if you were me, would you change my layout, remove the tables, and instead use a lot more CSS and a lot more divs, or would you go with what I have? And please, if you answer, don't just provide the answer but also the "why" - in other words, give me arguments and facts. Thanks

  • Duplicate information from SQL result

    - by puddleJumper
    I looked at about 18 other posts on here, and most people are asking how to delete the records, not just hide them. So, my problem: I have a database with staff members who are associated with locations. Many of the staff members are associated with more than one location. What I want to do is display only the first location listed in the MySQL result and skip over the others. I have the SQL query linking the tables together, and it works, aside from showing the same information multiple times for those staff members that are in several locations. This is the SQL statement I have currently:

        SELECT staff_tbl.staffID, staff_tbl.firstName, staff_tbl.middleInitial, staff_tbl.lastName,
               location_tbl.locationID, location_tbl.staffID,
               officelocations_tbl.locationID, officelocations_tbl.officeName,
               staff_title_tbl.title_ID, staff_title_tbl.staff_ID,
               titles_tbl.titleID, titles_tbl.titleName
        FROM staff_tbl
        INNER JOIN location_tbl ON location_tbl.staffID = staff_tbl.staffID
        INNER JOIN officelocations_tbl ON location_tbl.locationID = officelocations_tbl.locationID
        INNER JOIN staff_title_tbl ON staff_title_tbl.staff_ID = staff_tbl.staffID
        INNER JOIN titles_tbl ON staff_title_tbl.title_ID = titles_tbl.titleID

    and my PHP is:

        <?php do { ?>
            <tr>
                <td><?php echo $row_rs_Staff_Info['firstName']; ?>&nbsp;<?php echo $row_rs_Staff_Info['lastName']; ?></td>
                <td><?php echo $row_rs_Staff_Info['titleName']; ?>&nbsp;</td>
                <td><?php echo $row_rs_Staff_Info['officeName']; ?>&nbsp;</td>
            </tr>
        <?php } while ($row_mysqlResult = mysql_fetch_assoc($rs_mysqlResult)); ?>

    What I would like to know is: is there a way, using PHP, to select only the first entry listed for each person, display that, and just skip over the others? I was thinking it could be done by adding the staff IDs to an array and, if a staff ID is already there, skipping the next one listed, but I wasn't quite sure how to write it that way. Any help would be great. Thank you in advance.
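
    On the SQL side, one hedged shortcut (MySQL-specific, and assuming "first location" just means any one location per person): group on the staff ID so the duplicates collapse to a single row. Older MySQL keeps an arbitrary row per group for the non-grouped columns:

        SELECT s.staffID, s.firstName, s.lastName, t.titleName, o.officeName
        FROM staff_tbl s
        INNER JOIN location_tbl l        ON l.staffID = s.staffID
        INNER JOIN officelocations_tbl o ON o.locationID = l.locationID
        INNER JOIN staff_title_tbl st    ON st.staff_ID = s.staffID
        INNER JOIN titles_tbl t          ON t.titleID = st.title_ID
        GROUP BY s.staffID;  -- one row per staff member; which office survives is unspecified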

  • How to optimize this MySQL table?

    - by Lost_in_code
    This is for an upcoming project. I have two tables - the first one keeps track of photos, and the second one keeps track of each photo's rank.

    Photos:

        +-------+-----------+------------------+
        | id    | photo     | current_rank     |
        +-------+-----------+------------------+
        | 1     | apple     | 5                |
        | 2     | orange    | 9                |
        +-------+-----------+------------------+

    The photo rank keeps changing on a regular basis, and this is the table that tracks it:

    Ranks:

        +-------+-----------+----------+-------------+
        | id    | photo_id  | ranks    | timestamp   |
        +-------+-----------+----------+-------------+
        | 1     | 1         | 8        | *           |
        | 2     | 2         | 2        | *           |
        | 3     | 1         | 3        | *           |
        | 4     | 1         | 7        | *           |
        | 5     | 1         | 5        | *           |
        | 6     | 2         | 9        | *           |
        +-------+-----------+----------+-------------+
        * = current timestamp

    Every rank is tracked for reporting/analysis purposes. I talked to someone who has experience in this field, and he told me that storing ranks as above is the way to go. But I'm not so sure yet. The problem here is data redundancy. There are going to be tens of thousands of photos. The photo rank changes on an hourly basis (many times within minutes) for recent photos, but less frequently for older photos. At this rate the table will have millions of records within months, and since I do not have experience working with large databases, this makes me a little nervous. I thought of this:

    Ranks:

        +-------+-----------+--------------------+
        | id    | photo_id  | ranks              |
        +-------+-----------+--------------------+
        | 1     | 1         | 8:*,3:*,7:*,5:*    |
        | 2     | 2         | 2:*,9:*            |
        +-------+-----------+--------------------+
        * = current timestamp

    That means some extra code in PHP to split the rank/time (and do the sorting), but that looks OK to me. Is this a correct way to optimize the table for performance? What would you recommend? Any suggestions would be great.
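
    For scale reference, the normalized history table usually holds up well into the millions of rows if it carries an index matching the common query ("ranks over time for one photo"). A sketch of the supporting DDL; the exact types are assumptions:

        CREATE TABLE ranks (
            id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            photo_id INT UNSIGNED NOT NULL,
            ranks    INT NOT NULL,
            ts       TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
            KEY idx_photo_time (photo_id, ts)  -- serves per-photo history scans
        ) ENGINE=InnoDB;

    The comma-packed alternative trades that indexability away: the column can no longer be filtered, sorted, or aggregated in SQL.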

  • Using data.table to aggregate

    - by dayne
    After multiple suggestions from SO users, I am finally trying to convert my code over to using data.tables.

        library(data.table)
        DT <- data.table(plate = paste0("plate", rep(1:2, each = 5)),
                         id = rep(c("CTRL", "CTRL", "ID1", "ID2", "ID3"), 2),
                         val = 1:10)
        > DT
             plate   id val
          1: plate1 CTRL   1
          2: plate1 CTRL   2
          3: plate1  ID1   3
          4: plate1  ID2   4
          5: plate1  ID3   5
          6: plate2 CTRL   6
          7: plate2 CTRL   7
          8: plate2  ID1   8
          9: plate2  ID2   9
         10: plate2  ID3  10

    What I would like to do is take the average of DT[, val] by plate when the id is "CTRL". I would normally aggregate the data frame, then use match to map the values back to a new column, 'ctrl'. Using the data.table package I can get:

        DT[id == "CTRL", ctrl := mean(val), by = plate]
        > DT
             plate   id val ctrl
          1: plate1 CTRL   1  1.5
          2: plate1 CTRL   2  1.5
          3: plate1  ID1   3   NA
          4: plate1  ID2   4   NA
          5: plate1  ID3   5   NA
          6: plate2 CTRL   6  6.5
          7: plate2 CTRL   7  6.5
          8: plate2  ID1   8   NA
          9: plate2  ID2   9   NA
         10: plate2  ID3  10   NA

    What I need is really:

        DT <- data.table(plate = paste0("plate", rep(1:2, each = 5)),
                         id = rep(c("CTRL", "CTRL", "ID1", "ID2", "ID3"), 2),
                         val = 1:10,
                         ctrl = rep(c(1.5, 6.5), each = 5))
        > DT
             plate   id val ctrl
          1: plate1 CTRL   1  1.5
          2: plate1 CTRL   2  1.5
          3: plate1  ID1   3  1.5
          4: plate1  ID2   4  1.5
          5: plate1  ID3   5  1.5
          6: plate2 CTRL   6  6.5
          7: plate2 CTRL   7  6.5
          8: plate2  ID1   8  6.5
          9: plate2  ID2   9  6.5
         10: plate2  ID3  10  6.5

    Eventually I would like to use much more complicated selections of the values, but I do not know how to select specific values, run some function, then map those values back to the appropriate row using data frames.

  • Query to bring count from comma-separated value

    - by Mugil
    I have two tables, one for storing products and the other for storing the order list.

        CREATE TABLE ProductsList(ProductId INT NOT NULL PRIMARY KEY,
                                  ProductName VARCHAR(50));

        INSERT INTO ProductsList(ProductId, ProductName)
        VALUES (1,'Product A'), (2,'Product B'), (3,'Product C'), (4,'Product D'),
               (5,'Product E'), (6,'Product F'), (7,'Product G'), (8,'Product H'),
               (9,'Product I'), (10,'Product J');

        CREATE TABLE OrderList(OrderId INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
                               EmailId VARCHAR(50),
                               CSVProductIds VARCHAR(50));

        INSERT INTO OrderList(EmailId, CSVProductIds)
        VALUES ('[email protected]', '2,4,1,5,7'),
               ('[email protected]', '5,7,4'),
               ('[email protected]', '2'),
               ('[email protected]', '8,9'),
               ('[email protected]', '4,5,9'),
               ('[email protected]', '1,2,3'),
               ('[email protected]', '9,10'),
               ('[email protected]', '1,5');

    The order list stores the item IDs as a comma-separated value for every customer who places an order. I have more than 40k records like this in my DB table. Now I am assigned the task of creating a report in which I should display the items and the number of people who ordered each item, as shown below:

        ItemName    NoOfOrders
        Product A   4
        Product B   3
        Product C   1
        Product D   3
        Product E   4
        Product F   0
        Product G   2
        Product H   1
        Product I   2
        Product J   1

    I used a query like the one below in my PHP to bring in the orders one by one and store them in an array:

        SELECT COUNT(PL.EmailId)
        FROM OrderList PL
        WHERE CSVProductIds LIKE '2'
           OR CSVProductIds LIKE '%,2,%'
           OR CSVProductIds LIKE '%,2'
           OR CSVProductIds LIKE '2,%';

    1. Is it possible to get the same output using a single query?
    2. Does using LIKE in a MySQL query slow down the DB when the table has many records, i.e. 40k rows?
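
    A hedged single-query answer, MySQL-specific: FIND_IN_SET() matches one value inside a comma-separated string, so products can be joined straight onto the CSV column and counted. It cannot use an index, though, so on 40k+ rows a normalized order-items table remains the real fix:

        SELECT p.ProductName AS ItemName,
               COUNT(o.OrderId) AS NoOfOrders
        FROM ProductsList p
        LEFT JOIN OrderList o
            ON FIND_IN_SET(p.ProductId, o.CSVProductIds) > 0
        GROUP BY p.ProductId, p.ProductName
        ORDER BY p.ProductId;  -- LEFT JOIN keeps Product F with a count of 0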

  • Managing modes in a Windows application working directly with SQL Server 2008

    - by hgulyan
    Hi, I have an MS Access 97 application (but the question is general) working directly with SQL Server 2008 (without an application server or anything). The number of users can be up to 1000. Windows Authentication is used. The question is: how do I handle modes, so that

        1. some users will be allowed to work in read-only mode, and
        2. some users won't have access to the DB for some time.

    My versions:

        1. Use a table with a mode ID for every group of users that should work the same way. On form load, the application will query that table for the mode ID.
        2. Use triggers on the tables that must work according to the mode. The trigger queries the mode value and does nothing if access is closed or it's in read-only mode.

    I know these are not the best solutions; that's why I'm asking for your advice. There's one more point: if the mode is changed to "access is closed" for a group of users, that group must not be able to query the DB starting from that moment. With the first solution I wrote, this won't work, because a user can be inside the application at that moment and no form-load event will fire. How can I do this? Is there an optimal solution? Thank you. Any help would be appreciated.
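
    One hedged server-side angle, since a mode flag read at form load can never catch users already inside the application: put each group in a database role and flip the role's permissions, which affects every subsequent statement no matter what the client is doing. A sketch for SQL Server 2008; the group name is hypothetical:

        -- Read-only mode: allow reads, deny writes, for everyone in the group
        EXEC sp_addrolemember 'db_datareader', 'DOMAIN\AppUsersGroupA';
        DENY INSERT, UPDATE, DELETE ON SCHEMA::dbo TO [DOMAIN\AppUsersGroupA];

        -- "Access closed" mode: deny reads as well
        DENY SELECT ON SCHEMA::dbo TO [DOMAIN\AppUsersGroupA];

        -- Reopen full access (REVOKE removes the DENY entries)
        REVOKE SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo FROM [DOMAIN\AppUsersGroupA];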

  • Advanced Query with Join

    - by user1462589
    I'm trying to convert a product table that contains all the details of each product into separate tables in SQL. I've got everything done except for the duplicated descriptor details. The problem I am having is that all the products have size/color/style/other values that many other products also have. I want to have only one row per size or color descriptor for all the items and reuse its ID for all the products, which I believe makes it a parent key referenced by a foreign key on the product. The only problem is that every descriptor would have multiple foreign keys assigned to it. So I was thinking, on the fly, to just skip figuring out a parent key for each descriptor and simply check whether that descriptor exists; if it does, use its key for the descriptor.

    Data table:

        PI | Colo | Sz | OTHER
        1  | Blue | 5  | Vintage
        2  | Blue | 6  | Vintage
        3  | Blac | 5  | Simple
        4  | Blac | 6  | Simple

    Its destination table is this:

        DI | Description
        1  | Blue
        2  | Blac
        3  | 5
        4  | 6
        6  | Vintage
        7  | Simple

    My query attempt so far is only a sketch:

        Select Data.Table
               Unique.Data.Table.Colo
               Unique.Data.Table.Sz
               Unique.Data.Table.Other

    Then the dual part of the question: after we create all the descriptors, how do we run a new query to assign the product IDs to the descriptors?

        PI | DI
        1  | 1
        1  | 3
        1  | 4
        2  | 1
        2  | 3
        2  | 4

    By figuring out how to do this, I should be able to duplicate the pattern for all 300+ columns of the product. Some of these fields are 60+ characters long, so it's going to save a ton of space. Do I use an array?
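
    A sketch of both steps in plain SQL, assuming the source is DataTable(PI, Colo, Sz, OTHER), the descriptor table is Descriptors(DI, Description) with an auto-generated DI, and the junction table is ProductDescriptors(PI, DI); all three names are placeholders, and the descriptor values are assumed to be stored as text:

        -- Step 1: pool the three columns and keep one row per distinct value
        INSERT INTO Descriptors (Description)
        SELECT Colo FROM DataTable
        UNION
        SELECT Sz FROM DataTable
        UNION
        SELECT OTHER FROM DataTable;  -- UNION de-duplicates across all three columns

        -- Step 2: join each product's values back to their new IDs
        INSERT INTO ProductDescriptors (PI, DI)
        SELECT d.PI, t.DI
        FROM DataTable d
        JOIN Descriptors t
            ON t.Description IN (d.Colo, d.Sz, d.OTHER);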

  • Incorrect usage of UPDATE and ORDER BY

    - by nico55555
    I have written some code to update certain rows of a table with a decreasing sequence of numbers. To select the correct rows I have to JOIN two tables. The last row in the table needs to have a value of 0, the second last -1, and so on. To achieve this I use ORDER BY DESC. Unfortunately my code brings up the following error:

        Incorrect usage of UPDATE and ORDER BY

    My reading suggests that I can't use UPDATE, JOIN and ORDER BY together. I've read that maybe subqueries might help? I don't really have any idea how to change my code to do this. Perhaps someone could post a modified version that will work?

        while ($row = mysql_fetch_array($result)) {
            $products_id = $row['products_id'];
            $products_stock_attributes = $row['products_stock_attributes'];

            mysql_query("SET @i = 0");
            $result2 = mysql_query("
                UPDATE orders_products op, orders ord
                SET op.stock_when_purchased = (@i := (@i - op.products_quantity))
                WHERE op.orders_id = ord.orders_id
                    AND op.products_id = '$products_id'
                    AND op.products_stock_attributes = '$products_stock_attributes'
                    AND op.stock_when_purchased < 0
                    AND ord.orders_status = 2
                ORDER BY orders_products_id DESC") or die(mysql_error());
        }
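
    The error is MySQL's rule that the multi-table form of UPDATE accepts no ORDER BY (the single-table form does). One hedged workaround: compute the ordered running total in a derived table, where ORDER BY is allowed, and join it back for the update (column names taken from the question, filters abbreviated):

        SET @i = 0;
        UPDATE orders_products op
        JOIN (
            SELECT op2.orders_products_id,
                   (@i := @i - op2.products_quantity) AS running_total
            FROM orders_products op2
            JOIN orders ord ON op2.orders_id = ord.orders_id
            WHERE op2.products_id = '$products_id'  -- plus the other WHERE conditions
              AND ord.orders_status = 2
            ORDER BY op2.orders_products_id DESC
        ) AS t ON t.orders_products_id = op.orders_products_id
        SET op.stock_when_purchased = t.running_total;

    Relying on user variables being evaluated in ORDER BY order works in the MySQL versions this mysql_* code targets, but it is undocumented behavior, so treat it as a sketch rather than a guarantee.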

  • MySQL: how to sum rows with the same key, then delete the duplicate rows

    - by Bhante-S
    What I have:

        key   data
        1     22
        1      5
        2      6
        3      1
        3     -3

    What I want:

        key   data
        1     27
        2      6
        3     -2

    I don't mind doing this with two or more queries, especially if they are simple - that makes for easier maintenance. Also, the tables are fairly small (< 2,000 records). The 'key' field is indexed and allows duplicates. Many thanks.
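
    A simple two-query version: aggregate into a temporary table, then swap the summed rows back in. This assumes the table is named t (hypothetical) and that briefly emptying it is acceptable at this size:

        -- Step 1: build the summed rows
        CREATE TEMPORARY TABLE t_summed AS
            SELECT `key`, SUM(data) AS data
            FROM t
            GROUP BY `key`;

        -- Step 2: replace the originals with the sums
        DELETE FROM t;
        INSERT INTO t (`key`, data)
            SELECT `key`, data FROM t_summed;

        DROP TEMPORARY TABLE t_summed;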

  • SQL - Query to display average as either "longer than" or "shorter than"

    - by user1840801
    Here are the tables I've created:

        CREATE TABLE Plane_new
            (Pnum char(3),
             Feature varchar2(20),
             Ptype varchar2(15),
             primary key (Pnum));

        CREATE TABLE Employee_new
            (eid char(3),
             ename varchar(10),
             salary number(7,2),
             mid char(3),
             PRIMARY KEY (eid),
             FOREIGN KEY (mid) REFERENCES Employee_new);

        CREATE TABLE Pilot_new
            (eid char(3),
             Licence char(9),
             primary key (eid),
             foreign key (eid) references Employee_new on delete cascade);

        CREATE TABLE FlightI_new
            (Fnum char(4),
             Fdate date,
             Duration number(2),
             Pid char(3),
             Pnum char(3),
             primary key (Fnum),
             foreign key (Pid) references Pilot_new (eid),
             foreign key (Pnum) references Plane_new);

    And here is the query I must complete: for each flight, display its number, the name of the pilot who implemented the flight, and the words 'Longer than average' if the flight duration was longer than average, or the words 'Shorter than average' if the flight duration was shorter than or equal to the average. For the column holding the words 'Longer than average' or 'Shorter than average', make the header Length. Here is what I've come up with - with no luck!

        SELECT F.Fnum, E.ename,
            CASE Length
                WHEN F.Duration > (SELECT AVG(F.Duration) FROM FlightI_new F) THEN "Longer than average"
                WHEN F.Duration <= (SELECT AVG(F.Duration) FROM FlightI_new F) THEN 'Shorter than average'
            END
        FROM FlightI_new F
        LEFT OUTER JOIN Employee_new E ON F.Pid = E.eid
        GROUP BY F.Fnum, E.ename;

    Where am I going wrong?
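
    Two likely slips: "CASE Length WHEN ..." starts a simple CASE, which compares Length against each WHEN expression, where a searched CASE is needed; and "Longer than average" uses double quotes, which Oracle treats as an identifier rather than a string literal. A hedged corrected sketch (Oracle-style, to match the varchar2/number types above):

        SELECT F.Fnum, E.ename,
               CASE
                   WHEN F.Duration > (SELECT AVG(Duration) FROM FlightI_new)
                       THEN 'Longer than average'
                   ELSE 'Shorter than average'
               END AS Length
        FROM FlightI_new F
        LEFT OUTER JOIN Employee_new E ON F.Pid = E.eid;

    The GROUP BY is unnecessary once no aggregate appears in the outer select, so it is dropped here.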

  • Prevent cached objects from ending up in the database with Entity Framework

    - by Dirk Boer
    We have an ASP.NET project with Entity Framework and SQL Azure. A big part of our data only needs to be updated a few times a day; other data is very volatile. The data that barely changes we cache in memory at startup, detach from the context, and then use mainly for reading, drastically lowering the number of database requests we have to make. The volatile data is requested every time by a DbContext per HTTP request. When we do an update to the cached data, we send a message to all instances to fetch a fresh version of all the data from the SQL server.

    So far, so good - until we introduced a bug that linked one of these 'cached' objects to the 'volatile' data and called SaveChanges. Well, that was quite a mess. The whole data tree was added again and again by every update, corrupting the whole database with a whole lot of duplicated data. As a complete hack, I added a completely arbitrary column with a UniqueConstraint and some gibberish data on one of the root tables, hopefully failing SaveChanges() the next time we introduce such a bug because it will violate the unique constraint. But it is of course hacky, and I'm still pretty scared ;P

    Are there any better ways to prevent whole trees of cached objects from ending up in the database?

    More information: the project is ASP.NET MVC. I cache this data because it is mainly read-only, and this saves a ton of extra database calls per HTTP request.

  • How to use JOIN using Hibernate's session.createSQLQuery()

    - by javauser71
    Hi all, I have two entities (tables) - Employee and Project. An Employee can have multiple Projects. The Project table's CREATOR_ID field refers to the Employee table's ID field, and the Employee entity maintains a list of Projects. Using EntityManager, the following query works fine:

        entityManager.createQuery("select e from EmployeeDTO e, ProjectDTO p where p.id = ?1 and p.creator.id = e.id");

    But since I have a LAZY association relationship, I get the error "Could not initialize proxy - no Session" if I try to access Project info from the Employee entity. This is expected, and so I am using Hibernate's Session to create the query as shown below:

        Session session = HibernateUtil.getSessionFactory().openSession();
        org.hibernate.Query q = session.createSQLQuery(
                "SELECT E FROM EMPLOYEE_TAB E, PROJECT_TAB P WHERE P.ID = " + projectId + " AND P.CREATOR_ID = E.ID")
            .addEntity("EmployeeDTO ", EmployeeDTO.class)
            .addEntity("ProjectDTO", ProjectDTO.class);

    But I get an error like: "Column 'E' is either not in any table in the FROM list or appears within a join specification and is outside the scope of the join specification..." Can anyone suggest what the right JOIN syntax would be for such a case? If I use "SELECT * FROM EMPLOYEE_TAB E, ..." I get another error: "java.lang.ClassCastException: [Ljava.lang.Object; cannot be cast to com.im.server.dto.EmployeeDTO". Thanks in advance.

  • What is the best way to structure these jQuery callbacks?

    - by user518138
    I am new to jQuery and have recently been amazed how powerful these callbacks are. But here I have some logic and I am not sure what the best way is. Basically, I have a bunch of tables in a Web SQL database in Chrome. I need to go through, say, table A, table B and table C, and submit each row of each table to the server. Each row represents a bunch of complicated logic and data URLs, and the tables have to be submitted in the order A - B - C. The regular Java way would be:

        TableA.SubmitToServer() {
            query table A;
            foreach (row in tableA.rows) {
                int nRetID = submitToServer(row);
                // do other updates....
            }
        }

    Similar for tableB and tableC, then just call:

        TableA.SubmitToServer();
        TableB.SubmitToServer();
        TableC.SubmitToServer();

    That is very clear and easy. But in jQuery, it will probably be:

        db.transaction(function (tx) {
            var strQuery = "select * from TableA";
            tx.executeSql(strQuery, [], function (tx, result) {
                for (i = 0; i < result.rows.length; i++) {
                    submitTableARowToServer(tx, result.rows.getItem(i), function (tx, result) {
                        // do some other related updates based on this row from tableA
                        // also need to upload some related files to server...
                    });
                }
            }, function errorcallback....)
        });

    As you can see, there are already enough nested callbacks. Now, where should I put the processing for tableB and tableC? They all need similar logic, and they can only be called after everything is done for tableA. So, what is the best way to do this in jQuery? Thanks

  • Replace beginning words

    - by Newbie
    I have the tables below.

    tblInput:

        Id  WordPosition  Words
        --  ------------  -----
        1   1             Hi
        1   2             How
        1   3             are
        1   4             you
        2   1             Ok
        2   2             This
        2   3             is
        2   4             me

    tblReplacement:

        Id  ReplacementWords
        --  ----------------
        1   Hi
        2   are
        3   Ok
        4   This

    tblInput holds the list of words, while tblReplacement holds the words that we need to search for in tblInput; if a match is found, we need to replace (i.e. remove) those rows. But the catch is that we replace a word only if the match is found at the beginning. That is, for Id 1 in tblInput, the only word that will be replaced is 'Hi', not 'are', since before 'are' there is 'How', which is not in the tblReplacement list. For Id 2, the words that will be replaced are 'Ok' and 'This': both are present in tblReplacement, and after the first word 'Ok' is replaced, 'This' becomes the first word in the Id 2 group, so it is replaced as well.

    So the desired output will be:

        Id  NewWordsAfterReplacement
        --  ------------------------
        1   How
        1   are
        1   you
        2   is
        2   me

    My approach so far:

        ;With Cte1 As (
            Select t1.Id,
                   t1.Words,
                   t2.ReplacementWords
            From tblInput t1
            Cross Join tblReplacement t2)
        ,Cte2 As (
            Select Id, NewWordsAfterReplacement = REPLACE(Words, ReplacementWords, '')
            From Cte1)
        Select * from Cte2 where NewWordsAfterReplacement <> ''

    But I am not getting the desired output; it is replacing all the matching words. Urgent help needed (SET BASED). I am using SQL Server 2005. Thanks
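
    A hedged set-based reading that runs on SQL Server 2005: the rows to keep are exactly those at or after each Id's first word that has no match in tblReplacement, so the leading run of replaceable words falls away per group:

        SELECT i.Id, i.Words AS NewWordsAfterReplacement
        FROM tblInput i
        WHERE i.WordPosition >= (SELECT MIN(x.WordPosition)
                                 FROM tblInput x
                                 WHERE x.Id = i.Id
                                   AND NOT EXISTS (SELECT 1
                                                   FROM tblReplacement r
                                                   WHERE r.ReplacementWords = x.Words))
        ORDER BY i.Id, i.WordPosition;

    If every word in a group matches, the subquery returns NULL and the whole group is dropped, which is consistent with the stated rules.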

  • What would be a good Database strategy to manage these two product options?

    - by bemused
    I have a site that allows users to purchase "items" (imagine an advertisement, or a download). There are two ways to purchase: either a subscription - 70 items within 1 month (use them or lose them; at the end of the month your count is 0) - or purchasing each item individually as you need it. So the user could subscribe and get 70/month, or pay for 10 and use them whenever they want until the 10 are gone. Maybe it's the late hour, but I can't isolate a solution I like, and I thought some users here would surely have stumbled upon something similar.

    One analogy I can imagine is web hosts: they sell hosting for monthly fees and also sell counts of things, like "you get 5 free domains with our reseller account". Or something like a movie download site: you can subscribe and get 100 movies each month, or pay for a one-time package of 10 movies.

    So is this a web of tables, and where would be a good cross between the product a user has purchased and how many they have left?

        products: productID, productType (subscription, consumable, subscription & consumable)
        subscriptions: subscriptionID, subscriptionStartDate, subscriptionEndDate
        consumables: consumableID, consumableName
        userProducts: userID, productID, productType, consumptionLimit, consumedCount
            (if subscription, check against the dates; otherwise just check that consumedCount < limit)

    Usually I can lay out my data in a way that I know will work the way I expect, but this one feels a little questionable to me, like there is a hidden detail that is going to creep up later. That's why I decided to ask for help, if someone in the vast expanse can enlighten me with their wisdom and experience and clue me in to a satisfying strategy. Thank you.
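
    One hedged schema sketch that treats both purchase types as the same thing - a grant of credits with an optional expiry (all names hypothetical):

        CREATE TABLE credit_grants (
            grantID   INT PRIMARY KEY,
            userID    INT NOT NULL,
            granted   INT NOT NULL,         -- 70 for a subscription month, 10 for a one-off pack
            consumed  INT NOT NULL DEFAULT 0,
            expiresAt DATETIME NULL         -- month end for subscriptions, NULL for packs
        );

        -- Remaining balance for each user: unexpired, unspent credit
        SELECT userID, SUM(granted - consumed) AS remaining
        FROM credit_grants
        WHERE expiresAt IS NULL OR expiresAt > CURRENT_TIMESTAMP
        GROUP BY userID;

    Spending an item then just increments consumed on the soonest-expiring open grant, and a monthly job inserts one new grant per subscription period, so "use them or lose them" falls out of the expiry check instead of needing a reset routine.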

  • Find the difference between two very large lists

    - by user157195
    I have two large lists (each could have a hundred million items); the source of each list can be either a database table or a flat file. Both lists are of comparable size, and both are unsorted. I need to find the difference between them. So I have three scenarios:

        1. List1 is a database table (assume each row simply has one item (key), which is a string), and List2 is a large file.
        2. Both lists are from two DB tables.
        3. Both lists are from two files.

    In case 2, I plan to use:

        select a.item from MyTable a where a.item not in (select b.item from MyTable b)

    This is clearly inefficient; is there a better way? Another approach: I plan to sort each list and then walk down both of them to find the diff. If a list is from a file, I have to read it into a DB table first, then use DB sorting to output the list. Is the run-time complexity still O(n log n) in DB sorting? Either approach is a pain and seems like it would be very slow when the lists involved have hundreds of millions of items. Any suggestions?
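
    For the two-table case, the usual improvement over NOT IN is an anti-join, which the optimizer can execute as a hash or merge join instead of probing per row (and which behaves predictably when the other side contains NULLs). A sketch with hypothetical table names ListA and ListB:

        -- NOT EXISTS form
        SELECT a.item
        FROM ListA a
        WHERE NOT EXISTS (SELECT 1 FROM ListB b WHERE b.item = a.item);

        -- equivalent LEFT JOIN form
        SELECT a.item
        FROM ListA a
        LEFT JOIN ListB b ON b.item = a.item
        WHERE b.item IS NULL;

    Loading the flat files into indexed tables first and then running the same anti-join keeps all three scenarios on this plan.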
