Search Results

Search found 38660 results on 1547 pages for 'sql index'.

Page 482/1547 | < Previous Page | 478 479 480 481 482 483 484 485 486 487 488 489  | Next Page >

  • Vector.erase(Iterator) causes bad memory access

    - by xon1c
    Hi, I am trying to do a Z-Index reordering of videoObjects stored in a vector. The plan is to identify the videoObject which is going to be put on the first position of the vector, erase it and then insert it at the first position. Unfortunately the erase() function always causes bad memory access. Here is my code:

    testApp.h:

        vector<videoObject> videoObjects;
        vector<videoObject>::iterator itVid;

    testApp.cpp:

        // Get the videoObject which relates to the user event
        for(itVid = videoObjects.begin(); itVid != videoObjects.end(); ++itVid){
            if(videoObjects.at(itVid - videoObjects.begin()).isInside(ofPoint(tcur.getX(), tcur.getY()))){
                videoObjects.erase(itVid);
            }
        }

    This should be so simple but I just don't see where I'm taking the wrong turn. Thx, xonic
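
    The crash pattern matches iterator invalidation: erase() invalidates itVid, and the for loop then increments the invalidated iterator. A minimal sketch of the usual fix, assuming videoObject::isInside and tcur behave as in the snippet above, advances the loop from erase()'s return value instead:

        for (itVid = videoObjects.begin(); itVid != videoObjects.end(); /* no ++ here */) {
            if (itVid->isInside(ofPoint(tcur.getX(), tcur.getY()))) {
                // erase() returns an iterator to the element after the erased one,
                // so the loop stays valid even though the vector has shifted
                itVid = videoObjects.erase(itVid);
            } else {
                ++itVid;
            }
        }

    If only one videoObject can match, breaking out of the loop right after the erase avoids the problem as well.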

    Read the article

  • fastest way to check to see if a certain index in a linq statement is null

    - by tehdoommarine
    Basic Details

    I have a LINQ statement that grabs some records from a database and puts them in an IEnumerable:

        var someRecords = someRepoAttachedToDatabase.Where(p => true);

    Suppose this grabs tons (25k+) of records, and I need to perform operations on all of them. To speed things up, I decided to use paging and perform the operations in blocks of 100 instead of on all of the records at once.

    The Question

    The line in question is the one where I count the number of records in the subset to see if we are on the last page; if the number of records in the subset is less than the page size, there are no more records left. What is the fastest way to do this?

    Code in Question

        int pageSize = 100;
        bool moreData = true;
        int currentPage = 1;
        while (moreData)
        {
            var subsetOfRecords = someRecords.Skip((currentPage - 1) * pageSize).Take(pageSize); // also an IEnumerable
            if (subsetOfRecords.Count() < pageSize) { moreData = false; } // line in question
            // do stuff to records in subset
            currentPage++;
        }

    Things I Have Considered

    - subsetOfRecords.Count() < pageSize
    - subsetOfRecords.ElementAt(pageSize - 1) == null (causes an out-of-bounds exception - I could catch the exception and set moreData to false there)
    - Converting subsetOfRecords to an array (converting someRecords to an array will not work due to the way subsetOfRecords is declared - but I am open to changing it)

    I'm sure there are plenty of other ideas that I have missed.
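
    One sketch of a way to avoid enumerating each page twice (Count() on a lazy sequence re-runs the query): materialize the page with ToList() and stop when a short page comes back. This assumes someRecords keeps the shape shown above; if it is an IQueryable, a stable OrderBy should be added before Skip/Take.

        int pageSize = 100;
        int currentPage = 1;
        while (true)
        {
            // Materialize the page once, so counting it and processing it share one query/enumeration.
            var page = someRecords
                .Skip((currentPage - 1) * pageSize)
                .Take(pageSize)
                .ToList();

            // do stuff to the records in page

            if (page.Count < pageSize)
                break;            // a short page means this was the last one

            currentPage++;
        }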

    Read the article

  • What's a good SQL table name for the table between 'Users' and 'UserTypes'?

    - by Space Cracker
    I have two tables in my database:

    - Users: contains user information
    - UserTypes: contains the names of user types (student, teacher, specialist) - I can't rename it to 'Types' as we already have a table with that name

    The relation between Users and UserTypes is many-to-many, so I'll create a table that has UserID (FK) and UserTypeID (FK), but I'm trying to find the best name for that table... any suggestions, please?
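
    Whatever name wins (UserUserTypes, UsersInUserTypes, UserTypeAssignments...), the junction table itself is small; a sketch of the usual shape, with the key column names assumed from the question:

        CREATE TABLE UserUserTypes (
            UserID     INT NOT NULL,
            UserTypeID INT NOT NULL,
            CONSTRAINT PK_UserUserTypes PRIMARY KEY (UserID, UserTypeID),
            CONSTRAINT FK_UserUserTypes_Users     FOREIGN KEY (UserID)     REFERENCES Users (UserID),
            CONSTRAINT FK_UserUserTypes_UserTypes FOREIGN KEY (UserTypeID) REFERENCES UserTypes (UserTypeID)
        );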

    Read the article

  • How can I cover up a <div>'s border with another <div>?

    - by Leftium
    I am trying to move GCLI's input from the bottom of the page to the top. However, I can't get it to look quite right. Why can't I get the .gcli-panel-connector <div> to cover up the .gcli-tt <div>'s border? I think it has something to do with the z-index. Please compare these two live demos:

    - Original code, where the input is at the bottom and the border is properly covered
    - Altered code, where the input is at the top, but the border is not covered as desired
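
    For reference, z-index is ignored on statically positioned elements, and the covering element also needs an opaque background; a minimal sketch of the general pattern, using the class names from the question but with the offsets and colours as assumptions:

        .gcli-tt {
            position: relative;
            z-index: 1;
        }
        .gcli-panel-connector {
            position: relative;   /* z-index has no effect without a position */
            z-index: 2;           /* stack above the bordered div */
            margin-top: -1px;     /* overlap the 1px border */
            background: #fff;     /* must be opaque to actually hide it */
        }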

    Read the article

  • How to optimize queries with fully qualified names in T-SQL?

    - by tomaszs
    When I call:

        select * from Database.dbo.Table where NAME = 'cat'

    it takes 200 ms. When I switch to that database in Management Studio and run the query without the fully qualified name, it's much faster:

        select * from Table where NAME = 'cat'

    That takes 17 ms. Is there any way to make fully qualified queries faster without changing the current database?
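
    Name resolution itself costs next to nothing; a 200 ms vs 17 ms gap usually means the two runs weren't comparable (cold vs. warm plan/buffer cache). A sketch of a fairer measurement, on a test server where clearing caches is acceptable:

        SET STATISTICS TIME ON;
        SET STATISTICS IO ON;

        -- test servers only: start both variants from a cold cache
        DBCC FREEPROCCACHE;
        DBCC DROPCLEANBUFFERS;

        SELECT * FROM Database.dbo.Table WHERE NAME = 'cat';
        SELECT * FROM Database.dbo.Table WHERE NAME = 'cat';   -- compare the warm (second) runs of each form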

    Read the article

  • SQL: Using a CASE statement to update 1000 rows at once, how?

    - by SoLoGHoST
    Ok, I would like to use a CASE statement for this, but I am lost. Basically, I need to update a ton of rows, but just the "position" column. I need to update all "position" values from 0 to count(position) for each id_layout_position column per id_layout column. Here's what I've got for a regular update, but I don't wanna throw this into a foreach loop, as it would take forever. I'm using SMF (Simple Machines Forums), so it might look a little different, but the idea is the same, and CASE statements are supported...

        $smcFunc['db_query']('', '
            UPDATE {db_prefix}dp_positions
            SET position = {int:position}
            WHERE id_layout_position = {int:id_layout_position}
                AND id_layout = {int:id_layout}',
            array(
                'position' => $position++,
                'id_layout_position' => (int) $id_layout_position,
                'id_layout' => (int) $id_layout,
            )
        );

    Anyways, I need to apply some sort of CASE to this so that I can auto-increment each value it finds by 1 and update to the next value. I know I'm doing this wrong, even in this query, but I'm totally lost when it comes to CASEs. Here's an example of a CASE being used within SMF, so you can see it and hopefully relate:

        $conditions = '';
        foreach ($postgroups as $id => $min_posts)
        {
            $conditions .= '
                WHEN posts >= ' . $min_posts . (!empty($lastMin) ? ' AND posts <= ' . $lastMin : '') . ' THEN ' . $id;
            $lastMin = $min_posts;
        }

        // A big fat CASE WHEN... END is faster than a zillion UPDATE's ;).
        $smcFunc['db_query']('', '
            UPDATE {db_prefix}members
            SET id_post_group = CASE ' . $conditions . '
            ELSE 0
            END' . ($parameter1 != null ? '
            WHERE ' . (is_array($parameter1) ? 'id_member IN ({array_int:members})' : 'id_member = {int:members}') : ''),
            array(
                'members' => $parameter1,
            )
        );

    Before I do the update, I actually have a SELECT that throws everything I need into arrays like so:

        $disabled_sections = array();
        $positions = array();
        while ($row = $smcFunc['db_fetch_assoc']($request))
        {
            if (!isset($disabled_sections[$row['id_group']][$row['id_layout']]))
                $disabled_sections[$row['id_group']][$row['id_layout']] = array(
                    'info' => $module_info[$name],
                    'id_layout_position' => $row['id_layout_position'],
                );

            // Increment the positions...
            if (!is_null($row['position']))
            {
                if (!isset($positions[$row['id_layout']][$row['id_layout_position']]))
                    $positions[$row['id_layout']][$row['id_layout_position']] = 1;
                else
                    $positions[$row['id_layout']][$row['id_layout_position']]++;
            }
            else
                $positions[$row['id_layout']][$row['id_layout_position']] = 0;
        }

    Thanks, I know if anyone can help me here it's definitely you guys and gals... Anyways, here is my question: how do I use a CASE statement in the first code example so that I can update all of the rows in the position column from 0 to the total number of rows found that have that id_layout value and that id_layout_position value, and continue this for all different id_layout values in that table? Can I use the arrays above somehow? I'm sure I'll have to use the id_layout and id_layout_position values for this, right? But how can I do this?
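
    Since every row needs a different position, the CASE has to be keyed on something that identifies the row, not just the group. A sketch that builds one big CASE on the table's primary key - here assumed to be a column called id_position, so substitute the real key - numbering rows 0, 1, 2... within each id_layout / id_layout_position group:

        // $rows: result of a SELECT over dp_positions, ordered by id_layout, id_layout_position
        $conditions = '';
        $ids = array();
        $counters = array();
        foreach ($rows as $row)
        {
            $group = $row['id_layout'] . '-' . $row['id_layout_position'];
            if (!isset($counters[$group]))
                $counters[$group] = 0;

            // id_position is assumed to be the primary key of {db_prefix}dp_positions
            $conditions .= '
                WHEN id_position = ' . (int) $row['id_position'] . ' THEN ' . $counters[$group]++;
            $ids[] = (int) $row['id_position'];
        }

        if (!empty($ids))
            $smcFunc['db_query']('', '
                UPDATE {db_prefix}dp_positions
                SET position = CASE ' . $conditions . '
                    ELSE position END
                WHERE id_position IN ({array_int:ids})',
                array(
                    'ids' => $ids,
                )
            );

    This replaces the per-row loop with a single UPDATE, which is the same trick the members/id_post_group example above uses.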

    Read the article

  • How should I do this (business logic) in Sql Server? A constraint?

    - by Pure.Krome
    Hi folks, I wish to add some type of business logic constraint to a table, but I'm not sure how or where. I have a table with the following fields:

        ID          INTEGER IDENTITY
        HubId       INTEGER
        CategoryId  INTEGER
        IsFeatured  BIT
        Foo         NVARCHAR(200)
        etc.

    What I want is that you can only have one featured thingy per HubId + CategoryId. E.g.:

        1, 1, 1, 1, 'blah'      -- Ok.
        2, 1, 2, 1, 'more blah' -- Also Ok.
        3, 1, 1, 1, 'aaa'       -- constraint error
        4, 1, 1, 0, 'asdasdad'  -- Ok.
        5, 1, 1, 0, 'bbbb'      -- Ok.

    So the third row to be inserted would fail because that hub AND category already have a featured thingy. Is this possible?
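
    On SQL Server 2008 and later this exact rule can live in the schema as a filtered unique index; the table name below is an assumption, since the question doesn't give one:

        CREATE UNIQUE NONCLUSTERED INDEX UQ_Articles_OneFeaturedPerHubCategory
        ON dbo.Articles (HubId, CategoryId)
        WHERE IsFeatured = 1;

    On SQL Server 2005 and earlier, an indexed view over the IsFeatured = 1 rows or a trigger can enforce the same thing.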

    Read the article

  • In SQL find the combination of rows whose sum adds up to a specific amount (or amt in other table)

    - by SamH
    Table_1
        D_ID         integer
        Deposit_amt  integer

    Table_2
        Total_ID
        Total_amt    integer

    Is it possible to write a select statement to find all the rows in Table_1 whose Deposit_amt values sum to a Total_amt in Table_2? There are multiple rows in both tables. Say the first row in Table_2 has Total_amt = 100. I would want to know that in Table_1 the rows with D_ID 2, 6, 12 sum to 100, the rows with D_ID 2, 3, 42 sum to 100, etc. Help appreciated. Let me know if I need to clarify.

    I am asking this because someone, as part of her job, has a list of transactions and a list of totals, and she needs to find the possible sets of transactions that could have created each total. I agree this sounds dangerous, as finding a combination of transactions that sums to a total does not guarantee that they created the total. I wasn't aware it is an NP-complete problem.
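
    This is the subset-sum problem, so any pure-SQL answer has to enumerate combinations and grows exponentially; for small tables a recursive CTE can do the enumeration. A sketch in T-SQL (an assumed dialect; it also assumes all Deposit_amt values are non-negative so partial sums can be pruned):

        WITH combos AS (
            -- start with every single deposit
            SELECT  D_ID,
                    Deposit_amt                AS running_total,
                    CAST(D_ID AS VARCHAR(MAX)) AS id_list
            FROM    Table_1

            UNION ALL

            -- extend each combination only with deposits that have a larger D_ID,
            -- so every subset is generated exactly once
            SELECT  t.D_ID,
                    c.running_total + t.Deposit_amt,
                    c.id_list + ',' + CAST(t.D_ID AS VARCHAR(12))
            FROM    combos  c
            JOIN    Table_1 t ON t.D_ID > c.D_ID
            WHERE   c.running_total + t.Deposit_amt <= (SELECT MAX(Total_amt) FROM Table_2)
        )
        SELECT  t2.Total_ID, t2.Total_amt, c.id_list
        FROM    combos c
        JOIN    Table_2 t2 ON t2.Total_amt = c.running_total
        OPTION  (MAXRECURSION 0);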

    Read the article

  • Scala create xhtml elements dynamically

    - by portoalet
    Given a list of strings

        val www = List("http://bloomberg.com", "http://marketwatch.com")

    I want to dynamically generate

        <span id="span1">http://bloomberg.com</span>
        <span id="span2">http://marketwatch.com</span>

    My attempt:

        def genSpan(web: String) = <span id="span1"> + web + </span>
        www.map(genSpan) // How can I pass the loop index?

    How can I use Scala's map function to generate the ids (span1, span2), where 1 and 2 are the loop indexes? Or is the only way to use a for comprehension?
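
    zipWithIndex pairs each element with its position, and the XML literal takes an expression for the attribute; a minimal sketch (indexes are 0-based, hence the + 1):

        val www = List("http://bloomberg.com", "http://marketwatch.com")

        val spans = www.zipWithIndex.map { case (url, i) =>
          <span id={ "span" + (i + 1) }>{ url }</span>
        }
        // List(<span id="span1">http://bloomberg.com</span>,
        //      <span id="span2">http://marketwatch.com</span>)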

    Read the article

  • JavaScript for loop index strangeness

    - by pythonBOI
    I'm relatively new to JS, so this may be a common problem, but I noticed something strange when dealing with for loops and the onclick function. I was able to replicate the problem with this code:

        <html>
        <head>
        <script type="text/javascript">
        window.onload = function () {
            var buttons = document.getElementsByTagName('a');
            for (var i = 0; i < 2; i++) {
                buttons[i].onclick = function () {
                    alert(i);
                    return false;
                }
            }
        }
        </script>
        </head>
        <body>
            <a href="">hi</a> <br />
            <a href="">bye</a>
        </body>
        </html>

    When clicking the links I would expect to get '0' and '1', but instead I get '2' for both of them. Why is this? BTW, I managed to solve my particular problem by using the 'this' keyword, but I'm still curious as to what is behind this behavior.
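
    What's behind it: every handler closes over the same i variable (var is function-scoped), so by the time any click fires the loop has finished and i is 2. Capturing the value per iteration fixes it; a sketch of the classic workaround for the code above (in current JavaScript, declaring the counter with let does the same job):

        for (var i = 0; i < 2; i++) {
            (function (index) {              // the IIFE gives each handler its own copy of i
                buttons[index].onclick = function () {
                    alert(index);
                    return false;
                };
            })(i);
        }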

    Read the article

  • Oracle SQL: Multiple Subqueries Unioned Without Running Original Query Multiple Times.

    - by Bob
    So I've got a very large database, and I need to work on a subset (~1% of the data) to dump into an Excel spreadsheet and make a graph. Ideally, I could select out the subset of data and then run multiple select queries on that, which are then UNIONed together. Is this even possible? I can't seem to find anyone else trying to do this, and it would improve the performance of my current query quite a bit. Right now I have something like this:

        SELECT (
            SELECT (
                SELECT ( long list of requirements )
                UNION
                SELECT ( slightly different long list of requirements )
            )
        )

    It would be nice if I could group the commonalities of the two long requirement lists and have simple differences between the two select statements being unioned.
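
    Oracle's subquery factoring (WITH) clause is built for this: name the subset once and reference it from each branch of the UNION. A sketch with invented table and column names, since the real query isn't shown:

        WITH subset AS (
            SELECT *
            FROM   sales_history
            WHERE  region_id = 42            -- the shared long list of requirements goes here
        )
        SELECT product_id, amount FROM subset WHERE channel = 'WEB'
        UNION
        SELECT product_id, amount FROM subset WHERE channel = 'PHONE';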

    Read the article

  • What is wrong with my SQL syntax for an UPDATE with a JOIN?

    - by Phil H
    I have two tables, related by a common key. TableA has key AID and value Name; TableB has keys AID, BID and values Name, Value:

        TableA
        AID  Name
        74   Alpha

        TableB
        AID  BID  Name  Value
        74   4    Beta  Brilliance

    I would like to update the TableB Value here from Brilliance to Barmy, using just the Name fields. I thought I could do it via an UPDATE containing a JOIN, but Access (I know...) is complaining with 'Syntax error (missing operator) in query expression' and then everything from 'Barmy' here:

        UPDATE tB SET tB.BValue='Barmy'
        FROM TableB tB
        INNER JOIN TableA tA ON tB.AID=tA.AID
        WHERE tB.Name='Beta' AND tA.Name='Alpha';

    What is my heinous crime? Or is it just Access not conforming?
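
    Access (Jet) SQL has no UPDATE ... FROM; the join goes directly into the UPDATE clause instead. A sketch of the equivalent statement in Access syntax (Name and Value are bracketed because they can collide with reserved words):

        UPDATE TableB INNER JOIN TableA
               ON TableB.AID = TableA.AID
        SET    TableB.[Value] = 'Barmy'
        WHERE  TableB.[Name] = 'Beta'
          AND  TableA.[Name] = 'Alpha';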

    Read the article

  • Why did the following linq to sql query generate a subquery?

    - by Xaisoft
    I did the following query:

        var list = from book in books
                   where book.price > 50
                   select book;
        list = list.Take(50);

    I would expect the above to generate something like:

        SELECT TOP 50 id, title, price, author
        FROM Books
        WHERE price > 50

    but it generates:

        SELECT
            [Limit1].[C1] AS [C1],
            [Limit1].[id] AS [Id],
            [Limit1].[title] AS [title],
            [Limit1].[price] AS [price],
            [Limit1].[author] AS [author]
        FROM (
            SELECT TOP (50)
                [Extent1].[id] AS [Id],
                [Extent1].[title] AS [title],
                [Extent1].[price] AS [price],
                [Extent1].[author] AS [author]
            FROM Books AS [Extent1]
            WHERE [Extent1].[price] > 50
        ) AS [Limit1]

    Why does the above LINQ query generate a subquery, and where does the C1 come from?

    Read the article

  • How can I get a list of modified records from a SQL Server database?

    - by Pixelfish
    I am currently in the process of revamping my company's management system to run a little more lean in terms of network traffic. Right now I'm trying to figure out an effective way to query only the records that have been modified (by any user) since the last time I asked.

    When the application starts, it loads the job information and caches it locally with the following:

        SELECT * FROM jobs

    I write out the date/time a record was modified:

        UPDATE jobs SET Widgets=@Widgets, LastModified=GetDate() WHERE JobID=@JobID

    When any user requests the list of jobs, I query all records that have been modified since the last time I requested the list:

        SELECT * FROM jobs WHERE LastModified>=@LastRequested

    and store the date/time of the request to pass in as @LastRequested when the user asks again. In theory this will return only the records that have been modified since the last request.

    The issue I'm running into is when the user's date/time is not quite in sync with the server's date/time, and also the server load when querying an un-indexed date/time column. Is there a more effective system than querying date/time information?
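
    A rowversion column sidesteps both problems: it is maintained by SQL Server itself (so client clock skew doesn't matter) and it can be indexed cheaply; the client just remembers the highest value it has seen. A sketch of the idea, with the column and variable names as assumptions:

        ALTER TABLE jobs ADD RowVer rowversion;
        CREATE INDEX IX_jobs_RowVer ON jobs (RowVer);

        -- first load: cache everything and keep MAX(RowVer) as the high-water mark
        SELECT * FROM jobs;

        -- later refreshes: only rows changed since the cached high-water mark
        SELECT * FROM jobs WHERE RowVer > @LastRowVer;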

    Read the article

  • Why is selecting specified columns, and all, wrong in Oracle SQL?

    - by TomatoSandwich
    Say I have a select statement that goes...

        select * from animals

    That gives a query result of all the columns in the table. Now, say the 42nd column of the animals table is is_parent, and I want to return that in my results just after gender, so I can see it more easily - but I also want all the other columns too:

        select is_parent, * from animals

    This returns ORA-00936: missing expression. The same statement works fine in Sybase, and I know that you need to add a table alias to the animals table to get it to work (select is_parent, a.* from animals a), but why does Oracle need a table alias to be able to work out the select?

    Read the article

  • JavaScript array random index insertion and deletion

    - by Tomi
    I'm inserting some items into an array with randomly created indexes, for example like this:

        var myArray = new Array();
        myArray[123] = "foo";
        myArray[456] = "bar";
        myArray[789] = "baz";
        ...

    In other words, the array indexes do not start at zero and there will be "numeric gaps" between them. My questions are:

    - Will these numeric gaps be somehow allocated (and therefore take some memory) even when they do not have assigned values?
    - When I delete myArray[456] from the example above, will the items below it be relocated?

    EDIT: Regarding my question/concern about relocation of items after insertion/deletion - I want to know what happens with the memory, not the indexes. More information, from the Wikipedia article on linked lists:

        Linked lists have several advantages over dynamic arrays. Insertion of an element at a specific point of a list is a constant-time operation, whereas insertion in a dynamic array at random locations will require moving half of the elements on average, and all the elements in the worst case. While one can "delete" an element from an array in constant time by somehow marking its slot as "vacant", this causes fragmentation that impedes the performance of iteration.
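
    Engines typically fall back to a dictionary-like (object) representation for an array this sparse, so the gaps cost nothing, and delete just removes one slot without moving anything - it doesn't even shrink length. A small sketch to illustrate, runnable in a browser console or Node:

        var myArray = [];
        myArray[123] = "foo";
        myArray[456] = "bar";
        myArray[789] = "baz";

        console.log(myArray.length);        // 790 - length is highest index + 1, not a count
        console.log(Object.keys(myArray));  // ["123", "456", "789"] - only three slots exist

        delete myArray[456];                // removes the slot, nothing is shifted
        console.log(456 in myArray);        // false
        console.log(myArray.length);        // still 790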

    Read the article

  • Is there a standard SQL Table design for overriding 'big picture' default values with lower level detail?

    - by RichardHowells
    Here's an example. Suppose we are trying to calculate a service charge. Say sales in the USA attract a 10 dollar charge, and sales in the UK attract a 20 dollar charge. So far it's easy - we are starting to imagine a table that lists charges by country.

    Now let's assume that Alaska and Hawaii are treated as special cases: they are both 15 dollars. That suggests a table keyed by state; Alaska and Hawaii are charged at 15, but presumably we need 48 (redundant) rows all saying 10. This gives us a maintenance problem: our user only wants to type 10 once, NOT 48 times. It does not sit well with the UK either - the UK does not have states.

    Suppose we throw in another couple of cross-cutting rules. If you order by phone there is a 10% supplement on the charge. If you order via the web there is a 10% discount. But for some reason best known to the owners of the business, the web/phone supplement/discount are not applied in Hawaii.

    It seems to me that this is quite a common kind of problem and there is probably a well known arrangement of tables to store the data. Most cases get handled by broad-brush answers, but there are some very detailed low level variations that give rise to a huge number of theoretical combinations, most of which are not used.
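
    One common arrangement is a single rule table in which NULL in a column means "any", with the lookup picking the most specific matching row - defaults get stored once and exceptions become extra rows. A sketch in SQL Server-flavoured SQL with invented names; the phone/web adjustment can be handled the same way with another NULLable column:

        CREATE TABLE ServiceChargeRule (
            Country VARCHAR(2)   NOT NULL,
            State   VARCHAR(2)   NULL,          -- NULL = "any state", i.e. the country-wide default
            Charge  DECIMAL(9,2) NOT NULL
        );

        -- defaults once, exceptions as extra rows:
        --   ('US', NULL, 10), ('UK', NULL, 20), ('US', 'AK', 15), ('US', 'HI', 15)

        -- pick the most specific rule that matches
        SELECT TOP 1 Charge
        FROM   ServiceChargeRule
        WHERE  Country = @Country
          AND (State = @State OR State IS NULL)
        ORDER BY CASE WHEN State IS NULL THEN 1 ELSE 0 END;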

    Read the article

  • Read and write .NET Objects in SQL Database without serialization.

    - by Mohit
    Hello, I have a small query. I need to create a caching service of my own that will write and read .NET objects to and from the database. I have achieved that with the help of binary serialization, but the problem is that I need to deliberately mark my objects as [Serializable], which makes me wonder: what if someone tries to add an object which is not marked as [Serializable]? Thus, I need to find a way to read and write objects to the database without serialization.

    I have one thought too: as we all know, Session can store any object in it, and we can make sessions be stored in the DB, out-of-proc. What mechanism does it use to store these objects without serializing or deserializing them? Any help will be highly appreciated. Thanks. M.B

    Read the article

  • css position question: layers with z-index?

    - by user239831
    Hey guys, I have a probably rather simple problem. My website has two layers:

    1) a drag & drop navigation on top, which should be positioned absolutely so scrolling doesn't affect the bars;
    2) a content area behind the navigation, which should be scrollable.

    You can see what I mean right here: http://jsfiddle.net/Pghqv/

    However, now I cannot click links in my content area in the back. Any ideas or solutions for how I can keep the same positioning while the links in the back still work? Thank you very much.
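
    A full-viewport absolutely positioned layer sitting over the content swallows every click, even where it looks empty; keeping the overlay only as large as the bars it actually draws (or, in newer browsers, using pointer-events: none on the wrapper and re-enabling it on the bars) lets clicks through. A minimal sketch of the first approach, with hypothetical class names since the fiddle's markup isn't shown here:

        /* only the bars themselves float above the page, not a full-screen wrapper */
        .nav-bar {
            position: fixed;       /* or absolute, if the bars should scroll with the page */
            top: 0;
            left: 0;
            width: 100%;
            height: 40px;
            z-index: 10;
        }

        .content {
            margin-top: 50px;      /* keep the content clear of the bars */
        }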

    Read the article

  • Why does my SQL database not work in Android?

    - by user1426967
    In my app a click on an image button brings you to the gallery. After clicking on an image I want to call onActivityResult to store the image path, but it does not work. My LogCat always tells me that it crashes when it tries to save the image path. Can you find the problem?

    My onActivityResult method:

        mImageRowId = savedInstanceState != null
                ? savedInstanceState.getLong(ImageAdapter.KEY_ROWID)
                : null;

        @Override
        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            if (requestCode == PICK_FROM_FILE && data != null && data.getData() != null) {
                Uri uri = data.getData();
                if (uri != null) {
                    Cursor cursor = getContentResolver().query(uri,
                            new String[] { android.provider.MediaStore.Images.ImageColumns.DATA },
                            null, null, null);
                    cursor.moveToFirst();
                    String image = cursor.getString(0);
                    cursor.close();
                    if (image != null) {
                        // HERE IS WHERE I WANT TO SAVE THE IMAGE. HERE MUST BE THE ERROR!
                        if (mImageRowId == null) {
                            long id = mImageHelper.createImage(image);
                            if (id > 0) {
                                mImageRowId = id;
                            }
                        }
                        // Set the image and display it in the edit activity
                        Bitmap bitmap = BitmapFactory.decodeFile(image);
                        mImageButton.setImageBitmap(bitmap);
                    }
                }
            }
        }

    This is my onSaveInstanceState method:

        private static final Long DEF_ROW_ID = 0L;

        @Override
        protected void onSaveInstanceState(Bundle outState) {
            outState.putLong(ImageAdapter.KEY_ROWID, mImageRowId != null ? mImageRowId : DEF_ROW_ID);
            super.onSaveInstanceState(outState);
        }

    This is a part of my DbAdapter:

        public long createImage(String image) {
            ContentValues cv = new ContentValues();
            cv.put(KEY_IMAGE, image);
            return mImageDb.insert(DATABASE_TABLE, null, cv);
        }
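
    One possible culprit, not a confirmed diagnosis: nothing in the snippets shows the database being opened before createImage() runs; if the helper's SQLiteDatabase (mImageDb) is still null there, the insert throws a NullPointerException at exactly the "save the image path" line. A sketch of opening the helper in onCreate, assuming a DbAdapter with the usual open()/close() pair:

        // in the Activity, before any createImage() call (typically in onCreate)
        mImageHelper = new DbAdapter(this);
        mImageHelper.open();   // assumed to call getWritableDatabase() and set mImageDb

        // and release it when the activity is destroyed
        @Override
        protected void onDestroy() {
            super.onDestroy();
            mImageHelper.close();
        }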

    Read the article

  • Dynamic SQL Rows & Columns...cells require subsequent query. Best approach?

    - by Pyrrhonist
    I have the following tables:

        City
        ---------
        CityID
        StateID
        Name
        Description

        Reports
        ---------
        ReportID
        HeaderID
        FooterID
        Description

    I'm trying to generate a grid for use in a .NET control (GridView, ListView... separate issue about which will be the 'best' one for my purposes) which will use the reports as the columns and the cities as the rows. Which cities get displayed is based on the state selected, and that is easy enough:

        SELECT * FROM CITIES WHERE STATEID=@StateID

    However, the user is able to select which reports are generated for each city (Demographics, Sales, Land Area, etc.). Further, each resultant cell (City x Report) is a sub-query on different tables based on the city selected and the report. I.e., the Sales column yields:

        SELECT * FROM SALES WHERE CITYID=@CityID

    I've programmed a VERY inelegant solution using multiple queries and brute-forcing the grid to be created (line by line, row by row creation of data elements), but I'm positive there's got to be a better way of accomplishing this...? Any / all suggestions are appreciated here, as the brute force approach I've got is slow and cumbersome... and this will have to be used often by the client, so I'm not sure it'll be acceptable in its current implementation.
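
    The per-cell round trips are usually what makes this slow; fetching each selected report for all cities of the state in one query per report (and pivoting client-side while binding the grid) cuts city x report queries down to one per report. A sketch of the per-report query, with an assumed aggregate column since the real report tables aren't shown:

        -- one query per selected report, covering every city in the state at once
        SELECT c.CityID, c.Name, SUM(s.Amount) AS SalesTotal   -- Amount is an assumed column
        FROM   CITIES c
        JOIN   SALES  s ON s.CITYID = c.CityID
        WHERE  c.STATEID = @StateID
        GROUP BY c.CityID, c.Name;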

    Read the article
