Search Results

Search found 38660 results on 1547 pages for 'sql index'.

Page 580/1547 | < Previous Page | 576 577 578 579 580 581 582 583 584 585 586 587  | Next Page >

  • Need some serious help with a self-join issue.

    - by kralco626
    Well, as you may know, you cannot index a view with a self join, or in fact even two joins to the same table, even if it's not technically a self join. A couple of guys from Microsoft came up with a workaround, but it's so complicated I don't understand it!!! The solution to the problem is here: http://jmkehayias.blogspot.com/2008/12/creating-indexed-view-with-self-join.html

    The view I want to apply this workaround to is:

        create VIEW vw_lookup_test
        WITH SCHEMABINDING
        AS
        select count_big(*) as [count_all],
               awc_txt, city_nm, str_nm, stru_no,
               o.circt_cstdn_nm [owner],
               t.circt_cstdn_nm [tech],
               dvc.circt_nm,
               data_orgtn_yr
        from ((dbo.dvc
               join dbo.circt on dvc.circt_nm = circt.circt_nm)
               join dbo.circt_cstdn o on circt.circt_cstdn_user_id = o.circt_cstdn_user_id)
               join dbo.circt_cstdn t on dvc.circt_cstdn_user_id = t.circt_cstdn_user_id
        group by awc_txt, city_nm, str_nm, stru_no,
                 o.circt_cstdn_nm, t.circt_cstdn_nm, dvc.circt_nm, data_orgtn_yr
        go

    Any help would be greatly appreciated!!! Thanks so much in advance!
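
    As far as I can tell, the workaround at that link keeps a trigger-maintained copy of the table that is joined twice (dbo.circt_cstdn here) and points the second join at the copy, so the view no longer references the same table twice. A minimal sketch, with the copy's column type and all new object names assumed:

        CREATE TABLE dbo.circt_cstdn_copy (
            circt_cstdn_user_id int NOT NULL PRIMARY KEY,
            circt_cstdn_nm      varchar(100) NOT NULL
        );
        go
        -- Keep the copy in sync with the original on every change.
        CREATE TRIGGER dbo.trg_sync_circt_cstdn ON dbo.circt_cstdn
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;
            DELETE c FROM dbo.circt_cstdn_copy c
                JOIN deleted d ON d.circt_cstdn_user_id = c.circt_cstdn_user_id;
            INSERT INTO dbo.circt_cstdn_copy (circt_cstdn_user_id, circt_cstdn_nm)
                SELECT circt_cstdn_user_id, circt_cstdn_nm FROM inserted;
        END
        go
        -- The view can then join dbo.circt_cstdn as [owner] and
        -- dbo.circt_cstdn_copy as [tech]; with the self join gone, the
        -- view becomes indexable.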

    Read the article

  • Create link on page w/ web address stored in database

    - by RememberME
    This seems like it should be easy, but I can't seem to figure it out. All of my Google searches lead me to linking to databases, which isn't what I want to do. I'm a complete web development newb. I've roughly followed the NerdDinner tutorial in creating my web app. One of my stored fields is a web address. On the Index and Details pages, when I display the info from my record, I want the web address to be a clickable link to the website. It's currently displayed as:

        <%= Html.Encode(Model.Subcontract.company1.website) %>

    Read the article

  • How do I get the position of a result in the list after an order_by?

    - by Bob Bob
    I'm trying to find an efficient way to find the rank of an object in the database based on its score. My naive solution looks like this:

        rank = 0
        for q in Model.objects.all().order_by('score'):
            if q.name == 'searching_for_this':
                return rank
            rank += 1

    It should be possible to get the database to do the filtering, using order_by:

        Model.objects.all().order_by('score').filter(name='searching_for_this')

    But there doesn't seem to be a way to retrieve the index for the order_by step after the filter. Is there a better way to do this? (Using Python/Django and/or raw SQL.) My next thought is to pre-compute ranks on insert, but that seems messy.
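
    For what it's worth, the rank can be computed in a single raw SQL query by counting the rows that sort ahead of the target row; a sketch, assuming the default Django table name app_model and the fields used above:

        -- zero-based rank in ascending score order (ties aside)
        SELECT COUNT(*) AS rank
        FROM app_model
        WHERE score < (SELECT score FROM app_model
                       WHERE name = 'searching_for_this');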

    Read the article

  • Model Binding to a List using non-sequential indexes. Can I access the index later?

    - by Kid A
    I'm following Phil's great tutorial on model binding to a list. I use input names like this: book[5804].title book[5804].author book[1234].title book[1234].author This works well and the data gets back to the model just fine, populating a list of books. What I'm looking for is a way to get access in the model to the index that was used to send the books. I'd like to get that number, "5804." This is because the index is of semantic importance. If I can access it, it saves me from setting another property on the object (book ID). Is there a way to see, either on the FormCollection or on the model after UpdateModel is called, what the index was when it was sent up?

    Read the article

  • I am not able to drop a foreign key in MySQL (Error 150). Please help

    - by Shantanu Gupta
    I am trying to create a foreign key in my table, but when I execute my query it shows me error 150:

        Error Code : 1005
        Can't create table '.\vts#sql-6ec_1.frm' (errno: 150) (0 ms taken)

    My queries are below. Query to create the foreign key:

        alter table `vts`.`tblguardian`
          add constraint `FK_tblguardian`
          FOREIGN KEY (`GuardianPickPointId`)
          REFERENCES `tblpickpoint` (`PickPointId`)

    EDIT: Now I am trying to drop this constraint, but it fails again and shows me the same error as when I was trying to create the foreign key:

        alter table `vts`.`tblguardian` drop index `FK_tblguardian`

    Primary key table:

        CREATE TABLE `tblpickpoint` (
          `PickPointId` int(4) NOT NULL auto_increment,
          `PickPointName` varchar(500) default NULL,
          `PickPointLabel` varchar(500) default NULL,
          `PickPointLatLong` varchar(100) NOT NULL,
          PRIMARY KEY (`PickPointId`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 CHECKSUM=1 DELAY_KEY_WRITE=1 ROW_FORMAT=DYNAMIC

    Foreign key table:

        CREATE TABLE `tblguardian` (
          `GuardianId` int(4) NOT NULL auto_increment,
          `GuardianName` varchar(500) default NULL,
          `GuardianAddress` varchar(500) default NULL,
          `GuardianMobilePrimary` varchar(15) NOT NULL,
          `GuardianMobileSecondary` varchar(15) default NULL,
          `GuardianPickPointId` int(4) default NULL,
          PRIMARY KEY (`GuardianId`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1
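
    One point worth checking: in MySQL a foreign key is dropped with DROP FOREIGN KEY, not DROP INDEX, and InnoDB refuses to drop an index while a constraint still depends on it. A sketch using the names above:

        alter table `vts`.`tblguardian` drop foreign key `FK_tblguardian`;
        -- only afterwards, if the backing index should go too:
        alter table `vts`.`tblguardian` drop index `FK_tblguardian`;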

    Read the article

  • (N)Hibernate: deleting orphaned ternary association rows when either associated row is deleted.

    - by anthony
    I have a ternary association table created using the following mapping:

        <map name="Associations" table="FooToBar">
          <key column="Foo_id"/>
          <index-many-to-many class="Bar" column="Bar_id"/>
          <element column="AssociationValue" />
        </map>

    I have 3 tables: Foo, Bar, and FooToBar. When I delete a row from the Foo table, the associated row (or rows) in FooToBar is automatically deleted. This is good. When I delete a row from the Bar table, the associated row (or rows) in FooToBar remain, with a stale reference to a Bar id that no longer exists. This is bad. How can I modify my hbm.xml to remove stale FooToBar rows when deleting from the Bar table?
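
    If a database-level answer is acceptable instead of an hbm.xml change, one hedged option is to let the database itself cascade the delete (constraint name and the Bar key column are assumed from the mapping above):

        ALTER TABLE FooToBar
            ADD CONSTRAINT FK_FooToBar_Bar
            FOREIGN KEY (Bar_id) REFERENCES Bar (Id)
            ON DELETE CASCADE;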

    Read the article

  • Range partition skip check

    - by user289429
    We have a large amount of data partitioned by year value using range partitioning in Oracle; each partition contains data for only one year. When we write a query targeting a specific year, Oracle fetches the information from that partition but still checks whether the year is what we specified. Since this year column is not part of the index, it fetches the year from the table and compares it. We have seen that any time the query has to fetch table data it gets too slow. Can we somehow stop Oracle from comparing the year values, since we know for sure that the partition contains information for only one year?
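
    One thing that may help, sketched below with assumed table, column, and index names: adding the partition-key column to the index lets Oracle answer the year predicate from the index alone, so the table row never has to be visited just for the comparison.

        -- LOCAL makes the index equipartitioned with the table.
        CREATE INDEX idx_facts_year_key ON facts (data_year, lookup_key) LOCAL;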

    Read the article

  • Shrink database after removing extra data

    - by Sergey Osypchuk
    We need to fit our database in 4 GB in order to use MS SQL Express Edition. I started from a 7 GB database, found a lot of records that were not needed, and deleted them. After a shrink, the database size is 4.6 GB, with 748 MB free (according to the database properties). However, when I execute exec sp_spaceused I get interesting results:

        database_name   database_size   unallocated space
        xxxxxx          4726.50 MB      765.42 MB

        reserved        data            index_size      unused
        3899472 KB      1608776 KB      1448400 KB      842296 KB

    Any ideas how I can reclaim at least some of this unused space? I also know which table occupies it.

    Update: is it worth trying to rebuild the table's indexes?

        ALTER INDEX ALL ON Production.Product REBUILD
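
    Rebuilding the big table's indexes can be worthwhile: a rebuild compacts the pages, which typically turns "unused" reserved space back into free space that a file shrink can then release. A hedged sketch with assumed object and file names:

        ALTER INDEX ALL ON dbo.MyBigTable REBUILD;
        -- then shrink the data file towards a target size (in MB):
        DBCC SHRINKFILE (N'MyDatabase_Data', 4096);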

    Read the article

  • Swap unique indexed column values in database.

    - by Ramesh Soni
    I have a database table, and one of the fields (not the primary key) has a unique index on it. Now I want to swap the values under this column for two rows. How could this be done? Two hacks I know of are:

        1. Delete both rows and re-insert them.
        2. Update the rows with some other value, swap, and then update to the actual values.

    But I don't want to go with these, as they do not seem to be the appropriate solution to the problem. Could anyone help me out?
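
    For what it's worth, some engines can do the swap in a single statement. A PostgreSQL-flavoured sketch (table and column names assumed): with a deferrable unique constraint, both rows are updated before uniqueness is re-checked.

        ALTER TABLE items ADD CONSTRAINT items_code_uniq UNIQUE (code)
            DEFERRABLE INITIALLY DEFERRED;

        -- swap the values of rows 1 and 2 in one statement; the FROM clause
        -- reads the pre-update snapshot, so each row gets the other's value
        UPDATE items AS a
        SET    code = b.code
        FROM   items AS b
        WHERE  a.id IN (1, 2)
          AND  b.id IN (1, 2)
          AND  a.id <> b.id;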

    Read the article

  • How to index a string like "aaa.bbb.ddd-fff" in Lucene?

    - by user46703
    Hi, I have to index a lot of documents that contain reference numbers like "aaa.bbb.ddd-fff". The structure can change, but it's always some arbitrary numbers or characters combined with "/", "-", "_", or some other delimiter. The users want to be able to search for any of the substrings, like "aaa" or "ddd", and also for combinations like "aaa.bbb" or "ddd-fff". The best I have been able to come up with is to create my own token filter, modeled after the synonym filter in "Lucene in Action", which emits multiple terms for each input. In my case I return "aaa.bbb", "bbb.ddd", "bbb.ddd-fff", and all other combinations of the substrings. This works pretty well, but when I index large documents (100 MB) that contain lots of such strings I tend to get out-of-memory exceptions, because my filter returns multiple terms for each input string. Is there a better way to index these strings?

    Read the article

  • Thinking of an Inner Join as a Cross Join and then satisfying some condition(s)?

    - by Jian Lin
    It seems like the safest way to think of an Inner Join is as a Cross Join that is then filtered down to the rows satisfying some condition(s). The equi-join case can be obvious, but the non-equi-join case can be a bit confusing. But if we always start from the Cross Join and then keep only the rows satisfying the condition, we get the resulting table. In other words, we can analyze it by taking the first record of the left table and going through every single record on the right, then repeating that for the 2nd record on the left, the 3rd, 4th, etc. So in our mind we can analyze it this way, and it is like O(n^2), although the DBMS may do it a lot faster (when an index is present). Is there another good way to think of it besides this method?
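
    That mental model matches how SQL defines it: an inner join is logically a cross join followed by a filter, so the two queries below return identical rows (table and column names are illustrative).

        SELECT * FROM a INNER JOIN b ON a.x = b.x;

        -- logically equivalent:
        SELECT * FROM a CROSS JOIN b WHERE a.x = b.x;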

    Read the article

  • Is SELECT INTO able to affect data from its original table during UPDATE

    - by driveby
    While asking this question (asp.net scheduling timed events), user murph posted some insightful information:

        Point about this is that it's very, very simple - you have a process for
        exchange that is performing a clearly defined task, and you have a
        high-frequency task that is not doing anything particularly complex; it's a
        straightforward query (select from table where sent = false and send_at <
        value) - probably into a temporary table, so that you can run a single
        update query after you've done the sends - that you can optimise the index
        for. You're not trying to queue up a huge pile of event triggers, just one
        that fires once a minute and processes things that are due.

    Is it possible to SELECT data from table X INTO table Y and have the UPDATEs that are performed on table Y pushed into table X? I guess the alternative would be that the data gets updated in table Y, and then an update command can be run on table X based on the data in table Y. What would be the advantage of selecting into another table? Thank you.
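
    To make murph's pattern concrete, here is a minimal T-SQL sketch (table and column names assumed). A SELECT INTO copies rows, it does not link them, so changes flow back to the source only via an explicit second UPDATE joined on the key:

        -- stage the ids of everything that is due
        SELECT id INTO #due
        FROM   events
        WHERE  sent = 0 AND send_at < GETDATE();

        -- ... process/send everything listed in #due ...

        -- one set-based update back to the source table
        UPDATE e
        SET    e.sent = 1
        FROM   events e
        JOIN   #due d ON d.id = e.id;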

    Read the article

  • Troubleshooting MSSQL Connection from PHP

    - by Cory Dee
    I'm trying to connect to an external SQL Server through PHP 5.2, using this line:

        $con = mssql_connect('123.123.123.123','Username','Password') or die('Could not connect to the server!');

    I'm receiving this error:

        Warning: mssql_connect() [function.mssql-connect]: Unable to connect to server: 123.123.123.123 in /home/file/public_html/structure/index.php on line 4
        Could not connect to the server!

    My hosting provider assures me that the ports are open for my server to connect to the DB. Looking at my phpinfo output, MSSQL support is enabled, using FreeTDS. Any ideas why this would be failing, or how I can begin troubleshooting the problem?

    Read the article

  • Creating a foreign key in MySQL produces an error

    - by SnOrfus
    I'm trying to create a foreign key on a table in MySQL, and I'm getting a strange error that there seems to be little info about in any of my searches. I'm creating the key with this (emitted from MySQL Workbench 5.2):

        ALTER TABLE `db`.`appointment`
          ADD CONSTRAINT `FK_appointment_CancellationID`
          FOREIGN KEY (`CancellationID` )
          REFERENCES `db`.`appointment_cancellation` (`ID` )
          ON DELETE NO ACTION
          ON UPDATE NO ACTION,
          ADD INDEX `FK_appointment_CancellationID` (`CancellationID` ASC);

    at which point I get the error:

        ERROR 1452: Cannot add or update a child row: a foreign key constraint fails (alarmtekcore., CONSTRAINT FK_lead_appointment_CancellationID FOREIGN KEY (CancellationID) REFERENCES lead_appointment_cancellation (`)

    I've checked here, but there's no data in the table.
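
    Error 1452 means some existing CancellationID value in the child table has no matching ID in the parent table (NULLs are exempt). A check query built from the statement above should list any offending rows:

        SELECT a.`CancellationID`
        FROM   `db`.`appointment` a
        LEFT JOIN `db`.`appointment_cancellation` c
               ON c.`ID` = a.`CancellationID`
        WHERE  a.`CancellationID` IS NOT NULL
          AND  c.`ID` IS NULL;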

    Read the article

  • Optimizing a MySQL statement with lots of count(row) and sum(row+row2)...

    - by Zombies
    I need to use the InnoDB storage engine on a table with about 1 million or so records in it at any given time. Records are inserted at a very fast rate and are then dropped within a few days, maybe a week. The ping table has about a million rows, whereas the website table has only about 10,000. My statement is this:

        select url
        from website ws, ping pi
        where ws.idproxy = pi.idproxy
          and pi.entrytime > curdate() - 3
          and contentping+tcpping is not null
        group by url
        having sum(contentping+tcpping)/(count(*)-count(errortype)) < 500
          and count(*) > 3
          and count(errortype)/count(*) < .15
        order by sum(contentping+tcpping)/(count(*)-count(errortype)) asc;

    I added an index on entrytime, yet no dice. Can anyone throw me a bone as to what I should look into for basic optimization of this query? The result set is only about 200 rows, so I'm not getting killed there.
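
    A hedged starting point: an index on entrytime alone does not help the join much; a composite index covering both the join key and the date filter lets InnoDB narrow the ping table before grouping. Index name assumed:

        CREATE INDEX idx_ping_proxy_time ON ping (idproxy, entrytime);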

    Read the article

  • Rails - how do you create a user index page like stack overflows with multiple tabs whilst keeping t

    - by adam
    On Stack Overflow, in the user profile area, there are many tabs which all display differing information, such as questions asked and graphs. It's the same view, though, and I'm wondering how best to achieve this in Rails whilst keeping the controller skinny and the logic in the view to a minimum.

        def index
          @user = current_user
          case params[:tab_selected]
          when "questions"
            @data = @user.questions
          when "answers"
            @sentences = @user.answers
          else
            @sentences = @user.questions
          end
          respond_to do |format|
            format.html # index.html.erb
          end
        end

    But how do I process this in the index view without a load of if and else statements? And if questions and answers are presented differently, what's the best way to go about this?

    Read the article

  • CakePHP Error: Make sure you have created index.ctp (???)

    - by Louis Stephens
    I was just starting to learn CakePHP today by going through a blog tutorial. I created my blog_controller.php and then created a folder named 'blog' within the app/views/ structure. The next step in the tutorial was to create the index.ctp file within the blog folder under views. The tutorial says that at this point all error messages should be gone. However, I still receive an error message:

        Error: The view for BlogController::index() was not found.
        Error: Confirm you have created the file: /Users/trippstephens/Dropbox/cakephp-cakephp1x-348e5f0/app/views/blog/index.ctp

    For the life of me, I cannot figure out what I have done wrong. I am running CakePHP under MAMP, and it "installed" successfully. Any help would be appreciated.

    Read the article

  • Database indexes - what should they be

    - by WebweaverD
    Most of my database tables have a clear unique index through which lookups are done 90% of the time, but I am a bit unsure about this one. I have a table which keeps track of user rating totals for items in my database. I now want to add another table to track individual ratings, with an IP address column to make sure no one can rate something twice. Since I can see this becoming a big, high-use table, it is important to optimize it correctly. (MySQL table.) This table will have the following fields:

        rating_id (always - unique), item_id (always - not unique),
        user_id (optional - not unique), ip_address (always - not unique),
        rating_value (always - not unique), has_review (bool)

    Now I envision 90% of the queries going something like this:

    When a user rates something: select where item_id = x and ip_address = y, and (if rows = 0) insert the rating.

    On user account pages: select where ip_address = x or username = y.

    Now, none of the fields searched on are unique. Can I still use them as indexes (for example, item_id and ip_address)? Can I have two indexes, and will this still improve performance over a non-indexed table?
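
    Non-unique columns can certainly be indexed. A sketch of two composite indexes matching the two query shapes described above (MySQL syntax; the table and index names are assumed):

        -- covers: WHERE item_id = x AND ip_address = y
        CREATE INDEX idx_rating_item_ip ON ratings (item_id, ip_address);

        -- covers: WHERE ip_address = x  (the OR username = y branch would
        -- need its own index on that column to avoid a scan)
        CREATE INDEX idx_rating_ip ON ratings (ip_address);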

    Read the article

  • SQL Compact Edition database corruption

    - by jdv
    Hi, our product uses MS SQL Compact Edition on a Windows machine (laptop). It's basically a metadata index for files we have on the filesystem. Recently we have seen databases getting corrupted. This happens when the machine is very busy moving files around and has to do a tiny bit of database changes at the same time. I was somewhat shocked that this was at all possible; it was my expectation that the database would stay coherent whatever the circumstances. Of course we are doing something wrong. Things we have checked so far:

        - Use of only one DB connection per thread
        - Specifying the maximum size when opening the database
        - The database is accessed by only one application, a .NET-based Windows service

    Are there other gotchas?

    Read the article

  • Designing a table to store EXIF data

    - by rafale
    I'm looking to get the best performance out of querying a table containing EXIF data. The queries in question will only search the EXIF data for specified strings and return the row index on a match. With that said, would it be better to store the EXIF data in a table with separate columns for each of the tags, or would storing all of the tags in a single column as one long delimited string suit me just as well? There are around 115 EXIF tags I'll be storing, and each record would be around 1500 to 2000 chars in length if concatenated into a single string.
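
    A sketch of the per-column layout (all names assumed): each tag that is actually searched gets its own indexable column, which avoids the LIKE '%...%' full scans that searching one long delimited string would force.

        CREATE TABLE exif_data (
            photo_id     INT PRIMARY KEY,
            camera_make  VARCHAR(64),
            camera_model VARCHAR(64),
            -- ... one column per tag that is searched ...
            misc_tags    TEXT   -- rarely-searched remainder, if desired
        );
        CREATE INDEX idx_exif_model ON exif_data (camera_model);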

    Read the article

  • sqlite3.OperationalError: database is locked - non-threaded application

    - by James C
    Hi, I have a Python application which throws the standard sqlite3.OperationalError: database is locked error. I have looked around the internet and could not find any solution which worked (please note that there is no multiprocessing/threading going on, and as you can see I have tried raising the timeout parameter). The SQLite file is stored on the local hard drive. The following function is one of many which access the SQLite database; it runs fine the first time it is called, but throws the above error the second time it is called (it is called as part of a for loop in another function):

        def update_index(filepath):
            path = get_setting('Local', 'web')
            stat = os.stat(filepath)
            modified = stat.st_mtime
            index_file = get_setting('Local', 'index')
            connection = sqlite3.connect(index_file, 30)
            cursor = connection.cursor()
            head, tail = os.path.split(filepath)
            cursor.execute('UPDATE hwlive SET date=? WHERE path=? AND name=?;',
                           (modified, head, tail))
            connection.commit()
            connection.close()

    Many thanks.

    Read the article

  • How can I optimize this RewriteEngine code?

    - by Lucki Mile
    I have server overload, and the server admin said that this issue is caused by the htaccess file. This is the code:

        RewriteEngine On
        RewriteBase /here/
        RewriteRule ^top/?$ index.php?mode=top [QSA]
        RewriteRule ^top/video/?$ index.php?mode=top&cat=vids [QSA]
        RewriteRule ^top/picture/?$ /index.php?mode=top&cat=pics [QSA]
        RewriteRule ^random$ index.php?mode=random [QSA]
        RewriteRule ^random/video/?$ index.php?mode=random&cat=vids [QSA]
        RewriteRule ^random/picture/?$ index.php?mode=random&cat=pics [QSA]
        RewriteRule ^new/?$ index.php [QSA]
        RewriteRule ^new/video/?$ index.php?mode=&cat=vids [QSA]
        RewriteRule ^new/picture/?$ index.php?mode=&cat=pics [QSA]
        RewriteRule ^video/([0-9]+)_(.*)$ item.php?cat=vids&id=$1 [QSA]
        RewriteRule ^picture/([0-9]+)_(.*)$ item.php?cat=pics&id=$1 [QSA]
        ErrorDocument 404 /item.php

    Read the article

  • 100+ tables to be joined

    - by deian
    Hi guys, I was wondering if anyone ever had a chance to measure how 100 joined tables would perform. Each table would have an ID column with a primary index, and all tables are related 1:1. This is a common problem in many data-entry applications where we need to collect 1000+ data points. One solution would be to have one big table with 1000+ columns; the alternative would be to split them into multiple tables and join them when necessary. So perhaps the more realistic question is: how would 30 tables (30 columns each) behave in a multi-table join? 500K-1M rows should be the expected size of the tables. Cheers
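
    For scale, the shape being measured looks like the sketch below (all names assumed): every part table shares the same primary key, and the full record is reassembled by joining on it, so each join is a primary-key lookup.

        SELECT *
        FROM   entry_part01 p01
        JOIN   entry_part02 p02 ON p02.id = p01.id
        JOIN   entry_part03 p03 ON p03.id = p01.id
        -- ... up to entry_part30 ...
        WHERE  p01.id = 12345;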

    Read the article

  • How can I test my DB speed? (Learning)

    - by acidzombie24
    I have designed a database. There are no columns with indexing, nor any code for optimizing. I am positive I should index certain columns, since I search them a lot. My question is: HOW do I test whether any part of my database will be slow? At the moment I am using SQLite, and I will be switching to either MS SQL or MySQL based on my host provider. Will creating 100,000 records in each table be enough? Or will that always be fast in SQLite, and I need to do 1 million? Do I need 10 million before a database will become slow? Also, how do I time it? I am using C#, so should I use Stopwatch, or is there an ADO.NET/SQLite function I should use?
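
    Besides timing from C#, SQL Server can report per-query timings itself once you migrate (SQLite's rough counterpart is EXPLAIN QUERY PLAN); a hedged sketch with an illustrative query:

        SET STATISTICS TIME ON;
        SET STATISTICS IO ON;

        SELECT * FROM orders WHERE customer_id = 42;
        -- the Messages output then shows parse/compile and execution times
        -- plus logical reads, which a good index should drive down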

    Read the article

  • What is a valid and reasonable alternative to a massive storage approach?

    - by Backo
    I am using Ruby on Rails 3.2.2 and MySQL. Following my previous question on "how to handle massive storage of records in database for user authorization purposes", and since the related answers (on how to solve the issue, or how to accomplish what I am looking for) aren't sufficiently detailed or require too many resources (at least for me), I would like to know what valid and reasonable alternatives to that approach exist. In a few words, this question could be phrased as: how do you handle "complex" (at the level of SQL querying) user authorizations when you have to fetch "authorized" records? That is, for example, how do you retrieve records when you would use code like the following (this code would be used mostly in index controller actions)?

        Article.readable_by_user(@current_user)
        # => Returns all articles readable by the current user.
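
    One common alternative to materialising a row per user/record pair is to derive readability at query time from a compact rules table; a hedged SQL sketch with entirely assumed names:

        SELECT a.*
        FROM   articles a
        JOIN   permissions p
               ON  p.group_id = a.group_id
               AND p.user_id  = 42
               AND p.can_read = 1;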

    Read the article
