Search Results

Search found 47392 results on 1896 pages for 'full text indexing'.


  • Typecast to an int in Octave/Matlab

    - by Leif Andersen
    I need to index into a matrix made using the linspace command, based on some data taken from an oscilloscope. Because of this, the input data is a double. However, I can't really call: Time[V0Found] where V0Found is something like 5.2. Taking index 5 is close enough, so I need to drop the decimal. I used this equation to drop the decimal: V0FoundDec = V0Found - mod(V0Found,1) Time[V0FoundDec] However, even though that drops the decimal, Octave still complains about it. So, what can I do to typecast it to an int? Thank you.

    Read the article

  • Creating a new Index in SQL when current records don't meet that index

    - by Jonathan
    Hey all - I'd like to add a unique index to a table that already contains data. I know that there are a few records currently in the table that are not unique with respect to this new index. Clearly, MySQL won't let me add the index until all of them are. I need a query to identify the rows which currently share the same index values, so I can then delete or modify these rows as necessary. The new index contains 6 fields. Thanks - Jonathan
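
    A minimal sketch of one way to find the offending rows, assuming a hypothetical table my_table and using col1 through col6 as placeholders for the six key fields:

        SELECT col1, col2, col3, col4, col5, col6, COUNT(*) AS dup_count
        FROM my_table
        GROUP BY col1, col2, col3, col4, col5, col6
        HAVING COUNT(*) > 1;

    Each returned row is one combination of values that appears more than once and would block the unique index.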

    Read the article

  • SolrException: Internal Server Error

    - by Priya
    Hi all, I am working on Solr in my application, using apache-solr-solrj-1.4.0.jar. When I try to call add(SolrInputDocument doc) of CommonsHttpSolrServer I get the following exception: org.apache.solr.common.SolrException: Internal Server Error Internal Server Error at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:424) at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:243) at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:105) at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:64) Can anyone please help me resolve this problem? The following are attribute values from solrconfig.xml: native false true

    Read the article

  • Adding # & search sign to TableIndex in UITableView

    - by sagar
    In the iPhone's native phone book there is a search character at the top and a # character at the bottom of the section index. I want to add both of those characters to my table index. Currently I have implemented the following code. atoz=[[NSMutableArray alloc] init]; for(int i=0;i<26;i++){ [atoz addObject:[NSString stringWithFormat:@"%c",i+65]]; } - (NSArray *)sectionIndexTitlesForTableView:(UITableView *)tableView{ return atoz; } How can I have the # character and the search symbol in my UITableView? Thanks in advance for sharing your knowledge. Sagar

    Read the article

  • What Algorithm will Find New Longtail Keywords for *keyword* in PPC

    - by Becci
    I am looking for the algorithm (or combination) that would allow someone to find new longtail PPC search phrases based on, say, one corekeyword. Eg #1: word word corekeyword Eg #2: word corekeyword word Google's search tool allows a limited number vertically - mostly of eg #1 (https://adwords.google.com.au/select/KeywordToolExternal). I also know of other PPC apps that allow more volume than the Google AdWords keyword tool, but I want to find other combos that mention the corekeyword and then naturally sort by the highest search volume. Working example of exact match: corekeyword: copywriter (40,500 searches a month). Google will serve up: become a copywriter (480 searches globally/month in English). But if I specifically look up: how to become a copywriter (720 searches a month). This exact longtail keyword phrase has 300 more searches than the 3-word version spat out by Google. I want the algorithm to find any other highly searched exact longtails like: how to become a copywriter. Simply because it would save significant $ finding other longtail keywords after your campaign has been running and made Google lots of money. I don't want a concatenation algorithm (I already have one of those), because hypothetically, I don't know what the keywords I want to find will be. Any gurus out there? Becci

    Read the article

  • PHP library for keeping your site indexed by Google, Bing, etc.

    - by Ole Jak
    I need some library which would be able to keep my URLs indexed and described. So I want to say to it something like: index this new URL "www.bla-bla.com/new_url" with some keywords, or something like that. And I want to be sure that if I told my lib about my new URL, Google and the others will 100% find it as soon as possible and people will be able to find this URL on the web. Do you know any such libs?

    Read the article

  • MySQL index cardinality - performance vs storage efficiency

    - by Sean
    Say you have a MySQL 5.0 MyISAM table with 100 million rows, with one index (other than the primary key) on two integer columns. From my admittedly poor understanding of B-tree structure, I believe that a lower cardinality means the storage efficiency of the index is better, because there are fewer parent nodes. Whereas a higher cardinality means less efficient storage, but faster read performance, because it has to navigate through fewer branches to get to whatever data it is looking for to narrow down the rows for the query. (Note - by "low" vs "high", I don't mean e.g. 1 million vs 99 million for a 100 million row table. I mean more like 90 million vs 95 million.) Is my understanding correct? Related question - how does cardinality affect write performance?

    Read the article

  • Fast search in XML files in .NET (or How to index XML files)

    - by codymanix
    I have to implement a search feature which is able to quickly perform arbitrarily complex queries against XML data. If the user makes a query, all XML files must be searched to find possible matches. The users will have lots of XML files (a few 10,000 or more) which are typically a few kilobytes in size. All the XML files have almost the same structure. I already benchmarked XPath; it is too slow for my needs. How can it be done most efficiently? Is it possible to create indexes for the contents of the XML files (preserving content semantics, not just plain fulltext search)? Would it be useful to put the XML data into an (embedded) SQL database and do the queries with SQL? What other possibilities do I have?
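
    As a hedged sketch of the embedded-SQL idea, one option is to shred each file into an element/value table and index the columns that get queried; the table, column, and path names below are placeholders, not taken from the question:

        CREATE TABLE xml_elements (
            file_id       INTEGER NOT NULL,      -- which XML file the value came from
            element_path  VARCHAR(255) NOT NULL, -- e.g. '/order/customer/name'
            element_value VARCHAR(1000)          -- text content of that element
        );

        CREATE INDEX idx_xml_path_value ON xml_elements (element_path, element_value);

        -- candidate files matching one predicate; complex queries combine
        -- several of these with INTERSECT or joins on file_id
        SELECT DISTINCT file_id
        FROM xml_elements
        WHERE element_path = '/order/customer/name'
          AND element_value = 'Smith';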

    Read the article

  • Computing unique index for every poker starting hand

    - by Aly
    As there are 52 cards in a deck, we know there are 52 choose 2 = 1326 distinct matchups; however, in preflop poker these can be bucketed into 169 different hands such as AK offsuit and AK suited, since whether it is A hearts K hearts or A spades K spades makes no difference preflop. My question is: is there a nice mathematical property with which I can uniquely index each of these 169 hands (from 0 to 168 preferably)? I am trying to create a lookup table as a double[][] = new double[169][169] but have no way of converting a hand representation such as AKs (an Ace and a King of the same suit) to a unique index in this array.
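
    One common mapping, offered as a sketch rather than the canonical answer: number the ranks 0-12 and let h be the higher and l the lower rank of the hand. Treat the 169 hands as cells of a 13 x 13 grid, with pairs on the diagonal, offsuit hands in one triangle and suited hands in the other:

        index = 13*h + l    for pairs and offsuit hands (h >= l)
        index = 13*l + h    for suited hands (h > l)

    Every cell of the grid is used exactly once, so the index is unique and ranges from 0 (a pair of the lowest rank) to 168 (a pair of the highest rank), matching the [169][169] table above.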

    Read the article

  • mysql index optimization for a table with multiple indexes that index some of the same columns

    - by Sean
    I have a table that stores some basic data about visitor sessions on third party web sites. This is its structure: id, site_id, unixtime, unixtime_last, ip_address, uid There are four indexes: id, site_id/unixtime, site_id/ip_address, and site_id/uid There are many different types of ways that we query this table, and all of them are specific to the site_id. The index with unixtime is used to display the list of visitors for a given date or time range. The other two are used to find all visits from an IP address or a "uid" (a unique cookie value created for each visitor), as well as determining if this is a new visitor or a returning visitor. Obviously storing site_id inside 3 indexes is inefficient for both write speed and storage, but I see no way around it, since I need to be able to quickly query this data for a given specific site_id. Any ideas on making this more efficient? I don't really understand B-trees besides some very basic stuff, but it's more efficient to have the left-most column of an index be the one with the least variance - correct? Because I considered having the site_id being the second column of the index for both ip_address and uid but I think that would make the index less efficient since the IP and UID are going to vary more than the site ID will, because we only have about 8000 unique sites per database server, but millions of unique visitors across all ~8000 sites on a daily basis. I've also considered removing site_id from the IP and UID indexes completely, since the chances of the same visitor going to multiple sites that share the same database server are quite small, but in cases where this does happen, I fear it could be quite slow to determine if this is a new visitor to this site_id or not. The query would be something like: select id from sessions where uid = 'value' and site_id = 123 limit 1 ... so if this visitor had visited this site before, it would only need to find one row with this site_id before it stopped. This wouldn't be super fast necessarily, but acceptably fast. But say we have a site that gets 500,000 visitors a day, and a particular visitor loves this site and goes there 10 times a day. Now they happen to hit another site on the same database server for the first time. The above query could take quite a long time to search through all of the potentially thousands of rows for this UID, scattered all over the disk, since it wouldn't be finding one for this site ID. Any insight on making this as efficient as possible would be appreciated :) Update - this is a MyISAM table with MySQL 5.0. My concerns are both with performance as well as storage space. This table is both read and write heavy. If I had to choose between performance and storage, my biggest concern is performance - but both are important. We use memcached heavily in all areas of our service, but that's not an excuse to not care about the database design. I want the database to be as efficient as possible.
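
    For reference, a sketch of the index layout described above and of the returning-visitor check, using the column names from the question (the index names are made up):

        CREATE INDEX idx_sessions_site_time ON sessions (site_id, unixtime);
        CREATE INDEX idx_sessions_site_ip   ON sessions (site_id, ip_address);
        CREATE INDEX idx_sessions_site_uid  ON sessions (site_id, uid);

        -- "is this a new visitor to this site?" check
        SELECT id FROM sessions WHERE uid = 'value' AND site_id = 123 LIMIT 1;

    Either column order can satisfy the two-column equality lookup; the order matters more for queries that filter on only the leading column.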

    Read the article

  • Implement date picker and time picker on button click and store in edit text boxes

    - by user3597791
    Hi I am trying to implement a date picker and time picker in button click and store in edit text boxes. I have tried numerous things but since i suck at coding I cant get any of them to work. Please find my class and xml below and i would be grateful for any help import android.app.Activity; import android.content.Intent; import android.database.Cursor; import android.graphics.BitmapFactory; import android.net.Uri; import android.os.Bundle; import android.provider.MediaStore; import android.view.View; import android.view.View.OnClickListener; import android.widget.Button; import android.widget.EditText; import android.widget.ImageView; import android.widget.Toast; public class NewEvent extends Activity { private static int RESULT_LOAD_IMAGE = 1; private EventHandler handler; private String picturePath = ""; private String name; private String place; private String date; private String time; private String photograph; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.new_event); handler = new EventHandler(getApplicationContext()); ImageView iv_user_photo = (ImageView) findViewById(R.id.iv_user_photo); iv_user_photo.setOnClickListener(new OnClickListener() { @Override public void onClick(View arg0) { Intent intent = new Intent(Intent.ACTION_PICK, android.provider.MediaStore.Images.Media.EXTERNAL_CONTENT_URI); startActivityForResult(intent, RESULT_LOAD_IMAGE); } }); Button btn_add = (Button) findViewById(R.id.btn_add); btn_add.setOnClickListener(new OnClickListener() { @Override public void onClick(View arg0) { EditText et_name = (EditText) findViewById(R.id.et_name); name = et_name.getText().toString(); EditText et_place = (EditText) findViewById(R.id.et_place); place = et_place.getText().toString(); EditText et_date = (EditText) findViewById(R.id.et_date); date = et_date.getText().toString(); EditText et_time = (EditText) findViewById(R.id.et_time); time = et_time.getText().toString(); ImageView iv_photograph = (ImageView) findViewById(R.id.iv_user_photo); photograph = picturePath; Event event = new Event(); event.setName(name); event.setPlace(place); event.setDate(date); event.setTime(time); event.setPhotograph(photograph); Boolean added = handler.addEventDetails(event); if(added){ Intent intent = new Intent(NewEvent.this, MainEvent.class); startActivity(intent); }else{ Toast.makeText(getApplicationContext(), "Event data not added. 
Please try again", Toast.LENGTH_LONG).show(); } } }); } @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); if (requestCode == RESULT_LOAD_IMAGE && resultCode == RESULT_OK && null != data) { Uri imageUri = data.getData(); String[] filePathColumn = { MediaStore.Images.Media.DATA }; Cursor cursor = getContentResolver().query(imageUri, filePathColumn, null, null, null); cursor.moveToFirst(); int columnIndex = cursor.getColumnIndex(filePathColumn[0]); picturePath = cursor.getString(columnIndex); cursor.close(); Here is my xml: <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical" android:padding="10dp"> <TextView android:id="@+id/tv_new_event_title" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Add New Event" android:textAppearance="?android:attr/textAppearanceLarge" android:layout_alignParentTop="true" /> <Button android:id="@+id/btn_add" android:layout_width="match_parent" android:layout_height="wrap_content" android:text="Add Event" android:layout_alignParentBottom="true" /> <ScrollView android:layout_width="match_parent" android:layout_height="match_parent" android:layout_below="@id/tv_new_event_title" android:layout_above="@id/btn_add"> <LinearLayout android:layout_width="fill_parent" android:layout_height="wrap_content" android:orientation="vertical"> <ImageView android:id="@+id/iv_user_photo" android:src="@drawable/add_user_icon" android:layout_width="100dp" android:layout_height="100dp"/> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Event:" /> <EditText android:id="@+id/et_name" android:layout_width="match_parent" android:layout_height="wrap_content" android:ems="10" android:inputType="text" > <requestFocus /> </EditText> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Place:" /> <EditText android:id="@+id/et_place" android:layout_width="match_parent" android:layout_height="wrap_content" android:ems="10" android:inputType="text" > <requestFocus /> </EditText> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Date:" /> <EditText android:id="@+id/et_date" android:layout_width="match_parent" android:layout_height="wrap_content" android:ems="10" android:inputType="date" /> <Button android:id="@+id/button" style="?android:attr/buttonStyleSmall" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Button" /> <requestFocus /> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Time:" /> <EditText android:id="@+id/et_time" android:layout_width="match_parent" android:layout_height="wrap_content" android:ems="10" android:inputType="time" /> <Button android:id="@+id/button1" style="?android:attr/buttonStyleSmall" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Button1" /> <requestFocus /> </LinearLayout> </ScrollView> </RelativeLayout>

    Read the article

  • How to improve performance of non-scalar aggregations on denormalized tables

    - by The Lazy DBA
    Suppose we have a denormalized table with about 80 columns, which grows at the rate of ~10 million rows (about 5GB) per month. We currently have 3 1/2 years of data (~400M rows, ~200GB). We create a clustered index to best suit retrieving data from the table on the following columns that serve as our primary key... [FileDate] ASC, [Region] ASC, [KeyValue1] ASC, [KeyValue2] ASC ... because when we query the table, we always have the entire primary key. So these queries always result in clustered index seeks and are therefore very fast, and fragmentation is kept to a minimum. However, we do have a situation where we want to get the most recent FileDate for every Region, typically for reports, i.e. SELECT [Region] , MAX([FileDate]) AS [FileDate] FROM HugeTable GROUP BY [Region] The "best" solution I can come up with for this is to create a non-clustered index on Region. Although it means an additional insert on the table during loads, the hit is minimal (we load 4 times per day, so fewer than 100,000 additional index inserts per load). Since the table is also partitioned by FileDate, results for our query come back quickly enough (200ms or so), and that result set is cached until the next load. However, I'm guessing that someone with more data warehousing experience might have a solution that's more optimal, as this, for some reason, doesn't "feel right".
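
    For comparison, a hedged sketch of one commonly suggested variant: key the nonclustered index on Region and FileDate together so the per-region MAX can be answered from the index alone (the index name is illustrative):

        CREATE NONCLUSTERED INDEX IX_HugeTable_Region_FileDate
            ON HugeTable ([Region], [FileDate] DESC);

        SELECT [Region], MAX([FileDate]) AS [FileDate]
        FROM HugeTable
        GROUP BY [Region];

    The trade-off is the same extra write cost per load described above, in exchange for a narrow covering index for this report query.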

    Read the article

  • Tables with no Primary Key

    - by Matt Hamilton
    I have several tables whose only unique data is a uniqueidentifier (a Guid) column. Because guids are non-sequential (and they're client-side generated so I can't use newsequentialid()), I have made a non-primary, non-clustered index on this ID field rather than giving the tables a clustered primary key. I'm wondering what the performance implications are for this approach. I've seen some people suggest that tables should have an auto-incrementing ("identity") int as a clustered primary key even if it doesn't have any meaning, as it means that the database engine itself can use that value to quickly look up a row instead of having to use a bookmark. My database is merge-replicated across a bunch of servers, so I've shied away from identity int columns as they're a bit hairy to get right in replication. What are your thoughts? Should tables have primary keys? Or is it ok to not have any clustered indexes if there are no sensible columns to index that way?
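
    For reference, a hedged sketch of the identity-plus-guid pattern mentioned above (table and column names are placeholders, and the merge-replication caveats in the question still apply):

        CREATE TABLE Widget (
            WidgetID   INT IDENTITY(1,1) NOT NULL,   -- surrogate clustering key
            WidgetGuid UNIQUEIDENTIFIER  NOT NULL,   -- client-generated identifier
            CONSTRAINT PK_Widget PRIMARY KEY CLUSTERED (WidgetID),
            CONSTRAINT UQ_Widget_Guid UNIQUE NONCLUSTERED (WidgetGuid)
        );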

    Read the article

  • Lucene and Special Characters

    - by Brandon
    I am using Lucene.Net 2.0 to index some fields from a database table. One of the fields is a 'Name' field which allows special characters. When I perform a search, it does not find my document that contains a term with special characters. I index my field as such: Directory DALDirectory = FSDirectory.GetDirectory(@"C:\Indexes\Name", false); Analyzer analyzer = new StandardAnalyzer(); IndexWriter indexWriter = new IndexWriter(DALDirectory, analyzer, true, IndexWriter.MaxFieldLength.UNLIMITED); Document doc = new Document(); doc.Add(new Field("Name", "Test (Test)", Field.Store.YES, Field.Index.TOKENIZED)); indexWriter.AddDocument(doc); indexWriter.Optimize(); indexWriter.Close(); And I search doing the following: value = value.Trim().ToLower(); value = QueryParser.Escape(value); Query searchQuery = new TermQuery(new Term(field, value)); Searcher searcher = new IndexSearcher(DALDirectory); TopDocCollector collector = new TopDocCollector(searcher.MaxDoc()); searcher.Search(searchQuery, collector); ScoreDoc[] hits = collector.TopDocs().scoreDocs; If I perform a search for field as 'Name' and value as 'Test', it finds the document. If I perform the same search with 'Name' and value as 'Test (Test)', then it does not find the document. Even stranger, if I remove the QueryParser.Escape line and do a search for a GUID (which, of course, contains hyphens) it finds documents where the GUID value matches, but performing the same search with the value as 'Test (Test)' still yields no results. I am unsure what I am doing wrong. I am using the QueryParser.Escape method to escape the special characters and am storing the field and searching per Lucene.Net's examples. Any thoughts?

    Read the article

  • Forcing a query in MySQL to use a specific index

    - by Hossein
    Hi, I have a large table consisting of only 3 columns (id (INT), bookmarkID (INT), tagID (INT)). I have two BTREE indexes, one each on the bookmarkID and tagID columns. This table has about 21 million records. I am trying to run this query: SELECT bookmarkID,COUNT(bookmarkID) AS count FROM bookmark_tag_map GROUP BY tagID,bookmarkID HAVING tagID IN (-----"tagIDList"-----) AND count >= N which takes ages to return the results. I read somewhere that if I make an index that has tagID and bookmarkID together, I will get a much faster result. I created the index after some time and tried the query again, but it seems that this query is not using the new index that I have made. I ran EXPLAIN and saw that this is actually true. My question now is: how can I force a query to use a specific index? Comments on other ways to make the query faster are also welcome. Thanks
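
    A minimal sketch of the index hint in question, assuming the new composite index was created under the placeholder name idx_tag_bookmark; the tagID filter is shown in the WHERE clause rather than HAVING, since HAVING is applied after grouping and cannot help the index:

        ALTER TABLE bookmark_tag_map ADD INDEX idx_tag_bookmark (tagID, bookmarkID);

        SELECT bookmarkID, COUNT(bookmarkID) AS count
        FROM bookmark_tag_map FORCE INDEX (idx_tag_bookmark)
        WHERE tagID IN (-----"tagIDList"-----)
        GROUP BY tagID, bookmarkID
        HAVING count >= N;

    USE INDEX (idx_tag_bookmark) is the softer variant of the same hint; FORCE INDEX additionally tells the optimizer to treat a table scan as very expensive.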

    Read the article

  • Entity Framework use of indexes and foreign keys

    - by David
    I want to be able to search a table quite quickly using the Entity Framework. Say I have a Contacts table, a JobsToDo table and a matrix table linking the two, e.g. Contacts_JobsToDo_Mtx, and I specify two foreign keys in the Contacts_JobsToDo_Mtx table. If I wanted to search this Mtx table, do I need to specify an index on the two foreign keys? Or, by the fact that they are foreign keys, are they considered indexed anyway? Will the Entity Framework be able to search through the Mtx table quickly without having to specify an index on both keys? Thanks!
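
    As a point of reference, in SQL Server declaring a foreign key does not by itself create an index on the referencing column, so a sketch like the following is typical for a junction table; the column names here are assumed from the description, not taken from the question:

        CREATE NONCLUSTERED INDEX IX_Mtx_ContactID
            ON Contacts_JobsToDo_Mtx (ContactID);

        CREATE NONCLUSTERED INDEX IX_Mtx_JobToDoID
            ON Contacts_JobsToDo_Mtx (JobToDoID);

    Entity Framework only generates the SQL; whether the lookup is fast depends on these database-side indexes.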

    Read the article

  • Why does SQL Server recommend creating an index when it already exists?

    - by Pierre-Alain Vigeant
    I ran a very basic query against one of our tables and noticed that the query processor in the execution plan is recommending that we create an index on a column. The query is SELECT SUM(DATALENGTH(Data)) FROM Item WHERE Namespace = 'http://some_url/some_namespace/' After running, I get the following message // The Query Processor estimates that implementing the following index could improve the query cost by 96.7211%. CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>] ON [dbo].[Item] ([Namespace]) My problem is that I already have such an index on that column: CREATE NONCLUSTERED INDEX [IX_ItemNamespace] ON [dbo].[Item] ( [Namespace] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] Why is SQL Server recommending that I create such an index when it already exists?

    Read the article

  • Choice of primary index for MySQL InnoDB

    - by Saif Bechan
    I have an auction website where users can place a bid on a product. Now I have a primary index on the bid table for easy access to the last placed bid on the product. This index is just a unique auto-incrementing value. During the week this number becomes huge!! I was wondering if this is a good setup for the primary key in an InnoDB table. The bids table consists of the following important fields: table: bids fields: user_id,product_id,bid So what I want to do is make the primary key out of these 3 fields combined. Is this a good idea, or is this just too much for InnoDB keys?
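
    A hedged sketch of the change being considered; the existing auto-increment column is assumed to be named id, and it must lose its auto_increment attribute before the primary key can be swapped:

        ALTER TABLE bids
            MODIFY id INT NOT NULL,
            DROP PRIMARY KEY,
            ADD PRIMARY KEY (user_id, product_id, bid);

    Worth noting when weighing this: in InnoDB the primary key is the clustered index and its columns are also stored in every secondary index, so a wide composite key makes secondary indexes larger than a single integer key would.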

    Read the article

  • SQL Overlapping and Multi-Column Indexes

    - by durilai
    I am attempting to tune some stored procedures and have a question on indexes. I used the tuning advisor and it recommended two indexes, both for the same table. The issue is that one index is on a single column and the other is on multiple columns, one of which is that same column from the first. My question is: why, and what is the difference? CREATE NONCLUSTERED INDEX [_dta_index_Table1_5_2079723603__K23_K17_K13_K12_K2_K10_K22_K14_K19_K20_K9_K11_5_6_7_15_18] ON [dbo].[Table1] ( [EfctvEndDate] ASC, [StuLangCodeKey] ASC, [StuBirCntryCodeKey] ASC, [StuBirStOrProvncCodeKey] ASC, [StuKey] ASC, [GndrCodeKey] ASC, [EfctvStartDate] ASC, [StuHspncEnctyIndctr] ASC, [StuEnctyMsngIndctr] ASC, [StuRaceMsngIndctr] ASC, [StuBirDate] ASC, [StuBirCityName] ASC ) INCLUDE ( [StuFstNameLgl], [StuLastOrSrnmLgl], [StuMdlNameLgl], [StuIneligSnorImgrntIndctr], [StuExpctdGrdtngClYear] ) WITH (SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF) ON [PRIMARY] go CREATE NONCLUSTERED INDEX [_dta_index_Table1_5_2079723603__K23] ON [dbo].[Table1] ( [EfctvEndDate] ASC )WITH (SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF) ON [PRIMARY]

    Read the article

  • Index Tuning for SSIS tasks

    - by Raj More
    I am loading tables in my warehouse using SSIS. Since my SSIS is slow, it seemed like a great idea to build indexes on the tables. There are no primary keys (and therefore, foreign keys), indexes (clustered or otherwise), constraints, on this warehouse. In other words, it is 100% efficiency free. We are going to put indexes based on usage - by analyzing new queries and current query performance. So, instead of doing it our old fashioned sweat and grunt way of actually reading the SQL statements and execution plans, I thought I'd put the shiny new Database Engine Tuning Advisor to use. I turned SQL logging off in my SSIS package and ran a "Tuning" trace, saved it to a table and analyzed the output in the Tuning Advisor. Most of the lookups are done as: exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',1 exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',2 exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',3 exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',4 and when analyzed, these statements have the reason "Event does not reference any tables". Huh? Does it not see the FROM dbo.Company??!! What is going on here? So, I have multiple questions: How do I get it to capture the actual statement executing in my trace, not what was submitted in a batch? Are there any best practices to follow for tuning performance related to SSIS packages running against SQL Server 2008?

    Read the article

  • Use a vector to index a matrix without linear index

    - by David_G
    G'day, I'm trying to find a way to use a vector of [x,y] points to index from a large matrix in MATLAB. Usually, I would convert the subscript points to the linear index of the matrix (for example: Use a vector as an index to a matrix in MATLab). However, the matrix is 4-dimensional, and I want to take all of the elements of the 3rd and 4th dimensions that have the same 1st and 2nd dimension. Let me hopefully demonstrate with an example: Matrix = nan(4,4,2,2); % where the dimensions are (x,y,depth,time) Matrix(1,2,:,:) = 999; % note that this value could change in depth (3rd dim) and time (4th dim) Matrix(3,4,:,:) = 888; % note that this value could change in depth (3rd dim) and time (4th dim) Matrix(4,4,:,:) = 124; Now, I want to be able to index with the subscripts (1,2) and (3,4), etc. and return not only the 999 and 888 which exist in Matrix(:,:,1,1) but also the contents which exist at Matrix(:,:,1,2), Matrix(:,:,2,1) and Matrix(:,:,2,2), and so on (IRL, the dimensions of Matrix might be more like size(Matrix) = (300 250 30 200)). I don't want to use linear indices because I would like the results to be in a similar vector fashion. For example, I would like a result which is something like: ans(time=1) 999 888 124 999 888 124 ans(time=2) etc etc etc etc etc etc I'd also like to add that due to the size of the matrix I'm dealing with, speed is an issue here - thus why I'd like to use subscript indices to index into the data. I should also mention that (unlike this question: Accessing values using subscripts without using sub2ind) since I want all the information stored in the extra dimensions, 3 and 4, for the i and jth indices, I don't think that a slightly faster version of sub2ind would cut it.

    Read the article

  • What are some best practices and "rules of thumb" for creating database indexes?

    - by Ash
    I have an app which cycles through a huge number of records in a database table and performs a number of SQL and .Net operations on records within that database (currently I am using Castle.ActiveRecord on PostgreSQL). I added some basic btree indexes on a couple of the fields and, as you would expect, the performance of the SQL operations increased substantially. Wanting to make the most of DBMS performance, I want to make better educated choices about what I should index in all my projects. I understand that there is a detriment to performance when doing inserts (as the database needs to update the index as well as the data), but what suggestions and best practices should I consider when creating database indexes? How do I best select the fields/combination of fields for a set of database indexes (rules of thumb)? Also, how do I best select which index to use as a clustered index? And when it comes to the access method, under what conditions should I use a btree over a hash or a gist or a gin (what are they anyway?).
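
    As a hedged sketch of the PostgreSQL syntax for the access methods asked about (table and column names are placeholders, and the column types are assumed to suit each method):

        -- btree (the default): equality and range predicates, and sorted output
        CREATE INDEX idx_records_created_at ON records (created_at);

        -- multi-column btree: lead with the column you always filter on
        CREATE INDEX idx_records_account_created ON records (account_id, created_at);

        -- hash: equality-only lookups
        CREATE INDEX idx_records_token ON records USING hash (token);

        -- gin: containment/membership queries into composite values such as arrays or tsvector full-text columns
        CREATE INDEX idx_records_tags ON records USING gin (tags);

        -- gist: geometric and nearest-neighbour style searches
        CREATE INDEX idx_records_location ON records USING gist (location);

        -- PostgreSQL has no clustered index as such, but CLUSTER rewrites the table in index order once
        CLUSTER records USING idx_records_account_created;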

    Read the article
