Search Results

Search found 14037 results on 562 pages for 'alter index'.


  • lightweight search engine for asp.net

    - by Michael
    I'm looking to develop a CMS project based on Umbraco, but I also need to index the documents created and to offer search functionality. I would therefore like suggestions for a lightweight search engine available in .NET. The main requirement is that it be simple and efficient (nothing as complex as Solr or Sphinx).

    Read the article

  • Prism: How to render one view on top of another

    - by Nate Noonen
    We have a Prism/WPF application and are using an expander to animate a menu. When the expander expands, its content is rendered behind the main region's content. The menu is in a different region than the content it is supposed to overlay (since the menu governs what items go into that region), which is why this is occurring. We have tried setting the Z-Index of the ContentControls, to no avail.

    Read the article

  • IList<T> and IReadOnlyList<T>

    - by Safak Gür
    My problem is that I have a method whose parameter can be any collection that:

    - has a Count property
    - has an integer indexer (get-only)

    and I don't know what type this parameter should be. Before .NET 4.5 I would have chosen IList<T>, since there is no other indexable collection interface and arrays implement it, which is a big plus. But .NET 4.5 introduces the new IReadOnlyList<T> interface and I want my method to support that, too. How can I write this method to support both IList<T> and IReadOnlyList<T> without violating basic principles like DRY? Can I convert IList<T> to IReadOnlyList<T> somehow in an overload? What is the way to go here?

    Edit: Daniel's answer gave me some good ideas, so I guess I'll go with this:

        public void Do<T>(IList<T> collection)
        {
            DoInternal(collection, collection.Count, i => collection[i]);
        }

        public void Do<T>(IReadOnlyList<T> collection)
        {
            DoInternal(collection, collection.Count, i => collection[i]);
        }

        private void DoInternal<T>(IEnumerable<T> collection, int count, Func<int, T> indexer)
        {
            // Stuff
        }

    Or I could just accept an IReadOnlyList<T> and provide a helper like this:

        public static class CollectionEx
        {
            public static IReadOnlyList<T> AsReadOnly<T>(this IList<T> collection)
            {
                if (collection == null)
                    throw new ArgumentNullException("collection");

                return new ReadOnlyWrapper<T>(collection);
            }

            private sealed class ReadOnlyWrapper<T> : IReadOnlyList<T>
            {
                private readonly IList<T> _Source;

                public int Count { get { return _Source.Count; } }
                public T this[int index] { get { return _Source[index]; } }

                public ReadOnlyWrapper(IList<T> source) { _Source = source; }

                public IEnumerator<T> GetEnumerator() { return _Source.GetEnumerator(); }
                IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
            }
        }

    Then I could call Do(array.AsReadOnly()).

    Read the article

  • open app in mobile safari browser

    - by How2iphone
    I have been using a UIWebView for an app that consists of an index.html plus CSS and JavaScript files. I'd like to do away with the UIWebView and open the app in the Safari browser instead. All source files are located within the app bundle. Is this possible, and if so, can someone point me in the right direction? Thanks in advance for any help offered.

    Read the article

  • How to correct this rewrite rule?

    - by Justin John
    I have a URL like http://www.mydomain.com/levels/home?mode=48bb6e862e54f2a795ffc4e541caed4d. I need to serve it as http://www.mydomain.com/medium. I am not familiar with URL rewriting. I tried RewriteRule ^medium/?$ levels/home?mode=48bb6e862e54f2a795ffc4e541caed4d, but it did not work correctly. Full rewrite rules:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^medium/?$ levels/home?mode=48bb6e862e54f2a795ffc4e541caed4d
        RewriteRule ^(.*)$ index.php [QSA,L]
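
    A RewriteCond block only applies to the single RewriteRule that immediately follows it, so in the excerpt above the file/directory checks guard the medium rule rather than the catch-all, and the medium rule has no [L] flag to stop processing. One common arrangement - a sketch, not verified against this particular site - puts the specific rule first with its own flags and lets the guarded catch-all handle everything else:

        RewriteEngine On
        # serve /medium from the levels/home controller, keeping any extra query string
        RewriteRule ^medium/?$ levels/home?mode=48bb6e862e54f2a795ffc4e541caed4d [L,QSA]
        # front-controller fallback for everything that is not a real file or directory
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ index.php [QSA,L]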

    Read the article

  • Optimize SELECT DISTINCT CONCAT query in MySQL

    - by L. Cosio
    Hello! I'm running this query:

        SELECT DISTINCT CONCAT(ALFA_CLAVE, FECHA_NACI)
        FROM listado
        GROUP BY ALFA_CLAVE
        HAVING COUNT(CONCAT(ALFA_CLAVE, FECHA_NACI)) > 1

    Is there any way to optimize it? Queries are taking 2-3 hours on a table with 850,000 rows. Would adding an index on ALFA_CLAVE and FECHA_NACI work? Thanks in advance.
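
    A composite index over both columns is the usual first step: it lets MySQL read ALFA_CLAVE and FECHA_NACI straight from the index (a covering index) and walk it in GROUP BY order instead of scanning and sorting the table. A minimal sketch - the index name is arbitrary, and key-length limits may force prefix lengths if the columns are long strings:

        ALTER TABLE listado
          ADD INDEX idx_clave_naci (ALFA_CLAVE, FECHA_NACI);

    Whether it removes the whole 2-3 hours is another question; running EXPLAIN before and after is the way to confirm the index is actually used.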

    Read the article

  • rewrite rule to skip folder [closed]

    - by redcoder
        RewriteEngine on
        RewriteBase /tradesalvage/demo
        RewriteRule ^featured-cars/?$ index.php [L]
        RewriteRule ^current-stock/?$ carlist.php [L]
        RewriteRule ^about-us/?$ aboutus.php [L]
        ErrorDocument 500 /tradesalvage/demo/500.php
        ErrorDocument 404 /tradesalvage/demo/404.php

    I have the above rules in .htaccess. When I access the URL http://localhost/tradesalvage/demo/about-us, it rewrites to the aboutus.php file, and it also works fine with the rest of the rules. But I have a problem now that I have created an admin folder: when I access http://localhost/tradesalvage/demo/admin/add-data, it goes to the 404 error page. How do I write the rules to skip the admin folder?
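
    Assuming the admin area has its own handler (or its own .htaccess) for URLs like admin/add-data, one common technique is a pass-through rule placed before the others: the "-" substitution means "do not rewrite", and [L] stops rule processing for anything under that folder. A sketch only, since the excerpt does not show how admin/add-data is meant to be routed:

        # leave everything under admin/ alone and stop processing further rules
        RewriteRule ^admin(/.*)?$ - [L]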

    Read the article

  • Variable host IP address in iptables rule

    - by DrakeES
    I am running CentOS 6.4 with OpenVZ on my laptop. In order to provide Internet access for the VEs, I have to apply the following rule on the laptop:

        iptables -t nat -A POSTROUTING -j SNAT --to-source <LAPTOP_IP>

    It works fine. However, I have to work in different places - office, home, partner's office etc. The IP of my laptop is different in each of those places, so I have to alter the rule above each time I change location. I have created a workaround which determines the IP and applies the rule:

        #!/bin/bash
        IP=$(ifconfig | awk -F':' '/inet addr/&&!/127.0.0.1/{split($2,_," ");print _[1]}')
        iptables -t nat -A POSTROUTING -j SNAT --to-source $IP

    The workaround above works; I just still have to execute it manually. Perhaps I could make it a hook that runs whenever my laptop obtains an IP address from DHCP - how can I do that? Also, I am wondering whether there is a more elegant way of getting this done in the first place with iptables. Is there a syntax that lets me specify "the current hardware IP address" in the rule?
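
    For the "elegant way" part, iptables has a NAT target built for exactly this situation: MASQUERADE looks up the outgoing interface's current address when each packet is handled, so nothing has to be re-applied when the laptop's IP changes. A minimal sketch, assuming the laptop reaches the Internet through an interface named eth0 (adjust the interface name to your setup):

        # source-NAT forwarded traffic to whatever address eth0 has right now
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    For the DHCP-hook part, dhclient on CentOS runs exit-hook scripts after obtaining a lease (e.g. /etc/dhclient-exit-hooks); treat the exact path as an assumption to confirm on your system.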

    Read the article

  • Why do my table values return nil when I clearly initialized them?

    - by user3717078
        players = {}

        function newPlayer(name)
            -- assign each player their x and y coordinates, which is x: 200 and y: 100
            players[name] = {x = 200, y = 100}
        end

        function checkPosition(name?) -- Do I need a parameter?
            -- says players[name].x is a nil value
            if players[name].x == 200 and players[name].y == 100 then
                print("good")
            else
                print("bad")
            end
        end

    Error: attempt to index ? (a nil value)

    Current situation: the code above says players[name].x is a nil value. I would like to know why, since I thought I assigned it in the function newPlayer.

    Read the article

  • How to insert a link into a jQuery grid column

    - by kumar
    Hello friends, can anyone tell me how to insert a link into a jQuery grid (jqGrid) column? I have a column defined with an edit type:

        { name: 'Comments', index: 'Comments', editable: true, edittype: 'textarea', editoptions: { rows: "2", cols: "10" } }

    I need to insert comments into this column, so I would like a link to click in the column that brings up a popup bubble in the window for entering comments - something more user friendly. Can anybody suggest an approach? Thanks

    Read the article

  • What data actually gets cached in InnoDB/MySQL?

    - by ming yeow
    Hi folks, I am trying to optimize performance for my database. My question is: what gets cached in the DB memory? For example, take a table with three columns: key (indexed), data (not indexed), updated (not indexed).

    - Select * where updated = 20100202 (the DB will do a scan - will the scanned rows be kept in memory?)
    - Select * where key = 20 (the DB will refer to the index - will the identified rows be kept in memory?)
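
    For what it's worth, InnoDB caches data and index pages (not individual rows) in its buffer pool, so both kinds of query pull the pages they touch into memory; how long they stay depends on the pool size and what else competes for it. Standard statements for checking how the pool is sized and used, shown as a sketch:

        SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
        SHOW ENGINE INNODB STATUS\G  -- see the "BUFFER POOL AND MEMORY" section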

    Read the article

  • Is it possible to handle such a URL?

    - by Vitaly
    http://www.mysite.com/http://www.test.com - I have tried many different methods using .htaccess with no luck. I need to get that second URL, which comes in as a parameter. Is it possible to redirect it to index.php and read it from $_SERVER["REQUEST_URI"], or is there some other method? Thanks

    Read the article

  • Complex MySQL table select/join with pre-condition

    - by Howard
    Hello, I have the schema below:

        CREATE TABLE `vocabulary` (
          `vid` int(10) unsigned NOT NULL auto_increment,
          `name` varchar(255),
          PRIMARY KEY vid (`vid`)
        );

        CREATE TABLE `term` (
          `tid` int(10) unsigned NOT NULL auto_increment,
          `vid` int(10) unsigned NOT NULL default '0',
          `name` varchar(255),
          PRIMARY KEY tid (`tid`)
        );

        CREATE TABLE `article` (
          `aid` int(10) unsigned NOT NULL auto_increment,
          `body` text,
          PRIMARY KEY aid (`aid`)
        );

        CREATE TABLE `article_index` (
          `nid` int(10) unsigned NOT NULL default '0',
          `tid` int(10) unsigned NOT NULL default '0'
        );

        INSERT INTO `vocabulary` VALUES (1, 'vocabulary 1');
        INSERT INTO `vocabulary` VALUES (2, 'vocabulary 2');

        INSERT INTO `term` VALUES (1, 1, 'term v1 t1');
        INSERT INTO `term` VALUES (2, 1, 'term v1 t2');
        INSERT INTO `term` VALUES (3, 2, 'term v2 t3');
        INSERT INTO `term` VALUES (4, 2, 'term v2 t4');
        INSERT INTO `term` VALUES (5, 2, 'term v2 t5');

        INSERT INTO `article` VALUES (1, '');
        INSERT INTO `article` VALUES (2, '');
        INSERT INTO `article` VALUES (3, '');
        INSERT INTO `article` VALUES (4, '');
        INSERT INTO `article` VALUES (5, '');

        INSERT INTO `article_index` VALUES (1, 1);
        INSERT INTO `article_index` VALUES (1, 3);
        INSERT INTO `article_index` VALUES (2, 2);
        INSERT INTO `article_index` VALUES (3, 1);
        INSERT INTO `article_index` VALUES (3, 3);
        INSERT INTO `article_index` VALUES (4, 3);
        INSERT INTO `article_index` VALUES (5, 3);
        INSERT INTO `article_index` VALUES (5, 4);

    Example: select the terms of a given vocabulary (with a non-zero article count), e.g. vid = 2:

        SELECT a.tid, COUNT(*) AS article_count
        FROM term t
        JOIN article_index a ON t.tid = a.tid
        WHERE t.vid = 2
        GROUP BY t.tid;

        +-----+---------------+
        | tid | article_count |
        +-----+---------------+
        |   3 |             4 |
        |   4 |             1 |
        +-----+---------------+

    Question: select the terms
    a. of a given vocabulary (with a non-zero article count), e.g. vid = 1, which gives terms {1, 2};
    b. restricted to those terms linked to articles which are in turn linked to terms under vid = 2, which leaves {1}; term tid = 2 is excluded since its article has no link to a term under vid = 2.

    SQL: any idea? Expected result:

        +-----+---------------+
        | tid | article_count |
        +-----+---------------+
        |   1 |             2 |
        +-----+---------------+
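
    One way to express requirement (b), offered as a sketch against the sample data rather than a tested production query, is to keep only the term/article pairs whose article also appears in article_index joined to a vid = 2 term:

        SELECT t1.tid, COUNT(DISTINCT a1.nid) AS article_count
        FROM term t1
        JOIN article_index a1 ON a1.tid = t1.tid
        WHERE t1.vid = 1
          AND EXISTS (
                SELECT 1
                FROM article_index a2
                JOIN term t2 ON t2.tid = a2.tid
                WHERE a2.nid = a1.nid
                  AND t2.vid = 2
          )
        GROUP BY t1.tid;

    On the sample data this returns tid = 1 with article_count = 2, matching the expected result.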

    Read the article

  • Indexing over the results returned by selenium

    - by Guy
    Hi, I am trying to index over the results returned by an XPath. For example, the XPath '//a[@id="someID"]' can return a few results, and I want to get a list of them. I thought that this would work:

        numOfResults = sel.get_xpath_count(xpath)
        l = []
        for i in range(1, numOfResults + 1):
            l.append(sel.get_text('(%s)[%d]' % (xpath, i)))

    because doing something similar in Firefox's XPath checker works: (//a[@id='someID'])[2] returns the 2nd result. Any ideas why the behavior would be different, and how to do such a thing with Selenium? Thanks
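
    One detail worth checking: Selenium RC only auto-detects a locator as XPath when it starts with //, so a locator that starts with a parenthesis falls through to the default "identifier" strategy. Adding an explicit strategy prefix may be all that is needed - a sketch of the same loop with that one change (the int() wrapper is a defensive assumption in case the count comes back as a string):

        numOfResults = int(sel.get_xpath_count(xpath))
        results = []
        for i in range(1, numOfResults + 1):
            # 'xpath=' keeps the leading '(' from being treated as an identifier locator
            results.append(sel.get_text('xpath=(%s)[%d]' % (xpath, i)))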

    Read the article

  • How to return a const QString reference in case of failure?

    - by moala
    Hi, consider the following code:

        const QString& MyClass::getID(int index) const
        {
            if (index < myArraySize && myArray[index]) {
                return myArray[index]->id;      // id is a QString
            } else {
                return my_global_empty_qstring; // a global empty QString
            }
        }

    How can I avoid the global empty QString without changing the return type of the method? (It seems that returning a reference to an empty QString allocated on the stack is a bad idea.) Thanks.
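
    One common idiom, shown here as a sketch rather than the only answer, is a function-local static: it has program lifetime (so the returned reference stays valid), but it is scoped to the function instead of being a global:

        const QString& MyClass::getID(int index) const
        {
            static const QString emptyId;       // constructed once, lives until program exit
            if (index < myArraySize && myArray[index]) {
                return myArray[index]->id;
            }
            return emptyId;
        }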

    Read the article

  • I need a dictionary-like mapping between characters and other kinds of objects. Which class would be

    - by nullPointerException
    This is in Squeak/Pharo. If I want to have a mapping from Character objects like $a and $b to other kinds of objects, and want to look up those other objects based on the Character, what is the best class to use? Dictionary is an obvious choice, but it seems wasteful to be hashing Character objects which are basically already numbers. I guess what I want is a kind of array where the character value (number) is used as an index/offset, but I am not sure whether this is possible with Unicode.

    Read the article

  • Wordpress upload from localhost to server

    - by raspberry
    I uploaded my WordPress site from my localhost to a folder off my main domain (http://example.com/folder) using this tutorial: http://www.webdesignerwall.com/tutorials/exporting-and-importing-wordpress/ (I'm working on a Mac). Everything went OK - the admin panel is fine, the homepage is fine, etc. The only problem is that any page apart from the homepage goes to the expected URL (http://example.com/folder/pagename), but instead of showing the content of that page it shows the unstyled output of the index page of my main root (http://example.com/). What can I do to get this working? Thanks

    Read the article

  • Mixing C and C++, raw pointers and (boost) shared pointers

    - by oompahloompah
    I am working in C++ with some legacy C code. I have a data structure that (during initialisation) makes a copy of the structure pointed to by a pointer passed to its initialisation function. Here is a simplification of what I am trying to do - hopefully no important detail has been lost in the "simplification":

        /* C code */
        typedef struct MyData {
            double * elems;
            unsigned int len;
        } MyData;

        int NEW_mydata(MyData* data, unsigned int len)
        {
            // no error checking
            data->elems = (double *)calloc(len, sizeof(double));
            return 0;
        }

        typedef struct Foo {
            MyData data_;
        } Foo;

        void InitFoo(Foo * foo, const MyData * the_data)
        {
            // alloc mem etc ... then assign the STRUCTURE
            foo->data_ = *the_data;
        }

        // C++ code
        typedef boost::shared_ptr<MyData> MyDataPtr;
        typedef std::map<std::string, MyDataPtr> Datamap;

        class FooWrapper
        {
        public:
            FooWrapper(const std::string& key)
            {
                MyDataPtr mdp = dmap[key];
                InitFoo(&m_foo, const_cast<MyData*>(mdp.get()));
            }
            ~FooWrapper();

            double get_element(unsigned int index) const
            {
                return m_foo.data_.elems[index];
            }

        private:
            // non copyable, non-assignable
            FooWrapper(const FooWrapper&);
            FooWrapper& operator=(const FooWrapper&);

            Foo m_foo;
        };

        int main(int argc, char *argv[])
        {
            MyData data1, data2;
            Datamap dmap;

            NEW_mydata(&data1, 10);
            data1.elems[0] = static_cast<double>(22/7);

            NEW_mydata(&data2, 42);
            data2.elems[0] = static_cast<double>(13/21);

            boost::shared_ptr<MyData> d1(&data1), d2(&data2);
            dmap["data1"] = d1;
            dmap["data2"] = d2;

            FooWrapper fw("data1");

            // expect 22/7, get something else (random number?)
            double ret = fw.get_element(0);
        }

    Essentially, what I want to know is this: is there any reason why the data retrieved from the map is different from the data stored in the map?
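
    One thing worth pointing out in this simplification, independent of the pointer handling: static_cast<double>(22/7) performs the division in integer arithmetic first and only then converts, so the stored value is 3.0 (and 13/21 stores 0.0), not the fraction the comment expects. A two-line illustration of the difference:

        double a = static_cast<double>(22/7);   // integer division happens first: a == 3.0
        double b = 22.0 / 7.0;                  // floating-point division: b is approximately 3.142857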

    Read the article

  • jQuery::Ajax success never occurs

    - by Legend
    I have an Ajax call in the head section of my index.html:

        $.ajax({
            method: 'get',
            url: 'php/getRecord.php?color=red',
            dataType: "json",
            success: function (data) {
                alert(data);
            }
        });

    For some reason, that alert never gets called. Am I doing something wrong? The PHP file does give me data when testing it directly.

    Read the article

  • jQuery: click() not working in IE 7

    - by Patrick
    Hello, I cannot make the click() function work in IE7 for the tag links at the top of the page on this website: http://www.sanstitre.ch/drupal/portfolio?tid[0]=38. Everything works perfectly in other browsers, and the z-index of the header is bigger than that of the rest of the content. Thanks

    Read the article

  • Invoke web page from Linux C

    - by umetzu
    Hi, I need to get all the HTML text from the URL "http://localhost/index.html" into a string variable in C. I know that if I connect with telnet - telnet www.google.com 80 - and send a GET request, it returns all the HTML. How can I do this? I'm in a Linux environment, with C (NOT C++). BTW, I'm a .NET programmer :/
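
    One widely used option on Linux is libcurl, which does the HTTP work and hands the response body to a callback. A minimal sketch, assuming libcurl is installed and the program is linked with -lcurl; the growable buffer here is illustrative, not the only way to collect the body:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <curl/curl.h>

        /* accumulate response chunks into one heap-allocated, NUL-terminated string */
        struct buf { char *data; size_t len; };

        static size_t on_body(void *chunk, size_t size, size_t nmemb, void *userp)
        {
            struct buf *b = (struct buf *)userp;
            size_t n = size * nmemb;
            char *p = realloc(b->data, b->len + n + 1);
            if (!p)
                return 0;               /* returning 0 makes libcurl abort the transfer */
            b->data = p;
            memcpy(b->data + b->len, chunk, n);
            b->len += n;
            b->data[b->len] = '\0';
            return n;
        }

        int main(void)
        {
            struct buf b = { NULL, 0 };

            curl_global_init(CURL_GLOBAL_DEFAULT);
            CURL *curl = curl_easy_init();
            if (!curl)
                return 1;

            curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/index.html");
            curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_body);
            curl_easy_setopt(curl, CURLOPT_WRITEDATA, &b);

            if (curl_easy_perform(curl) == CURLE_OK)
                printf("%s\n", b.data ? b.data : "");

            curl_easy_cleanup(curl);
            curl_global_cleanup();
            free(b.data);
            return 0;
        }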

    Read the article

  • Monitoring Between EC2 Regions

    - by ABrown
    I'm working on a small EC2 project that involves a handful of servers in two different regions (US East and EU West). My first task is to implement a Nagios monitoring solution. Monitoring within a region is simple - I just use the private domain names/IPs - but I'm a little unsure of the best way to handle monitoring the second region without setting up a second Nagios install. The environment is fairly static, so I'm not going to be scripting the configuration with the EC2 tools just yet. As I see it, I have two options:

    1. Two Nagios installations (which is overkill for the small number of servers I'm dealing with). Pros: I don't have to alter the group permissions, I don't have to pay for the inter-region traffic, and there is redundancy in the monitoring solution - I could monitor the Nagios servers themselves. Cons: two installations to deal with, and I'd need to run another server instance.
    2. Have the single installation monitor both regions. Pros: one installation to deal with. Cons: slightly reduced security - the security group will have to have NRPE (5666) opened for one source IP - and paying for a small amount of bandwidth at the Internet rate for data transfer between the regions.

    I guess my question is: how have others handled this problem, and what are your recommendations? Thanks!

    Read the article

  • Linux Has Become Very Slow Dealing With Large Data

    - by Kohjah Breese
    Last year I bought a computer for around $1,800, so it is relatively high-end. When I first got it, I was particularly pleased at how quickly it dealt with large MySQL queries, imports and exports. But somewhere along the way something has gone wrong, and I am not sure how to diagnose the problem. Any job that involves processing large amounts of data, e.g. gzipping files of 1 GB or more, or UPDATEs on large MySQL tables, has become very slow. I just performed an intensive ALTER statement on a 240,000,000-row table on a remote server with a lower spec. That took about 10 minutes. However, performing the same query on a 167,000,000-row table on my computer went fine until it hit 860 MB; now it is only writing about 1 MB every 15 seconds. Does anyone have any advice on how to debug the issue? I am using Linux Mint (based on Ubuntu 12.04). The home partition is encrypted, which really slows down gzip. I have noticed the swap is barely used, but I am not sure if that is because there is more than enough RAM. The filesystem is ext4. The MySQL server is on a separate hard drive, but it was fine when I first installed it. Other than the above issues, there are no other problems with it. I am going to install a fresh Ubuntu on the 4th hard drive to see if that makes any difference.
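
    A few standard diagnostics can help narrow down whether the bottleneck is disk I/O, memory pressure, or a failing drive - shown as a sketch; iostat comes from the sysstat package, smartctl from smartmontools, and /dev/sda is a placeholder for the drive actually being written to:

        iostat -x 5                  # per-device utilisation and await times while a slow job runs
        vmstat 5                     # swap in/out and I/O wait over time
        free -m                      # how much RAM is free vs. used as cache
        sudo smartctl -a /dev/sda    # SMART health of the disk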

    Read the article
