Search Results

Search found 2112 results on 85 pages for 'engines'.

Page 74/85

  • search engine (solr/sphinx) question

    - by noname
    I want to make my threads' content searchable with a full-text search engine like Solr, but I wonder one thing: should I index just thread.title, thread.body, and post.body, or should I also index the username, creation date, number of posts, views, country, region, and city that belong to the thread? I mean, when a user searches for a thread, the hits returned will contain the thread title, two lines of body, which user posted it, the creation date, tags, and so on. Should I index all of that information too? But then it would be pretty much the whole database. Or should I just index the first three columns I mentioned for full-text search? And another question: when a user posts a new thread, do I have to immediately tell Solr to add that row? If I don't, how would it be searchable?
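
    (For illustration, a minimal sketch of the usual compromise: index only the text fields for matching, but store the display fields so result snippets can be rendered without a database round trip. The core name, field names, and endpoint are assumptions for a recent Solr; whether a field is indexed or merely stored is decided in the Solr schema, not in the document itself.)

        import requests

        # Hypothetical core name; recent Solr versions accept JSON documents at /update.
        SOLR_UPDATE_URL = "http://localhost:8983/solr/threads/update?commit=true"

        def index_thread(thread):
            doc = {
                "id": thread["id"],
                "title": thread["title"],        # full-text matched
                "body": thread["body"],          # full-text matched
                "username": thread["username"],  # stored only, for display in results
                "created": thread["created"],    # stored only
                "tags": thread["tags"],
            }
            resp = requests.post(SOLR_UPDATE_URL, json=[doc])
            resp.raise_for_status()

        # Called right after the thread row is written to the database, so the
        # thread is searchable immediately (commit=true opens a new searcher).
        index_thread({
            "id": "42",
            "title": "Which full-text engine?",
            "body": "Solr vs. Sphinx...",
            "username": "noname",
            "created": "2010-05-12T00:00:00Z",
            "tags": ["solr", "search"],
        })

    (In practice a commit per document is expensive; Solr's commitWithin or autocommit settings are the usual alternative.)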

    Read the article

  • Force php through the .net engine in iis7

    - by Rippo
    I have converted a PHP site to ASP.NET MVC and have it hosted with the Rackspace cloud. All works great, except that some PHP links are still referenced from other sites and within search engines. My question is: what do I need to add to my web.config to force .php URLs to go through the .NET engine? These links work as expected, as I can catch the 404 and redirect where need be:

        http://www.securahome.net/myjunk.info
        http://www.securahome.net/myjunk.phpp

    However, this one doesn't:

        http://www.securahome.net/myjunk.php

    I have spoken to Rackspace cloud and they say "it's not possible, as IIS doesn't recognize PHP files. You can set up MIME types to handle them." This, however, makes no sense, and I think they did not understand the problem. Does anyone have a solution?

    Read the article

  • C# WebClient - View source question

    - by Jim
    I'm using a C# WebClient to post login details to a page and read all the results. The page I am trying to load includes Flash (which, in the browser, translates into HTML). I'm guessing it's Flash to avoid being picked up by search engines? The Flash content I am interested in is just text (not an image/video), and when I "View Selection Source" in Firefox I do actually see the text, within HTML, that I want to see. (Interestingly, when I view the source for the whole page I do not see that text. Could this be related?) Currently, after I have posted my login details and loaded the HTML back, I see the page which does NOT show the Flash-generated HTML (as if I had viewed the source for the whole page). Thanks in advance, Jim. PS: I should point out that the POST is actually working; my login is successful.

    Read the article

  • C++: Platform independent game lib?

    - by Martijn Courteaux
    Hi, I want to write a serious 2D game, and it would be nice to have a version for Linux and one for Windows (and eventually OS X). Java is fantastic because it is platform independent, but Java is too slow for a serious game. So I thought to write it in C++. But C++ isn't very cross-platform friendly: I can find game libraries for Windows and libraries for Linux, but I'm searching for one that I can use for both, by recompiling the source on a Windows platform and on a Linux platform. Are there engines for this, or is the idea unrealistic? Isn't it that easy (recompiling)? Any advice and information about C++ libraries would be very, very much appreciated!

    Read the article

  • What's the requests/second standard for scraping websites?

    - by feydr
    This was the closest question to mine, and it wasn't really answered very well, IMO: http://stackoverflow.com/questions/2022030/web-scraping-etiquette I'm looking for the answer to #1: how many requests per second should you make when scraping? Right now I pull from a queue of links. Every site that gets scraped has its own thread, which sleeps for 1 second between requests. I ask for gzip compression to save bandwidth. Are there standards for this? Surely all the big search engines have some set of guidelines they follow in this regard.
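
    (As a point of reference, a minimal per-domain throttle sketch; the 1-second delay and the identifying User-Agent are assumptions, not a published standard. The requests library negotiates gzip by default and decompresses transparently.)

        import time
        import requests

        CRAWL_DELAY_SECONDS = 1.0  # assumed politeness delay; a site's robots.txt Crawl-delay should override it

        class DomainThrottle:
            """Enforces a minimum delay between requests to the same domain."""

            def __init__(self, delay=CRAWL_DELAY_SECONDS):
                self.delay = delay
                self.last_request = {}  # domain -> timestamp of the last fetch

            def fetch(self, url):
                domain = url.split("/")[2]  # netloc of an http(s) URL
                elapsed = time.time() - self.last_request.get(domain, 0.0)
                if elapsed < self.delay:
                    time.sleep(self.delay - elapsed)
                self.last_request[domain] = time.time()
                return requests.get(url, headers={
                    "User-Agent": "my-scraper/0.1 (contact@example.com)",  # identify yourself
                })

        throttle = DomainThrottle()
        response = throttle.fetch("http://example.com/page1")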

    Read the article

  • Are closures in javascript recompiled

    - by Discodancer
    Let's say we have this code (forget about prototypes for a moment):

        function A() {
            var foo = 1;
            this.method = function () {
                return foo;
            };
        }
        var a = new A();

    Is the inner function recompiled each time the function A is run? Or is it better (and why) to do it like this:

        var method = function () {
            return this.foo;
        };
        function A() {
            this.foo = 1;
            this.method = method;
        }
        var a = new A();

    Or are JavaScript engines smart enough not to create a new 'method' function every time? Specifically Google's V8 and node.js. Also, any general recommendations on when to use which technique are welcome. In my specific example it really suits me to use the first version, but I know that the outer function will be instantiated many times.

    Read the article

  • Nokogiri Not Parsing File

    - by Jesse J
    I'm using Nokogiri to parse pepXML files from different peptide search engines. I have two pepXML files, both of which appear, as far as I can tell, to be correctly formatted, and puts Nokogiri::XML(IO.read(file)) will output the whole XML file for both of them. The problem is that doc.xpath("any valid xpath") will find the tag in one of the files, but not the other. No errors are given, so I have no idea why it won't parse. Does anyone know of any reasons why Nokogiri would silently fail to match something?

    Read the article

  • What is the point of heightmaps?

    - by Jake Petroules
    I've been pondering this question for a while now... many 3D engines support advanced terrain rendering using quadtrees, LOD... all the features you'd expect. But every engine I've seen loads height data from heightmaps: grayscale bitmaps. I just can't understand how this is useful, since each point in a heightmap can have one of only 256 values. But what if you wanted to model Mt. Everest, with detail of 1 meter or even finer? That's far outside the range of 256. Of course I understand that you can implement your own terrain format to achieve this, but I just can't see why heightmaps are so widely used despite this great limitation.
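
    (For concreteness, a sketch of the arithmetic behind the complaint, plus the common workaround of a 16-bit heightmap; using the Everest elevation as the full range is an assumption for illustration.)

        import struct

        MAX_ELEVATION_M = 8848.0  # treat Mt. Everest as the top of the heightmap's range

        step_8bit = MAX_ELEVATION_M / 255     # ~34.7 m per grayscale level
        step_16bit = MAX_ELEVATION_M / 65535  # ~0.135 m per level
        print("8-bit step: %.1f m, 16-bit step: %.3f m" % (step_8bit, step_16bit))

        def read_heights_16bit(path, width, height):
            """Reads a raw little-endian 16-bit heightmap and scales it to meters."""
            with open(path, "rb") as f:
                raw = struct.unpack("<%dH" % (width * height), f.read(width * height * 2))
            scale = MAX_ELEVATION_M / 65535.0
            return [v * scale for v in raw]

    (This is one reason engines that care about vertical precision accept 16-bit grayscale or raw float formats rather than 8-bit bitmaps.)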

    Read the article

  • Templating Engine to Generate Simple Reports in .NET

    - by dr. evil
    I'm looking for a free templating engine to generate simple reports. I want some basic features, such as:

      - Ability to write loops (over any IEnumerable)
      - Passing variables
      - Passing template files (main template, footer, header)

    I'll use this to generate reports in HTML and XML. I'm not looking for an ASP.NET template engine; this is for a WinForms application. I've seen this question http://stackoverflow.com/questions/340095/can-you-recommend-a-net-template-engine, but all of those template engines are total overkill for me and focused on ASP.NET. Please only recommend free libraries. I'm still looking at NVelocity, but it doesn't look very promising for .NET: overly complicated, and when you download it you get a bunch of files with no clear starting point, no tutorial, no startup document, etc.

    Read the article

  • Level editor for 3D games with open format or API?

    - by furtelwart
    I would like to experiment with machine-generated levels for a 3D game. I'm very open as to which game this will be; I just like the idea of running through a generated map. For this approach, it would be great if I could use an API or an open format for level designs. Is there an open-source level format or system that can be used across several game engines (first-person shooters or whatever)? I don't know if I've explained my point clearly, so please add a comment with your question and I will try to clarify.

    Read the article

  • Using GET instead of POST to delete data behind authenticated pages

    - by Matt Spradley
    I know you should use POST whenever data will be modified on a public website. There are several reasons, including the fact that search engines will follow all the links and modify the data. My question is: do you think it is OK to use GET behind authenticated pages, in something like an admin interface? One example would be a list of products with a delete link on each row. Since the only way to get to the page is to be logged in, is there any harm in just using a link with the product ID in the query string?
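
    (For reference, a minimal sketch of the POST-only pattern the question is weighing against, in Flask with hypothetical route and helper names. The risk with a GET delete link, even behind a login, is that anything that follows links with the user's cookies, such as a prefetching extension or a cross-site request, can trigger the delete.)

        from flask import Flask, abort, redirect, session

        app = Flask(__name__)
        app.secret_key = "change-me"  # placeholder secret for signing the session cookie

        def delete_product_row(product_id):
            # Stand-in for the real data layer.
            print("DELETE FROM products WHERE id = %d" % product_id)

        @app.route("/products/<int:product_id>/delete", methods=["POST"])
        def delete_product(product_id):
            if not session.get("user_id"):
                abort(403)  # deletes require a logged-in session
            delete_product_row(product_id)
            return redirect("/products")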

    Read the article

  • silverlight vs ASP.NET MVC

    - by magellings
    I'm debating whether to use Silverlight 2.0 or ASP.NET MVC for a web application. The web application will be a subscription-free service marketed to all age groups. It's important that the source is highly testable, but with the Web 2.0 movement a graphically rich web application is important as well, for competitive reasons. I'm assuming Silverlight is better than the AJAX helpers/MVC graphically, but foundation-wise, testing is better/easier with MVC. Possibly an MVP pattern with Silverlight could increase the testability of the source. Could anyone elaborate on the pros/cons of each technology and recommend one or the other based on the above? (Addition 9/22/08) Regarding allowing search engines to index the site: with either technology it will use a backend database, and a lot of the content will be dynamically generated. Based on some of the comments, when we talk of searchable content, would the home page of the application be searchable if it were written in Silverlight? Would I be able to get the site to appear in a Google search?

    Read the article

  • MEMORY(HEAP) vs. InnoDB in a Read and Write Environment

    - by Johannes
    I want to program a real-time application using MySQL. It needs a small table (fewer than 10,000 rows) that will be under heavy read (scan) and write (update and some insert/delete) load. I really do mean 10,000 updates or selects per second. These statements will be executed over only a few (fewer than 10) open MySQL connections. The table is small and does not contain any data that needs to be stored on disk. So I ask: which is faster, InnoDB or MEMORY (HEAP)? My thoughts are: both engines will probably serve SELECTs directly from memory, as even InnoDB will cache the whole table. What about the UPDATEs (innodb_flush_log_at_trx_commit)? My main concern is the locking behavior: InnoDB row locks vs. the MEMORY engine's table locks. Will this be the bottleneck in the MEMORY implementation? Thanks for your thoughts!
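
    (One way to settle it empirically: a small benchmark sketch with mysql-connector-python, timing the same single-row UPDATE workload against both engines. Connection parameters and the table layout are assumptions.)

        import time
        import mysql.connector

        def bench(engine, rows=10000, updates=10000):
            conn = mysql.connector.connect(host="localhost", user="test",
                                           password="test", database="test")
            cur = conn.cursor()
            cur.execute("DROP TABLE IF EXISTS bench_t")
            cur.execute("CREATE TABLE bench_t (id INT PRIMARY KEY, val INT) ENGINE=%s" % engine)
            cur.executemany("INSERT INTO bench_t VALUES (%s, %s)",
                            [(i, 0) for i in range(rows)])
            conn.commit()

            start = time.time()
            for i in range(updates):
                cur.execute("UPDATE bench_t SET val = val + 1 WHERE id = %s", (i % rows,))
                conn.commit()  # per-statement commit, where innodb_flush_log_at_trx_commit matters
            elapsed = time.time() - start
            conn.close()
            print("%s: %.0f updates/sec" % (engine, updates / elapsed))

        for engine in ("InnoDB", "MEMORY"):
            bench(engine)

    (A single loop like this won't expose the row-lock vs. table-lock question; running several of these in parallel connections would.)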

    Read the article

  • How can one cache bust files referenced in a LESS file when using Symfony2, Twig, and Assetic?

    - by user3719083
    I have a web site built on Symfony2 which uses Twig templates, LESS, and Assetic. In order to cache-bust assets, I'm simply using this in my config.yml:

        framework:
            templating:
                engines: ['twig']
                assets_version: 'asset-version-here'

    And then I use the asset() function to load the asset, and the cache busting is handled for me. However, my concern is that when I load my LESS (CSS) file, there are references to other files, and I would like to know how those files can be cache-busted as well. Example:

        .someSelector {
            background: url('../images/filename.png');
        }

    How can I make sure that the referenced file, filename.png, is cache-busted upon deployment? The asset files referenced in Twig using asset() are cache-busted automatically upon deployment (I use a deployment script hook that updates the assets_version in the framework's config), but those referenced in a stylesheet are not. How can I do this?

    Read the article

  • which layout engine for finding coordinates of html elements on the web page?

    - by Mexx
    I am doing a web data classification task and was wondering whether I could get the coordinates of HTML elements as they would appear in a web browser, without taking into consideration any CSS or JavaScript referenced by the page. My programming language is C++, and I need results for a couple million pages, so it has to be fast. I know there is a Microsoft COM component which renders the page in a web-browser control and can then be queried for the positions of different HTML tags, but this is not suitable in my case, as it first renders the whole page, which takes a lot of time. So, as I found out, there are the open-source layout engines WebKit and Gecko that could probably be used for this. But each is a huge piece of code, and I need someone to direct me to the right classes or modules to look into, or to any previous/similar work someone has done. Also, please let me know what you think is a good choice if I want to adapt the existing code for use with multiple threads to make it faster. Thanks

    Read the article

  • SQL query: how to translate IN() into a JOIN?

    - by tangens
    I have a lot of SQL queries like this:

        SELECT o.Id, o.attrib1, o.attrib2
        FROM table1 o
        WHERE o.Id IN (
            SELECT DISTINCT Id
            FROM table1, table2, table3
            WHERE ...
        )

    These queries have to run on different database engines (MySQL, Oracle, DB2, MS SQL, Hypersonic), so I can only use common SQL syntax. Here I read that with MySQL the IN statement isn't optimized and is really slow, so I want to switch it to a JOIN. I tried:

        SELECT o.Id, o.attrib1, o.attrib2
        FROM table1 o, table2, table3
        WHERE ...

    But this does not take into account the DISTINCT keyword. Question: how do I get rid of the duplicate rows using the JOIN approach?
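
    (As a sanity check on the rewrite, a self-contained sketch using sqlite3 with toy stand-in tables: moving DISTINCT to the outer SELECT and joining returns the same rows as the IN form.)

        import sqlite3

        conn = sqlite3.connect(":memory:")
        cur = conn.cursor()
        cur.executescript("""
            CREATE TABLE table1 (Id INTEGER, attrib1 TEXT, attrib2 TEXT);
            CREATE TABLE table2 (Id INTEGER);
            INSERT INTO table1 VALUES (1, 'a', 'x'), (2, 'b', 'y'), (3, 'c', 'z');
            INSERT INTO table2 VALUES (1), (1), (3);  -- duplicates on purpose
        """)

        in_form = """
            SELECT o.Id, o.attrib1, o.attrib2
            FROM table1 o
            WHERE o.Id IN (SELECT DISTINCT Id FROM table2)
        """

        join_form = """
            SELECT DISTINCT o.Id, o.attrib1, o.attrib2
            FROM table1 o
            JOIN table2 t2 ON t2.Id = o.Id
        """

        print(sorted(cur.execute(in_form).fetchall()))
        print(sorted(cur.execute(join_form).fetchall()))  # same rows as the IN form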

    Read the article

  • Displaying "broken" sprites?

    - by Roman
    I'm quite new to the world of 2D engines. I've figured out how to load images and display them as sprites and such, but there's one question that bugs me. For instance, when a "rocket" hits an object, it will deal damage to it and leave a crater behind. I'd like to have the crater shown on that object. That would require "skipping" some of the pixels of the image when rendering, wouldn't it? My question is: how would you do such a thing? What data structure would you use to save this? How do you display a "broken" sprite?
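
    (One common approach is a per-pixel destruction mask kept alongside the sprite and used as the alpha channel at render time; a minimal NumPy sketch, with the sprite size and the circular crater as illustrative assumptions.)

        import numpy as np

        HEIGHT, WIDTH = 64, 64

        # RGBA sprite; the alpha channel doubles as the destruction mask.
        sprite = np.zeros((HEIGHT, WIDTH, 4), dtype=np.uint8)
        sprite[..., :3] = 128  # some visible color
        sprite[..., 3] = 255   # fully opaque = intact

        def punch_crater(sprite, cx, cy, radius):
            """Sets alpha to 0 inside a circle, so the renderer skips those pixels."""
            ys, xs = np.ogrid[:sprite.shape[0], :sprite.shape[1]]
            crater = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
            sprite[crater, 3] = 0

        punch_crater(sprite, cx=32, cy=20, radius=8)
        print(int((sprite[..., 3] == 0).sum()), "pixels destroyed")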

    Read the article

  • Drupal : how to emulate the public/private attribute available in WordPress

    - by Parneix
    Hi, Basically, I'm looking for an easy way (a module) to add a private/public option to any kind of content I publish in Drupal (blog entry, image, etc.), so that when I'm logged in, I can see everything, but when an anonymous user visits the site, he will only see the public stuff. It's a way to manage a kind of front-window/back-store architecture: I can use the same Drupal installation for all my needs and choose what I want to make publicly available. Important:

      1) Private items must not be accessible even if anonymous users guess their URLs.
      2) Private items must not show up if an anonymous user performs a search.
      3) Private content must not be indexed by search engines.
      4) Private items should show up if I perform a search while logged in.

    Any idea? Thanks a lot, P.

    Read the article

  • Hiding text under an image for SEO purposes

    - by JCHASE11
    I am going to use a large image on my page with a bunch of text on it. The reason I am including body text in an image is that the text will be rotated, and there currently isn't any GREAT solution for rotating text seamlessly across all browsers. The image contains a lot of text, and I want that text to be indexed by search engines (but since it's a picture, its content won't be indexed, obviously). If I were to include a div with all the text as HTML and set its CSS to display:none, would Google still index the content that is hidden under the picture? Are there any other solid solutions here?

    Read the article

  • Google App Engine - SiteMap Creation for a social network

    - by spidee
    Hi all. I am creating a social tool, and I want to allow search engines to pick up "public" user profiles, like Twitter and Facebook do. I have seen all the protocol info at http://www.sitemaps.org, and I understand it and how to build such a file, along with an index if I exceed the 50K limit. Where I am struggling is with the concept of how I make this run. The sitemap for my general site pages is simple: I can use a tool or a script to create the file, host it, submit it, and be done. What I then need is a script that will create the sitemaps of user profiles. I assume this would be something like:

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>http://www.socialsite.com/profile/spidee</loc>
            <lastmod>2010-5-12</lastmod>
            <changefreq>???</changefreq>
            <priority>???</priority>
          </url>
          <url>
            <loc>http://www.socialsite.com/profile/webbsterisback</loc>
            <lastmod>2010-5-12</lastmod>
            <changefreq>???</changefreq>
            <priority>???</priority>
          </url>
        </urlset>

    I've added some ??? because I don't know how I should choose these settings for my profiles, based on the following: When a new profile is created, it must be added to a sitemap. If the profile is changed, or if "certain" properties are changed, then I don't know whether I update the entry in the map or do something else (updating would be a nightmare!). Some users may change their profile. In terms of relevance to the search engine, the only way a Google or Yahoo search will find a user's profile (for my requirement) would be, for example, by means of [user name] and [location]. So once the entry for the profile has been added to the map file, the only reasons to have the search bot re-index the profile would be if the user changed their user name (which they can't), or their location, or set their settings so that their profile is "hidden" from search engines. I assume my map creation will need to be dynamic. From what I have said above, I would imagine that creating a new profile, and possibly editing certain properties, could mark it as needing adding/updating in the sitemap. Assuming I will have millions of profiles added/being edited, how can I manage this in a sensible manner? I know I need a script that can append URLs as each profile is created, and I know the script will probably be a TASK running at a set frequency; perhaps the profiles have a property like "indexed", and the TASK sets it to "true" when the profiles are added to the map. I don't see the best way to store the map. Do I store it in the datastore, i.e.:

        model=sitemaps
        properties:
            key_name=sitemap_xml_1 (and for my map index, sitemap_index_xml)
            mapxml=blobstore (the raw XML map or ROR map)
            full=boolean (set true when the URL count reaches 50K) # might need this, as a shard will tell us

    To make this work, my thoughts are: memcache the current sitemap structure as "sitemap_xml" and keep a sharded counter of the URL count. When my task executes:

      1.  Build the XML structure for, say, the first 100 URLs marked "indexed == false" (how many could you run at a time?).
      2.  Test whether the current memcached sitemap is full (shard counter + 100 < 50K).
      3a. If the map is near full, create a new map entry in the model ("sitemap_xml_2"), update the map index file (also stored in my model as "sitemap_index"), and start a new shard (or reset it).
      3b. If the map is not full, grab it from memcache.
      4.  Append the 100-URL XML structure.
      5.  Save / memcache the map.

    I can now add a handler using a URL map/route like /sitemaps/*, take the * as the map name, and serve the maps from the blobstore/memcache on the fly.
    Now my question is: does this work? Is this the right way, or at least a good way, to start? Will it handle making sure the search bots update when a user changes their profile, possibly by setting the change frequency correctly? Do I need a more advanced system :( or have I re-invented the wheel? I hope this is all clear and makes some form of sense :-)
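
    (For the generation step itself, a minimal sketch of building one profile sitemap in Python with ElementTree; the field names and the changefreq/priority values are illustrative assumptions, not recommendations.)

        import xml.etree.ElementTree as ET

        SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

        def build_profile_sitemap(profiles):
            """profiles: iterable of dicts with 'username' and 'last_modified' (YYYY-MM-DD)."""
            urlset = ET.Element("urlset", xmlns=SITEMAP_NS)
            for p in profiles:
                url = ET.SubElement(urlset, "url")
                ET.SubElement(url, "loc").text = "http://www.socialsite.com/profile/" + p["username"]
                ET.SubElement(url, "lastmod").text = p["last_modified"]
                # Profiles rarely change in ways a bot must see, so a low frequency is plausible.
                ET.SubElement(url, "changefreq").text = "monthly"
                ET.SubElement(url, "priority").text = "0.5"
            return ET.tostring(urlset, encoding="UTF-8")

        xml_bytes = build_profile_sitemap([
            {"username": "spidee", "last_modified": "2010-05-12"},
            {"username": "webbsterisback", "last_modified": "2010-05-12"},
        ])
        print(xml_bytes.decode("utf-8"))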

    Read the article

  • What are good resources for computer graphics basics?

    - by Hanno Fietz
    During Flex programming, I recently ran into several questions (about box models, ways to join lines, and misaligned pixels [on doctype]) regarding computer graphics and layout, where I felt that I lacked some basic background on things like:

      - concepts like the box model
      - approaches to mapping real numbers to a pixel raster (like font anti-aliasing)
      - conventions found across drawing engines, like whether you count y coordinates from the top or the bottom, and why

    I feel that reading some basic Wikipedia articles, books, or tutorials on these subjects might help me phrase my questions more specifically and debug my code more systematically. I have repeatedly found myself writing tiny test apps in Flex just to find out how the APIs do very basic stuff. My assumption is that if I knew the right vocabulary and some general concepts, I could solve these questions much faster.

    Read the article

  • How does the Amazon Recommendation feature work?

    - by Rachel
    What technology goes on behind the scenes of Amazon's recommendation feature? I believe that Amazon's recommendations are currently the best in the market, but how do they provide such relevant recommendations? We have recently been involved in a similar recommendation project, and we would really like to know the ins and outs of Amazon's recommendation technology from a technical standpoint. Any inputs would be highly appreciated. Update: This patent explains how personalized recommendations are done, but it is not very technical, so it would be really nice if some insights could be provided. From the comments of Dave, affinity analysis forms the basis for these kinds of recommendation engines. Here are some good reads on the topic: Demystifying Market Basket Analysis, Market Basket Analysis, Affinity Analysis. Suggested reading: Data Mining: Concepts and Techniques
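
    (Amazon has published that it uses item-to-item collaborative filtering (Linden, Smith, and York, "Amazon.com Recommendations", IEEE Internet Computing, 2003). A toy sketch of the core idea, cosine similarity between item purchase vectors; the 4x5 purchase matrix is made up for illustration.)

        import numpy as np

        # Rows = users, columns = items; 1 means the user bought the item.
        purchases = np.array([
            [1, 1, 0, 0, 1],
            [1, 1, 1, 0, 0],
            [0, 1, 1, 1, 0],
            [1, 0, 0, 1, 1],
        ], dtype=float)

        def item_similarities(matrix):
            """Cosine similarity between item columns."""
            norms = np.linalg.norm(matrix, axis=0)
            sims = (matrix.T @ matrix) / np.outer(norms, norms)
            np.fill_diagonal(sims, 0.0)  # an item shouldn't recommend itself
            return sims

        sims = item_similarities(purchases)
        print("Items most similar to item 0:", np.argsort(sims[0])[::-1][:2])

    (The real system precomputes the similar-items table offline, which is what makes it cheap to serve at Amazon's scale.)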

    Read the article

  • What is the precision of the priority field in sitemap.xml?

    - by Christoph
    Unfortunately, the specification does not say anything about precision. The XML schema definition states that it is of the type xsd:decimal:

        <xsd:restriction base="xsd:decimal">
          <xsd:minInclusive value="0.0"/>
          <xsd:maxInclusive value="1.0"/>
        </xsd:restriction>

    I have a sitemap generator that uses up to 10 positions after the decimal point, where often only the last few positions differ. These numbers are perfectly valid according to the XSD, but I have found some pages (3, 4) that state that only 0.0, 0.1, 0.2, ..., 1.0 are valid values. How will the search engines react to such a sitemap? Will some just round the value? I know it is unlikely that anyone can answer that question unless he works for one of the search engines, but I think experiences will do as well.

    Read the article

  • Does lookaround affect which languages can be matched by regular expressions?

    - by sepp2k
    There are some features in modern regex engines which allow you to match languages that couldn't be matched without that feature. For example, the following regex using back references matches the language of all strings that consist of a word that repeats itself: (.+)\1. This language is not regular and can't be matched by a regex that does not use back references. My question: does lookaround also affect which languages can be matched by a regular expression? I.e., are there any languages that can be matched using lookaround that couldn't be matched otherwise? If so, is this true for all flavors of lookaround (negative or positive lookahead or lookbehind), or just for some of them?
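
    (To make the two features concrete, a small demo with Python's re module; the sample strings are arbitrary.)

        import re

        # Backreference: matches exactly the "word repeated twice" language from the question.
        square = re.compile(r"^(.+)\1$")
        print(bool(square.match("hoho")))    # True: "ho" repeated
        print(bool(square.match("hohoho")))  # False: not a word repeated exactly twice

        # Positive lookahead: matches "engine" only when an "s" follows, without consuming it.
        lookahead = re.compile(r"engine(?=s)")
        print(lookahead.findall("engine engines"))  # ['engine'], found inside "engines" only

        # Positive lookbehind (fixed-width in Python's re).
        lookbehind = re.compile(r"(?<=search )engines")
        print(bool(lookbehind.search("search engines")))  # True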

    Read the article

  • What are the best security measures to take for making certain directories private?

    - by Sattvic
    I have a directory on my server that I do not want search engines to crawl, and I have already set this rule in robots.txt. I do want people who have logged in to be able to access this directory without having to enter a password or anything. I am thinking that a cookie is the best thing to set on users' computers after they log in, and if they have the cookie, they can access the directory. Is this possible, or is there a better way? I want people without this cookie to have no access to this directory: access is for members only. Any suggestions on the best design for this?
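
    (A plain cookie can be copied or forged, so the usual pattern is a signed session cookie checked by the server before it serves anything from the directory. A minimal sketch in Flask, with a hypothetical private/ directory and a stubbed login check.)

        from flask import Flask, abort, send_from_directory, session

        app = Flask(__name__)
        app.secret_key = "change-me"  # signs the session cookie so it can't be forged

        @app.route("/login")
        def login():
            session["logged_in"] = True  # stand-in for a real credential check
            return "ok"

        @app.route("/private/<path:filename>")
        def private_files(filename):
            # The session lives in a cryptographically signed cookie, so guessing
            # the URL without having logged in yields a 403.
            if not session.get("logged_in"):
                abort(403)
            return send_from_directory("private", filename)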

    Read the article
