Search Results

Search found 7470 results on 299 pages for 'storage engines'.

Page 264/299 | < Previous Page | 260 261 262 263 264 265 266 267 268 269 270 271  | Next Page >

  • Trying to find a PHP5 API-based embeddable CMS

    - by StrangeElement
    I've been making the rounds looking for a CMS that I can use as an API, in a sort of "embedded" mode. By this I mean that I don't want the CMS to do any logic or presentation: I want to use it as an API from within an existing site, without being tied to the CMS's architecture. A good example of this is NC-CMS (http://www.nconsulting.ca/nc-cms/). All it needs is an include at the top of the page; then, wherever editable content is desired, it's only a function call with a unique label. It's also ideal in that it lets you differentiate between small strings (like titles and labels) and longer texts (which require a rich-text editor). It's the only CMS I've found that fits this description, but it is a little too light, as it does not handle site structure. I need to be able to let my client add pages, choosing an existing template for the layout, so a minimal back-end is required. WordPress also fits some of the requirements in that it handles only content editing and gives themes the freedom to call content where and how they want it. But it is article-based, and backwards for my purposes: it embeds sites (as themes) within its own structure, rather than being embeddable in sites the way NC-CMS is. It's funny how, checking out all the CMSes out there, almost all of them claim that most CMSes are not self-sufficient and do not handle application logic, while almost every single one I found, with only one exception, does exactly that. Many are article-based blog engines, which does not fit my need. I would appreciate any CMS that fits the general description.

    Read the article

  • Indexing community images for Google image search

    - by Vittorio Vittori
    I'm trying to understand what I can do to make my site reachable by Google's image search spiders. I like last.fm's solution, and I thought I could use a technique like theirs to let Google find artist images on their pages. When I look for an artist on Google image search, as often as not I find an image from a last.fm artist page. For example, if I search for the band Pure Reason Revolution, it brings me to the artist's image page:

    http://www.last.fm/music/Pure+Reason+Revolution/+images/4284073

    Now if I take a look at the image file, I can see it's named:

    http://userserve-ak.last.fm/serve/500/4284073/Pure+Reason+Revolution+4.jpg

    so, trying to work out how the service is put together, I read the URL as:

    http://userserve-ak.last.fm/serve/  (the server that serves the images)
    500/  (the selected size of the image)
    4284073/  (the image id in the database)
    Pure+Reason+Revolution+4.jpg  (the image name)

    I find it hard to believe the real filename is Pure+Reason+Revolution+4.jpg, because of overwrite problems when users upload images; in fact, if I type http://userserve-ak.last.fm/serve/500/4284073.jpg I probably get the real image location and filename. With this technique the image is highly reachable by search engines and easily indexed. My question is: is there a guide or tutorial on this kind of technique, or something similar?

    Read the article

  • Is it possible to redirect non-HTML files with HTTP? And chaining redirects?

    - by Earlz
    I have been thinking about a neat way of load balancing, and one thing it would require is the ability to load an image on an HTML page from multiple locations without rewriting the URL on each load. So I need one "static" URL, such as http://example.com/myimage.png, but the image is not actually hosted at example.com; instead, example.com sends a 301, 302, or 307 HTTP response to redirect the request to 2.example.com. How do browsers handle a redirect for an image in this situation? And how do browsers handle chained redirects, for instance if 2.example.com also doesn't have the image and redirects on to 3.example.com? (I ask because I've never seen a 301 redirect on anything but an HTML page.) Also, which status code would be best to use? 301 means "moved permanently", but this "move" isn't permanent, so I don't want it cached. Should I use 307? Is that supported by search engines and modern browsers?
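
    For reference, the redirecting side is easy to prototype. Here is a minimal sketch using Python's standard http.server; the mirror hostname and port are placeholders, and 307 is used because, unlike 301, it is a temporary redirect and should not be cached as permanent:

        from http.server import BaseHTTPRequestHandler, HTTPServer

        MIRROR = "http://2.example.com"  # placeholder mirror host

        class ImageRedirectHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                # 307 = temporary redirect; the client re-requests the
                # same path from the Location target
                self.send_response(307)
                self.send_header("Location", MIRROR + self.path)
                self.end_headers()

        HTTPServer(("", 8080), ImageRedirectHandler).serve_forever()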

    Read the article

  • Thread management advice - Is TPL a good idea?

    - by Ian
    I'm hoping to get some advice on thread management, and ideally on the Task Parallel Library, because I'm not sure I've been going down the correct route. It's probably best if I give an outline of what I'm trying to do. Given a Problem, I need to generate a Solution using a heuristic-based algorithm. I start off by calculating a base solution; I don't think this operation can be parallelised, so we don't need to worry about it. Once the initial solution has been generated, I want to trigger n threads, which attempt to find a better solution. These threads need to do a couple of things:

    - They need to be initialized with different "optimization metrics". In other words, they are attempting to optimize different things, with a precedence level set in code. This means they all run slightly different calculation engines. I'm not sure if I can do this with the TPL.
    - If one of the threads finds a solution better than the currently best known solution (which needs to be shared across all threads), then it needs to update the best solution and force a number of other threads to restart (again, this depends on the precedence levels of the optimization metrics).
    - I may also wish to combine certain calculations across threads (e.g. keep a union of probabilities for a certain approach to the problem). This is probably optional, though.

    The whole system obviously needs to be thread safe, and I want it to run as fast as possible. I tried an implementation that involved managing my own threads, shutting them down, and so on, but it started getting quite complicated, and I'm now wondering if the TPL might be better. Can anyone offer any general guidance? Thanks...

    Read the article

  • App Crashes Loading View Controllers

    - by golfromeo
    I have the following code in my AppDelegate.h file:

        @class mainViewController;
        @class AboutViewController;

        @interface iSearchAppDelegate : NSObject <UIApplicationDelegate> {
            UIWindow *window;
            mainViewController *viewController;
            AboutViewController *aboutController;
            UINavigationController *nav;
        }

        @property (nonatomic, retain) IBOutlet UIWindow *window;
        @property (nonatomic, retain) IBOutlet mainViewController *viewController;
        @property (nonatomic, retain) IBOutlet AboutViewController *aboutController;
        @property (nonatomic, retain) IBOutlet UINavigationController *nav;

        [...IBActions declared here...]

        @end

    Then, in my .m file:

        @implementation iSearchAppDelegate

        @synthesize window;
        @synthesize viewController, aboutController, settingsData, nav, engines;

        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            [window addSubview:nav.view];
            [window addSubview:aboutController.view];
            [window addSubview:viewController.view];
            [window makeKeyAndVisible];
        }

        - (IBAction)switchToHome {
            [window bringSubviewToFront:viewController.view];
        }

        - (IBAction)switchToSettings {
            [window bringSubviewToFront:nav.view];
        }

        - (IBAction)switchToAbout {
            [window bringSubviewToFront:aboutController.view];
        }

        - (void)dealloc {
            [viewController release];
            [aboutController release];
            [nav release];
            [window release];
            [super dealloc];
        }

        @end

    Somehow, when I run the app, the main view presents itself fine; however, when I try to execute the actions to switch views, the app crashes with an EXC_BAD_ACCESS. Thanks for any help in advance.

    Read the article

  • Advanced search engine or server for relational database [closed]

    - by Pawel
    In my current project we are storing a large volume of data in a relational database. One of the recent key requirements is to enrich the application with some advanced search capabilities. Performance is an important factor in this project, due to very large tables (10+ million records) with parent-child relations (for example, a multi-level parent-child relationship where I am looking for all parents with specific children). The search engine should also be able to follow these references when matching. I have found some potential engines on Stack Overflow, but it looks like all of them are geared towards full-text search rather than relational data, and are hosted on Linux:

    - Lucene
    - Solr
    - Sphinx

    As I understand it, some of them use documents as the source for searching, but is it possible, or efficient, to build such documents programmatically from my relational data? As I am not familiar with all of their features and capabilities, can anyone make some recommendations or propose a different solution? To summarize my requirements:

    - a framework/engine to search a relational database, including descendants
    - support for Microsoft SQL Server
    - usable from .NET applications
    - preferably hosted on Windows systems

    Are any of the engines mentioned above able to solve my problem? Do you know a better solution?

    Read the article

  • Changing an ASP.NET web service namespace when it is already in production

    - by eloQ
    Through some unfortunate events, we published a web service with the tempuri namespace a couple of months ago, and no one noticed (not in our company, and not in the companies connecting to the web service), even though it has been live since then and around 25 companies already access it. Now I'm thinking of correcting that mistake and fixing the namespace to a proper value. The only problem is: as soon as we did, all the programs and services connected to this web service would stop working, and I can't really allow that to happen. Is there any way to fix the namespace going forward, or to have the web service operate under two namespaces as one web service? Any tip at all on how to get rid of the tempuri namespace without needing to synchronize the change with all the external companies? I'm all out of ideas, so any help would be much appreciated! Thanks! I hope this question is alright here; I'm a first-time user, and my searches only turned up tempuri.org-related posts, just not the right one.

    Read the article

  • What is the purpose of the HTML "no-js" class?

    - by Swader
    I notice that in a lot of template engines, in the HTML5 Boilerplate, in various frameworks, and in plain PHP sites, a no-js class is added to the html element. Why is this done? Is there some sort of default browser behavior that reacts to this class? And why include it always? Doesn't that render the class itself obsolete, if there is never a case without it and html can be addressed directly? Here is an example from the HTML5 Boilerplate index.html:

        <!--[if lt IE 7 ]> <html lang="en" class="no-js ie6"> <![endif]-->
        <!--[if IE 7 ]> <html lang="en" class="no-js ie7"> <![endif]-->
        <!--[if IE 8 ]> <html lang="en" class="no-js ie8"> <![endif]-->
        <!--[if IE 9 ]> <html lang="en" class="no-js ie9"> <![endif]-->
        <!--[if (gt IE 9)|!(IE)]><!--> <html lang="en" class="no-js"> <!--<![endif]-->

    As you can see, the html element will always have this class. Can someone explain why this is done so often?

    Read the article

  • I'm graduating with a Computer Science degree but I don't feel like I know how to program.

    - by Wendy Peters
    I'm graduating with a Computer Science degree, but I see websites like Stack Overflow and search engines like Google and don't know where I'd even begin to write something like that. One summer I worked as an iPhone developer, but I felt like I was mostly gluing together libraries other people had written, with little understanding of what was happening under the hood. I'm trying to improve my knowledge by studying algorithms, but it's a long and painful process. I find algorithms difficult, and at the rate I'm working through my book, a decade will have passed before I finish. Given my current situation, I've spent a month looking for work, but my skills (C, Python, Objective-C) are not very desirable in the local market, where C#, Java, and web development are in much higher demand. My GPA is OK (3.0), but it's not high enough to apply to the large companies or to return for graduate studies, and I don't have a good network of friends. Basically, I'm graduating with a Computer Science degree, but I don't feel like I know how to program. I thought that joining a company and programming full-time would give me a chance to develop my skills and learn from people more experienced than myself, but I'm struggling to find work and am starting to get really frustrated. I am going to cast my net wider and look beyond the city I've grown up in, but what have other people in a similar situation tried?

    Read the article

  • MySQL - What is wrong with this query or my database? Terrible performance.

    - by Moss
        SELECT *
        FROM `employees` a
        LEFT JOIN (
            SELECT phone1 p1, COUNT(*) c
            FROM `employees`
            GROUP BY phone1
        ) b ON a.phone1 = b.p1;

    I'm not sure if it is this query in particular that has the problem; I have been getting terrible performance in general with this database. The table in question has 120,000 rows. I have tried this particular query remotely and locally, with the MyISAM and InnoDB engines, with different types of joins, and with and without an index on phone1. I can get it to complete in about 4 minutes on a 10,000-row table, but performance drops exponentially with larger tables. Remotely it loses the connection to the server, and locally it brings my system to its knees and seems to go on forever. This query is only a small step of a larger query that couldn't complete, so maybe I should explain the whole scenario. I have one big flat ugly table that lists a bunch of people, their contact info, and the info of the companies they work for. I'm trying to normalize the database and intelligently determine which phone numbers apply to individual people and which apply to an office location. My reasoning is that if a phone number occurs multiple times, and its number of occurrences equals the number of times the street address it is attached to occurs, then it must be an office number. So the first step is to count the occurrences of each phone number, grouping by phone number. Normally if you just use COUNT()...GROUP BY, it will only list the first record it finds in each group, so I figured I have to join the full table to the count table on phone number. This works, but as I said, I can't complete it on any table much larger than 10,000 rows. That seems pathetic, and this doesn't seem like a crazy query to run. Is there a better way to achieve what I want, or do I have to break my large table into 12 pieces, or is there something wrong with the table or the database?

    Read the article

  • Are there any NoSQL-compatible CMS projects?

    - by Michael
    This question is partially related to an older question (Any CMS is Google App Engine compatible?), but is slightly more general. It seems that in most CMS systems, the most fragile failure point is the database. Traditional database implementations scale poorly and will never be able to handle unforeseen spikes of traffic. Since Google App Engine was designed to help even small businesses overcome that problem, I had the same question that was asked earlier this year, with less than satisfactory answers. But more generally: where are the CMS projects that support NoSQL databases? Looking over Wikipedia's list of CMS platforms, I see without much effort that every single vendor on the list supports only a traditional RDBMS. I would have expected to see at least one or two projects handling CouchDB or similar engines. I understand the complexities of implementing a NoSQL solution to a problem that is typically solved using the relations cleanly expressed in any RDBMS, but there seems to be a rather wide market gap. Since databases are, today, easily outsourced to Google, Amazon, and others which use NoSQL models, I am amazed that there are not more projects actively pursuing this path. Am I simply not aware? Can someone please point me to projects with real momentum that are developing along this path? I'm looking for two things:

    - a CMS that has a NoSQL database as its backbone, enabling easy database outsourcing (hosted MySQL clusters and similar solutions are not what I'm looking for)
    - a project that is built to run on either a PaaS architecture like Google App Engine or an IaaS architecture like Amazon EC2

    Any pointers in that direction would be most welcome.

    Read the article

  • Square Brackets in Python Regular Expressions (re.sub)

    - by user1479984
    I'm migrating wiki pages from the FlexWiki engine to the FOSwiki engine, using Python regular expressions to handle the differences between the two engines' markup languages (see the FlexWiki markup and the FOSwiki markup for reference). Most of the conversion works very well, except when I try to convert renamed links. Both wikis support renamed links in their markup. For example, FlexWiki uses:

        "Link To Wikipedia":[http://www.wikipedia.org/]

    while FOSwiki uses:

        [[http://www.wikipedia.org/][Link To Wikipedia]]

    both of which render as a link labeled "Link To Wikipedia". I'm using the regular expression

        renameLink = re.compile ("\"(?P<linkName>[^\"]+)\":\[(?P<linkTarget>[^\[\]]+)\]")

    to parse the link elements out of the FlexWiki markup, which, run over something like

        "Link Name":[LinkTarget]

    reliably produces the groups

        <linkName> = Link Name
        <linkTarget> = LinkTarget

    My issue occurs when I try to use re.sub to insert the parsed content into the FOSwiki markup. My experience with regular expressions isn't anything to write home about, but I'm under the impression that, given the groups above, a line like

        line = renameLink.sub ( "[[\g<linkTarget>][\g<linkName>]]" , line )

    should produce

        [[LinkTarget][Link Name]]

    However, in the output text files I'm getting

        [[LinkTarget [[Link Name]]

    which breaks the renamed links. After a little fiddling I managed a workaround, where

        line = renameLink.sub ( "[[\g<linkTarget>][ [\g<linkName>]]" , line )

    produces

        [[LinkTarget][ [[Link Name]]

    which, when displayed in FOSwiki, looks like <[[Link Name>, which works but isn't very pretty. I've also tried

        line = renameLink.sub ( "[[\g<linkTarget>]" + "[\g<linkName>]]" , line )

    which produces

        [[linkTarget [[linkName]]

    There are probably thousands of instances of these renamed links in the pages I'm trying to convert, so fixing them by hand isn't an option. For the record, I've run the script under Python 2.5.4 and Python 2.7.3 and gotten the same results. Am I missing something really obvious with the syntax? Or is there an easy workaround?
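
    As a sanity check, here is a minimal, self-contained sketch of the substitution using the same pattern and replacement as above. Run in isolation it prints the expected [[LinkTarget][Link Name]], which would suggest the mangling happens in some later step of the conversion rather than in this sub call:

        import re

        renameLink = re.compile(r'"(?P<linkName>[^"]+)":\[(?P<linkTarget>[^\[\]]+)\]')

        line = '"Link Name":[LinkTarget]'
        # \g<name> backreferences splice the named groups into the replacement
        print(renameLink.sub(r"[[\g<linkTarget>][\g<linkName>]]", line))
        # expected output: [[LinkTarget][Link Name]]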

    Read the article

  • Node.js/Express Partials problem: Can't be nested too deep?

    - by heorling
    I'm learning Node.js, Express, and haml.js, and liking it, but I've run into a pretty annoying problem. I'm fairly new to this, though I've been getting nice results so far. I'm writing a jQuery-heavy web app that relies on a table containing divs. The divs slide around, switch back and forth, and are resized etc. to my heart's content. What I'm looking for is a way to switch (template?) the divs. Since I've been building in Express and mimicking the chat example, it would make sense to use partials. The rub is that I've been using implicit divs in Haml, held within a td. The divs are constructed as follows:

        %tr
          %td
            .class1.class2.class3.classetc

    which has worked fine cross-browser. Parsing the classes works great for the JS code to pass arguments around, fetch values, etc. What I'd like to be able to do is something like:

        %tr
          %td
            .class1.class2.class3.classetc
              %ul#messages
                != this.partial('message.html.haml', { collection: messages })

    Every combination I've tried of this has failed, however, and I might have tried them all. If I could put a partial into that div I'd probably be set. You can nest them as long as you use #ids instead of .classes, but if you use more than one class it breaks! I think that's the most accurate way of summing it up. How do you do this? I've checked out various templating solutions like mu.js and micro templating like John Resig's, and I earlier checked out this thread on templating engines. It's very possible I'm making some fundamental mistake here; I'm new to this. What's a good way to do this?

    Read the article

  • Apache friendly URLs

    - by HaukurHaf
    I've got a small CMS system written in PHP and running on Apache. The format of the URLs this CMS uses and generates is /display.php?PageID=xxx, where xxx is just some integer. As you can see, those URLs are not very friendly, either for users or for search engines. I believe that using mod_rewrite (or something like it) and .htaccess files I should be able to configure Apache to rewrite URLs. I have searched for information about this before, but I never found an easy method; it always involved messing with regular expressions, which I'm not very familiar with. Since the website in question is really simple and small, just 5-10 different pages, I would really like to be able to just hard-code the configuration, without any special rules or regexps. I'd just like to map a friendly URL to an actual URL, perhaps like this:

        /about = /display.php?PageID=44
        /products = /display.php?PageID=34

    etc. Is it possible to configure the mod_rewrite module in a basic way like this? Could someone explain the easiest method to do this? Explain it to me as if I was a child :-) Thanks in advance!
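
    A minimal sketch of what such a hard-coded .htaccess could look like, using the example mappings from the question (the patterns are literal page names, so next to no regex knowledge is needed; [L] stops rewriting after a match):

        RewriteEngine On
        RewriteRule ^about$ /display.php?PageID=44 [L]
        RewriteRule ^products$ /display.php?PageID=34 [L]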

    Read the article

  • Faster Insertion of Records into a Table with SQLAlchemy

    - by Kyle Brandt
    I am parsing a log and inserting it into either MySQL or SQLite using SQLAlchemy and Python. Right now I open a connection to the DB, and as I loop over each line, I insert it after it is parsed (this is just one big table right now; I'm not very experienced with SQL). I then close the connection when the loop is done. The summarized code is:

        log_table = schema.Table('log_table', metadata,
            schema.Column('id', types.Integer, primary_key=True),
            schema.Column('time', types.DateTime),
            schema.Column('ip', types.String(length=15))
            ....

        engine = create_engine(...)
        metadata.bind = engine
        connection = engine.connect()
        ....

        for line in file_to_parse:
            m = line_regex.match(line)
            if m:
                fields = m.groupdict()
                pythonified = pythoninfy_log(fields)  # turn them into ints, datetimes, etc.
                if use_sql:
                    ins = log_table.insert(values=pythonified)
                    connection.execute(ins)
                    parsed += 1

    My two questions are:

    - Is there a way to speed up the inserts within this basic framework? Maybe a queue of inserts and some insertion threads, some sort of bulk insert, etc.?
    - When I used MySQL, the insert time for about 1.2 million records was 15 minutes. With SQLite, the insert time was a little over an hour. Does that time difference between the DB engines seem about right, or does it mean I am doing something very wrong?
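
    On the first question, one widely used approach in SQLAlchemy Core is to batch the parsed rows and pass a list of parameter dictionaries to a single execute(), which issues an executemany instead of one round trip per row. A rough sketch along the lines of the code above (the batch size of 1000 is an arbitrary assumption):

        batch = []
        for line in file_to_parse:
            m = line_regex.match(line)
            if m:
                batch.append(pythoninfy_log(m.groupdict()))
                if len(batch) >= 1000:
                    # one executemany round trip instead of 1000 single inserts
                    connection.execute(log_table.insert(), batch)
                    batch = []
        if batch:  # flush the remainder
            connection.execute(log_table.insert(), batch)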

    Read the article

  • Sphinx - delimiters

    - by yoda
    I would like to know whether the Sphinx engine works with delimiters (like commas and periods in stock MySQL). I don't want to avoid them entirely, but to escape them, or at least make sure they don't cause conflicts when performing MATCH operations in full-text searches, since I have problems dealing with them in MySQL by default and I would prefer not to be forced to replace those delimiters with other characters to get a good set of results. Sorry if I'm saying something stupid, but I have no experience with Sphinx or other complementary search engines. To give you an example: if I search for "Passat 2.0 TDI", MySQL by default treats the period as a delimiter, and since "2" and "0" are too short to be considered words by default, the results get a bit messed up. Is this easy to handle with Sphinx (or another search engine)? I'm open to suggestions. This is for a large project, with probably more than 500,000 records (not trivial at all). Cheers!

    Read the article

  • Sharing code between two or more rails apps... alternatives to git submodules?

    - by jtgameover
    We have two separate Rails apps, foo/ and bar/ (separate for good reason). They both depend on some models, etc. in a common/ folder, currently parallel to foo and bar. Our current svn setup shares common/ using svn:externals. This weekend we wanted to try out git. After much research, it appears that the "kosher" way to solve this is git submodule. We got that working after separating foo, bar, and common into separate repositories, but then realized all the strings attached:

    - Always commit the submodule before committing the parent.
    - Always push the submodule before pushing the parent.
    - Make sure that the submodule's HEAD points to a branch before committing to it. (If you're a bash user, I recommend using git-completion to put the current branch name in your prompt.)
    - Always run 'git submodule update' after switching branches or pulling changes.

    All these gotchas complicate things beyond add, commit, push. We're looking for simpler ways to share common/ in git. This guy seems to have success using the git subtree extension, but that deviates from standard git and still doesn't look that simple. Is this the best we can do given our project structure? I don't know enough about Rails plugins/engines, but that seems like a possible RoR-ish way to share libraries. Thanks in advance.

    Read the article

  • PHP and storing stats

    - by John
    Using PHP5 and the latest version of MySQL, I want to be able to track impressions and clicks for business listings. My question is: if I did this myself, what would be the best method of storing the data so I can run reports? Previously I had a table holding the listing id, the user's IP address, whether the event was a click or an impression, and the date it was tracked. However, the database is approaching 2GB of data and is very slow. Part of the problem is that the script is very simple and records impressions and clicks from anyone, including search engines and basically anything that accesses the listing page. Is there an API or file out there with an up-to-date list that can detect whether the visitor is an actual person and not a spider, so I don't fill up the database with unneeded stats? I'm just looking for suggestions. Should I have a raw table that collects the hits, and then a nightly cron job that tallies up the day's totals per listing and per IP and stores the cumulative stats in a different table? Also, what storage engine should the tables use? InnoDB? MyISAM?

    Read the article

  • Creating and parsing huge strings with javascript?

    - by user246114
    I have a simple piece of data that I'm storing on a server as a plain string. It's kind of ridiculous, but it looks like this:

        name|date|grade|description|name|date|grade|description|...repeat for a long time

    This string can be up to 1.4MB in size. The idea is that it's a bunch of student records, just strung together with a simple pipe delimiter. It's a very poor serialization method. Once this massive string is pushed to the client, it is split along the pipes back into student records, using JavaScript. I've been timing how long it takes to create and split these strings on the client side. The times are actually quite good; the slowest run I've seen on a few different machines is 0.2 seconds for 10,000 student records, with a final string size of ~1.4MB. I realize this is quite bizarre; I'm just wondering if there are any inherent problems with creating and splitting such large strings in JavaScript? I don't know how the different browsers implement their JavaScript engines. I've tried this on the major browsers, but don't know how it would perform on earlier versions of each. Looking for any comments on this; it's more for fun than anything else! Thanks

    Read the article

  • Why do local variable names take precedence over function names in JavaScript?

    - by fredrik
    In JavaScript you can define a function in a bunch of different ways:

        function BatmanController () { }

        var BatmanController = function () { }

        // If you want to be EVIL
        eval("function BatmanController () {}");

        // If you are fancy
        (function () {
            function BatmanController () { }
        }());

    By accident I ran across some unexpected behaviour today. When declaring a local variable (in the fancy way) with the same name as a function, the local variable takes precedence inside the local scope. For example:

        (function () {
            "use strict";

            function BatmanController () { }
            console.log(typeof BatmanController); // outputs "function"

            var RobinController = function () { }
            console.log(typeof RobinController); // outputs "function"

            var JokerController = 1;
            function JokerController () { }
            console.log(typeof JokerController); // outputs "number". Ehm, what?
        }());

    Does anyone know why var JokerController isn't overwritten by function JokerController? I tested this in Chrome, Safari, Canary, and Firefox. I would guess it's due to some "look ahead" JavaScript optimization done in the V8 and JägerMonkey engines, but is there a technical explanation for this behaviour?

    Read the article

  • Optimizing landing pages

    - by Oleg Shaldybin
    In my current project (Rails 2.3) we have a collection of 1.2 million keywords, and each of them is associated with a landing page, which is effectively a search results page for that keyword. Each of those pages is pretty complicated, so it can take a long time to generate (up to 2 seconds under moderate load, and even longer during traffic spikes, on current hardware). The problem is that 99.9% of visits to those pages are new visits (via search engines), so caching on the first visit doesn't help much: it will still be slow for that visit, and the next visit could be weeks later. I'd really like to make those pages faster, but I don't have many ideas on how to do it. A couple of things that come to mind:

    - Build a cache for all keywords beforehand (with a very long TTL, a month or so). However, building and maintaining this cache can be a real pain, and the search results on the page might be outdated or even no longer accessible.
    - Given the volatile nature of this data, don't try to cache anything at all, and just try to scale out to keep up with traffic.

    I'd really appreciate any feedback on this problem.

    Read the article

  • How can I create a rules engine without using eval() or exec()?

    - by Angela
    I have a simple rules/conditions table in my database which is used to generate alerts for one of our systems. I want to create a rules engine, or a domain-specific language. A simple rule stored in this table would be (omitting the relationships here):

        if temp > 40 send email

    Please note there would be many more such rules. A script runs once daily to evaluate these rules and perform the necessary actions. In the beginning there was only one rule, so the script supported only that rule; now we need to make it more scalable to support different conditions and rules. I have looked into rules engines, but I hope to achieve this in some simple Pythonic way. So far, all I have come up with is eval/exec, and I know that is not the most recommended approach. So, what would be the best way to accomplish this? (The rules are stored as data in the database: an object like "temperature", a condition like ">", "<", "=", etc., a value like 40, 50, etc., and an action like email, sms, etc. I retrieve these to form the condition, e.g. "if temp > 50 send email"; my idea was then to use exec or eval on it to make it live code, but I'm not sure that's the right approach.)
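
    One common eval-free pattern is to treat the stored strings as keys into whitelists of functions, for instance using the operator module for the comparisons. A rough sketch, assuming rules carry the object/condition/value/action fields described above (the action handlers here are hypothetical stand-ins):

        import operator

        def send_email(rule):
            print("emailing about", rule)  # hypothetical handler

        def send_sms(rule):
            print("texting about", rule)   # hypothetical handler

        OPS = {">": operator.gt, "<": operator.lt, "=": operator.eq,
               ">=": operator.ge, "<=": operator.le}

        ACTIONS = {"email": send_email, "sms": send_sms}

        def evaluate(rule, readings):
            # look up the comparison and action by name instead of eval'ing a string
            if OPS[rule["condition"]](readings[rule["object"]], rule["value"]):
                ACTIONS[rule["action"]](rule)

        # the rule from the question: if temp > 40, send email
        evaluate({"object": "temp", "condition": ">", "value": 40, "action": "email"},
                 {"temp": 45})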

    Read the article

  • JavaScript: Is there a better way to retain your array but efficiently concat or replace items?

    - by Michael Mikowski
    I am looking for the best way to replace or add to elements of an array without deleting the original reference. Here is the setup:

        var a = [], b = [], c, i, obj = {};
        for ( i = 0; i < 100000; i++ ) {
            a[ i ] = i;
            b[ i ] = 10000 - i;
        }
        obj.data_list = a;

    Now we want to concatenate b INTO a without changing the reference to a, since it is used in obj.data_list. Here is one method:

        for ( i = 0; i < b.length; i++ ) {
            a.push( b[ i ] );
        }

    This seems to be a somewhat terser and, on V8, 8x faster method:

        a.splice.apply( a, [ a.length, 0 ].concat( b ) );

    I have found this useful when iterating over an array "in place" without touching the elements as I go (a good practice). I start a new array (let's call it keep_list) with the initial splice arguments and then add the elements I wish to retain. Finally I use this apply method to quickly replace the truncated array:

        var keep_list = [ 0, 0 ];
        for ( i = 0; i < a.length; i++ ) {
            if ( some_condition ) { keep_list.push( a[ i ] ); }
        }

        // truncate array
        a.length = 0;

        // and replace contents
        a.splice.apply( a, keep_list );

    There are a few problems with this solution:

    - there is a max call stack size limit of around 50k on V8, and I have not tested other JS engines yet
    - this solution is a bit cryptic

    Has anyone found a better way?

    Read the article
