Search Results

Search found 7116 results on 285 pages for 'nested queries'.


  • Strategies for avoiding SQL in your Controllers... or how many methods should I have in my Models?

    - by Keith Palmer
    So a situation I run into reasonably often is one where my models start to either: grow into monsters with tons and tons of methods, OR allow you to pass pieces of SQL to them, so that they are flexible enough to not require a million different methods. For example, say we have a "widget" model. We start with some basic methods: get($id), insert($record), update($id, $record), delete($id), getList() (get a list of Widgets). That's all fine and dandy, but then we need some reporting: listCreatedBetween($start_date, $end_date), listPurchasedBetween($start_date, $end_date), listOfPending(). And then the reporting starts to get complex: listPendingCreatedBetween($start_date, $end_date), listForCustomer($customer_id), listPendingCreatedBetweenForCustomer($customer_id, $start_date, $end_date). You can see where this is growing... eventually we have so many specific query requirements that I either need to implement tons and tons of methods, or some sort of "query" object that I can pass to a single ->query(Query $query) method... ... or just bite the bullet, and start doing something like this: $list = MyModel->query("start_date > X AND end_date < Y AND pending = 1 AND customer_id = Z"). There's a certain appeal to just having one method like that instead of 50 million other more specific methods... but it feels "wrong" sometimes to stuff a pile of what's basically SQL into the controller. Is there a "right" way to handle situations like this? Does it seem acceptable to be stuffing queries like that into a generic ->query() method? Are there better strategies?
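
    One common middle ground is a small criteria (or "query object") class: the controller states what it wants through named, whitelisted fields, and only the model turns that into SQL. The question is PHP, but here is a minimal sketch of the idea in C# for concreteness, with hypothetical names (WidgetCriteria, BuildSql) and a hand-rolled WHERE builder standing in for whatever data-access layer is actually in use:

        using System;
        using System.Collections.Generic;

        // Hypothetical criteria object: controllers state *what* they want,
        // the model/repository decides *how* to query for it.
        public class WidgetCriteria
        {
            public DateTime? CreatedAfter { get; set; }
            public DateTime? CreatedBefore { get; set; }
            public bool? Pending { get; set; }
            public int? CustomerId { get; set; }
        }

        public class WidgetRepository
        {
            // Only the repository knows about SQL; each non-null criterion
            // adds one parameterized condition.
            public string BuildSql(WidgetCriteria c, IDictionary<string, object> args)
            {
                var clauses = new List<string>();
                if (c.CreatedAfter.HasValue)  { clauses.Add("created >= @after");   args["@after"] = c.CreatedAfter.Value; }
                if (c.CreatedBefore.HasValue) { clauses.Add("created <= @before");  args["@before"] = c.CreatedBefore.Value; }
                if (c.Pending.HasValue)       { clauses.Add("pending = @pending");  args["@pending"] = c.Pending.Value; }
                if (c.CustomerId.HasValue)    { clauses.Add("customer_id = @cust"); args["@cust"] = c.CustomerId.Value; }
                var where = clauses.Count > 0 ? " WHERE " + string.Join(" AND ", clauses) : "";
                return "SELECT * FROM widgets" + where;
            }
        }

    The controller never sees SQL, and adding a new report usually means adding one nullable field rather than yet another listXyz() method.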

    Read the article

  • How to Integrate Dropbox with Pages, Keynote, and Numbers on iPad

    - by The Geek
    The iWork apps are some of the best apps on the iPad, and each shows just how powerful a touchscreen device can be with the most basic of computing functions. In fact, there’s not much to dislike about the iWork apps, except for one thing: importing and exporting files. You can open documents from email attachments, download them from websites, or import them from other apps like Dropbox. Once you’ve opened your file in Pages, Keynote, or Numbers on iPad, though, you can only send it via email, upload it to a WebDAV server or Apple’s iDisk service, or wait to sync it with iTunes on your computer. Most other iOS office apps don’t offer nearly as many features as the iWork apps, but they do offer deep integration with Dropbox, which makes it easy to view and edit your documents no matter where you are. Dropbox is the most popular file sync and sharing solution, and makes it absolutely painless to share folders with anyone around the world and keep your computers in sync. That is, computers and applications that integrate with Dropbox. However, you don’t need to give up on using Dropbox with iWork apps on iPad. Today we’re going to look at how you can enable WebDAV compatibility on your Dropbox account to let Pages integrate nearly the whole way with Dropbox. It’s not a perfect solution, but it’s much better than the default setup. So let’s get started.

    Read the article

  • Caching by in-memory dictionaries. Are we doing it all wrong?

    - by user73983
    This approach is pretty much the accepted way to do anything in our company. A simple example: when a piece of data for a customer is requested from a service, we fetch all the data for that customer (the part relevant to the service) and save it in an in-memory dictionary, then serve it from there on following requests (we run singleton services). Any update goes to the DB, then updates the in-memory dictionary. It seems all simple and harmless, but as we implement more complicated business rules the cache gets out of sync and we have to deal with hard-to-find bugs. Sometimes we defer writing to the database, keeping new data in cache till then. There are cases when we store millions of rows in memory because the table has many relations to other tables and we need to show aggregate data quickly. All this cache handling is a big part of our codebase and I sense this is not the right way to do it. All of this juggling adds too much noise to the code and it makes it hard to understand the actual business logic. However I don't think we can serve data in a reasonable amount of time if we have to hit the database every time. I am unhappy about the current situation but I don't have a better alternative. My only solution would be to use the NHibernate 2nd level cache, but I have nearly no experience with it. I know many companies use Redis or Memcached heavily to gain performance, but I have no idea how I would integrate them into our system. I also don't know if they can perform better than in-memory data structures and queries. Are there any alternative approaches that I should look into?
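
    As a point of comparison, here is a minimal cache-aside sketch in C# (hypothetical types, and assuming the database load can be injected as a delegate): every entry carries an absolute expiry, so even a missed invalidation heals itself once the TTL runs out, which removes one whole class of "cache out of sync" bugs at the cost of briefly serving stale data.

        using System;
        using System.Collections.Concurrent;

        public class Customer { public int Id; public string Name; }

        public class CustomerCache
        {
            private readonly ConcurrentDictionary<int, (Customer Value, DateTime Expires)> _cache =
                new ConcurrentDictionary<int, (Customer Value, DateTime Expires)>();
            private readonly TimeSpan _ttl = TimeSpan.FromMinutes(5);
            private readonly Func<int, Customer> _loadFromDb;   // the real database call, injected

            public CustomerCache(Func<int, Customer> loadFromDb) { _loadFromDb = loadFromDb; }

            public Customer Get(int customerId)
            {
                // Serve from the dictionary while the entry is still fresh.
                if (_cache.TryGetValue(customerId, out var entry) && entry.Expires > DateTime.UtcNow)
                    return entry.Value;

                // Miss or stale: reload from the database and remember when it expires.
                var fresh = _loadFromDb(customerId);
                _cache[customerId] = (fresh, DateTime.UtcNow.Add(_ttl));
                return fresh;
            }

            // Call right after any write for this customer; a forgotten call is
            // repaired automatically once the TTL elapses.
            public void Invalidate(int customerId) { _cache.TryRemove(customerId, out _); }
        }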

    Read the article

  • Is there a simple, flat, XML-based query-able data storage solution? [closed]

    - by alex gray
    I have been in long pursuit of an XML-based query-able data store, and despite continued searches and evaluations, I have yet to find a solution that meets my needs, which include: Data is wholly contained within XML nodes, in flat text files. There is a "native" - or at least unobtrusive - method with which to perform Create/Read/Update/Delete (CRUD) operations on the "schema". I would consider access via HTTP, XHR, JavaScript, PHP, BASH, or PERL to be unobtrusive, dependent on the complexity of the set of dependencies. Server-side file-system reads and writes. A client-side interface element, accessible in any browser without a plug-in. Some extra, preferred (but optional) requirements include: Respond to simple SQL, or similar-syntax queries. Serve the data on a bare-bones HTTPS server, with no "extra stuff", either via XMLHttpRequest, HTTP proper, or JSON. A few thoughts: What I'm looking for may be possible via some Java server implementations, but for the sake of this question, please do not suggest that - unless it meets ALL the requirements. Java, especially on the client side, is not really an option, nor is it appealing from a development viewpoint. I know walking the filesystem is a stretch, and I've heard it's possible with XPath or XSLT, but as far as I know, that's not ready for primetime, nor even yet a recommendation. However the ability to recursively traverse the filesystem is needed for such a system to be of real use. At this point, I have basically implemented what I described via, of all things, CGI and Bash, but there has to be an easier way. Thoughts?

    Read the article

  • Long 'Wait' Time for three php/CSS files. Is something blocking them?

    - by William Pitcher
    I have been speed optimizing a Wordpress site to little effect. There are three CSS-related PHP files from the Wordpress theme that are delaying page loads on the site. One of the three files is basically one line of custom CSS from the custom CSS feature in the theme. You can see what I am talking about with this Pingdom speed test: the yellow is 'Wait'. There are no slow items in the cut-off portion of the image. The full results are here: Pingdom Results Page. 1. Any thoughts on what might be causing this? I understand that I have blocking CSS or JS files, but I don't see anything that would be causing that long of a wait. When I ran the P3 Plugin Profiler, Wordpress and all plugins appeared fine -- it is the theme that is taking all the time. GTmetrix recommends avoiding dynamic queries. I assume all the ver=3.61 references are to the version of Wordpress (which I am using). I noticed that my Wordpress sites using other themes don't make this query (at least not over and over). 2. Is this typical coding practice? 3. How much negative impact do these query strings have -- a little or a lot? I tried searching for similar questions here; please excuse me if I missed something. Sometimes, I know just enough to be dangerous.

    Read the article

  • What is a Relational Database Management System (RDBMS)?

    A Relational Database Management System (RDBMS) can also be called a traditional database: it uses Structured Query Language (SQL) to provide access to stored data while ensuring the integrity of that data. The data is stored in a collection of tables defined by relationships between data items, and data is also permitted to be joined in new relationships. Traditional databases primarily process data through transactions, an approach called transaction processing. Transaction processing is the methodology of grouping related business operations based on predefined business events. An example of this can be seen when a person attempts to purchase an item from an online e-tailer. The business must execute specific operations for the related business event; in this case, it must store the following information: customer info, order info, order item info, customer payment data, payment results, and current order status.

    Example: pseudo-SQL operations needed for processing an online e-tailer sale.

        Insert Customer into Customers
        Insert New Order into Orders
        Insert Each New Order Item into OrderItems
        Insert Customer Payment Info into PaymentInfo
        Insert Payment Processing Result into PaymentDetails
        Update Customer for Current Order Status

    Common Relational Database Management Systems: Microsoft SQL Server, Microsoft Access, Oracle, MySQL, DB2. It is important to note that no current RDBMS has fully implemented all of the relational principles. Common RDBMS traits: volatile data, support for transaction processing, optimized for updates and simple queries.

    Read the article

  • 301 redirect from HTTP to HTTPS - how to be sure Google is fetching the correct information?

    - by user33692
    I'm hoping somebody might be able to provide a bit of advice on an issue I am having. I have one site where we implemented a 301 redirect on the homepage from HTTP to HTTPS. We have links on the homepage to other parts of the site that are not under SSL (in fact there is only one other page under SSL). When I go to our Webmaster Tools account I notice that we are not being provided with any webmaster information (e.g., search queries, backlinks, etc...) related to our homepage under SSL. I performed a Fetch as Google on the homepage and the information it returned is: HTTP/1.1 301 Moved Permanently Date: Fri, 08 Nov 2013 17:26:24 GMT Server: Apache/2.2.16 (Debian) Location: https://mysite.com/ Vary: Accept-Encoding Content-Encoding: gzip Content-Length: 242 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: text/html; charset=iso-8859-1 <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>301 Moved Permanently</title> </head><body> <h1>Moved Permanently</h1> <p>The document has moved <a href="https://mysite.com/">here</a>.</p> <hr> <address>Apache/2.2.16 (Debian) Server at mysite.com</address> </body></html> I am worried by the fact that Google fetch is not getting the correct Title tags and Meta information from our homepage and that this is hurting our search results. Additionally, I am worried that we need to do something specific with the sitemap to ensure that Google is correctly indexing all our pages and being able to flow from the HTTPS to the HTTP without issues. Does anybody have any advice on how we can correctly set this up or be sure that Google is fetching the correct information?

    Read the article

  • An online version of ClearTrace

    - by Bill Graziano
    When I visit clients for the first time and conduct a performance review I introduce them to ClearTrace. It’s still the best way I know to identify exactly which queries are consuming the most resources.  The downside is that you need to download it and create a database to store the results.  I finally decided it would be easier if I could just upload a trace immediately. You can find the online version of ClearTrace at TraceTune.com.  It provides a simple way to upload a trace file and see exactly which stored procedures or SQL statements consume the most CPU and disk.   This is still a work in progress as I try to determine exactly which features from ClearTrace are important.  I’ve also limited the file upload to 10MB in this beta release.  That might not sound like much but I get over 20,000 events using this stored procedure to generate the trace. If you’re looking for something to do on a Friday, I’d suggest a little performance tuning.  Generating 10MB of trace data doesn’t take long at all and in a short time you’ll see exactly which SQL statements you need to tune first.

    Read the article

  • Can too many 301 redirects cause a DNS error?

    - by Graham
    For a site http://imageocd.com that I just set up I initially spelled the category "automobiles" as "autimobiles"... I know it's ridiculous. I then set up over 10,000 pages behind that category, e.g. http://imageocd.com/automobiles/hillman-minx-cabrio-pictures-and-wallpapers. So, I set up over 10,000 301 URL redirects to change the spelling of automobiles. I just checked my Google Webmasters report and got an error saying: http://www.imageocd.com/: Googlebot can't access your site (Sep 7, 2012): Over the last 24 hours, Googlebot encountered 2 errors while attempting to retrieve DNS information for your site. The overall error rate for DNS queries for your site is 66.7%. Could the overabundance of 301 redirects be causing this? I host 13 sites on this dedicated server and all sites are running fine. I also contacted GoDaddy and they said the server is running fine. Any ideas on what might be going on? Also, I have "canonical" set up for every URL. Could this be part of the error? Thanks.

    Read the article

  • Call DB Stored Procedure using @NamedStoredProcedureQuery Injection

    - by anwilson
    Oracle Database stored procedures can be called from the EJB business layer to perform complex, DB-specific operations. This approach avoids the overhead of frequent network hits, which could impact the end-user result. A DB stored procedure can be invoked from EJB session bean business logic using the org.eclipse.persistence.queries.StoredProcedureCall API, but that approach requires more coding to handle the session and the arguments of the stored procedure, thereby increasing the maintenance effort. EJB 3.0 introduces @NamedStoredProcedureQuery injection to call a database stored procedure as a named query. This blog will take you through the steps to call an Oracle Database stored procedure using @NamedStoredProcedureQuery. The EMP_SAL_INCREMENT procedure available in the HR schema will be used in this sample.

    Create an Entity from the EMPLOYEES table. Add @NamedStoredProcedureQuery above @NamedQueries in Employees.java with the definition given below:

        @NamedStoredProcedureQuery(name="Employees.increaseEmpSal",
            procedureName = "EMP_SAL_INCREMENT",
            resultClass=void.class,
            resultSetMapping = "",
            returnsResultSet = false,
            parameters = {
                @StoredProcedureParameter(name = "EMP_ID", queryParameter = "EMPID"),
                @StoredProcedureParameter(name = "SAL_INCR", queryParameter = "SALINCR")}
        )

    Observe how the stored procedure's arguments are handled easily in @NamedStoredProcedureQuery using @StoredProcedureParameter. Expose the entity bean by creating a session facade. A business method needs to be added to the session bean to access the stored procedure exposed as a named query.

        public void salaryRaise(Long empId, Long salIncrease) throws Exception {
            try {
                Query query = em.createNamedQuery("Employees.increaseEmpSal");
                query.setParameter("EMPID", empId);
                query.setParameter("SALINCR", salIncrease);
                query.executeUpdate();
            } catch(Exception ex) {
                throw ex;
            }
        }

    Expose the business method through the session bean remote interface.

        void salaryRaise(Long empId, Long salIncrease) throws Exception;

    A session bean client is required to invoke the method exposed through the remote interface. Call the exposed method in the session bean client's main method.

        final Context context = getInitialContext();
        SessionEJB sessionEJB = (SessionEJB)context.lookup("Your-JNDI-lookup");
        sessionEJB.salaryRaise(new Long(200), new Long(1000));

    Deploy the session bean and run the session bean client. The salary of the employee with Id 200 will be increased by 1000.

    Read the article

  • What is a good way to refactor a large, terribly written code base by myself? [closed]

    - by AgentKC
    Possible Duplicate: Techniques to re-factor garbage and maintain sanity? I have a fairly large PHP code base that I have been writing for the past 3 years. The problem is, I wrote this code when I was a terrible programmer and now it's tens of thousands of lines of conditionals and random MySQL queries everywhere. As you can imagine, there are a ton of bugs and they are extremely hard to find and fix. So I would like a good method to refactor this code so that it is much more manageable. The source code is quite bad; I did not even use classes or functions when I originally wrote it. At this point, I am considering rewriting the whole thing. I am the only developer and my time is pretty limited. I would like to get this done as quickly as possible, so I can get back to writing new features. Since rewriting the code would take a long time, I am looking for some methods that I can use to clean up the code as quickly as possible without leaving more bad architecture that will come back to haunt me later. So this is the basic question: What is a good way for a single developer to take a fairly large code base that has no architecture and refactor it into something with reasonable architecture that is not a nightmare to maintain and expand?

    Read the article

  • Information I need to know as a Java Developer [on hold]

    - by Woy
    I'm a Java developer trying to get more knowledge to become a better programmer. I've listed a number of technologies to learn. Instead of what I've listed, what technologies would you suggest a Junior Java Developer learn as well? I realize there are a lot of things to study.

    Java: how a garbage collector works; resource management; network programming; TCP/IP, HTTP; transactions; consistency; interfaces, classes, collections, hash codes, algorithms, computational complexity; concurrent programming: synchronizing, semaphores; stream management; mutability and thread-safety; bytecode manipulation, reflection, Aspect-Oriented Programming as a base to understand frameworks such as Spring, etc.

    Web stack: servlets, filters, socket programming.

    Libraries: JDK, GWT, Apache Commons, Joda-Time. Dependency injection: Spring, Nano.

    Tools: IDE (very good knowledge), debugger, profiler, web analyzers (Wireshark, Firebug), unit testing.

    SQL/Databases. Basics: SELECTing columns from a table; Aggregates Part 1: COUNT, SUM, MAX/MIN; Aggregates Part 2: DISTINCT, GROUP BY, HAVING. Intermediate: JOINs, ANSI-89 and ANSI-92 syntax; UNION vs UNION ALL; NULL handling: COALESCE and native NULL handling; Subqueries: IN, EXISTS, and inline views; correlated subqueries; WITH syntax: subquery factoring/CTE; Views. Advanced topics: functions, stored procedures, packages; pivoting data: CASE and PIVOT syntax; hierarchical queries; cursors: implicit and explicit; triggers; dynamic SQL; materialized views; query optimization: indexes, explain plans, profiling; data modelling: normal forms 1 through 3, primary and foreign keys, table constraints, link/corollary tables; full-text searching; XML; isolation levels; entity relationship diagrams (ERDs), logical and physical; transactions: COMMIT, ROLLBACK, error handling.

    Read the article

  • Tuning WebServer Response -

    - by Vedran Wex Maricevic
    I have this same question on Stack Overflow and I was advised to ask it here, hoping for more information. Here is the question: I am in a rather unfavorable situation. I have an AspDotNetStorefront e-commerce application and a search add-on called VibeTrib. I don't have source code for either of those. The store that runs on StoreFront and VibeTrib has close to 250k products. Also we have lots of filters. I spoke to VibeTrib reps, and they want extra money so they could optimize the queries that they use. The money they require is not a big deal, but the problem is I don't trust them anymore. What we got is much different than what is being advertised. To cut the long story short: I am running the store on Amazon AWS now, and regardless of what DB (MsSQL 2012) server I set up (I tried 32GB RAM monster instances) it is slow. Ajax search uses full-text search and it displays search keywords relatively fast, but once the search is performed (to display all results) it is still slow! Is there something that I could do to accelerate the speed on my own end? I do have full control over the EC2 instance (Windows Server 2012 and IIS 8). Can I set IIS to step in for the search and cache some of it? I was hoping to cache at least some of the most common words. My best bet is IIS 8 :) Is there any help in my case? Thanks

    Read the article

  • Site experiencing low traffic volume between 8AM and 4PM BST

    - by BizNuge
    There may be no definitive answer to this question but I thought peer review of the problem might stimulate some ideas on the topic. We have a boutique sales site that is experiencing low volumes of traffic (both UK and international) between 8AM and 4PM BST. This seems sort of strange since our target audience for the site is UK based, and this would seem to be when people are awake and online. We are in contact with another boutique site in the same sector who don't experience this issue, so it seems kinda strange. Later on in the day we are getting traffic from the UK, as well as a fair amount of international traffic, so I'm at a loss to figure this one out. The site is fairly well optimised, including: sitemap.xml; proper caching policies across the board; Google Merchant; Dublin Core microdata; HTML5; pretty URLs; meta and content are reviewed as an ongoing concern; we have decent sitelinks for direct queries through Google on the site name; a decent amount of inbound links; FB, Twitter, Google +1; Google Maps listing [verified]. The site has been selling for ~4 months and is getting ~250 users per day. So I'm not entirely sure how to explain the mid-day dip in our figures.... Any ideas at all would be useful. Cheers all!

    Read the article

  • Massive 404 attack with non existent URLs. How to prevent this?

    - by tattvamasi
    The problem is a whole load of 404 errors, as reported by Google Webmaster Tools, with pages and queries that have never been there. One of them is viewtopic.php, and I've also noticed a scary number of attempts to check if the site is a WordPress site (wp_admin) and for the cPanel login. I block TRACE already, and the server is equipped with some defense against scanning/hacking. However, this doesn't seem to stop. The referrer is, according to Google Webmaster, totally.me. I have looked for a solution to stop this, because it isn't certainly good for the poor real actual users, let alone the SEO concerns. I am using the Perishable Press mini black list (found here), a standard referrer blocker (for porn, herbal, casino sites), and even some software to protect the site (XSS blocking, SQL injection, etc). The server is using other measures as well, so one would assume that the site is safe (hopefully), but it isn't ending. Does anybody else have the same problem, or am I the only one seeing this? Is it what I think, i.e., some sort of attack? Is there a way to fix it, or better, prevent this useless resource waste? EDIT I've never used the question to thank for the answers, and hope this can be done. Thank you all for your insightful replies, which helped me to find my way out of this. I have followed everyone's suggestions and implemented the following: a honeypot a script that listens to suspect urls in the 404 page and sends me an email with user agent/ip, while returning a standard 404 header a script that rewards legitimate users, in the same 404 custom page, in case they end up clicking on one of those urls. In less than 24 hours I have been able to isolate some suspect IPs, all listed in Spamhaus. All the IPs logged so far belong to spam VPS hosting companies. Thank you all again, I would have accepted all answers if I could.
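
    For the "404 script" described in the edit, here is a rough sketch of the idea, written as ASP.NET Core middleware purely for illustration (the original site's stack isn't stated, and the probe list and logging destination are assumptions): let the pipeline produce its normal 404, and only record the client details when the path matches a known probe pattern.

        using System;
        using System.Threading.Tasks;
        using Microsoft.AspNetCore.Http;

        // Hypothetical sketch: after the pipeline runs, if the response is a 404 and the
        // path matches a known probe pattern, record the client details for later
        // blocking. The visitor still just receives a plain 404.
        public class NotFoundProbeLogger
        {
            private static readonly string[] ProbePaths =
                { "/viewtopic.php", "/wp-admin", "/wp-login.php", "/cpanel" };
            private readonly RequestDelegate _next;

            public NotFoundProbeLogger(RequestDelegate next) { _next = next; }

            public async Task InvokeAsync(HttpContext context)
            {
                await _next(context);
                if (context.Response.StatusCode != 404) return;

                var path = context.Request.Path.Value ?? "";
                foreach (var probe in ProbePaths)
                {
                    if (path.StartsWith(probe, StringComparison.OrdinalIgnoreCase))
                    {
                        var ip = context.Connection.RemoteIpAddress?.ToString();
                        var ua = context.Request.Headers["User-Agent"].ToString();
                        // Replace with an email or append-to-log call as preferred.
                        Console.WriteLine($"404 probe: {path} from {ip} ({ua})");
                        break;
                    }
                }
            }
        }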

    Read the article

  • what's a good approach to working with multiple databases?

    - by Riz
    I'm working on a project that has its own database call it InternalDb, but also it queries two other databases, call them ExternalDb1 and ExternalDb2. Both ExternalDb1 and ExternalDb2 are actually required by a few other projects. I'm wondering what the best approach for dealing with this is? Currently, I've just created a project for each of these external databases and then generated Edmx and entities using the entity-framework approach. My thought was that I could then include these projects in any of my solutions that require access to these databases. Also, I don't have any separate business layers. I just have a solution like below: Project.Domain ExternalDb1Project.Domain ExternalDb2Project.Domain Project.Web So my Domain projects contain the data access as well as the POCOs generated by Entity Framework and any business logic. But I'm not sure if this is a good approach. For example if I want to do Validation in my Project.Domain on the entities in the InternalDb, it's fine. But if I want to do Validation for entities from either of the ExternalDbs, then I wonder where it should go? To be more specific, I retrieve Employees from ExternalDb1Project.Domain. However, I want to make sure they are Active. Where should this Validation go? How to architect a project like this at a high level? Also, I want to make sure that I use IoC for my data contexts so I can create Fakes when writing tests. I wonder where the interfaces for these various data contexts would reside?
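
    One common answer is to leave the generated ExternalDb entities as plain data and put rules like "only Active employees" in a small service inside your own domain project, behind an interface you can fake in tests. A minimal sketch with hypothetical names (Employee, IExternalDb1Context, EmployeeService), not the poster's actual classes:

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical POCO as generated from ExternalDb1.
        public class Employee
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public bool IsActive { get; set; }
        }

        // Abstraction over the external context, so the domain project can be tested with fakes.
        public interface IExternalDb1Context
        {
            IQueryable<Employee> Employees { get; }
        }

        // Validation/business rules for external data live in your own domain project,
        // not inside the generated ExternalDb1Project.Domain assembly.
        public class EmployeeService
        {
            private readonly IExternalDb1Context _externalDb;
            public EmployeeService(IExternalDb1Context externalDb) { _externalDb = externalDb; }

            public IReadOnlyList<Employee> GetActiveEmployees()
            {
                return _externalDb.Employees.Where(e => e.IsActive).ToList();
            }
        }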

    Read the article

  • How can I get non-programmer colleagues on board with bespoke software rather than Dynamics CRM + Sharepoint?

    - by Bendos
    I am working with a company which designs and builds one-off machines. They have been 'dabbling' with hosted Dynamics CRM and Sharepoint (on different servers!) in an attempt to centralise their data and help colleagues collaborate more effectively across projects. They haven't used either system to its potential. Now we are looking at the engineering department, who already use a form of version control software for the various CAD files (Autodesk Vault); however, it is becoming increasingly necessary to implement a more generic file version control system as they use many more files than can be managed in Vault (sometimes just photos or scans of paper documents), hence they were looking at using Sharepoint. However... as the 'programmer' of the bunch, I can see several scenarios which don't seem to fit well with the Dynamics + Sharepoint approach: simple reports based on cross-table queries, exporting certain metrics as a spreadsheet, defining project hierarchies and many-to-many relationships. As such, I have been pushing for an in-house developed 'ECM' / 'ERP' software package (perhaps in .NET or PHP). Some colleagues seem to attach a greater value to the MS software (perhaps because it has a logo!) but don't see that it's just a framework, not a solution. Can anyone provide a good example of when custom software would actually be better than using Dynamics + Sharepoint, and how do I relate that to non-technical staff?
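
    One concrete way to make the case is to show how little code the scenarios above actually take in a bespoke app. A hedged sketch with hypothetical Project/MachineOrder entities: a cross-table report joined in a few lines of LINQ and exported straight to a spreadsheet-friendly CSV, which covers two of the scenarios named above.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        // Hypothetical entities; in a bespoke app a cross-table report is a few lines of LINQ.
        public class Project      { public int Id; public string Name; public int Year; }
        public class MachineOrder { public int ProjectId; public decimal Cost; }

        public static class Reports
        {
            // Total machine cost per project, written out as a spreadsheet-friendly CSV.
            public static void ExportCostPerProject(IEnumerable<Project> projects,
                                                    IEnumerable<MachineOrder> orders,
                                                    string csvPath)
            {
                var rows = from p in projects
                           join o in orders on p.Id equals o.ProjectId into po
                           select new { p.Name, p.Year, Total = po.Sum(x => x.Cost) };

                var lines = new List<string> { "Project,Year,TotalCost" };
                lines.AddRange(rows.Select(r => $"{r.Name},{r.Year},{r.Total}"));
                File.WriteAllLines(csvPath, lines);
            }
        }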

    Read the article

  • My sound is not working, so I'm going to reinstall FYI [closed]

    - by fer
    I've had trouble getting the sound to work in Ubuntu 12.04. I'm running an Acer Aspire 5739g laptop, using a clean install. This wasn't a problem when Ubuntu was first installed; rather, it was when I ran the updates that it stopped working. I already tried the suggestions on the Ubuntu sites and other similar queries, and they haven't fixed it. Something in the updates is making my sound not work. Edit: It turns out that this might be a bug (the sound issue, first paragraph). After reinstall, it happened again (it's not caused by updates at all, or any software, because I fixed it now w/o reinstall). It seems like I replicated it as follows: I changed auto-hide in the behavior tab of Appearance settings by turning it on, and setting the sensitivity to below the recommended setting. Then instead of restarting, I just logged out and back in. The sound stopped working again. I set the behavior settings to default, restarted, and now it's back to normal. Not sure if it's due to only logging out (and not restarting) or because I set my sensitivity to a low setting. Not sure if this helps anyone, but thought I'd mention it.

    Read the article

  • Best CMS for review-type sites

    - by Pru
    Is there an ideal CMS for making a review site? By review site, I mean like a restaurant review site where you have each entry belonging to different major categories like Cuisine and City. Then users can browse and filter by each or by combination (Chinese Food in Los Angeles, with suggestions of other Chinese restaurants in LA, etc). Furthermore, I'd want it to support other fields like price, parking, kid-friendliness, etc. And to have users be able to filter by those criteria. I've been told that with a combination of custom taxonomies, plug-ins and many clever little queries, that Wordpress 3.x can handle this. But I'm having a heck of a time with it getting into the nitty gritty, and that's where I find the community support is lacking. The sort of stuff you'd think would work in WP, like making one parent category for Cuisine and one for City, don't really work once you get further in and start trying to pull it all together. Then you find these blog posts where people say, "This example shows that one could create a huge movie review site using custom taxonomies..." but when you go and try it you hit all sorts of challenges and oddities that point a big long finger at Wordpress being in fact a blogging platform. The best I came up with was one category for the cuisine and one tag for the city, then I created a couple of custom tag-like taxonomies for the other features. It's quite a mess to try to figure out how to assemble all of that into a natural, intuitive site. I expect a few versions down the road WP will be able to do these sorts of sites out of the box. So I thought I'd take a step back before I run back into the Wordpress fray and find out if maybe there is another platform better suited to this sort of relational content site. Directory scripts in some ways offer many of the features I'm looking for, but I need something more flexible and, hopefully, interactive (comments, reviews). I'm especially looking for feedback from people who've crafted sites like this. Thanks!

    Read the article

  • Solutions for "Maintenance Mode"

    - by Ka Lyse
    Given a web application running across 10+ servers, what techniques have you put in place for doing things like altering the state of your website so that you can implement certain features? For instance, you might want to: restrict logins/disable certain features; turn the site to "read only"; turn the site to a single "Maintenance Mode" page. Doing any of the above is pretty trivial. You can throw a particular "flag" in an .ini file, or add a row/value to a site_options table in your database, and just read that value and do the appropriate thing. But these solutions have their problems. Disadvantages/problems: for instance, if you use a file for your application, and you want to switch off a certain feature temporarily, then you need to update this file on all servers. So then you might want to look at running something like ZooKeeper, but you are probably overcomplicating things. So then you might decide that you want to store these "feature" flags in a database. But then you are obviously adding unnecessary queries to each page request. So you think to yourself that you will throw memcached into the mix and just cache the query. Then you just retrieve all of your "features" from memcached and add a ~2ms latency to your application on every page. So to get around this, you decide to use a two-tier cache system, whereby you use an in-memory cache on each machine (like APC, Redis, etc.). This would work, but then it gets complicated, because you would have to set the key/hash life to perhaps 60 seconds, so that when you purge/invalidate the memcached object storing your "features" result, your on-machine cache is prompt enough to get the new states. What suggestions might you have? Keeping in mind that optimization/efficiency is the priority here.
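
    For comparison, here is one minimal shape of that two-tier idea in C# (hypothetical names, and the shared-store read injected as a delegate): each web server keeps a local snapshot of the flags and refreshes it from the shared store at most once per interval, so a read is just a dictionary lookup and a flag change takes at most the refresh interval to reach every server.

        using System;
        using System.Collections.Generic;

        // Sketch: local snapshot of feature flags, refreshed from a shared store
        // (DB or memcached) at most once per interval.
        public class FeatureFlags
        {
            private static readonly TimeSpan RefreshInterval = TimeSpan.FromSeconds(60);
            private readonly Func<Dictionary<string, bool>> _loadFromSharedStore; // injected shared-store read
            private readonly object _lock = new object();
            private Dictionary<string, bool> _snapshot = new Dictionary<string, bool>();
            private DateTime _loadedAt = DateTime.MinValue;

            public FeatureFlags(Func<Dictionary<string, bool>> loadFromSharedStore)
            {
                _loadFromSharedStore = loadFromSharedStore;
            }

            public bool IsEnabled(string flag)
            {
                if (DateTime.UtcNow - _loadedAt > RefreshInterval)
                {
                    lock (_lock)
                    {
                        if (DateTime.UtcNow - _loadedAt > RefreshInterval) // re-check inside the lock
                        {
                            _snapshot = _loadFromSharedStore();
                            _loadedAt = DateTime.UtcNow;
                        }
                    }
                }
                return _snapshot.TryGetValue(flag, out var enabled) && enabled;
            }
        }

        // Usage sketch: if (!flags.IsEnabled("logins")) { /* show the maintenance page */ }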

    Read the article

  • What would be the fastest way of storing or calculating legal move sets for chess pieces?

    - by ioSamurai
    For example, if a move is attempted I could just loop through a list of legal moves and compare the x, y, but I have to write logic to calculate those at least every time the piece is moved. Or, I can store the moves in a [,] array and then check: if the value at x and y is not 0, then it is a legal move. I would save the data like [0][1][0][0] etc. for each row, where 1 is a legal move, but I still have to populate it. I wonder what the fastest way is to store and read a legal move on a piece object for calculation. I could use matrix math as well, I suppose, but I don't know. Basically I want to persist the rules for a given piece so I can assign a piece object a little template of all its possible moves considering it is starting from its current location, which should be just one table of data. But is it faster to loop, write LINQ queries, or store arrays and matrices?

        public class Move { public int x; public int y; }

        public class ChessPiece : Move
        {
            private List<Move> possibleMoves { get; set; } = new List<Move>();

            public bool LegalMove(int x, int y)
            {
                foreach (var p in possibleMoves)
                {
                    if (p.x == x && p.y == y)
                    {
                        return true;
                    }
                }
                return false; // no matching move, so it is not legal
            }
        }

    Anyone know?
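
    If the per-piece move template only needs a membership test, two simple representations are worth measuring against each other: a HashSet of coordinate pairs, and the bool[8,8] board the question already mentions. A small sketch with hypothetical class names; which one is actually faster is something to profile rather than assume.

        using System.Collections.Generic;

        // Precompute the legal destination squares once per move, then test
        // membership in O(1) instead of scanning a list.
        public class PieceMoves
        {
            private readonly HashSet<(int X, int Y)> _legal = new HashSet<(int X, int Y)>();

            public void Add(int x, int y) => _legal.Add((x, y));
            public bool IsLegal(int x, int y) => _legal.Contains((x, y));
        }

        // The [,] alternative from the question: one flag per square of an 8x8 board.
        public class PieceMoveBoard
        {
            private readonly bool[,] _legal = new bool[8, 8];

            public void Add(int x, int y) => _legal[x, y] = true;
            public bool IsLegal(int x, int y) => _legal[x, y];
        }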

    Read the article

  • Simplifying data search using .NET

    - by Peter
    The asp.net site has an example of using LINQ to create a search feature on a music album site using MVC. The code looks like this:

        public ActionResult Index(string movieGenre, string searchString)
        {
            var GenreLst = new List<string>();
            var GenreQry = from d in db.Movies orderby d.Genre select d.Genre;
            GenreLst.AddRange(GenreQry.Distinct());
            ViewBag.movieGenre = new SelectList(GenreLst);

            var movies = from m in db.Movies select m;
            if (!String.IsNullOrEmpty(searchString))
            {
                movies = movies.Where(s => s.Title.Contains(searchString));
            }
            if (!string.IsNullOrEmpty(movieGenre))
            {
                movies = movies.Where(x => x.Genre == movieGenre);
            }
            return View(movies);
        }

    I have seen similar examples in other tutorials and I have tried them in a real-world business app that I develop/maintain. In practice this pattern doesn't seem to scale well, because as the search criteria expand I keep adding more and more conditions, which looks and feels unpleasant/repetitive. How can I refactor this pattern? One idea I have is to create a column in every table that is "searchable", which could be a computed column that concatenates all the data from the different columns (SQL Server 2008). So instead of having movie genre and title it would be something like:

        if (!String.IsNullOrEmpty(searchString))
        {
            movies = movies.Where(s => s.SearchColumn.Contains(searchString));
        }

    What are the performance/design/architecture implications of doing this? I have also tried using procedures that use dynamic queries, but then I have just moved the ugliness to the database. E.g.

        CREATE PROCEDURE [dbo].[search_music]
            @title as varchar(50),
            @genre as varchar(50)
        AS
            -- set the variables to null if they are empty
            IF @title = '' SET @title = null
            IF @genre = '' SET @genre = null

            SELECT m.*
            FROM view_Music as m
            WHERE (title = @title OR @title IS NULL)
              AND (genre LIKE '%' + @genre + '%' OR @genre IS NULL)
            ORDER BY Id desc
            OPTION (RECOMPILE)

    Any suggestions? Tips?
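
    One way to cut the repetition without resorting to a concatenated search column or dynamic SQL is a small, reusable "conditional Where" extension, so each optional criterion becomes one chained call. A sketch (the WhereIf name and the usage below are illustrative, not part of the original tutorial):

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        public static class QueryableExtensions
        {
            // Applies the filter only when the condition holds, so each optional
            // search criterion is one chained call instead of another if-block.
            public static IQueryable<T> WhereIf<T>(this IQueryable<T> source, bool condition,
                                                   Expression<Func<T, bool>> predicate)
            {
                return condition ? source.Where(predicate) : source;
            }
        }

        // Usage with the controller from the example above:
        // var movies = db.Movies
        //     .WhereIf(!String.IsNullOrEmpty(searchString), m => m.Title.Contains(searchString))
        //     .WhereIf(!String.IsNullOrEmpty(movieGenre), m => m.Genre == movieGenre);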

    Read the article

  • Google Rolls Out Secured Search. It’s Slightly Different From Regular Search

    - by Gopinath
    Google rolled out a secured version of its search engine at https://google.com (did you notice https instead of http?). This search engine lets everyone use Google search in a secured way. How is it secured? When you use https://google.com, the data exchanged between your browser and Google servers is encrypted to make sure that no one can sniff it. Is my search history secured from Google? No. The search queries you submit to Google are stored on Google servers; there is no change to Google's search-history recording. Any differences between regular search and secured search results? Yes, secured search is slightly different from regular search. When you are accessing Google secured search, image search options will not be available on the left sidebar. The site may also respond more slowly than the regular search site, as there is overhead in establishing a secure connection between your browser and the server.

    Read the article

  • Aggressive Auto-Updating?

    - by MattiasK
    What do you guys think is best practice regarding auto-updating? Google Chrome, for instance, seems to auto-update itself as soon as it gets a chance, without asking, and I'm fine with it. I think most "normal" users benefit from updates being a transparent process. Then again, some more technical users might be miffed if you update their app without permission. As I see it there are a couple of options: 1) have a checkbox when installing that says "allow automatic updates"; 2) just have a preference somewhere that allows you to "disable automatic updates" so that you have to "check for updates manually". I'm leaning towards 2) because 1) feels like it might alienate non-technical users and I'd rather avoid installation queries if possible. Also, I'm thinking about making it easy to downgrade if an upgrade (heaven forbid) causes trouble; what are your thoughts? Another question: even if updates are applied automatically, perhaps they should be announced. If there are new features, for example, users might otherwise not realize they exist and never use them. One thing that kinda scares me though is the security implications: someone could theoretically hack my server and push out spyware/zombieware to all my customers. It seems that using digital signatures to prevent man-in-the-middle attacks is the least you could do; otherwise you might be hooked up to a network that spoofs the address of your update server.
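
    On the signature point, here is a minimal sketch in C# of what "verify before installing" can look like (hypothetical file paths; assumes .NET 5 or later for ImportFromPem): the publisher signs the package with a private key, the client ships only the public key, and anything that fails verification is discarded even if it came from the expected URL.

        using System.IO;
        using System.Security.Cryptography;

        public static class UpdateVerifier
        {
            // Returns true only if the downloaded package was signed with our private key.
            public static bool IsGenuine(string packagePath, string signaturePath, string publicKeyPem)
            {
                byte[] package = File.ReadAllBytes(packagePath);
                byte[] signature = File.ReadAllBytes(signaturePath);

                using var rsa = RSA.Create();
                rsa.ImportFromPem(publicKeyPem);

                // A spoofed update server cannot produce a valid signature
                // without access to the private key.
                return rsa.VerifyData(package, signature,
                                      HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
            }
        }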

    Read the article

  • ADF Logging In Deployed Apps

    - by Duncan Mills
    Harking back to my series on using the ADF logger and the related ADF Insider video, I've had a couple of queries this week about using the logger from Enterprise Manager (EM). I've alluded in those previous materials to how EM can be used, but it's evident that folks need a little help. So in this article, I'll quickly look at how you can switch logging on from the EM console for an application and how you can view the output. Before we start I'm assuming that you have EM up and running; in my case I have a small test install of Fusion Middleware Patchset 5 with an ADF application deployed to a managed server.

    Step 1 - Select your Application. In the EM navigator select the app you're interested in. At this point you can actually bring up the context (right mouse click) menu to jump to the logging, but let's do it another way.

    Step 2 - Open the Application Deployment Menu. At the top of the screen, underneath the application name, you'll find a drop-down menu which will take you to the options to view log messages and configure logging.

    Step 3 - Set your Logging Levels. Just like the log configuration within JDeveloper, we can set up transient or permanent (not recommended!) loggers here. In this case I've filtered the class list down to just oracle.demo, and set the log level to config. You can now go away and do stuff in the app to generate log entries.

    Step 4 - View the Output. Again from the Application Deployment menu we can jump to the log viewer screen and, as I have here, start to filter down the logging output to the stuff you're interested in. In this case I've filtered by module name. You'll notice here that you can again look at related log messages. Importantly, you'll also see the name of the log file that holds this message, so if you'd rather analyse the log in more detail offline, through the ODL log analyser in JDeveloper, then you can see which log to download.

    Read the article
