Search Results


  • Searches (and general querying) with HBase and/or Cassandra (best practices?)

    - by alexeypro
    I have a User model object with quite a few fields (properties, if you wish) in it -- say "firstname", "lastname", "city" and "year-of-birth". Each user also gets a "unique id". I want to be able to search by them. How do I do that properly? How do I do that at all?

    My understanding (which will work for pretty much any key-value storage -- first goes the key, then the value): u:123456789 = serialized_json_object ("u" is a simple prefix for user keys, 123456789 is the "unique id"). Now, since I want to be able to search by firstname and lastname, I can save: f:Steve = u:384734807,u:2398248764,u:23276263 and f:Alex = u:12324355,u:121324334 -- so the key prefix "f" is for firstnames and "Steve" is the actual firstname, and under "f:Steve" we save as the value all user ids of users named Steve. That makes every search very, very easy. Querying by a few fields (properties) -- say by firstname (i.e. "f:Steve") and lastname (i.e. "l:Anything") -- is still easy: first get the list of user ids from "f:Steve", then the list from "l:Anything", intersect the user ids, and here you go.

    Problems (and there are quite a few): Saving, updating and deleting a user is a pain; it has to be an atomic and consistent operation. Also, if the size of a value is limited, then we are in (potential) trouble, and there is really no good answer here. Only zipping the list of user ids? Not too cool, though. What if we eventually want to add a new field to search by -- say "city"? We can certainly do it the same way ("c:Los Angeles" = ..., "c:Chicago" = ...), but if we didn't foresee all those "search choices" from the very beginning, then we will have to create some nightly job or something to go over all existing User records and update those "c:CITY" keys for them... quite a big job! There are also problems with locking: user "u:123" updates his name to "Alex", and user "u:456" updates his name to "Alex"; they both have to update "f:Alex" with their ids. That means we either get an overwriting problem, or one update has to wait for the other (and imagine if there are many of them!).

    What's the best way of doing this, keeping in mind that I want to search by many fields? P.S. Please, the question is about HBase/Cassandra/NoSQL/key-value storages. Please, please -- no advice to use MySQL and "read about" SELECTs and worry about scaling problems "later". There is a reason why I asked MY question exactly the way I did. :-)
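    A minimal Python sketch of the posting-list intersection described above (the in-memory dict standing in for the key-value store, and the ids, are invented for illustration):

        # Toy in-memory "key-value store" playing the role of HBase/Cassandra.
        index = {
            "f:Steve": {"u:384734807", "u:2398248764", "u:23276263"},
            "l:Anything": {"u:2398248764", "u:121324334"},
        }

        def search(**criteria):
            """Intersect the id lists stored under each prefix:value key."""
            posting_lists = [index.get("%s:%s" % (prefix, value), set())
                             for prefix, value in criteria.items()]
            return set.intersection(*posting_lists) if posting_lists else set()

        # Users whose firstname is Steve AND lastname is Anything:
        print(search(f="Steve", l="Anything"))  # {'u:2398248764'}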


  • How do I make an HTML text box show a hint when empty?

    - by Michael L Perry
    I want the search box on my web page to display the word "Search" in gray italics. When the box receives focus, it should look just like an empty text box. If there is already text in it, it should display the text normally (black, non-italics). This will help me avoid clutter by removing the label. BTW, this is an on-page AJAX search, so it has no button.


  • Returning unique values of a multi-dimensional array with CodeIgniter PHP

    - by Michael Bradley
    Hi - I'm developing a property rentals website. The search results page will contain a list of property results. It is my intention to let users refine the results, say by town, country, property type etc. So let's say, for example, the user searches 'France'. All of the relevant properties will be returned and displayed in a list. However, I also need to reuse this array to display only the unique town names from the search results array, e.g. Montpellier, Lyon, Rennes, Nice etc. The idea is that when the user clicks on 'Nice', only the 'Nice' properties are returned. I would also like to display how many properties are in each town. The closest example to what I want to achieve is: http://www.miaandmaggie.com/dog-collars-leashes.html Any ideas how I can use my search array to display the unique towns of the search? Many thanks! M
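    For what it's worth, the grouping step sketched in Python (the result rows are invented; the same idea maps directly onto PHP's array functions):

        from collections import Counter

        # Assumed shape of the search-results array.
        results = [
            {"town": "Nice", "sleeps": 4},
            {"town": "Lyon", "sleeps": 2},
            {"town": "Nice", "sleeps": 6},
        ]

        # Unique towns, each with its property count, e.g. "Nice (2)".
        for town, count in sorted(Counter(row["town"] for row in results).items()):
            print("%s (%d)" % (town, count))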


  • Working with mongodb from Java

    - by demas
    I have launched the mongodb server:

        [demas@arch.local.net][~]% mongod --dbpath /home/demas/temp/
        Mon Apr 19 09:44:18 Mongo DB : starting : pid = 4538 port = 27017 dbpath = /home/demas/temp/ master = 0 slave = 0 32-bit
        ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
        ** see http://blog.mongodb.org/post/137788967/32-bit-limitations for more
        Mon Apr 19 09:44:18 db version v1.4.0, pdfile version 4.5
        Mon Apr 19 09:44:18 git version: nogitversion
        Mon Apr 19 09:44:18 sys info: Linux arch.local.net 2.6.33-ARCH #1 SMP PREEMPT Mon Apr 5 05:57:38 UTC 2010 i686 BOOST_LIB_VERSION=1_41
        Mon Apr 19 09:44:18 waiting for connections on port 27017
        Mon Apr 19 09:44:18 web admin interface listening on port 28017

    I have created documents with the console client:

        [demas@arch.local.net][~]% mongo
        MongoDB shell version: 1.4.0
        url: test
        connecting to: test
        type "help" for help
        > db.some.find();
        { "_id" : ObjectId("4bcbef3c3be43e9b7e04ef3d"), "name" : "mongo" }
        { "_id" : ObjectId("4bcbef423be43e9b7e04ef3e"), "x" : 3 }

    Now I am trying to work with MongoDB from Java:

        import com.mongodb.*;
        import java.net.UnknownHostException;

        public class test1 {
            public static void main(String[] args) {
                System.out.println("Start");
                try {
                    Mongo m = new Mongo("localhost", 27017);
                    DB db = m.getDB("test");
                    DBCollection coll = db.getCollection("some");
                    coll.insert(makeDocument(10, "James", "male"));
                    System.out.println("Finish");
                } catch (UnknownHostException ex) {
                    ex.printStackTrace();
                } catch (MongoException ex) {
                    ex.printStackTrace();
                }
            }

            public static BasicDBObject makeDocument(int id, String name, String gender) {
                BasicDBObject doc = new BasicDBObject();
                doc.put("id", id);
                doc.put("name", name);
                doc.put("gender", gender);
                return doc;
            }
        }

    But execution stops on the coll.insert() line:

        [demas@arch.local.net][~/dev/study/java/mongodb]% javac test1.java
        [demas@arch.local.net][~/dev/study/java/mongodb]% java test1
        Start

    There are no messages from the mongodb server regarding an accepted connection. Why?
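    As a cross-check, the same insert can be attempted from Python with pymongo to see whether any client can reach the server (a sketch; the database and collection names are taken from the question):

        from pymongo import MongoClient

        # Connect to the same local mongod and insert the same document.
        client = MongoClient("localhost", 27017)
        coll = client["test"]["some"]
        coll.insert_one({"id": 10, "name": "James", "gender": "male"})
        print(coll.find_one({"name": "James"}))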


  • Git tutorial: Understanding git pull and branches (using a specific example repo)

    - by dreftymac
    Background: Suppose I have the following Git URLs (hosted on GitHub):

        http://github.com/mikl/drupal.git
        git://github.com/mikl/drupal.git (Git read-only)

    I am interested in having a local copy of this repository so I can practice working with branches in git and see how my local working tree can change depending on which branch I am working with. Questions:

    1. To get started, I set up a local directory and do git clone git://github.com/mikl/drupal.git ... Will this clone all of the branches? Or will it only clone master?
    2. The web front-end for GitHub gives me a "drop-down" menu that allows me to switch branches ... Does changing this drop-down actually change which branch I will be grabbing when I run git clone?
    3. If I want a new copy of this repository on my local machine, but I am interested in only two branches of this repository and I want to ignore all the rest, what command do I use to ensure I clone only those two branches and nothing else (assume one of the branches is master)?


  • What's New in Database Lifecycle Management in Enterprise Manager 12c Release 3

    - by HariSrinivasan
    Enterprise Manager 12c Release 3 includes improvements and enhancements across every area of the product. This blog provides an overview of the new and enhanced features in the Database Lifecycle Management area. I will dive deeper into specific features in subsequent posts.

    "What's New?" In this release, we focused on four things:

    1. Lifecycle management support for the new Database 12c Pluggable Databases
    2. Management of long-running processes, such as a security patch cycle (Change Activity Planner)
    3. Management of a large number of systems, by
       - leveraging new framework capabilities for lifecycle operations, such as the new advanced 'emcli' script option
       - refining features such as configuration search and compliance
    4. Minor improvements and quality fixes to existing features:
       - rollback support for single-instance databases
       - improved "offline" patching experience
       - faster collection of ORACLE_HOME configurations

    Lifecycle Management Support for the new Database 12c - Pluggable Databases

    Database 12c introduces Pluggable Databases (PDBs), the brand-new addition to help you achieve your consolidation goals. Pluggable databases offer unprecedented consolidation at the database level and native lifecycle verbs for creating, plugging and unplugging the databases on a container database (CDB). Enterprise Manager can supplement the capabilities of pluggable databases by offering workflows for migrating, provisioning and cloning them using the software library and the deployment procedures. For example, Enterprise Manager can migrate an existing database to a PDB, or clone a PDB by storing a versioned copy in the software library. One can also manage the planned downtime related to patching by migrating the PDBs to a new CDB. While pluggable databases offer these exciting features, they can also pose configuration management and compliance challenges if not managed properly. Enterprise Manager features like inventory management, topology associations and configuration search can mitigate the sprawl of PDBs and also lock them to predefined golden standards using configuration comparison and compliance rules. Learn More ...

    Management of Long-Running Datacenter Processes - Change Activity Planner (CAP)

    Currently, customers resort to cumbersome methods to create, execute, track and monitor change activities within their data center. Some customers use traditional tools such as spreadsheets, project planners and in-house custom-built solutions. Customers often hold weekly sync-up meetings across stakeholders to collect status and updates. Some change activities, for example the quarterly patch set update (PSU) patch rollouts, are not single tasks but processes with multiple tasks. Some of those tasks are performed within Enterprise Manager Cloud Control (for example, patching) and some are performed outside of Enterprise Manager Cloud Control. These tasks often run for a long period of time and involve multiple people or teams. Enterprise Manager Cloud Control supports core data center operations such as configuration management, compliance management and automation. Enterprise Manager Cloud Control release 12.1.0.3 leverages these capabilities and introduces the Change Activity Planner (CAP). CAP provides the ability to plan, execute, and track change activities in real time. It covers the typical datacenter activities that are spread over a long period of time, across multiple people and multiple targets (even target types).

    Here are some examples of change activity processes in a datacenter:

    - Patching large environments (PSU/CPU patching cycles)
    - Upgrading a large number of database environments
    - Rolling out compliance rules
    - Database consolidation to Exadata environments

    CAP provides user flows for compliance officers/managers (incl. lead administrators) and operators (DBAs and admins). Managers can create change activity plans for various projects and allocate the resources, targets, and groups affected. Upon activation of the plan, tasks are created and automatically assigned to individual administrators based on target ownership. Administrators (DBAs) can identify their tasks and understand the context, schedules, and priorities. They can complete tasks using Enterprise Manager Cloud Control automation features such as patch plans (or, in some cases, outside Enterprise Manager). Upon completion, compliance is evaluated for validation, and the status of the tasks and the plans is updated. Learn More about CAP ...

    Improved Configuration & Compliance Management of a Large Number of Systems

    Improved configuration comparison: Get to the configuration comparison results faster for simple ad-hoc comparisons. When performing a 1-to-1 comparison, Enterprise Manager will perform the comparison immediately and take the user directly to the results, without having to wait for a job to be submitted and executed.

    Flattened system comparisons reduce comparison setup time and complexity. In addition to the previously existing topological comparison, users now have an option to compare using a "flattened" methodology. Flattening means removing duplicate target instances within the systems and removing the hierarchy of member targets. The result is that differences are much easier to spot, particularly for specific use cases like comparing patch levels between complex systems like RAC and Fusion Apps.

    Improved configuration search & advanced EMCLI script option for mass automation: Enterprise Manager 12c introduces a new framework-level capability to script and stitch together multiple tasks using EMCLI. This powerful capability can be leveraged for lifecycle operations, especially when executing a task over a large number of targets. Specific usages of this include retrieving a qualified list of targets using configuration search and then using the result set for automation; another example would be executing a patching operation and then re-executing it on targets where it may have failed. This is complemented by other enhancements, such as better usability for designing reusable configuration searches. In EM 12c Release 3, a simplified UI makes building ad-hoc searches even easier. Searching for missing patches is a common use of configuration search; this required the use of advanced options, which are now clearly defined and easy to use.

    You can also perform a configuration search using the EMCLI. Users can find and execute configuration searches from the EMCLI, which can be extremely useful for building sophisticated automation scripts. For example, run the search named "Oracle Databases on Exadata", which finds all database targets running on top of Exadata, and further filter the results by options like name, host, etc.:

        emcli get_targets -config_search="Databases on Exadata" -target_name="exa%"

    Use this in powerful mass-automation operations with the new emcli script option, for example to solve the use case of finding all databases running on Exadata that house E-Biz, and patching them.

    Create a Python script with emcli functions and invoke it in the new EMCLI script option shell, directly:

        $<path to emcli>/emcli @myPSU_Patch.py

    Richer compliance content: there are now over 50 Oracle-provided compliance standards, including new standards for Pluggable Database, Fusion Applications, Oracle Identity Manager, Oracle VM and Internet Directory, plus 9 Oracle-provided real-time monitoring standards containing over 900 compliance rules across 500 facets. These new real-time compliance standards cover both Exadata compute nodes and Linux servers. The result is increased Oracle software coverage and faster time to compliance monitoring on Exadata.

    Enhancements to Patch Management

    Overhauled "offline" patching experience: the patch upload UI has been simplified to improve the offline patching experience; there is now a single-step process to get patches into the software library. Customers often maintain local repositories of patches, sometimes called software depots, where they host the patches downloaded from My Oracle Support. In the past, you had to move these patches to your desktop and then upload them to Enterprise Manager's software library through the Enterprise Manager Cloud Control user interface. You can now use the following EMCLI command to upload multiple patches directly from a remote location within the data center:

        $emcli upload_patches -location <Path to Patch directory> -from_host <HOSTNAME>

    The upload process filters all of the new patches, automatically selects the relevant metadata files from the location, and uploads the patches to the software library.

    Other improvements: Patch rollback for single-instance databases is a new option in the patch plan to roll back the patches added to the plan; upon execution, the procedure rolls back the patch and the SQL applied to the single-instance database. Improved and faster configuration collection of Oracle Home targets enables more reliable automation at higher-level functions like provisioning, patching or Database as a Service.

    Just to recap, here is a list of database lifecycle management features (items new or enhanced in Release 3 were highlighted in red in the original post):

    - Discovery, inventory tracking and reporting
    - Database provisioning, including:
      - migration to pluggable databases
      - plugging and unplugging of pluggable databases
      - gold-image-based cloning
      - scaling of RAC nodes
    - Schema and data change management
    - End-to-end patch management in online and offline modes, including:
      - patch advisories in online (connected with My Oracle Support) and offline mode
      - patch pre-deployment analysis, deployment and rollback (currently only for single-instance databases)
      - reporting
    - Upgrade planning and execution of the upgrade process
    - Configuration management
    - Compliance management with out-of-box content
    - Change Activity Planner for planning, designing and tracking long-running processes

    For more information on Enterprise Manager's database lifecycle management capabilities, visit http://www.oracle.com/technetwork/oem/lifecycle-mgmt/index.html


  • Does there exist an "idea checkout system" on the Internet?

    - by TimeSpace Traveller
    Greetings. I would like to ask the following question: is there anything on the Internet like an "idea checkout" system?

    Situation: I'm a software developer. When my current job started 2 years ago, my mentor at the time pointed me to the open source world. I have put only a little time into looking at some of the open source projects, let alone any contribution. However, it is my wish to start developing something outside of work. Well, except for one little problem: I don't know what to develop! It is not about technical knowledge; the problem is that I am not a creative person. I am very good at analytical thinking, as well as debugging. When told by my work partners to develop a solution, I can get it done without a problem. However, outside of work, I have no idea what to develop. When I look at the Internet, it seems that so many people are already developing so much interesting stuff, which makes me wonder what I could develop without reinventing something that already exists.

    That starts to make me wonder. On the Internet, is there anything like an "idea checkout" system or society? For example, some people would throw in a software idea, and the system would keep it as "inventory"; later, a potential software developer would "check out" the idea, just as people check out a book from the library. Then the developer would check the idea back in, with some kind of work-in-progress or developed software, thus starting an open-source project. I have just noticed that here at stackoverflow there is a "Project-Ideas" tag, so perhaps that can provide me some ideas on what to develop; still, what I wonder about is a system where people would provide ideas, and people would check out ideas to develop and implement into actual solutions. Does such a system or society exist anywhere on the Internet? Any input is welcome! Thank you very much.

    Update: Thank you to everyone who has answered my question. Certainly, "getting an idea" is part of my problem; as a software developer, however, I'm concerned with more than just "getting an idea". What concerns me more, as I have commented, is the existence of such an idea-exchanging ecosystem, capable of initiating open-source projects. I'll put an example here. Say person A has an idea for a music search program, but not one that searches by the attributes of the music (composer, singer, publisher, lyrics, etc.); instead, he wants a program (and a database) that searches for a piece of music by its melody. Very often, people only remember a piece of music by its melody, not even by its name (e.g. the music he wants was heard only once in a bookstore, but the melody is stuck in his head!). To find that piece, he would normally just have to search blindly and spend a long time doing so. A search by melody would let person A find the piece much more quickly. However, he does not want to work on it personally, not just because of the complexity (he is not a musician and/or programmer, and knows almost nothing about music systems in computers, search algorithms, etc.), but also because of legal issues (RIAA??), so he would just like to leave the idea in some place and let other people work on it. Now, a developer (person B) may be at the same stage as I am right now, wishing to find something to develop but not having an idea. With the idea-exchanging ecosystem, person B will search, somehow discover person A's music search idea, and feel interested enough to work on it. So he "checks out" the idea, starts working on it (at least a skeleton), and checks back in with his progress. An open-source project starts from here, fulfilling person A's wish and person B's programming desire. The above is just an example (there are already such systems on the Internet), but it illustrates what I have in mind for the idea exchange system. My main concern is an idea-exchanging ecosystem, not at a personal and unorganized level, but as a semi-organized protocol specifically for software developers, with actual projects coming out as the fruit. Not about "projects", but about "ideas and the products of ideas". Hopefully that clears up some of the original idea of this question. Any input is welcome; in fact, I would like to hear from as many people as possible about what everyone thinks of this. Thank you very much!


  • HttpSendRequest not getting latest file from server

    - by Doug Kavendek
    I am having an issue with my HTTP requests in my app, such that if the remote file is the same size as the local file (even though its modified time is different, as its contents have been changed), attempts to download it return quickly and the newer file is not downloaded. In short, the process I am following is: setting up an HTTP connection with the INTERNET_FLAG_RESYNCHRONIZE flag and calling HttpSendRequest(); then checking the HTTP status code and finding it to be "200".

    If the remote file is updated, but remains the same size as the local copy: the local file is unchanged after running the app. If I call HttpQueryInfo() with HTTP_QUERY_LAST_MODIFIED after sending the request, it gives me the actual last-modified time of the server's file, which I can see is different from the local file I am trying to have it overwrite.

    If the remote file is updated, and the file size becomes different from the local copy: it is downloaded and overwrites the local copy as expected.

    Here's a fairly abridged version of the code, to cut out helpers and error checking:

        // szAppName = our app name
        HINTERNET hInternetHandle = InternetOpen( szAppName,
            INTERNET_OPEN_TYPE_PRECONFIG, NULL, NULL, 0 );

        // szServerName = our server name
        hInternetHandle = InternetConnect( hInternetHandle, szServerName,
            INTERNET_DEFAULT_HTTP_PORT, NULL, NULL, INTERNET_SERVICE_HTTP, NULL, 0 );

        // szPath = the file to download
        LPCSTR aszDefault[2] = { "*/*", NULL };
        DWORD dwFlags = 0
            | INTERNET_FLAG_IGNORE_REDIRECT_TO_HTTP
            | INTERNET_FLAG_IGNORE_REDIRECT_TO_HTTPS
            | INTERNET_FLAG_KEEP_CONNECTION
            | INTERNET_FLAG_NO_AUTH
            | INTERNET_FLAG_NO_AUTO_REDIRECT
            | INTERNET_FLAG_NO_COOKIES
            | INTERNET_FLAG_NO_UI
            | INTERNET_FLAG_RESYNCHRONIZE;
        HINTERNET hHandle = HttpOpenRequest( hInternetHandle, "GET", szPath,
            NULL, NULL, aszDefault, dwFlags, 0 );

        DWORD dwTimeOut = 10 * 1000; // In milliseconds
        InternetSetOption( hInternetHandle, INTERNET_OPTION_CONNECT_TIMEOUT,
            &dwTimeOut, sizeof( dwTimeOut ) );
        InternetSetOption( hInternetHandle, INTERNET_OPTION_RECEIVE_TIMEOUT,
            &dwTimeOut, sizeof( dwTimeOut ) );
        InternetSetOption( hInternetHandle, INTERNET_OPTION_SEND_TIMEOUT,
            &dwTimeOut, sizeof( dwTimeOut ) );
        DWORD dwRetries = 5;
        InternetSetOption( hInternetHandle, INTERNET_OPTION_CONNECT_RETRIES,
            &dwRetries, sizeof( dwRetries ) );

        HttpSendRequest( hInternetHandle, NULL, 0, NULL, 0 );

    Since I have found I can query the remote file's last-modified time, and find it to be accurate, I know it's actually getting to the server. I thought that specifying INTERNET_FLAG_RESYNCHRONIZE would force the file to resynchronize if it's out of date. Do I have it all wrong? Is this just how it's supposed to work?


  • jQuery Autocomplete & jTemplates - handling response

    - by Diegos Grace
    Has anyone had any experience with using jTemplates to display autocomplete results? I have the following:

        $("#address-search").autocomplete({
            source: "/Address/SearchAddress",
            minLength: 2,
            delay: 400,
            focus: function (event, ui) {
                $('#address-search').val(ui.item.name);
                return false;
            },
            parse: function (data) {
                $("#autocomplete-results").setTemplate($("#templateHolder").html());
                $("#autocomplete-results").processTemplate(data);
            },
            select: function (event, ui) {
                $('#address-search').val(ui.item.name);
                $('#search-address-id').val(ui.item.id);
                $('#search-description').html(ui.item.address);
            }
        });

    and the simple jTemplate holder:

        <script type="text/html" id="templateHolder">
            <ul class="autocomplete">
                {#foreach $T as data}
                <li>{$T.name}</li>
                {#/for}
            </ul>
        </script>

    Above, I'm using 'parse' to format the results. I've also tried the autocomplete result method, but I'm not having any luck so far. The only success I've had is by using the private method ._renderItem and formatting the data that way, but we want to render the output using the jTemplate. Any advice appreciated.


  • Problems in PHP coding

    - by anwar
    Hi there everyone, I'm new to PHP and Joomla, and I have developed a component in Joomla, but my code is giving me errors. I have tried to solve the problem but am unable to. Can anyone suggest what is wrong with my code? Thanks in advance. Here are my two files.

    1st, view.html.php:

        defined('_JEXEC') or die('=;)');

        jimport('joomla.application.component.view');

        class namnamViewlistrestaurant extends JView
        {
            function display($tpl = null)
            {
                $item = 'item';
                RestUser::RestrictDirectAccess();

                //-- Custom css
                JHTML::stylesheet( 'style.css', 'components/com_namnam/assets/css/' );

                $cuisine = Lookups::getLookup('cuisine');
                $lists['cuisine'] = JHTML::_('select.genericlist', $cuisine, 'idcuisine[]', 'class="inputbox" size="7"', 'value', 'text', $item->idcuisine);

                $category = Lookups::getLookup('restcategory');
                $lists['category'] = JHTML::_('select.genericlist', $category, 'idcategory[]', 'class="inputbox" multiple="multiple" size="7"', 'value', 'text', $item->idcategory);

                $items = & $this->get('Data');
                $pagination =& $this->get('Pagination');
                $lists = & $this->get('List');

                $this->assignRef('items', $items);
                $this->assignRef('pagination', $pagination);
                $this->assignRef('lists', $lists);

                parent::display($tpl);
            }//function
        }//class

    And 2nd, listrestaurant.php:

        defined('_JEXEC') or die('=;)');

        jimport('joomla.application.component.model');

        class namnamModellistrestaurant extends JModel
        {
            var $_data;
            var $_total = null;
            var $_pagination = null;

            function __construct()
            {
                parent::__construct();
                global $mainframe, $option;
                $limit = $mainframe->getUserStateFromRequest( 'global.list.limit', 'limit', $mainframe->getCfg('list_limit'), 'int' );
                $limitstart = $mainframe->getUserStateFromRequest( $option.'.limitstart', 'limitstart', 0, 'int' );
                $limitstart = ($limit != 0 ? (floor($limitstart / $limit) * $limit) : 0);
                $this->setState('limit', $limit);
                $this->setState('limitstart', $limitstart);
            }

            function _buildQuery()
            {
                $where = array();
                $where[] = " idowner=".RestUser::getUserID()." ";
                if ($this->search) {
                    $where[] = 'LOWER(name) LIKE \''. $this->search. '\'';
                }
                $where = ( count($where) ) ? ' WHERE ' . implode( ' AND ', $where ) : '';
                $orderby = '';
                #_ECR_MAT_FILTER_MODEL1_
                if (($this->filter_order) && ($this->filter_order_Dir)) {
                    $orderby = ' ORDER BY '. $this->filter_order .' '. $this->filter_order_Dir;
                }
                $this->_query = ' SELECT *'
                    . ' FROM #__namnam_restaurants '
                    . $where
                    . $orderby;
                return $this->_query;
            }

            function getData()
            {
                if (empty($this->_data)) {
                    $query = $this->_buildQuery();
                    $this->_data = $this->_getList($query, $this->getState('limitstart'), $this->getState('limit'));
                }
                return $this->_data;
            }

            function getList()
            {
                // table ordering
                $lists['order_Dir'] = $this->filter_order_Dir;
                $lists['order'] = $this->filter_order;
                // search filter
                $lists['search'] = $this->search;
                return $lists;
            }

            function getTotal()
            {
                // Load the content if it doesn't already exist
                if (empty($this->_total)) {
                    $query = $this->_buildQuery();
                    $this->_total = $this->_getListCount($query);
                }
                return $this->_total;
            }

            function getPagination()
            {
                // Load the content if it doesn't already exist
                if (empty($this->_pagination)) {
                    jimport('joomla.html.pagination');
                    $this->_pagination = new JPagination($this->getTotal(), $this->getState('limitstart'), $this->getState('limit'));
                }
                return $this->_pagination;
            }
        }//class

    And the errors are:

        Notice: Trying to get property of non-object in C:\wamp\www\namnam.com\components\com_namnam\views\listrestaurant\view.html.php on line 26
        Notice: Trying to get property of non-object in C:\wamp\www\namnam.com\components\com_namnam\views\listrestaurant\view.html.php on line 29
        Notice: Undefined property: namnamModellistrestaurant::$search in C:\wamp\www\namnam.com\components\com_namnam\models\listrestaurant.php on line 38
        Notice: Undefined property: namnamModellistrestaurant::$filter_order in C:\wamp\www\namnam.com\components\com_namnam\models\listrestaurant.php on line 48
        Notice: Undefined property: namnamModellistrestaurant::$search in C:\wamp\www\namnam.com\components\com_namnam\models\listrestaurant.php on line 38
        Notice: Undefined property: namnamModellistrestaurant::$filter_order in C:\wamp\www\namnam.com\components\com_namnam\models\listrestaurant.php on line 48
        Notice: Undefined property: namnamModellistrestaurant::$filter_order_Dir in C:\wamp\www\namnam.com\components\com_namnam\models\listrestaurant.php on line 76
        Notice: Undefined property: namnamModellistrestaurant::$filter_order in C:\wamp\www\namnam.com\components\com_namnam\models\listrestaurant.php on line 77
        Notice: Undefined property: namnamModellistrestaurant::$search in C:\wamp\www\namnam.com\components\com_namnam\models\listrestaurant.php on line 80


  • django accepting GET parameters

    - by tipu
    I want the index page to accept parameters, whether it be mysite.com/search/param_here or mysite.com/?search=param_here. I have this in my URL patterns but I can't get it to work. Any suggestions?

        urlpatterns = patterns('',
            (r'^$/(?P<tag>\w+)', 'twingle.search.views.index'),
        )
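    For comparison, a sketch of the usual shape of such a pattern (old-style Django 1.x URLconf; the view path is taken from the question). Note that '$' anchors the end of the string, so in the original pattern nothing can ever follow it:

        from django.conf.urls.defaults import patterns

        # Matches mysite.com/search/param_here and captures it as `tag`.
        urlpatterns = patterns('',
            (r'^search/(?P<tag>\w+)/$', 'twingle.search.views.index'),
        )

        # The querystring form (mysite.com/?search=param_here) never reaches the
        # URL regex; the view has to read it from request.GET instead:
        #     tag = request.GET.get('search', '')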


  • removing duplicate strings from a massive array in java efficiently?

    - by Preator Darmatheon
    I'm considering the best possible way to remove duplicates from an (unsorted) array of strings - the array contains millions or tens of millions of strings. The array is already prepopulated, so the optimization goal is only on removing dups, not on preventing dups from initially populating! I was thinking along the lines of doing a sort and then a binary search to get a log(n) search instead of an n (linear) search. This would give me nlogn + n searches, which, although better than an unsorted (n^2) search, still seems slow. (I was also considering hashing, but I'm not sure about the throughput.) Please help! Looking for an efficient solution that addresses both speed and memory, since there are millions of strings involved, without using the Collections API!
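    A sketch of the sort-based idea (shown in Python for brevity; the same shape ports to a plain Java array): after sorting, duplicates sit next to each other, so a single linear pass removes them and no binary search is needed, giving O(n log n) overall.

        def dedupe_sorted(strings):
            """Sort, then keep each string only when it differs from its predecessor."""
            strings.sort()                   # O(n log n), in place
            out = []
            for s in strings:
                if not out or s != out[-1]:  # O(n) single pass over sorted data
                    out.append(s)
            return out

        print(dedupe_sorted(["b", "a", "b", "c", "a"]))  # ['a', 'b', 'c']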


  • Is it possible to use Google Gears inside of another Firefox extension?

    - by Dmitry Nedbaylo
    Basically, I want to implement an offline/online XUL application with the ability to upload data to a server. Yes, I know there is the Mozilla Storage API, but it looks like it is much easier with Gears to have a local database and to upload local changes to the server using WorkerPool. Without Gears, I have no idea how to upload local changes to a remote server. Any thoughts, friends? Thanks in advance for any help.


  • SQL with Regular Expressions vs Indexes with Logical Merging Functions

    - by geeko
    Hello lads, I am trying to develop a complex textual search engine. I have thousands of textual pages from many books. I need to search for pages that match specified complex logical criteria. These criteria can contain virtually any combination of the following:

    A: Full words.
    B: Word roots (similar to stems, i.e. all words with certain key letters).
    C: Word templates (in some languages, words are filled into certain templates to form various parts of speech, such as adjectives or past/present verbs...).
    D: Logical connectives: AND/OR/XOR/NOT/IF/IFF, and parentheses to state priorities.

    Now, would it be faster to have the pages' full text in the database (not indexed) and search through them all using SQL and regular expressions? Or would it be better to construct indexes of word/root/template-page-location tuples? That way we can speed up searching for individual words/roots/templates. However, it gets tricky as we introduce logical connectives into our query. I thought of doing the following steps in such cases:

    1: Separately search for each individual word/root/template in the specified query.
    2: On a priority basis, merge two result lists (from step 1) at a time, depending on the logical connective.

    For example, if we are searching for "he AND (is OR was)":
    1: We search for "he", "is" and "was" separately and get a result list for each word.
    2: Merge the result lists of "is" and "was" using the merging function OR-MERGE.
    3: Merge the merged result list from the OR-MERGE function with the one for "he" using the merging function AND-MERGE.

    The result of step 3 is then returned as the result of the specified query. What do you think, gurus? Which is faster? Any better ideas? Thank you all in advance.
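    A small Python sketch of the two merging functions and the example query (assuming each index lookup yields a list of page locations; the page numbers are invented):

        def or_merge(a, b):
            """OR-MERGE: union of two posting lists."""
            return sorted(set(a) | set(b))

        def and_merge(a, b):
            """AND-MERGE: intersection of two posting lists."""
            return sorted(set(a) & set(b))

        # "he AND (is OR was)" -- parentheses decide the merge order.
        he, is_, was = [1, 2, 5, 9], [2, 3, 9], [5, 9, 11]
        print(and_merge(he, or_merge(is_, was)))  # [2, 5, 9]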


  • mod_wsgi | linux installation error

    - by MMRUser
    I'm getting the following error when I try to install mod_wsgi:

        ./configure
        checking for apxs2... no
        checking for apxs... /usr/sbin/apxs
        checking Apache version... 2.2.3
        configure: creating ./config.status
        config.status: creating Makefile

        make
        /usr/sbin/apxs -c -I/usr/local/include/python2.6 -DNDEBUG mod_wsgi.c -L/usr/local/lib -L/usr/local/lib/python2.6/config -lpython2.6 -lpthread -ldl -lutil -lm
        /apr-1/build/libtool --silent --mode=compile gcc -prefer-pic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic -fasynchronous-unwind-tables -fno-strict-aliasing -DLINUX=2 -D_REENTRANT -D_GNU_SOURCE -D_LARGEFILE64_SOURCE -pthread -I/usr/include/httpd -I/usr/include/apr-1 -I/usr/include/apr-1 -I/usr/local/include/python2.6 -DNDEBUG -c -o mod_wsgi.lo mod_wsgi.c && touch mod_wsgi.slo
        sh: /apr-1/build/libtool: No such file or directory
        apxs:Error: Command failed with rc=8323072
        make: *** [mod_wsgi.la] Error 1

    libtool is installed on my system. Versions: mod_wsgi 3.2, Apache 2.2, Python 2.6.


  • django+mod_wsgi on virtualenv not working

    - by jwesonga
    I've just finished setting up a django app on virtualenv. Deployment went smoothly using a fabric script, but now the .wsgi is not working; I've tried every variation on the internet but no luck. My .wsgi file is:

        import os
        import sys

        import django.core.handlers.wsgi

        # put the Django project on sys.path
        root_path = os.path.abspath(os.path.dirname(__file__) + '../')
        sys.path.insert(0, os.path.join(root_path, 'kcdf'))
        sys.path.insert(0, root_path)

        os.environ['DJANGO_SETTINGS_MODULE'] = 'kcdf.settings'

        application = django.core.handlers.wsgi.WSGIHandler()

    I keep getting the same error:

        [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159] mod_wsgi (pid=16938): Exception occurred processing WSGI script '/home/kcdfweb/webapps/kcdf.web/releases/current/kcdf/apache/kcdf.wsgi'.
        [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159] Traceback (most recent call last):
        [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159]   File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/wsgi.py", line 230, in __call__
        [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159]     self.load_middleware()
        [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159]   File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/base.py", line 33, in load_middleware
        [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159]     for middleware_path in settings.MIDDLEWARE_CLASSES:
        [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159]   File "/usr/local/lib/python2.6/dist-packages/django/utils/functional.py", line 269, in __getattr__
        [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159]     self._setup()
        [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159]   File "/usr/local/lib/python2.6/dist-packages/django/conf/__init__.py", line 40, in _setup
        [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159]     self._wrapped = Settings(settings_module)
        [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159]   File "/usr/local/lib/python2.6/dist-packages/django/conf/__init__.py", line 75, in __init__
        [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159]     raise ImportError, "Could not import settings '%s' (Is it on sys.path? Does it have syntax errors?): %s" % (self.SETTINGS_MODULE, e)
        [Sun Apr 18 12:44:30 2010] [error] [client 41.215.123.159] ImportError: Could not import settings 'kcdf.settings' (Is it on sys.path? Does it have syntax errors?): No module named kcdf.settings

    My virtual environment is on /home/user/webapps/kcdfweb
    My app is /home/user/webapps/kcdf.web/releases/current/project_name
    My wsgi file is home/user/webapps/kcdf.web/releases/current/project_name/apache/project_name.wsgi
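    One thing that stands out in the script above: os.path.dirname(__file__) + '../' produces a path ending in "apache..", which path normalization treats as a directory literally named "apache..", not as the parent directory. A sketch of a more defensive version (the directory layout is inferred from the script path in the error log, so treat it as an assumption):

        import os
        import sys

        import django.core.handlers.wsgi

        # The .wsgi file lives in .../releases/current/kcdf/apache/, so the
        # project package dir is one level up and the release root two levels up.
        apache_dir = os.path.dirname(os.path.abspath(__file__))
        project_dir = os.path.dirname(apache_dir)    # .../current/kcdf
        release_root = os.path.dirname(project_dir)  # .../current

        # Make both "import kcdf.settings" and "import settings" resolvable.
        sys.path.insert(0, project_dir)
        sys.path.insert(0, release_root)

        os.environ['DJANGO_SETTINGS_MODULE'] = 'kcdf.settings'
        application = django.core.handlers.wsgi.WSGIHandler()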


  • Ubuntu: pip not working with python3.4

    - by val_
    Trying to get pip working on my Ubuntu PC. pip seems to be working for Python 2.7, but not for the others. Here's the problem:

        $ pip
        Traceback (most recent call last):
          File "/usr/local/bin/pip", line 9, in <module>
            load_entry_point('pip==1.4.1', 'console_scripts', 'pip')()
          File "/usr/local/lib/python3.4/dist-packages/setuptools-1.1.5-py3.4.egg/pkg_resources.py", line 357, in load_entry_point
            def get_entry_info(dist, group, name):
          File "/usr/local/lib/python3.4/dist-packages/setuptools-1.1.5-py3.4.egg/pkg_resources.py", line 2394, in load_entry_point
            break
          File "/usr/local/lib/python3.4/dist-packages/setuptools-1.1.5-py3.4.egg/pkg_resources.py", line 2108, in load
            name = some.module:some.attr [extra1,extra2]
        ImportError: No module named 'pip'

        $ which pip
        /usr/local/bin/pip

        $ python2.7 -m pip    # here can be just python, btw
        Usage: /usr/bin/python2.7 -m pip <command> [options]
        # and so on...

        $ python3.4 -m pip
        /usr/bin/python3.4: No module named pip

    From /home/user/.pip/pip.log:

        Exception:
        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 122, in main
            status = self.run(options, args)
          File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 283, in run
            requirement_set.install(install_options, global_options, root=options.root_path)
          File "/usr/lib/python2.7/dist-packages/pip/req.py", line 1431, in install
            requirement.uninstall(auto_confirm=True)
          File "/usr/lib/python2.7/dist-packages/pip/req.py", line 598, in uninstall
            paths_to_remove.remove(auto_confirm)
          File "/usr/lib/python2.7/dist-packages/pip/req.py", line 1836, in remove
            renames(path, new_path)
          File "/usr/lib/python2.7/dist-packages/pip/util.py", line 295, in renames
            shutil.move(old, new)
          File "/usr/lib/python2.7/shutil.py", line 303, in move
            os.unlink(src)
        OSError: [Errno 13] Permission denied: '/usr/bin/pip'

    There's no /usr/bin/pip, btw. How can I fix this issue so pip works normally with Python 3.4? I am trying to use PyCharm, but its package manager also gets stuck on this problem. Thanks for your attention!


  • Why such a long time span in creating the Session Factory?

    - by vijay.shad
    Hi, my project is a web application running in a Tomcat container. The application is a Spring-framework-based Hibernate application. The problem is that it takes a lot of time to create the session factory. Here are the logs:

        2010-04-15 23:05:28,053 DEBUG [SessionFactoryImpl] Session factory constructed with filter configurations : {}
        2010-04-15 23:05:28,053 DEBUG [SessionFactoryImpl] instantiating session factory with properties: {java.vendor=Sun Microsystems Inc., sun.java.launcher=SUN_STANDARD, catalina.base=/usr/local/InstalledPrograms/apache-tomcat-6.0.20, sun.management.compiler=HotSpot Tiered Compilers, catalina.useNaming=true, os.name=Linux, sun.boot.class.path=/usr/java/jdk1.6.0_17/jre/lib/resources.jar:/usr/java/jdk1.6.0_17/jre/lib/rt.jar:/usr/java/jdk1.6.0_17/jre/lib/sunrsasign.jar:/usr/java/jdk1.6.0_17/jre/lib/jsse.jar:/usr/java/jdk1.6.0_17/jre/lib/jce.jar:/usr/java/jdk1.6.0_17/jre/lib/charsets.jar:/usr/java/jdk1.6.0_17/jre/classes, java.util.logging.config.file=/usr/local/InstalledPrograms/apache-tomcat-6.0.20/conf/logging.properties, java.vm.specification.vendor=Sun Microsystems Inc., hibernate.generate_statistics=true, java.runtime.version=1.6.0_17-b04, hibernate.cache.provider_class=org.hibernate.cache.EhCacheProvider, user.name=root, shared.loader=, tomcat.util.buf.StringCache.byte.enabled=true, hibernate.connection.release_mode=auto, user.language=en, java.naming.factory.initial=org.apache.naming.java.javaURLContextFactory, sun.boot.library.path=/usr/java/jdk1.6.0_17/jre/lib/i386, java.version=1.6.0_17, java.util.logging.manager=org.apache.juli.ClassLoaderLogManager, user.timezone=Canada/Pacific, sun.arch.data.model=32, java.endorsed.dirs=/usr/local/InstalledPrograms/apache-tomcat-6.0.20/endorsed, sun.cpu.isalist=, sun.jnu.encoding=UTF-8, file.encoding.pkg=sun.io, package.access=sun.,org.apache.catalina.,org.apache.coyote.,org.apache.tomcat.,org.apache.jasper.,sun.beans., file.separator=/, java.specification.name=Java Platform API Specification, java.class.version=50.0, user.country=US, java.home=/usr/java/jdk1.6.0_17/jre, java.vm.info=mixed mode, os.version=2.6.18-128.el5, path.separator=:, java.vm.version=14.3-b01, hibernate.jdbc.batch_size=25, java.awt.printerjob=sun.print.PSPrinterJob, sun.io.unicode.encoding=UnicodeLittle, package.definition=sun.,java.,org.apache.catalina.,org.apache.coyote.,org.apache.tomcat.,org.apache.jasper., java.naming.factory.url.pkgs=org.apache.naming, sun.rmi.dgc.client.gcInterval=3600000, user.home=/root, java.specification.vendor=Sun Microsystems Inc., java.library.path=/usr/java/jdk1.6.0_17/jre/lib/i386/server:/usr/java/jdk1.6.0_17/jre/lib/i386:/usr/java/jdk1.6.0_17/jre/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib, java.vendor.url=http://java.sun.com/, java.vm.vendor=Sun Microsystems Inc., hibernate.dialect=org.hibernate.dialect.MySQL5InnoDBDialect, sun.rmi.dgc.server.gcInterval=3600000, common.loader=${catalina.home}/lib,${catalina.home}/lib/*.jar, java.runtime.name=Java(TM) SE Runtime Environment, java.class.path=:/usr/local/InstalledPrograms/apache-tomcat-6.0.20/bin/bootstrap.jar, hibernate.bytecode.use_reflection_optimizer=false, java.vm.specification.name=Java Virtual Machine Specification, java.vm.specification.version=1.0, catalina.home=/usr/local/InstalledPrograms/apache-tomcat-6.0.20, sun.cpu.endian=little, sun.os.patch.level=unknown, hibernate.cache.use_query_cache=true, hibernate.connection.provider_class=org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider, java.io.tmpdir=/usr/local/InstalledPrograms/apache-tomcat-6.0.20/temp, java.vendor.url.bug=http://java.sun.com/cgi-bin/bugreport.cgi, server.loader=, os.arch=i386, java.awt.graphicsenv=sun.awt.X11GraphicsEnvironment, java.ext.dirs=/usr/java/jdk1.6.0_17/jre/lib/ext:/usr/java/packages/lib/ext, user.dir=/, line.separator=, java.vm.name=Java HotSpot(TM) Server VM, hibernate.cache.use_second_level_cache=true, file.encoding=UTF-8, java.specification.version=1.6, hibernate.show_sql=true}
        2010-04-15 23:08:53,516 DEBUG [AbstractEntityPersister] Static SQL for entity: com.vsd.model.Order

    There you can see a delay of more than 3 minutes between these log entries. I am clueless about why these steps take that much time, but when I do the same task from under Eclipse it does not take that long. My database is MySQL, and the database server is running on the local machine only. The container environment is a CentOS Linux system; the development environment is Windows.


  • Problem in implementing URL rewriting for a Zend application

    - by Nishant
    Hello everyone, I am doing my first Zend application and am finally done with the coding side. But the problem I have is that the client has asked me to rewrite the URLs to be SEO-friendly, and as I don't have much knowledge of the Zend router, I am finding myself helpless this time. Please help me out. The current URL which I have is:

        http://localhost.ZendProject.com/keywords/ball

    and the client needs it like:

        http://localhost.ZendProject.com/keywords/ball

    and another URL (the search URL):

        http://localhost.ZendProject.com/search/trends?q=nishant+shrivastava&select=All&Search=Search

    which the client wants as:

        http://localhost.ZendProject.com/nishant-shrivastava

    Please help me out with this problem; I am totally stuck and cannot think of a solution.


  • Cannot ping router with a static IP assigned?

    - by Uriah
    Alright. I am running Ubuntu 12.04 LTS and am trying to configure a local caching/master DNS server, so I am using Bind9. First, here are some things via default DHCP.

    /etc/network/interfaces:

        cat /etc/network/interfaces
        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        iface eth0 inet dhcp

        # The primary network interface - STATIC
        #auto eth0
        #iface eth0 inet static
        #    address 192.168.2.113
        #    netmask 255.255.255.0
        #    network 192.168.2.0
        #    broadcast 192.168.2.255
        #    gateway 192.168.2.1
        #    dns-search uclemmer.net
        #    dns-nameservers 192.168.2.113 8.8.8.8

    /etc/resolv.conf:

        cat /etc/resolv.conf
        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
        nameserver 192.168.2.1
        search uclemmer.net

    ifconfig:

        eth0      Link encap:Ethernet  HWaddr 00:14:2a:82:d4:9e
                  inet addr:192.168.2.103  Bcast:192.168.2.255  Mask:255.255.255.0
                  inet6 addr: fe80::214:2aff:fe82:d49e/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:1067 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:2504 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:153833 (153.8 KB)  TX bytes:214129 (214.1 KB)
                  Interrupt:23 Base address:0x8800

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:915 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:915 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:71643 (71.6 KB)  TX bytes:71643 (71.6 KB)

    ping:

        ping -c 4 192.168.2.1
        PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.
        64 bytes from 192.168.2.1: icmp_req=1 ttl=64 time=0.368 ms
        64 bytes from 192.168.2.1: icmp_req=2 ttl=64 time=0.224 ms
        64 bytes from 192.168.2.1: icmp_req=3 ttl=64 time=0.216 ms
        64 bytes from 192.168.2.1: icmp_req=4 ttl=64 time=0.237 ms

        --- 192.168.2.1 ping statistics ---
        4 packets transmitted, 4 received, 0% packet loss, time 2997ms
        rtt min/avg/max/mdev = 0.216/0.261/0.368/0.063 ms

        ping -c 4 google.com
        PING google.com (74.125.134.102) 56(84) bytes of data.
        64 bytes from www.google-analytics.com (74.125.134.102): icmp_req=1 ttl=48 time=15.1 ms
        64 bytes from www.google-analytics.com (74.125.134.102): icmp_req=2 ttl=48 time=11.4 ms
        64 bytes from www.google-analytics.com (74.125.134.102): icmp_req=3 ttl=48 time=11.6 ms
        64 bytes from www.google-analytics.com (74.125.134.102): icmp_req=4 ttl=48 time=11.5 ms

        --- google.com ping statistics ---
        4 packets transmitted, 4 received, 0% packet loss, time 3003ms
        rtt min/avg/max/mdev = 11.488/12.465/15.118/1.537 ms

    ip route:

        ip route
        default via 192.168.2.1 dev eth0  metric 100
        192.168.2.0/24 dev eth0  proto kernel  scope link  src 192.168.2.103

    As you can see, with DHCP everything seems to work fine. Now, here are things with a static IP.

    /etc/network/interfaces:

        cat /etc/network/interfaces
        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        #auto eth0
        #iface eth0 inet dhcp

        # The primary network interface - STATIC
        auto eth0
        iface eth0 inet static
            address 192.168.2.113
            netmask 255.255.255.0
            network 192.168.2.0
            broadcast 192.168.2.255
            gateway 192.168.2.1
            dns-search uclemmer.net
            dns-nameservers 192.168.2.1 8.8.8.8

    I have tried dns-nameservers in various combos of *.2.1, *.2.113, and other reliable, public nameservers.

    /etc/resolv.conf:

        cat /etc/resolv.conf
        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
        nameserver 192.168.2.1
        nameserver 8.8.8.8
        search uclemmer.net

    Obviously, when I change the nameservers in the /etc/network/interfaces file, the nameservers change here too.

    ifconfig:

        eth0      Link encap:Ethernet  HWaddr 00:14:2a:82:d4:9e
                  inet addr:192.168.2.113  Bcast:192.168.2.255  Mask:255.255.255.0
                  inet6 addr: fe80::214:2aff:fe82:d49e/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:1707 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:2906 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:226230 (226.2 KB)  TX bytes:263497 (263.4 KB)
                  Interrupt:23 Base address:0x8800

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:985 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:985 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:78625 (78.6 KB)  TX bytes:78625 (78.6 KB)

    ping:

        ping -c 4 192.168.2.1
        PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.

        --- 192.168.2.1 ping statistics ---
        4 packets transmitted, 0 received, 100% packet loss, time 3023ms

        ping -c 4 google.com
        ping: unknown host google.com

    Lastly, here are my bind zone files.

    /etc/bind/named.conf.options:

        cat /etc/bind/named.conf.options
        options {
            directory "/etc/bind";
            //
            //
            query-source address * port 53;
            notify-source * port 53;
            transfer-source * port 53;

            // If there is a firewall between you and nameservers you want
            // to talk to, you may need to fix the firewall to allow multiple
            // ports to talk. See http://www.kb.cert.org/vuls/id/800113

            // If your ISP provided one or more IP addresses for stable
            // nameservers, you probably want to use them as forwarders.
            // Uncomment the following block, and insert the addresses replacing
            // the all-0's placeholder.
            // forwarders {
            //     0.0.0.0;
            // };
            forwarders {
                // My local
                192.168.2.113;
                // Comcast
                75.75.75.75; 75.75.76.76;
                // Google
                8.8.8.8; 8.8.4.4;
                // DNSAdvantage
                156.154.70.1; 156.154.71.1;
                // OpenDNS
                208.67.222.222; 208.67.220.220;
                // Norton
                198.153.192.1; 198.153.194.1;
                // Verizon
                4.2.2.1; 4.2.2.2; 4.2.2.3; 4.2.2.4; 4.2.2.5; 4.2.2.6;
                // Scrubit
                67.138.54.100; 207.255.209.66;
            };

            //
            //
            //allow-query { localhost; 192.168.2.0/24; };
            //allow-transfer { localhost; 192.168.2.113; };
            //also-notify { 192.168.2.113; };
            //allow-recursion { localhost; 192.168.2.0/24; };

            //========================================================================
            // If BIND logs error messages about the root key being expired,
            // you will need to update your keys. See https://www.isc.org/bind-keys
            //========================================================================
            dnssec-validation auto;

            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
        };

    /etc/bind/named.conf.local:

        cat /etc/bind/named.conf.local
        //
        // Do any local configuration here
        //

        // Consider adding the 1918 zones here, if they are not used in your
        // organization
        //include "/etc/bind/zones.rfc1918";

        zone "example.com" {
            type master;
            file "/etc/bind/zones/db.example.com";
        };

        zone "2.168.192.in-addr.arpa" {
            type master;
            file "/etc/bind/zones/db.2.168.192.in-addr.arpa";
        };

    /etc/bind/zones/db.example.com:

        cat /etc/bind/zones/db.example.com
        ;
        ; BIND data file for example.com interface
        ;
        $TTL    604800
        @       IN      SOA     yossarian.example.com. root.example.com. (
                        1343171970      ; Serial
                        604800          ; Refresh
                        86400           ; Retry
                        2419200         ; Expire
                        604800 )        ; Negative Cache TTL
        ;
        @       IN      NS      yossarian.example.com.
        @       IN      A       192.168.2.113
        @       IN      AAAA    ::1
        @       IN      MX      10 yossarian.example.com.
        ;
        yossarian       IN      A       192.168.2.113
        router          IN      A       192.168.2.1
        printer         IN      A       192.168.2.200
        ;
        ns01    IN      CNAME   yossarian.example.com.
        www     IN      CNAME   yossarian.example.com.
        ftp     IN      CNAME   yossarian.example.com.
        ldap    IN      CNAME   yossarian.example.com.
        mail    IN      CNAME   yossarian.example.com.

    /etc/bind/zones/db.2.168.192.in-addr.arpa:

        cat /etc/bind/zones/db.2.168.192.in-addr.arpa
        ;
        ; BIND reverse data file for 2.168.192.in-addr interface
        ;
        $TTL    604800
        @       IN      SOA     yossarian.example.com. root.example.com. (
                        1343171970      ; Serial
                        604800          ; Refresh
                        86400           ; Retry
                        2419200         ; Expire
                        604800 )        ; Negative Cache TTL
        ;
        @       IN      NS      yossarian.example.com.
        @       IN      A       255.255.255.0
        ;
        113     IN      PTR     yossarian.example.com.
        1       IN      PTR     router.example.com.
        200     IN      PTR     printer.example.com.

    ip route:

        ip route
        default via 192.168.2.1 dev eth0  metric 100
        192.168.2.0/24 dev eth0  proto kernel  scope link  src 192.168.2.113

    I can SSH in to the machine locally at *.2.113, or at whatever address is dynamically assigned when in DHCP "mode". *.2.113 is in my router's range, and I have ports open and forwarding to the server. Pinging is enabled on the router too. I briefly had a static configuration working, but it died after the first reboot. Please let me know what other info you might need. I am beyond frustrated/baffled.

  • changing default my.cnf path in mysql

    - by user377941
    I have two mysql instances on the same machine. The installations are in /usr/local/mysql1 and /usr/local/mysql2, and I have separate my.cnf files located in /etc/mysql1 and /etc/mysql2. I installed the first instance of mysql from the source distribution with the --prefix=/usr/local/mysql1 option; the second one I got by copying and pasting the same directory to /usr/local/mysql2. When I start the mysql daemon in /usr/local/mysql1/libexec it reads the my.cnf file in /etc/mysql1, and if I start the mysql daemon in /usr/local/mysql2 it reads the same my.cnf file. I have separate port numbers and .sock files defined in the .cnf files in those two locations. I can make the second daemon read the my.cnf file in the second location by using the --defaults-file=/etc/mysql2/my.cnf option on mysqld startup, but I don't want to enter this each and every time I start the daemon. If I am going to have more instances, how can I point each mysql daemon at the correct my.cnf file to read? What is the rationale behind how mysqld finds its my.cnf file, and how can I predefine the location of the my.cnf file for each instance?


  • python-iptables: Cryptic error when allowing incoming TCP traffic on port 1234

    - by Lucas Kauffman
    I wanted to write an iptables script in Python. Rather than calling iptables itself, I wanted to use the python-iptables package. However, I'm having a hard time getting some basic rules set up. I wanted to use the filter chain to accept incoming TCP traffic on port 1234, so I wrote this:

        import iptc

        chain = iptc.Chain(iptc.TABLE_FILTER, "INPUT")
        rule = iptc.Rule()
        target = iptc.Target(rule, "ACCEPT")
        match = iptc.Match(rule, 'tcp')
        match.dport = '1234'
        rule.add_match(match)
        rule.target = target
        chain.insert_rule(rule)

    However, when I run this I get this thrown back at me:

        Traceback (most recent call last):
          File "testing.py", line 9, in <module>
            chain.insert_rule(rule)
          File "/usr/local/lib/python2.6/dist-packages/iptc/__init__.py", line 1133, in insert_rule
            self.table.insert_entry(self.name, rbuf, position)
          File "/usr/local/lib/python2.6/dist-packages/iptc/__init__.py", line 1166, in new
            obj.refresh()
          File "/usr/local/lib/python2.6/dist-packages/iptc/__init__.py", line 1230, in refresh
            self._free()
          File "/usr/local/lib/python2.6/dist-packages/iptc/__init__.py", line 1224, in _free
            self.commit()
          File "/usr/local/lib/python2.6/dist-packages/iptc/__init__.py", line 1219, in commit
            raise IPTCError("can't commit: %s" % (self.strerror()))
        iptc.IPTCError: can't commit: Invalid argument
        Exception AttributeError: "'NoneType' object has no attribute 'get_errno'" in <bound method Table.__del__ of <iptc.Table object at 0x7fcad56cc550>> ignored

    Does anyone have experience with python-iptables that could enlighten me on what I did wrong?
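    One detail worth checking (an assumption, not a confirmed diagnosis for this exact setup): iptables rejects a --dport match when the rule itself has no protocol set, and that surfaces as exactly this kind of "Invalid argument" on commit. A sketch with the protocol declared on the rule:

        import iptc

        chain = iptc.Chain(iptc.TABLE_FILTER, "INPUT")
        rule = iptc.Rule()
        rule.protocol = "tcp"  # equivalent of "-p tcp"; without it, --dport is invalid
        match = iptc.Match(rule, "tcp")
        match.dport = "1234"
        rule.add_match(match)
        rule.target = iptc.Target(rule, "ACCEPT")
        chain.insert_rule(rule)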

    Read the article

  • Python - Converting CSV to Objects - Code Design

    - by victorhooi
    Hi, I have a small script we're using to read in a CSV file containing employees, and perform some basic manipulations on that data. We read in the data (import_gd_dump) and create an Employees object containing a list of Employee objects (maybe I should think of a better naming convention...lol). We then call clean_all_phone_numbers() on the Employees object, which calls clean_phone_number() on each Employee, and we also call lookup_all_supervisors() on Employees.

        import csv
        import re
        import sys

        #class CSVLoader:
        #    """Virtual class to assist with loading in CSV files."""
        #    def import_gd_dump(self, input_file='Gp Directory 20100331 original.csv'):
        #        gd_extract = csv.DictReader(open(input_file), dialect='excel')
        #        employees = []
        #        for row in gd_extract:
        #            curr_employee = Employee(row)
        #            employees.append(curr_employee)
        #        return employees
        #        #self.employees = {row['dbdirid']:row for row in gd_extract}

        # Previously, this was inside a (virtual) class called "CSVLoader".
        # However, according to here (http://tomayko.com/writings/the-static-method-thing) -
        # the idiomatic way of doing this in Python is not with a class-function but with a
        # module-level function
        def import_gd_dump(input_file='Gp Directory 20100331 original.csv'):
            """Return a list ('employee') of dict objects, taken from a Group Directory CSV file."""
            gd_extract = csv.DictReader(open(input_file), dialect='excel')
            employees = []
            for row in gd_extract:
                employees.append(row)
            return employees

        def write_gd_formatted(employees_dict, output_file="gd_formatted.csv"):
            """Read in an Employees() object, and write out each Employee() inside this to a CSV file"""
            gd_output_fieldnames = ('hrid', 'mail', 'givenName', 'sn', 'dbcostcenter', 'dbdirid',
                                    'hrreportsto', 'PHFull', 'PHFull_message', 'SupervisorEmail',
                                    'SupervisorFirstName', 'SupervisorSurname')
            try:
                gd_formatted = csv.DictWriter(open(output_file, 'w', newline=''),
                                              fieldnames=gd_output_fieldnames,
                                              extrasaction='ignore', dialect='excel')
            except IOError:
                print('Unable to open file, IO error (Is it locked?)')
                sys.exit(1)
            headers = {n: n for n in gd_output_fieldnames}
            gd_formatted.writerow(headers)
            for employee in employees_dict.employee_list:
                # We're using the employee object's inbuilt __dict__ attribute -
                # hmm, is this good practice?
                gd_formatted.writerow(employee.__dict__)

        class Employee:
            """An Employee in the system, with employee attributes (name, email, cost-centre etc.)"""
            def __init__(self, employee_attributes):
                """We use the Employee constructor to convert a dictionary into instance attributes."""
                for k, v in employee_attributes.items():
                    setattr(self, k, v)

            def clean_phone_number(self):
                """Perform some rudimentary checks and corrections, to make sure numbers are in
                the right format. Numbers should be in the form 0XYYYYYYYY, where X is the area
                code, and Y is the local number."""
                if self.telephoneNumber is None or self.telephoneNumber == '':
                    return '', 'Missing phone number.'
                else:
                    standard_format = re.compile(r'^\+(?P<intl_prefix>\d{2})\((?P<area_code>\d)\)(?P<local_first_half>\d{4})-(?P<local_second_half>\d{4})')
                    extra_zero = re.compile(r'^\+(?P<intl_prefix>\d{2})\(0(?P<area_code>\d)\)(?P<local_first_half>\d{4})-(?P<local_second_half>\d{4})')
                    missing_hyphen = re.compile(r'^\+(?P<intl_prefix>\d{2})\(0(?P<area_code>\d)\)(?P<local_first_half>\d{4})(?P<local_second_half>\d{4})')
                    if standard_format.search(self.telephoneNumber):
                        result = standard_format.search(self.telephoneNumber)
                        return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), ''
                    elif extra_zero.search(self.telephoneNumber):
                        result = extra_zero.search(self.telephoneNumber)
                        return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), 'Extra zero in area code - ask user to remediate.'
                    elif missing_hyphen.search(self.telephoneNumber):
                        result = missing_hyphen.search(self.telephoneNumber)
                        return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), 'Missing hyphen in local component - ask user to remediate.'
                    else:
                        return '', "Number didn't match recognised format. Original text is: " + self.telephoneNumber

        class Employees:
            def __init__(self, import_list):
                self.employee_list = []
                for employee in import_list:
                    self.employee_list.append(Employee(employee))

            def clean_all_phone_numbers(self):
                for employee in self.employee_list:
                    # Should we just set this directly in Employee.clean_phone_number() instead?
                    employee.PHFull, employee.PHFull_message = employee.clean_phone_number()

            # Hmm, the search is O(n^2) - there's probably a better way of doing this search?
            def lookup_all_supervisors(self):
                for employee in self.employee_list:
                    if employee.hrreportsto is not None and employee.hrreportsto != '':
                        for supervisor in self.employee_list:
                            if supervisor.hrid == employee.hrreportsto:
                                (employee.SupervisorEmail, employee.SupervisorFirstName,
                                 employee.SupervisorSurname) = supervisor.mail, supervisor.givenName, supervisor.sn
                                break
                        else:
                            (employee.SupervisorEmail, employee.SupervisorFirstName,
                             employee.SupervisorSurname) = ('Supervisor not found.', 'Supervisor not found.', 'Supervisor not found.')
                    else:
                        (employee.SupervisorEmail, employee.SupervisorFirstName,
                         employee.SupervisorSurname) = ('Supervisor not set.', 'Supervisor not set.', 'Supervisor not set.')

            # Is there a more pythonic way of doing this?
            def print_employees(self):
                for employee in self.employee_list:
                    print(employee.__dict__)

        if __name__ == '__main__':
            db_employees = Employees(import_gd_dump())
            db_employees.clean_all_phone_numbers()
            db_employees.lookup_all_supervisors()
            #db_employees.print_employees()
            write_gd_formatted(db_employees)

    Firstly, my preamble question is: can you see anything inherently wrong with the above, from either a class-design or Python point of view? Is the logic/design sound? Anyhow, to the specifics:

    The Employees object has a method, clean_all_phone_numbers(), which calls clean_phone_number() on each Employee object inside it. Is this bad design? If so, why? Also, is the way I'm calling lookup_all_supervisors() bad? Originally, I wrapped the clean_phone_number() and lookup_supervisor() methods in a single function, with a single for-loop inside it. clean_phone_number() is O(n), I believe, and lookup_supervisor() is O(n^2) - is it OK splitting it into two loops like this?

    In clean_all_phone_numbers(), I'm looping over the Employee objects and setting their values using return/assignment - should I be setting this inside clean_phone_number() itself?

    There are also a few things that I sort of hacked out and am not sure are good practice - e.g. print_employees() and write_gd_formatted() both use __dict__, and the constructor for Employee uses setattr() to convert a dictionary into instance attributes.

    I'd value any thoughts at all. If you think the questions are too broad, let me know and I can repost them split up (I just didn't want to pollute the boards with multiple similar questions, and the three questions are more or less fairly tightly related). Cheers, Victor
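
    On the O(n^2) point the poster raises, one common fix - a sketch only, assuming hrid values are unique and using the same attribute names as the question's code - is to index employees by hrid once, turning each supervisor lookup into a constant-time dict hit:

        def lookup_all_supervisors(self):
            # Build the index once: O(n), then each lookup is a dict hit.
            by_hrid = {e.hrid: e for e in self.employee_list}
            for employee in self.employee_list:
                if not employee.hrreportsto:
                    values = ('Supervisor not set.',) * 3
                else:
                    supervisor = by_hrid.get(employee.hrreportsto)
                    if supervisor is None:
                        values = ('Supervisor not found.',) * 3
                    else:
                        values = (supervisor.mail, supervisor.givenName, supervisor.sn)
                (employee.SupervisorEmail, employee.SupervisorFirstName,
                 employee.SupervisorSurname) = values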

    Read the article

  • FreeBSD: high load on the loopback interface

    - by user1740915
    I have a problem with a FreeBSD server. It is FreeBSD 9.0 amd64 with two network cards - em1 (Internet) and em0 (local network) - configured with an ipfw firewall, natd, and squid (not transparent); the server acts as the gateway for access to the Internet. The problem: upload through squid is very slow. At the moment I see the following: natd and dhcpd load the CPU while uploading through squid, and there is a lot of traffic through the loopback interface.

    ipfw show output:

        00100  655389684 36707144666 allow ip from any to any via lo0
        00200          0           0 deny ip from any to 127.0.0.0/8
        00300          0           0 deny ip from 127.0.0.0/8 to any
        00400          0           0 deny ip from any to ::1
        00500          0           0 deny ip from ::1 to any
        00600          4         292 allow ipv6-icmp from :: to ff02::/16
        00700          0           0 allow ipv6-icmp from fe80::/10 to fe80::/10
        00800          1          76 allow ipv6-icmp from fe80::/10 to ff02::/16
        00900          0           0 allow ipv6-icmp from any to any ip6 icmp6types 1
        01000          0           0 allow ipv6-icmp from any to any ip6 icmp6types 2,135,136
        01100       1615       76160 deny ip from 192.168.1.1 to any in via em1
        01200          0           0 deny ip from 199.69.99.11 to any in via em0
        01300      46652     3705426 deny ip from any to 172.16.0.0/12 via em1
        01400    3936404   345618870 deny ip from any to 192.168.0.0/16 via em1
        01500          4         336 deny ip from any to 0.0.0.0/8 via em1
        01600       4129      387621 deny ip from any to 169.254.0.0/16 via em1
        01700          0           0 deny ip from any to 192.0.2.0/24 via em1
        01800     917566    33777571 deny ip from any to 224.0.0.0/4 via em1
        01900     147872    22029252 deny ip from any to 240.0.0.0/4 via em1
        02000 1132194739 1190981955947 divert 8668 ip4 from any to any via em1
        02100          3         248 deny ip from 172.16.0.0/12 to any via em1
        02200      35925     2281289 deny ip from 192.168.0.0/16 to any via em1
        02300       1808      122494 deny ip from 0.0.0.0/8 to any via em1
        02400          3         174 deny ip from 169.254.0.0/16 to any via em1
        02500          0           0 deny ip from 192.0.2.0/24 to any via em1
        02600          0           0 deny ip from 224.0.0.0/4 to any via em1
        02700          0           0 deny ip from 240.0.0.0/4 to any via em1
        02800  960156249 1095316736582 allow tcp from any to any established
        02900   64236062  8243196577 allow ip from any to any frag
        03000         34        1756 allow tcp from any to me dst-port 25 setup
        03100        193       11580 allow tcp from any to me dst-port 53 setup
        03200         63        4222 allow udp from any to me dst-port 53
        03300         64        8350 allow udp from me 53 to any
        03400        417       24140 allow tcp from any to me dst-port 80 setup
        03500        211       10472 allow ip from any to me dst-port 3389 setup
        05300         77        4488 allow ip from any to me dst-port 1723 setup
        05400          3         156 allow ip from any to me dst-port 8443 setup
        05500       9882      590596 allow tcp from any to me dst-port 22 setup
        05600          1          60 allow ip from any to me dst-port 2000 setup
        05700          0           0 allow ip from any to me dst-port 2201 setup
        07400    4241779   216690096 deny log logamount 1000 ip4 from any to any in via em1 setup proto tcp
        07500   21135656  1048824936 allow tcp from any to any setup
        07600     474447    35298081 allow udp from me to any dst-port 53 keep-state
        07700        532       40612 allow udp from me to any dst-port 123 keep-state
        65535 1990638432 1122305322718 allow ip from any to any

    systat -ifstat when uploading via squid:

        Load Average   |||

        Interface           Traffic               Peak                Total
              tun0  in     79.507 KB/s        232.479 KB/s           42.314 GB
                    out     2.022 MB/s          2.424 MB/s           59.662 GB
               lo0  in      4.450 MB/s          4.450 MB/s           43.723 GB
                    out     4.450 MB/s          4.450 MB/s           43.723 GB
               em1  in      2.629 MB/s          2.982 MB/s          464.533 GB
                    out     2.493 MB/s          2.875 MB/s          484.673 GB
               em0  in    240.458 KB/s        296.941 KB/s          442.368 GB
                    out   512.508 KB/s        850.857 KB/s          416.122 GB

    top output:

          PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
        66885 root        1  92    0 26672K  2784K CPU3    3 528:43 65.48% natd
         9160 dhcpd       1  45    0 31032K  9280K CPU1    1   7:40 32.96% dhcpd
        66455 root        1  20    0 18344K  2856K select  1 119:27  1.37% openvpn
        16043 squid       1  20    0 44404K 17884K kqread  2   0:22  0.29% squid

    squid.conf:

        cat /usr/local/etc/squid/squid.conf
        #
        # Recommended minimum configuration:
        #
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32 ::1
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

        # Example rule allowing access from your local networks.
        # Adapt to list your (internal) IP networks from where browsing
        # should be allowed
        acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
        acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
        acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
        acl localnet src fc00::/7       # RFC 4193 local private network range
        acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines

        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl CONNECT method CONNECT

        #
        # Recommended minimum Access Permission configuration:
        #
        # Only allow cachemgr access from localhost
        http_access allow manager localhost
        http_access deny manager

        # Deny requests to certain unsafe ports
        http_access deny !Safe_ports

        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !SSL_ports

        # We strongly recommend the following be uncommented to protect innocent
        # web applications running on the proxy server who think the only
        # one who can access services on "localhost" is a local user
        http_access deny to_localhost

        #
        # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
        #
        # Example rule allowing access from your local networks.
        # Adapt localnet in the ACL section to list your (internal) IP networks
        # from where browsing should be allowed
        http_access allow localnet
        http_access allow localhost

        # And finally deny all other access to this proxy
        http_access deny all

        # Squid normally listens to port 3128
        http_port 192.168.1.1:3128

        # Uncomment and adjust the following to add a disk cache directory.
        #cache_dir ufs /var/squid/cache 100 16 256

        # Leave coredumps in the first cache dir
        coredump_dir /var/squid/cache

    I understand that the traffic passes through squid several times, but I cannot find out why.
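
    As a first diagnostic step - a hedged suggestion only, since nothing in the question pins down what the loopback traffic actually is - capturing a few packets on lo0 while an upload is in progress, and listing the listening sockets, will usually show which daemon is talking to which:

        # What is actually crossing the loopback? (run during an upload)
        tcpdump -ni lo0 -c 50

        # Which daemons own which listening sockets?
        sockstat -4 -l

    If the capture shows proxy or natd traffic re-entering on lo0, that loop - rather than raw bandwidth - is the thing to chase, starting with the divert rule at 02000 and squid's http_port binding.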

    Read the article

< Previous Page | 310 311 312 313 314 315 316 317 318 319 320 321  | Next Page >