Search Results

Search found 11015 results on 441 pages for 'explain plan'.


  • Streaming media from a Linux server - low footprint is crucial

    - by Mike Haye
    I recently pre-ordered the Raspberry Pi. http://www.raspberrypi.org/faqs For those of you who don't know it, it's a machine with 256 MB of RAM and a 700 MHz processor for $35. I plan to run Linux from an SD card on this machine and have it act as an HTPC, VPN endpoint, and media server. Regarding the media server part, I need to find some Linux software with a small footprint that allows me to stream media to other devices connected to the internet (preferably without having to install any additional software on the client machines). I would also love it if the video could be compressed, so the data usage isn't so big for the client machine (e.g. when I'm using my data plan on my smartphone ;) ). Thanks in advance for any answers :) Mike.

    Read the article

  • Compare associations between domain objects in Grails

    - by user303979
    I am not sure if I am going about this the best way, but I will try to explain what I am trying to do. I have the following domain classes: class User { static hasMany = [goals: Goal] } So each User has a list of Goal objects. I want to take an instance of User and return the 5 Users whose goals lists share the highest number of Goal objects with that instance's goals. Can someone kindly explain how I might go about doing this?
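
    For reference, the ranking itself is easy to express as plain SQL against the join table GORM creates for the hasMany association. The table and column names below (user_goal, user_id, goal_id) are assumptions for illustration only; check the generated schema, as GORM's defaults may differ. A sketch:

        -- Top 5 users sharing the most goals with user :me
        SELECT ug.user_id, COUNT(*) AS shared_goals
        FROM user_goal ug
        WHERE ug.goal_id IN (SELECT goal_id FROM user_goal WHERE user_id = :me)
          AND ug.user_id <> :me
        GROUP BY ug.user_id
        ORDER BY shared_goals DESC
        LIMIT 5;

    The same count-and-rank idea can be written as an HQL or criteria query if you prefer to stay inside GORM.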

    Read the article

  • Optimal Configuration for five 300 GB 15K SAS Drives

    - by Bob
    I recently acquired an HP Z800 workstation that has five 300 GB 15K SAS Drives. This system will be dedicated to running multiple virtual machines under VMware Workstation (Note: I'm not using ESXi because I do plan to use the system for other purposes.). For the host OS, I plan to install RHEL 5. My number one concern is guest performance. For example, should I create a RAID 10 array for the OS and virtual machine storage with four of the drives and reserve the 5th? Or, is there a solution that will provide better performance?

    Read the article

  • Virtual environment firewall with CSF + iptables rules on VM?

    - by luison
    We are getting into virtualization with a Proxmox VE (OpenVZ + KVM) server. Our plan for the firewall is to have CSF (http://configserver.com/cp/csf.html) running on the host machine, as we've had a reasonably good experience with it in the past. Apart from that, we plan simple firewall rules on the VMs (mostly OpenVZ containers sharing the host kernel) and maybe a few simple, specific fail2ban rules. I would appreciate comments from anyone with similar experiences. I understand all traffic comes via the host machine, so a combined firewall there with specific firewalling on the VMs should work, although some iptables rules are hard to get to work in OpenVZ containers.
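
    As an aside, a minimal sketch of the kind of stateless rules that tend to be safest inside OpenVZ containers, since conntrack/state modules are only available to a container if they are enabled for it on the host (the ports are examples only):

        # Stateless example: avoid "-m state" unless the host enables conntrack for the container
        iptables -P INPUT DROP
        iptables -A INPUT -i lo -j ACCEPT
        # Allow replies to connections we initiated (TCP packets that are not new SYNs)
        iptables -A INPUT -p tcp ! --syn -j ACCEPT
        # Allow DNS replies
        iptables -A INPUT -p udp --sport 53 -j ACCEPT
        # Services exposed by this container (examples)
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT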

    Read the article

  • Providing internet access to users in an Enterprise (MPLS) Network

    - by Vivek Bernard
    Scenario: I'm planning to set up a typical head office / branch office network. There will be 25 branch offices, of which two will be overseas (one in the US and the other in the UK); the rest are in India. All of these will be connected via MPLS. Additional details: the number of concurrent users in each office is going to be 25, translating to 650 users in total. The requirement is to provide "proxied" internet connectivity to the branch offices. How should I go about doing it? Plan A: buy an internet leased line at the head office and distribute it through an internal proxy server to all the branches. Plan B: buy separate internet lines for all the branches and set up individual proxies at each branch office.

    Read the article

  • How to set up a file server in a restricted corporate environment

    - by Emilio M Bumachar
    I work in a big corporation, and the disk space my team gets on the corporate file server is so low that I am considering turning my work PC into a file server. I'm asking this community for links to tutorials, software suggestions, and advice in general about how to set it up. My machine is an Intel Core2Duo E7500 @ 3GHz with 3 GB of RAM, running Windows XP Service Pack 3. Upgrading, formatting, or installing another OS is out of the question, but I do have Administrator privileges on the PC, and I can install programs (at least for now). A lot of security software I don't even know about is and must remain installed. I only need communication within the corporate network, which is not restricted. People have usernames (logins) on the corporate network, and I need to use them to restrict access. Simply put, I have a list of logins of team members, and only people on the list should access the files. I have about 150 GB of free disk space and I'm thinking of allocating 100 GB to the team's shared files. I plan monthly backups onto co-workers' machines with the same configuration, but automation of backups is a nice, unnecessary feature: it's totally acceptable for me to manually copy the contents to a different machine once a month. Uptime is important, as everyone would use these files in their daily work. I have experience as a Python and C programmer, but no experience whatsoever as a sysadmin, and almost nothing of my programming experience is network programming. I'm a complete beginner at this. Thanks in advance for any help. EDIT: I honestly appreciate all the warnings, I really do, but what I plan to make available is mostly stuff that is now solely on DVDs just for space reasons. It's 'daily work' to read them, but 'daily work write' files will remain on the corporate server. As for the importance of uptime, I think I overstated it: a few outages are OK, and it's already an improvement over fetching the DVDs. As for policy, my manager is kind of on my side, and I will confirm that before making my move. As for getting more space through the proper channels, well, that was Plan A, and it's still on the table... but I don't have much hope. I'm not as "core business" as I'd like.

    Read the article

  • How to get started setting up IP security cameras

    - by dave
    I have just finished renovating my house. As part of the job, I have Cat6 cable run through the house, including two external plugs. All cables terminate in the same location. My rough plan is to plug in two IP cameras to monitor the front and rear, run PoE from a central router to the two external cameras, plug my PC into the same router, and run magic software X. Any machine plugged into the router or connected wirelessly should then be able to get a live feed and alerts based on motion detection. That's the plan, but I'm not sure how feasible it is, or what hardware and monitoring software to get. Has anyone done something similar? What were your experiences?

    Read the article

  • Is an Android-based Phone a Suitable MP3 Player for Music Streamed over the Internet?

    - by James McFarland
    I am considering getting an HTC phone running Android from Verizon Wireless when I next upgrade my phone. I also have an online account with a music vendor, where I have the rights to listen to my collection but not to download the MP3s. Further, I have an unlimited data plan and Wi-Fi, so I have no concerns about bandwidth. I am especially interested in mounting my phone in a car kit and streaming my online music to my car's sound system while driving. If you have experience with, or have tried, this scenario: is it reasonable to expect my HTC Android phone to provide me with streaming music via my cell data plan anywhere I get cell service?

    Read the article

  • SGE: downtime planning

    - by mousee
    I need to plan downtime for maintenance of my environment (or some part of it), which is managed by Sun Grid Engine. Is it possible to somehow use backfilling information (which I have) to tell the grid engine to schedule only those jobs on the cluster that are able to finish by, say, 10 am the next day? Can I then rely, at 10 am, on all compute nodes being clean (jobs only queued, none running or planned) so that I can start maintenance? Thank you for your time. mousee

    Read the article

  • How do I check SQLite3 syntax?

    - by Benjamin Oakes
    Is there a way to check the syntax of a SQLite3 script without running it? Basically, I'm looking for the SQLite3 equivalent of ruby -c script.rb, perl -c script.pl, php --syntax-check script.php, etc. I've thought of using explain, but most of the scripts I'd like to check are kept around for reference purposes (and don't necessarily have an associated database). An explain-based check would also be hard to integrate with something like Syntastic. (That is, I only want to check syntax, not semantics.)

    Read the article

  • PostgreSQL disaster recovery options

    - by Alex
    My customer has quite a large PostgreSQL database (the total "data" folder is 200 GB), and we are working on a disaster recovery plan. We have identified three different types of disasters so far: hardware outage, too much load, and unintentional data loss due to an erroneously executed bad migration (like DELETE or ALTER TABLE DROP COLUMN). The first two types seem easy to mitigate, but we can't come up with a good mitigation plan for the third type. I proposed using ZFS and frequent (hourly) snapshots, but "ZFS" means "OpenIndiana" these days and our ops engineers do not have much expertise in it, so using OpenIndiana imposes another risk. Colleagues are trying to convince me that restoring from a PostgreSQL PITR backup can be as fast as restoring from a ZFS snapshot, but I highly doubt that replaying, say, 50 GB of archived WALs can be considered "fast". What other options are we missing? Is ZFS the only viable alternative? Can we get fast Postgres restore times in a Linux environment?

    Read the article

  • How does a JavaScript closure work?

    - by e-satis
    As old Albert said: "If you can't explain it to a six-year-old, you don't really understand it yourself." Well, I tried to explain JS closures to a 27-year-old friend and completely failed. Can anybody pretend that I am 6 and strangely interested in that subject? EDIT: I have seen the Scheme example given on SO, and it did not help.
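
    For what it's worth, a tiny JavaScript example usually does more than any definition: a closure is a function that keeps access to the variables of the scope it was created in, even after that outer function has returned.

        function makeCounter() {
          var count = 0;               // local to makeCounter
          return function () {         // the inner function "closes over" count
            count = count + 1;
            return count;
          };
        }

        var next = makeCounter();
        next();  // 1
        next();  // 2 - count survives because the closure still references it

    Nothing outside makeCounter can see or reset count; only the returned function can, and that is the whole trick.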

    Read the article

  • What should I know before considering a VPS or dedicated server?

    - by Corey Sarnia
    I have a plan for the future for an application and web service. The client will have an application that will send requests to a server-side Java back-end that will process requests, and the server should also be able to host a website, preferably on a WAMP setup (which is what I'm used to; very little *nix knowledge). Now, I cannot provide any hard stats because this is only a plan that's in a discussion stage. However, we do fully expect it will scale enough to need some type of dedicated hosting. My question is this: what types of things should I know about before looking into getting hosting? What should I be asking the hosting providers before I decide on a purchase? When is it appropriate to switch from a VPS to a fully dedicated server?

    Read the article

  • Can't make outbound calls - Asterisk

    - by deanvz
    I have a basic Atcom IP01 with the following configuration: a registered VoIP (SIP) trunk, a registered VoIP phone (extension), a dial plan, and an outbound call rule. I made use of this manual that the manufacturer supplies: http://www.atcom.cn/cn/download/pbx/ip01/ATCOM%20IP01-User%20Manual-V1.0-EN.pdf Whenever I try to make a call, it seems that the outbound call rule I defined is not treated as the default rule, even though the dial plan lists it as the only outbound call rule. When dialling, I see the following in the log file: [Jan 1 09:10:07] NOTICE[176]: chan_sip.c:14377 handle_request_invite: Call from '6001' to extension '00765243679' rejected because extension not found. The number 00765243679 is a cellular number. Am I missing a configuration step needed to make outbound calls? Landline, other VoIP numbers, and cellular calls have all been tried.
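
    For context, the "extension not found" notice means the dialed number did not match any pattern in the dial-plan context that extension 6001 is placed in. Purely as an illustration (the IP01's web GUI generates its own config, and the trunk name below is a placeholder), a raw Asterisk outbound rule matching numbers that start with 0 looks roughly like this:

        ; illustrative extensions.conf fragment, not the IP01's generated file
        [outbound-calls]
        ; _0X. matches any number starting with 0 followed by one or more digits
        exten => _0X.,1,Dial(SIP/myprovider/${EXTEN})
        exten => _0X.,n,Hangup()

    So the things to check in the GUI are that the outbound call rule's pattern actually covers 00-prefixed numbers, and that extension 6001's dial plan really includes that rule.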

    Read the article

  • Agile: User Stories for Machine Learning Project?

    - by benjismith
    I've just finished up with a prototype implementation of a supervised learning algorithm, automatically assigning categorical tags to all the items in our company database (roughly 5 million items). The results look good, and I've been given the go-ahead to plan the production implementation project. I've done this kind of work before, so I know what the functional components of the software are. I need a collection of web crawlers to fetch data. I need to extract features from the crawled documents. Those documents need to be segregated into a "training set" and a "classification set", and feature-vectors need to be extracted from each document. Those feature vectors are self-organized into clusters, and the clusters are passed through a series of rebalancing operations. Etc etc etc etc. So I put together a plan, with about 30 unique development/deployment tasks, each with time estimates. The first stage of development -- ignoring some advanced features that we'd like to have in the long-term, but aren't high enough priority to make it into the development schedule yet -- is slated for about two months' worth of work. (Keep in mind that I already have a working prototype, so the final implementation is significantly simpler than if the project was starting from scratch.) My manager said the plan looked good to him, but he asked if I could reorganize the tasks into user stories, for a few reasons: (1) our project management software is totally organized around user stories; (2) all of our scheduling is based on fitting entire user stories into sprints, rather than individually scheduling tasks; (3) other teams -- like the web developers -- have made great use of agile methodologies, and they've benefited from modelling all the software features as user stories. So I created a user story at the top level of the project: As a user of the system, I want to search for items by category, so that I can easily find the most relevant items within a huge, complex database. Or maybe a better top-level story for this feature would be: As a content editor, I want to automatically create categorical designations for the items in our database, so that customers can easily find high-value data within our huge, complex database. But that's not the real problem. The tricky part, for me, is figuring out how to create subordinate user stories for the rest of the machine learning architecture. Case in point... I know that the algorithm requires two major architectural subdivisions: (A) training, and (B) classification. And I know that the training portion of the architecture requires construction of a cluster-space. All the Agile Development literature I've read seems to indicate that a user story should be the "smallest possible implementation that provides any business value". And that makes a lot of sense when designing a piece of end-user software. Start small, and then incrementally add value when users demand additional functionality. But a cluster-space, in and of itself, provides zero business value. Nor does a crawler, or a feature-extractor. There's no business value (not for the end-user, or for any of the roles internal to the company) in a partial system. A trained cluster-space is only possible with the crawler and feature extractor, and only relevant if we also develop an accompanying classifier. 
I suppose it would be possible to create user stories where the subordinate components of the system act as the users in the stories: As a supervised-learning cluster-space construction routine, I want to consume data from a feature extractor, so that I can exist. But that seems really weird. What benefit does it provide me as the developer (or our users, or any other stakeholders, for that matter) to model my user stories like that? Although the main story can be easily divided along architectural-component boundaries (crawler, trainer, classifier, etc), I can't think of any useful decomposition from a user's perspective. What do you guys think? How do you plan Agile user stories for sophisticated, indivisible, non-user-facing components?

    Read the article

  • How can I incrementally back up a large amount of data [with rsync]?

    - by Annan
    A website contains ~40 GB of images and files which need to be backed up. Rollbacks need to be possible daily for the last 30 days, and the backup server has < 1.2 TB of space. My idea is to have one full backup from 30 days ago, then incremental backups for the last 30 days. Each day, the oldest incremental backup is combined with the full backup and a new incremental backup is added. Can this strategy be implemented with rsync, and if so, how? Are there any problems with this plan? A better plan? PS: I mean incremental backups, not backing up incrementally (which rsync does automatically).
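
    Roughly speaking, the usual rsync answer here is --link-dest snapshots rather than a combine-the-oldest-incremental step: each day's directory looks like a full backup, but unchanged files are hard links into the previous day's snapshot, so only changed files take new space. A rough sketch (paths and dates are placeholders):

        # Day N: copy only what changed since yesterday; identical files become
        # hard links into yesterday's snapshot, so each day reads as a full tree.
        rsync -a --delete \
          --link-dest=/backup/2012-05-09 \
          /var/www/site/ /backup/2012-05-10/

    Rolling the 30-day window is then just deleting the oldest snapshot directory, and a rollback is copying (or serving) the directory for the wanted day.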

    Read the article

  • Creating a separate static content site for IIS7 and MVC

    - by JK01
    With reference to this Server Fault blog post: A Few Speed Improvements, which talks about how static content for Stack Exchange is served from a separate cookieless domain... how would someone go about doing this on IIS 7.5 for an ASP.NET MVC site? The plan so far: (1) register a domain, e.g. static.com, and create a new website in IIS; (2) manually copy the js / css / images folders from the MVC site as-is so that they have the same paths on the new server; (3) enable IIS gzip settings (js/css = high compression, images = none); (4) set caching with far-future expiry dates via <clientCache cacheControlCustom="public" /> in the web.config; (5) never set any cookies on the static.com site; (6) combine and minify js / css; (7) auto-deploy changes in static content with WebDeploy. Is this plan correct? And how can you use WebDeploy to deploy the whole web app to one server and then only the static items to another? I can see there is a similar question, but for Apache (Creating a cookie-free domain to serve static content), so it doesn't apply here.
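
    On point (4), the far-future-expiry part of the static site's web.config might look roughly like the following; the one-year value is only an example, and cacheControlMaxAge takes a d.hh:mm:ss timespan:

        <system.webServer>
          <staticContent>
            <!-- roughly one year of client caching, marked public so proxies may cache too -->
            <clientCache cacheControlMode="UseMaxAge"
                         cacheControlMaxAge="365.00:00:00"
                         cacheControlCustom="public" />
          </staticContent>
        </system.webServer>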

    Read the article

  • Strange use of the index in MySQL

    - by user309067
        explain SELECT feed_objects.* FROM feed_objects WHERE (feed_objects.feed_id IN (165,160,159,158,157,153,152,151,150,149,148,147,129,128,127,126,125,124,122,121,120,119,118,117,116,115,114,113,111,110));

        +----+-------------+--------------+------+---------------+------+---------+------+------+-------------+
        | id | select_type | table        | type | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+--------------+------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | feed_objects | ALL  | by_feed_id    | NULL | NULL    | NULL |  188 | Using where |
        +----+-------------+--------------+------+---------------+------+---------+------+------+-------------+

    The index 'by_feed_id' is not used. But when I list fewer values in the WHERE clause, everything works as expected:

        explain SELECT feed_objects.* FROM feed_objects WHERE (feed_objects.feed_id IN (165,160,159,158,157,153,152,151,150,149,148,147,129,128,127,125,124));

        +----+-------------+--------------+-------+---------------+------------+---------+------+------+-------------+
        | id | select_type | table        | type  | possible_keys | key        | key_len | ref  | rows | Extra       |
        +----+-------------+--------------+-------+---------------+------------+---------+------+------+-------------+
        |  1 | SIMPLE      | feed_objects | range | by_feed_id    | by_feed_id | 9       | NULL |   18 | Using where |
        +----+-------------+--------------+-------+---------------+------------+---------+------+------+-------------+

    Here the index 'by_feed_id' is used. What is the problem?
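
    A likely explanation (it depends on the table statistics) is that the long IN list matches roughly 30 of the 188 rows, so the optimizer estimates a full scan to be cheaper than that many separate index lookups; with fewer values the estimated match count drops and the range scan on by_feed_id wins. One way to test whether the index actually helps here is to force it and compare:

        -- Force the index so the two plans (and their run times) can be compared
        EXPLAIN SELECT feed_objects.*
        FROM feed_objects FORCE INDEX (by_feed_id)
        WHERE feed_objects.feed_id IN (165,160,159,158,157,153,152,151,150,149,148,147,
                                       129,128,127,126,125,124,122,121,120,119,118,117,
                                       116,115,114,113,111,110);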

    Read the article

  • Simplifying const Overloading?

    - by templatetypedef
    Hello all - I've been teaching a C++ programming class for many years now, and one of the trickiest things to explain to students is const overloading. I commonly use the example of a vector-like class and its operator[] function:

        template <typename T>
        class Vector {
        public:
            T& operator[] (size_t index);
            const T& operator[] (size_t index) const;
        };

    I have little to no trouble explaining why two versions of the operator[] function are needed, but in trying to explain how to unify the two implementations I often find myself wasting a lot of time on language arcana. The problem is that the only good, reliable way I know to implement one of these functions in terms of the other is the const_cast/static_cast trick:

        template <typename T>
        const T& Vector<T>::operator[] (size_t index) const {
            /* ... your implementation here ... */
        }

        template <typename T>
        T& Vector<T>::operator[] (size_t index) {
            return const_cast<T&>(static_cast<const Vector&>(*this)[index]);
        }

    The problem with this setup is that it's extremely tricky to explain and not at all intuitively obvious. When you explain it as "cast to const, then call the const version, then strip off constness" it's a little easier to understand, but the actual syntax is frightening. Explaining what const_cast is, why it's appropriate here, and why it's almost universally inappropriate elsewhere usually takes me five to ten minutes of lecture time, and making sense of this whole expression often requires more effort than the difference between const T* and T* const. I feel that students need to know about const overloading and how to do it without needlessly duplicating the code in the two functions, but this trick seems a bit excessive for an introductory C++ programming course. My question is this: is there a simpler way to implement const-overloaded functions in terms of one another? Or is there a simpler way of explaining this existing trick to students? Thanks so much!
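
    For what it's worth, one commonly suggested simplification, assuming a C++17 or later compiler (which post-dates this question), is std::as_const from <utility>: it names the "add const" half of the trick, leaving only a single const_cast to explain. A sketch:

        #include <cstddef>
        #include <utility>   // std::as_const

        template <typename T>
        class Vector {
        public:
            const T& operator[](std::size_t index) const {
                /* ... your implementation here ... */
            }

            T& operator[](std::size_t index) {
                // Reuse the const overload; casting const away again is safe
                // because *this is known to be non-const in this overload.
                return const_cast<T&>(std::as_const(*this)[index]);
            }
        };

    The mechanics are identical to the static_cast version; it just reads closer to the intent.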

    Read the article

  • Efficiently installing fully-patched Windows XP, IE, and Office 2007 on an isolated PC

    - by JPaget
    I have been tasked with installing Windows XP, IE, and Office 2007 on a computer that will become part of a standalone network not connected to the Internet. What is a good way to install all of the security updates? I'm installing from CDs of Windows XP SP2 and MS Office 2007. Next, I plan to download Windows XP SP3 and Office 2007 SP2, burn them to CDs, and install both service packs. Finally, I plan to go to the Microsoft Download Center, download all applicable security updates, burn them to CD, and install them. I estimate that there are over 100 of these updates. Is there a more efficient way to do this?

    Read the article

  • What is the best Linux distro for a PHP web server? [on hold]

    - by benjisail
    We are planning to upgrade our hardware, and at the same time we plan to reinstall our web server from a fresh OS. Currently our web server is running CentOS 4.7 on a dedicated server. We are using Apache, MySQL, PHP, SVN, FTP, and all the needed tools for a web server managed through SSH. We plan to use a cloud server for the new web server. I don't know which Linux distro to pick for this new server. Should I stay with CentOS and just take the latest release, 5.4, or should I switch to something else like a Debian-based distro (Ubuntu Server)? The thing I didn't like with CentOS was the unavailability of the latest versions of PHP and Apache through yum. This makes it harder to keep our web server updated with the latest technologies... Thanks for your help!

    Read the article

  • Why is SQLite3 using covering indices instead of the indices I created?

    - by Geoff
    I have an extremely large database (contacts has ~3 billion entries, people has ~280 million entries, and the other tables have a negligible number of entries). Most other queries I've run are really fast. However, I've encountered a more complicated query that's really slow. I'm wondering if there's any way to speed this up. First of all, here is my schema:

        CREATE TABLE activities (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
        CREATE TABLE contacts (
          id INTEGER PRIMARY KEY,
          person1_id INTEGER NOT NULL,
          person2_id INTEGER NOT NULL,
          duration REAL NOT NULL, -- hours
          activity_id INTEGER NOT NULL
          -- FOREIGN_KEY(person1_id) REFERENCES people(id),
          -- FOREIGN_KEY(person2_id) REFERENCES people(id)
        );
        CREATE TABLE people (
          id INTEGER PRIMARY KEY,
          state_id INTEGER NOT NULL,
          county_id INTEGER NOT NULL,
          age INTEGER NOT NULL,
          gender TEXT NOT NULL, -- M or F
          income INTEGER NOT NULL
          -- FOREIGN_KEY(state_id) REFERENCES states(id)
        );
        CREATE TABLE states (
          id INTEGER PRIMARY KEY,
          name TEXT NOT NULL,
          abbreviation TEXT NOT NULL
        );
        CREATE INDEX activities_name_index on activities(name);
        CREATE INDEX contacts_activity_id_index on contacts(activity_id);
        CREATE INDEX contacts_duration_index on contacts(duration);
        CREATE INDEX contacts_person1_id_index on contacts(person1_id);
        CREATE INDEX contacts_person2_id_index on contacts(person2_id);
        CREATE INDEX people_age_index on people(age);
        CREATE INDEX people_county_id_index on people(county_id);
        CREATE INDEX people_gender_index on people(gender);
        CREATE INDEX people_income_index on people(income);
        CREATE INDEX people_state_id_index on people(state_id);
        CREATE INDEX states_abbreviation_index on states(abbreviation);
        CREATE INDEX states_name_index on states(name);

    Note that I've created an index on every column in the database. I don't care about the size of the database; speed is all I care about. Here's an example of a query that, as expected, runs almost instantly:

        SELECT count(*) FROM people, states
        WHERE people.state_id=states.id and states.abbreviation='IA';

    Here's the troublesome query:

        SELECT * FROM contacts
        WHERE rowid IN
          (SELECT contacts.rowid FROM contacts, people, states
           WHERE contacts.person1_id=people.id AND people.state_id=states.id
             AND states.name='Kansas'
           INTERSECT
           SELECT contacts.rowid FROM contacts, people, states
           WHERE contacts.person2_id=people.id AND people.state_id=states.id
             AND states.name='Missouri');

    Now, what I think would happen is that each subquery would use each relevant index I've created to speed this up. However, when I show the query plan, I see this:

        sqlite> EXPLAIN QUERY PLAN SELECT * FROM contacts WHERE rowid IN (SELECT contacts.rowid FROM contacts, people, states WHERE contacts.person1_id=people.id AND people.state_id=states.id AND states.name='Kansas' INTERSECT SELECT contacts.rowid FROM contacts, people, states WHERE contacts.person2_id=people.id AND people.state_id=states.id AND states.name='Missouri');
        0|0|0|SEARCH TABLE contacts USING INTEGER PRIMARY KEY (rowid=?) (~25 rows)
        0|0|0|EXECUTE LIST SUBQUERY 1
        2|0|2|SEARCH TABLE states USING COVERING INDEX states_name_index (name=?) (~1 rows)
        2|1|1|SEARCH TABLE people USING COVERING INDEX people_state_id_index (state_id=?) (~5569556 rows)
        2|2|0|SEARCH TABLE contacts USING COVERING INDEX contacts_person1_id_index (person1_id=?) (~12 rows)
        3|0|2|SEARCH TABLE states USING COVERING INDEX states_name_index (name=?) (~1 rows)
        3|1|1|SEARCH TABLE people USING COVERING INDEX people_state_id_index (state_id=?) (~5569556 rows)
        3|2|0|SEARCH TABLE contacts USING COVERING INDEX contacts_person2_id_index (person2_id=?) (~12 rows)
        1|0|0|COMPOUND SUBQUERIES 2 AND 3 USING TEMP B-TREE (INTERSECT)

    In fact, if I show the query plan for the first query I posted, I get this:

        sqlite> EXPLAIN QUERY PLAN SELECT count(*) FROM people, states WHERE people.state_id=states.id and states.abbreviation='IA';
        0|0|1|SEARCH TABLE states USING COVERING INDEX states_abbreviation_index (abbreviation=?) (~1 rows)
        0|1|0|SEARCH TABLE people USING COVERING INDEX people_state_id_index (state_id=?) (~5569556 rows)

    Why is SQLite using covering indices instead of the indices I created? Shouldn't the search in the people table be able to happen in log(n) time given state_id, which in turn is found in log(n) time?

    Read the article
