Search Results

Search found 9662 results on 387 pages for 'sales and operations plan'.

  • Minimizing SQL queries using join with one-to-many relationship

    - by Brian
    So let me preface this by saying that I'm not an SQL wizard by any means. What I want to do is simple as a concept, but has presented me with a small challenge in trying to minimize the number of database queries I'm performing. Let's say I have a table of departments. Within each department is a list of employees. What is the most efficient way of listing all the departments and the employees in each one? So for example, if I have a department table with:

      id  name
      1   sales
      2   marketing

    and a people table with:

      id  department_id  name
      1   1              Tom
      2   1              Bill
      3   2              Jessica
      4   1              Rachel
      5   2              John

    what is the best way to list all departments and all employees for each department, like so:

      Sales
        Tom
        Bill
        Rachel
      Marketing
        Jessica
        John

    Pretend both tables are actually massive. (I want to avoid getting a list of departments and then looping through the result, doing an individual query for each department.) Think similarly of selecting the statuses/comments in a Facebook-like system, where statuses and comments are stored in separate tables.
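
    A minimal sketch of the single-query approach this is asking for, assuming generic SQL and the two tables exactly as described: fetch every (department, employee) pair in one join and let the application do the grouping as it reads the ordered rows.

      SELECT d.id, d.name AS department, p.name AS employee
      FROM department AS d
      LEFT JOIN people AS p ON p.department_id = d.id
      ORDER BY d.name, p.id;

    One pass over the result set then prints a department heading whenever d.id changes; the LEFT JOIN keeps departments that have no employees at all.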

    Read the article

  • What are the performance limits of a database?

    - by Tommy
    What are some rough performance limits (reads/s, writes/s) for a single database server (no master-slave architecture), assuming storage on disk? How many reads/s and writes/s can I expect, depending on the kind of disk (SSD vs. non-SSD), assuming simple operations (select one row by primary key, update one row, correctly indexed)? I assume this limit is dependent on disk seek/write time. EDIT: My question is more about getting rough metrics of the number of operations a database supports: to be able to know, for example, whether a new feature triggering 300 inserts/s can be supported without scaling out with additional servers.

    Read the article

  • Optimal Configuration for five 300 GB 15K SAS Drives

    - by Bob
    I recently acquired an HP Z800 workstation that has five 300 GB 15K SAS Drives. This system will be dedicated to running multiple virtual machines under VMware Workstation (Note: I'm not using ESXi because I do plan to use the system for other purposes.). For the host OS, I plan to install RHEL 5. My number one concern is guest performance. For example, should I create a RAID 10 array for the OS and virtual machine storage with four of the drives and reserve the 5th? Or, is there a solution that will provide better performance?

    Read the article

  • Virtual environment firewall with CSF + iptables rules on VM?

    - by luison
    We are getting into virtualization with a Proxmox VE (OpenVZ + KVM) server. Our plan for the firewall is to have CSF (http://configserver.com/cp/csf.html) running on the host machine, as we've had a reasonably good experience with it in the past. Apart from that, we plan simple firewall rules on the VMs (mostly OpenVZ containers sharing the host kernel) and maybe a few specific fail2ban rules. I would appreciate comments from anyone with similar experiences. I understand all traffic comes via the host machine, so a combined firewall there with specific firewalling on the VMs should work, although some iptables rules are hard to get working in OpenVZ containers.

    Read the article

  • Providing internet access to users in an Enterprise (MPLS) Network

    - by Vivek Bernard
    Scenario: I'm planning to set up a typical Head Office - Branch Office(s) network. There will be 25 branch offices, of which two will be overseas (one in the US and the other in the UK) and the rest in India. All of these will be connected via MPLS. Additional details: the number of concurrent users in each office is going to be 25, translating to 650 users. The requirement is to provide "proxied" internet connectivity to the branch offices. How should I go about doing it?
    Plan A: Buy an internet leased line at the Head Office and distribute it through an internal proxy server to all the branches.
    Plan B: Buy separate internet lines for all the branches and set up individual proxies in each branch office.

    Read the article

  • How to set up a file server in a restricted corporate environment

    - by Emilio M Bumachar
    I work in a big corporation, and the disk space my team gets on the corporate file server is so low that I am considering turning my work PC into a file server. I ask this community for links to tutorials, software suggestions, and advice in general about how to set it up. My machine is an Intel Core2Duo E7500 @ 3GHz with 3 GB of RAM, running Windows XP Service Pack 3. Upgrading, formatting, or installing another OS is out of the question. But I do have Administrator privileges on the PC, and I can install programs (at least for now). A lot of security software I don't even know about is installed and must remain installed. But I only need communication within the corporate network, which is not restricted. People have usernames (logins) on the corporate network, and I need to use them to restrict access. Simply put, I have a list of logins of team members, and only people on the list should access the files. I have about 150 GB of free disk space and am thinking of allocating 100 GB to the team's shared files. I plan monthly backups onto co-workers' machines of the same configuration, but automation of backups is a nice, unnecessary feature: it's totally acceptable for me to manually copy the contents to a different machine once a month. Uptime is important, as everyone would use these files in their daily work. I have experience as a Python and C programmer, but no experience whatsoever as a sysadmin, and almost none of my programming experience is network programming. I'm a complete beginner at this. Thanks in advance for any help. EDIT: I honestly appreciate all the warnings, I really do, but what I plan to make available is mostly stuff that now sits solely on DVDs, just for space reasons. It's 'daily work' to read them, but 'daily work write' files will remain on the corporate server. As for the importance of uptime, I think I overstated it: a few outages are OK; it's already an improvement over getting the DVDs. As for policy, my manager is kind of on my side, and I will confirm that before making my move. As for getting more space through the proper channels, well, that was Plan A, and it's still on the table... but I don't have much hope. I'm not as "core business" as I'd like.

    Read the article

  • How to get started setting up IP security cameras

    - by dave
    I have just finished renovating my house. As part of the job, I had Cat6 cable run through the house, including two external plugs. All cables terminate in the same location. My rough plan is to plug in two IP cameras to monitor the front and rear, run PoE from a central router to the two external cameras, plug my PC into the same router, and run magic software X. Any machine plugged into the router, or connected wirelessly, should then be able to get a live feed and alerts based on motion detection. That's the plan, but I'm not sure how feasible it is, what hardware to get, or what monitoring software to use. Has anyone done something similar? What were your experiences?

    Read the article

  • Is an Android-based Phone a Suitable MP3 Player for Music Streamed over the Internet?

    - by James McFarland
    I am considering getting an HTC phone running Android from Verizon Wireless when I next upgrade my phone. I also have an online account with a music vendor, where I have rights to listen to my collection but not download the MP3s. Further, I have an unlimited data plan and Wi-Fi, so I have full access to bandwidth without any volume concerns. I am especially interested in mounting my phone in a car kit and streaming my online music to my car's sound system while driving. If you are experienced in this scenario, or have tried it - is it reasonable to expect my HTC Android phone to provide me with streaming music via my cell data plan anywhere I get cell service?

    Read the article

  • How to access Gmail for storage in a custom CRM SQL Server DB?

    - by Optimal Solutions
    I have a client who wants his custom-written CRM to be able to access his salespeople's emails so that, effectively, a history of email conversations between customer and salesperson is stored inside the CRM's database. The CRM is written in VB 2008 and the database is SQL Server 2008. The only email these people use, in the shop and on the road, is Gmail; each salesperson has their own Gmail address. That's how they operate. If they're on the road and respond to a customer's email inquiry about a product, they would like that email conversation to be stored in a table in the database. I think that's the part I can't wrap my head around: how to get access to the email data (knowing the user ID and password), and how to do so from VB 2008.

    Read the article

  • SGE: downtime planning

    - by mousee
    I need to plan downtime for maintenance of my environment (or some part of my environment) managed by Sun Grid Engine. Is it possible to somehow use backfilling information to tell the grid engine to schedule only those jobs on the cluster which are able to finish (I have backfilling information) by, let's say, 10 am the next day? Can I then rely on the fact that at 10 am all compute nodes are clean, jobs are only queued, and nothing is scheduled to run, so that I can start maintenance? Thank you for your time. mousee

    Read the article

  • database table in Magento does not exist: sales_flat_shipment_grid

    - by dene
    We're using Magento 1.4.0.1 and want to use an extension from a 3rd party developer. The extension does not work because of a join to the table "sales_flat_shipment_grid":

      $collection = $model->getCollection()->join(
          'sales/shipment_grid',
          'increment_id=shipment',
          array('order_increment_id' => 'order_increment_id', 'shipping_name' => 'shipping_name'),
          null,
          'left');

    Unfortunately this table does not exist in our database, so the error "Can't retrieve entity config: sales/shipment_grid" appears. If I comment this part out, the extension works, but I guess it does not work properly. Does anybody know something about this table? There is a backend option for the catalog to use "flat tables", but this applies only to the catalog, and those tables already exist, no matter which option is checked. Thank you a lot! :-)

    Read the article

  • PostgreSQL disaster recovery options

    - by Alex
    My customer has quite a large PostgreSQL database (the total "data" folder size is 200G) and we are working on a disaster recovery plan. We have identified three different types of disasters so far: hardware outage, too much load, and unintentional data loss due to an erroneously executed bad migration (like DELETE or ALTER TABLE DROP COLUMN). The first two types seem easy to mitigate, but we can't come up with a good mitigation plan for the third type. I proposed to use ZFS and frequent (hourly) snapshots, but "ZFS" means "OpenIndiana" these days and our ops engineers do not have much expertise in it, so using OpenIndiana imposes another risk. Colleagues try to convince me that restoring from a PostgreSQL PITR backup can be as fast as restoring from a ZFS snapshot, but I highly doubt that replaying, say, 50G of archived WALs can be considered "fast". What other options are we missing? Is ZFS the only viable alternative? Can we get a fast Pg DB restore time in a Linux environment?
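
    For the third disaster type, a hedged sketch of the PITR angle, assuming WAL archiving (archive_mode/archive_command) is already configured: the replay-time objection largely goes away if base backups are taken frequently, since recovery only replays the WAL accumulated since the last base backup. PostgreSQL of this era exposes the base-backup hooks as ordinary SQL functions (the 'daily_base' label below is arbitrary):

      -- mark the start of a filesystem-level base backup on the live server
      SELECT pg_start_backup('daily_base');
      -- ... copy the data directory to the backup host ...
      SELECT pg_stop_backup();

    On restore, setting recovery_target_time in recovery.conf to a moment just before the bad migration stops replay there, so the erroneous DELETE or ALTER is never reapplied.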

    Read the article

  • Access Denied on LAN IIS Access via Integrated Authentication

    - by Pharao2k
    I have an IIS 7.5 (Win2k8R2) webserver which publishes a UNC share (on a fileserver) with restricted access. The AppPool identity is a domain user account with read access to the mentioned UNC path. Authentication modes are set to Anonymous and Integrated Authentication. When I access the path via localhost from the webserver itself, it works, but if I try the hostname or IP from either the webserver or a client, I get three authentication prompts (it does not accept my credentials) and a 401.3 Unauthorized error message (but it states that I am logged in with my normal credentials, which definitely have access rights to the UNC path and its files). The Security Zone is set to Local Intranet. Sysinternals Process Monitor lists CreateFile operations on the UNC path (and other existing files in it) with Access Denied while impersonating the correct credentials. I don't understand why it is not working; it seems to use the correct credentials at every step of the way, but these operations still fail.

    Read the article

  • Not enough storage is available to process this command

    - by Mohit
    I am getting this error on almost all operations on a Windows 7 Pro 32-bit machine. By operations I mean anything I do: update a repo from Subversion, access a local IIS site, copy a big folder, run an installer. Sometimes if I try again, it gets resolved. I think there is something wrong with Windows 7. I searched around and found posts suggesting to increase the IRPStackSize value in the registry; I did that, no luck. I am using Microsoft Security Essentials version 1.0.1961.0 as my antivirus package. Once this error starts popping up, I have to restart, and then after some random time it starts showing up again. Any help is appreciated. I am losing a lot of my time restarting my system or retrying again and again.

    Read the article

  • What should I know before considering a VPS or dedicated server?

    - by Corey Sarnia
    I have a plan for the future for an application and web service. The client will have an application that will send requests to a server-side Java back-end that will process requests, and the server should also be able to host a website, preferably on a WAMP setup (which is what I'm used to; very little *nix knowledge). Now, I cannot provide any hard stats because this is only a plan that's in a discussion stage. However, we do fully expect it will scale enough to need some type of dedicated hosting. My question is this: what types of things should I know about before looking into getting hosting? What should I be asking the hosting providers before I decide on a purchase? When is it appropriate to switch from a VPS to a fully dedicated server?

    Read the article

  • Can't make outbound calls - Asterisk

    - by deanvz
    I have a basic Atcom IP01 with the following config:

      Registered VoIP (SIP) trunk
      Registered VoIP phone - ext
      Dial plan
      Outbound call rule

    I made use of the manual that the manufacturer supplies: http://www.atcom.cn/cn/download/pbx/ip01/ATCOM%20IP01-User%20Manual-V1.0-EN.pdf. Whenever I try to make a call, it seems that the outbound call rule that I defined does not get regarded as the default rule, even though the dial plan lists it as the only outbound call rule. When dialling I see the following in the log file: [Jan 1 09:10:07] NOTICE[176]: chan_sip.c:14377 handle_request_invite: Call from '6001' to extension '00765243679' rejected because extension not found. The 00765243679 is a cellular number. Am I missing a configuration step for making outbound calls? Landline, other VoIP numbers, and cellular calls have all been tried.

    Read the article

  • Is SQL Azure useful without Windows Azure?

    - by KallDrexx
    I am currently doing some research to get preliminary IT cost projections for a project, and I was looking at Azure. Since this is a startup, I do not want to deal with IT operations myself and instead am looking at having it all professionally hosted. I am looking at Azure due to the SLA assurances, the disaster recovery operations already in place, and the reliability. I'm playing with some numbers, and I am wondering if hosting my database on SQL Azure is an option while hosting the actual webpage on another host, until I need the frontend scalability of Azure. Is this actually feasible, or will the latency in requests between the web host and Azure be too much, so that I would be better off hosting both on the same service?

    Read the article

  • SSRS Column Grouping with specific order

    - by AmiT
    Hi Experts, is it possible to change the order of records/groups in a result set from a query using GROUP BY? I have a query:

      SELECT Category, Subcategory, ProductName, CreatedDate, Sales
      FROM TableCategory tc
      INNER JOIN TableSubCategory ts ON tc.col1 = ts.col2
      INNER JOIN TableProductName tp ON ts.col2 = tp.col3
      GROUP BY Category, SubCategory, ProductName, CreatedDate, Sales

    Now, I am creating an SSRS report where Category is the primary row group and SubCategory is its child row group. ProductName is the primary column group. It works perfectly, but it shows the ProductNames in alphabetical order. I want it to show the ProductNames in a custom order (defined by me): ProductNo5 in the 3rd column, ProductNo8 in the 4th column, ProductNo1 in the 5th column... and so on!
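
    One way to get such an order, sketched here on the assumption that the desired sequence is fixed and known up front: add a numeric sort key to the dataset and sort the ProductName column group on it instead of on the name (the ProductNoN values are the hypothetical names from the question).

      SELECT Category, Subcategory, ProductName, CreatedDate, Sales,
             CASE ProductName
                  WHEN 'ProductNo5' THEN 1   -- 3rd matrix column
                  WHEN 'ProductNo8' THEN 2   -- 4th matrix column
                  WHEN 'ProductNo1' THEN 3   -- 5th matrix column
                  ELSE 99                    -- anything unlisted sorts last
             END AS ProductSortKey
      FROM TableCategory tc
      INNER JOIN TableSubCategory ts ON tc.col1 = ts.col2
      INNER JOIN TableProductName tp ON ts.col2 = tp.col3
      GROUP BY Category, SubCategory, ProductName, CreatedDate, Sales

    In the matrix, the column group's sort expression then becomes =Fields!ProductSortKey.Value rather than the default sort on ProductName.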

    Read the article

  • Intellisense in Visual Studio 2010

    - by Erik
    For some reason the IntelliSense in VB.NET stopped working when I use an Aggregate lambda expression inside a With statement.

      With Me.SalesPackage
          .WebLinks = Sales.Where(Function(f) f.Current.BookerWeb > 0).Count
          .WebAmount = Aggregate o In Sales.Where(Function(f) f.Current.WebBooker > 0) Into Sum(o.Current.WebPrice)
      End With

    If I insert a new line between .WebLinks and .WebAmount and start typing, it works. But it won't work if I do it after the Aggregate statement... Any ideas?

    Read the article

  • Writing OLAP SQL query

    - by user1859596
    I have a project I am working on that requires the following:

      1. Create a normalized sample RDBMS (5 tables).
      2. Using Java, I entered 1 million rows of data into each table.
      3. Run two OLTP and two OLAP queries on the normalized tables.
      4. Denormalize the tables.
      5. Run the same OLTP and OLAP queries on them and compare times.

    What does an OLAP query mean? I've searched the internet, and all I can find is that I have to make a cube and apply queries to it. How can I write an OLAP query on an RDBMS? I have a sample of normalized tables (orders, product, customer, branch, sales):

      sales    : order_id, product_id, quantity
      product  : product_id, name, description, price, sales_tax
      customer : customer_id, f_name, l_name, tel_no, addr, nic, city
      branch   : branch_id, name, tel_no, addr, city
      orders   : order_id, customer_id, order_date, branch_id

    I want to write an OLAP query on the above tables. I am using Oracle Express with SQL Developer.
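
    For what it's worth, a sketch of an OLAP-style query against exactly these tables, assuming Oracle syntax (the asker's platform). Where an OLTP query touches a row or two by key, an OLAP query scans and aggregates the whole fact table; CUBE adds the per-city and per-product subtotals plus a grand total, which is the kind of result a cube browser would show.

      SELECT b.city,
             p.name                    AS product,
             SUM(s.quantity * p.price) AS revenue
      FROM sales s
      JOIN orders  o ON o.order_id   = s.order_id
      JOIN branch  b ON b.branch_id  = o.branch_id
      JOIN product p ON p.product_id = s.product_id
      GROUP BY CUBE (b.city, p.name);

    An OLTP counterpart for the timing comparison could be as simple as SELECT * FROM orders WHERE order_id = :id.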

    Read the article

  • Agile: User Stories for Machine Learning Project?

    - by benjismith
    I've just finished up with a prototype implementation of a supervised learning algorithm, automatically assigning categorical tags to all the items in our company database (roughly 5 million items). The results look good, and I've been given the go-ahead to plan the production implementation project.

    I've done this kind of work before, so I know how the functional components of the software fit together. I need a collection of web crawlers to fetch data. I need to extract features from the crawled documents. Those documents need to be segregated into a "training set" and a "classification set", and feature vectors need to be extracted from each document. Those feature vectors are self-organized into clusters, and the clusters are passed through a series of rebalancing operations. Etc etc etc etc.

    So I put together a plan, with about 30 unique development/deployment tasks, each with time estimates. The first stage of development -- ignoring some advanced features that we'd like to have in the long term, but aren't high enough priority to make it into the development schedule yet -- is slated for about two months' worth of work. (Keep in mind that I already have a working prototype, so the final implementation is significantly simpler than if the project were starting from scratch.)

    My manager said the plan looked good to him, but he asked if I could reorganize the tasks into user stories, for a few reasons: (1) our project management software is totally organized around user stories; (2) all of our scheduling is based on fitting entire user stories into sprints, rather than individually scheduling tasks; (3) other teams -- like the web developers -- have made great use of agile methodologies, and they've benefited from modelling all the software features as user stories.

    So I created a user story at the top level of the project: As a user of the system, I want to search for items by category, so that I can easily find the most relevant items within a huge, complex database. Or maybe a better top-level story for this feature would be: As a content editor, I want to automatically create categorical designations for the items in our database, so that customers can easily find high-value data within our huge, complex database.

    But that's not the real problem. The tricky part, for me, is figuring out how to create subordinate user stories for the rest of the machine learning architecture. Case in point... I know that the algorithm requires two major architectural subdivisions: (A) training, and (B) classification. And I know that the training portion of the architecture requires construction of a cluster-space. All the Agile Development literature I've read seems to indicate that a user story should be the "smallest possible implementation that provides any business value". And that makes a lot of sense when designing a piece of end-user software. Start small, and then incrementally add value when users demand additional functionality. But a cluster-space, in and of itself, provides zero business value. Nor does a crawler, or a feature extractor. There's no business value (not for the end-user, or for any of the roles internal to the company) in a partial system. A trained cluster-space is only possible with the crawler and feature extractor, and only relevant if we also develop an accompanying classifier.

    I suppose it would be possible to create user stories where the subordinate components of the system act as the users in the stories: As a supervised-learning cluster-space construction routine, I want to consume data from a feature extractor, so that I can exist. But that seems really weird. What benefit does it provide me as the developer (or our users, or any other stakeholders, for that matter) to model my user stories like that? Although the main story can be easily divided along architectural-component boundaries (crawler, trainer, classifier, etc.), I can't think of any useful decomposition from a user's perspective. What do you guys think? How do you plan Agile user stories for sophisticated, indivisible, non-user-facing components?

    Read the article

  • MySQL - How to increment fields in one row with values from another row

    - by Walker Boh
    I have a table that we'll call 'Sales' with 4 columns: uid, date, count and amount. I want to increment the count and amount values in one row with the count/amount values from a different row in that table. Example:

      UID | Date       | Count | Amount
      1   | 2013-06-20 | 1     | 500
      2   | 2013-06-24 | 2     | 1000

    The ideal result would be uid 2's count/amount values being incremented by uid 1's values:

      UID | Date       | Count | Amount
      1   | 2013-06-20 | 1     | 500
      2   | 2013-06-24 | 3     | 1500

    Please note that my company's database is an older version of MySQL (3.something), so subqueries are not possible. I am curious to know if this is possible outside of doing an "update sales set count = count + 1" and likewise for the amount column. I have a lot of rows to update, and incrementing the values individually is quite time-consuming, as you can imagine. Thanks for any help or suggestions!
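
    A hedged sketch of the usual set-based answer, with the caveat that multi-table UPDATE only arrived in MySQL 4.0; on a genuine 3.x server, the commented two-step fallback is about the best available.

      -- MySQL 4.0+: fold uid 1's values into uid 2's row via a self-join
      UPDATE Sales AS dst, Sales AS src
      SET dst.`count`  = dst.`count`  + src.`count`,
          dst.`amount` = dst.`amount` + src.`amount`
      WHERE dst.uid = 2
        AND src.uid = 1;

      -- MySQL 3.x fallback: read the source row, then apply its values as literals
      -- SELECT `count`, `amount` FROM Sales WHERE uid = 1;  -- returns 1, 500
      -- UPDATE Sales SET `count` = `count` + 1, `amount` = `amount` + 500 WHERE uid = 2;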

    Read the article

  • How can I incrementally back up a large amount of data [with rsync]?

    - by Annan
    A website contains ~40GB of images + files which need to be backed up. Rollbacks need to be possible daily for the last 30 days, and the backup server has < 1.2TB available. My idea is to have one full backup from 30 days ago, then incremental backups for the last 30 days. Each day, the oldest incremental backup is combined with the full backup and a new incremental backup is added. Can this strategy be implemented with rsync, and if so, how? Are there any problems with this plan? A better plan? PS: I mean incremental backups, not backing up incrementally (which rsync does automatically).

    Read the article
