Search Results

Search found 9662 results on 387 pages for 'sales and operations plan'.


  • SQL Server: How to remove empty lines in SSMS?

    - by atricapilla
    I have many .sql files with lots of empty lines, e.g.:

        WITH cteTotalSales (SalesPersonID, NetSales)
        AS
        (

            SELECT SalesPersonID, ROUND(SUM(SubTotal), 2)

            FROM Sales.SalesOrderHeader

            WHERE SalesPersonID IS NOT NULL

            GROUP BY SalesPersonID
        )

        SELECT sp.FirstName + ' ' + sp.LastName AS FullName,
               sp.City + ', ' + StateProvinceName AS Location,
               ts.NetSales

        FROM Sales.vSalesPerson AS sp

        INNER JOIN cteTotalSales AS ts
            ON sp.BusinessEntityID = ts.SalesPersonID

        ORDER BY ts.NetSales DESC

    Is there a way to remove these empty lines in SQL Server Management Studio? This is what I would like to have:

        WITH cteTotalSales (SalesPersonID, NetSales)
        AS
        (
            SELECT SalesPersonID, ROUND(SUM(SubTotal), 2)
            FROM Sales.SalesOrderHeader
            WHERE SalesPersonID IS NOT NULL
            GROUP BY SalesPersonID
        )
        SELECT sp.FirstName + ' ' + sp.LastName AS FullName,
               sp.City + ', ' + StateProvinceName AS Location,
               ts.NetSales
        FROM Sales.vSalesPerson AS sp
        INNER JOIN cteTotalSales AS ts
            ON sp.BusinessEntityID = ts.SalesPersonID
        ORDER BY ts.NetSales DESC
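    A hedged sketch of the usual fix, via regex find-and-replace (the regex dialect varies by SSMS version, so treat both patterns as assumptions to verify): press Ctrl+H, enable "Use Regular Expressions", leave "Replace with" empty, and search for

        ^:b*\n      -- older SSMS releases (Visual Studio-style regex; :b matches a space or tab)
        ^\s*$\n     -- newer SSMS releases that use .NET regular expressions

    Then "Replace All" collapses the whitespace-only lines.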

    Read the article

  • How to rewrite Collection?

    - by latvian
    Hi, I would like to rewrite the collection that is returned by Mage::getResourceModel('sales/order_collection'); My goal is to rewrite this resource so that I can filter the collection for a particular store. Any ideas on how to do it? I tried to rewrite the collection of the sales/order module directly, but with no success. I was able to rewrite sales/order itself, but not the collection, because when I call getCollection() it returns "Fatal error: Call to undefined method Mage_Sales_Model_Mysql4_Order::getCollection()". Any idea will help. Thank you, Margots
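    In Magento 1.x the order collection is a resource model, so the rewrite belongs under the sales_mysql4 node rather than sales. A hedged sketch, assuming a hypothetical custom module MyCompany_Sales with an active config.xml:

        <global>
            <models>
                <sales_mysql4>
                    <rewrite>
                        <order_collection>MyCompany_Sales_Model_Mysql4_Order_Collection</order_collection>
                    </rewrite>
                </sales_mysql4>
            </models>
        </global>

        class MyCompany_Sales_Model_Mysql4_Order_Collection
            extends Mage_Sales_Model_Mysql4_Order_Collection
        {
            // example: constrain the collection to a single store
            public function addStoreFilter($storeId)
            {
                return $this->addFieldToFilter('store_id', $storeId);
            }
        }

    With that in place, Mage::getResourceModel('sales/order_collection') should resolve to the custom class.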

    Read the article

  • Error handling in C++, constructors vs. regular methods

    - by Dennis Ritchie
    I have a cheesesales.txt CSV file with all of my recent cheese sales. I want to create a class CheeseSales that can do things like these:

        CheeseSales sales("cheesesales.txt"); // has no default constructor
        cout << sales.totalSales() << endl;
        sales.outputPieChart("piechart.pdf");

    The above code assumes that no failures will happen. In reality, failures will take place. In this case, two kinds of failures could occur: Failure in the constructor: The file may not exist, may not have read-permissions, contain invalid/unparsable data, etc. Failure in the regular method: The file may already exist, there may not be write access, too little sales data available to create a pie chart, etc. My question is simply: How would you design this code to handle failures? One idea: Return a bool from the regular method indicating failure. Not sure how to deal with the constructor. How would seasoned C++ coders do these kinds of things?
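    A hedged sketch of the conventional split (the class internals are hypothetical): constructors have no return value, so the idiomatic way to signal a failed constructor is to throw, which also prevents a half-constructed object from existing; for routine, recoverable failures in regular methods a bool or status return is reasonable:

        #include <fstream>
        #include <stdexcept>
        #include <string>

        class CheeseSales {
        public:
            explicit CheeseSales(const std::string& path) : total_(0.0) {
                std::ifstream in(path.c_str());
                if (!in)
                    throw std::runtime_error("cannot open " + path); // constructor: throw
                // ... parse the CSV, accumulate total_ ...
            }

            double totalSales() const { return total_; } // cannot fail once constructed

            bool outputPieChart(const std::string& path) const {
                std::ofstream out(path.c_str());
                if (!out)
                    return false;                        // routine failure: status return
                // ... write the chart ...
                return true;
            }

        private:
            double total_;
        };

    Callers then wrap construction in try/catch while checking method results normally.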

    Read the article

  • Rails: Multiple "types" of one model through related models?

    - by neezer
    I have a User model in my app, in which I would like to store basic user information, such as email address, first and last name, phone number, etc. I also have many different types of users in my system, including sales agents, clients, guests, etc. I would like to be able to use the same User model as a base for all the others, so that I don't have to include all the fields for all the related roles in one model, and can delegate as necessary (cutting down on duplicate database fields as well as providing easy mobility when changing a user from one type to another). So, what I'd like is this:

        User
        -- first name
        -- last name
        -- email
        --> is a "client", so
        ---- client field 1
        ---- client field 2
        ---- client field 3

        User
        -- first name
        -- last name
        -- email
        --> is a "sales agent", so
        ---- sales agent field 1
        ---- sales agent field 2
        ---- sales agent field 3

    and so on... In addition, when a new user signs up, I want that new user to automatically be assigned the role of "client" (I'm talking about database fields here, not authorization, though I hope to eventually include this logic in my user authorization as well). I have a multi-step signup wizard I'm trying to build with wizardly. The first step is easy, since I'm simply calling the fields included in the base User model (such as first_name and email), but the second step is trickier, since it should be calling in fields from the associated model (like, per my example above, the model client with fields client_field_1 or client_field_2) as if those fields were part of User. Does that make sense? Let me know if that wasn't clear at all, and I'll try to explain it in a different way. Can anyone help me with this? How would I do this?
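    One common shape for this is a polymorphic one-to-one between User and a role-specific "profile" model, so shared fields stay on users and role fields live in their own tables. A hedged Rails sketch (model and column names are illustrative):

        class User < ActiveRecord::Base
          # users table gets profile_id and profile_type columns
          belongs_to :profile, :polymorphic => true
        end

        class Client < ActiveRecord::Base
          has_one :user, :as => :profile
          # client_field_1, client_field_2, ... live here
        end

        class SalesAgent < ActiveRecord::Base
          has_one :user, :as => :profile
          # sales agent fields live here
        end

        # default new signups to the "client" role:
        user = User.new(params[:user])
        user.profile = Client.new
        user.save

    Changing a user's type then means swapping the profile record rather than migrating columns, and the wizard's second step can build its fields against the user's profile.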

    Read the article

  • How to split records per hour in order to display them as a chart?

    - by Axel
    Hi, I have an SQL table like this:

        sales(product, timestamp)

    I want to display a chart using Open Flash Chart, but I don't know how to get the total sales per hour within the last 12 hours (the timestamp column is the sale date). For example, I would end up with an array like this:

        array(12,5,8,6,10,35,7,23,4,5,2,16)

    where every number is the total sales in one hour. Note: I want to use PHP or only MySQL for this. Thanks
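    A hedged MySQL sketch (column names taken from the question): grouping on the hour of the timestamp gives one row per hour that had sales, and a 12-hour window contains each hour-of-day at most once, so HOUR() is unambiguous here:

        SELECT HOUR(`timestamp`) AS hr, COUNT(*) AS total_sales
        FROM sales
        WHERE `timestamp` >= NOW() - INTERVAL 12 HOUR
        GROUP BY hr
        ORDER BY MIN(`timestamp`);

    Hours with zero sales produce no row, so the PHP side should initialize a 12-slot array of zeros and fill in the returned counts before handing it to the chart.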

    Read the article

  • NHibernate. Initiate save collection at saving parent

    - by Andrew Kalashnikov
    Hello, colleagues. I've got a problem saving my entity.

    Mapping:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="Clients.Core" namespace="Clients.Core.Domains">
          <class name="Sales, Clients.Core" table="sales">
            <id name="Id" unsaved-value="0">
              <column name="id" not-null="true"/>
              <generator class="native"/>
            </id>
            <property name="Guid">
              <column name="guid"/>
            </property>
            <set name="Accounts" table="sales_users" lazy="false">
              <key column="sales_id" />
              <element column="user_id" type="Int32" />
            </set>
          </class>
        </hibernate-mapping>

    Domain:

        public class Sales : BaseDomain
        {
            ICollection<int> accounts = new List<int>();

            public virtual ICollection<int> Accounts
            {
                get { return accounts; }
                set { accounts = value; }
            }

            public Sales() { }
        }

    When I save a Sales object, the Accounts collection is not saved to the sales_users table. What should I do to save it? Please don't advise me to use classes inside the List. Thanks a lot.

    Read the article

  • .htaccess mod_rewrite URL query

    - by 1001001
    I was hoping someone could help me out. I'm building a CRM application and need help modifying the .htaccess file to clean up the URLs. I've read every post regarding .htaccess and mod_rewrite and I've even tried using http://www.generateit.net/mod-rewrite/ to obtain the results, with no success. Here is what I am attempting to do. Let's call the base URL www.domain.com. We are using PHP with a MySQL back-end and some jQuery and JavaScript. In that "root" folder is my .htaccess file. I'm not sure if I need a .htaccess file in each subdirectory or if one in the root is enough. We have several actual directories of files including "crm", "sales", "finance", etc. First off, we want to strip off all the ".php" extensions, which I am able to do myself thanks to these posts. However, the querying of the company and contact IDs is where I am stuck. Right now if I load www.domain.com/crm/companies.php it displays all the companies in a list. If I click on one of the companies, it uses JavaScript to call a "goto_company(x)" jQuery script that writes a form and submits that form based on the ID (x) of the company. This works fine and keeps the links clean, as all the end user sees is www.domain.com/crm/company.php. However, you can't navigate directly to a company. So we added a few lines in PHP to check whether the POST is null and try a GET instead, allowing us to do www.domain.com/crm/company.php?companyID=40, which displays company #40 from the database. I need to rewrite this link, and all other associated links, to www.domain.com/crm/company/40. I've tried everything and nothing seems to work. Keep in mind that I need to do this for "contacts" and also, on the sales portion of the app, for "deals". To summarize, here's what I am looking to do:

        Change www.domain.com/crm/dash.php to www.domain.com/crm/dash
        Change www.domain.com/crm/company.php?companyID=40 to www.domain.com/crm/company/40
        Change www.domain.com/crm/contact.php?contactID=27 to www.domain.com/crm/contact/27
        Change www.domain.com/sales/dash.php to www.domain.com/sales/dash
        Change www.domain.com/sales/deal.php?dealID=6 to www.domain.com/sales/deal/6

    (40, 27, and 6 are just arbitrary numbers as examples.) Just for reference, when I used the generateit.net/mod-rewrite site with www.domain.com/crm/company.php?companyID=40 as an example, here is what it told me to put in my .htaccess file:

        Options +FollowSymLinks
        RewriteEngine On
        RewriteRule ^crm/company/([^/]*)$ /crm/company.php?companyID=$1 [L]

    Needless to say, that didn't work.
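    A hedged sketch of a combined root .htaccess (path names taken from the question; the extensionless-PHP rule is one common pattern, not necessarily the poster's exact one). One file in the root is normally enough, since its rules see the full path relative to the document root; the ID rules come before the generic fallback so they win:

        Options +FollowSymLinks
        RewriteEngine On
        RewriteBase /

        # Pretty IDs: /crm/company/40 -> /crm/company.php?companyID=40
        RewriteRule ^crm/company/([0-9]+)/?$ crm/company.php?companyID=$1 [L,QSA]
        RewriteRule ^crm/contact/([0-9]+)/?$ crm/contact.php?contactID=$1 [L,QSA]
        RewriteRule ^sales/deal/([0-9]+)/?$  sales/deal.php?dealID=$1     [L,QSA]

        # Extensionless pages: /crm/dash -> /crm/dash.php (only when that file exists)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}.php -f
        RewriteRule ^(.*)$ $1.php [L]

    If rules like these do nothing at all, the usual suspects are AllowOverride None in the server config or mod_rewrite not being enabled; both are assumptions worth checking.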

    Read the article

  • mysql select where count = 0

    - by david parloir
    Hi, in my db I have a "sales" table and a "sales_item" table. Sometimes something goes wrong and the sale is recorded but not the sale's items. So I'm trying to get the sales IDs from my "sales" table that haven't got any rows in the sales_item table. Here's the MySQL query I thought would work, but it doesn't:

        SELECT s.*
        FROM sales s
        NATURAL JOIN sales_item si
        WHERE s.date like '" . ((isset($_GET['date'])) ? $_GET['date'] : date("Y-m-d")) . "%'
          AND v.sales_id like '" . ((isset($_GET['shop'])) ? $_GET['shop'] : substr($_COOKIE['shop'], 0, 3)) ."%'
        HAVING count(si.sales_item_id) = 0;

    Any thoughts?
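    The inner (NATURAL) join is the core problem: sales rows with no matching sales_item rows are eliminated before HAVING can count anything, and HAVING without GROUP BY aggregates the whole result. An anti-join keeps exactly the orphaned sales; a hedged sketch assuming sales_id is the column the two tables share:

        SELECT s.*
        FROM sales s
        LEFT JOIN sales_item si ON si.sales_id = s.sales_id
        WHERE si.sales_id IS NULL;

        -- equivalent formulation
        SELECT s.*
        FROM sales s
        WHERE NOT EXISTS (SELECT 1 FROM sales_item si WHERE si.sales_id = s.sales_id);

    The date and shop filters from the original query can be appended to the WHERE clause unchanged.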

    Read the article

  • Python Pandas operate on row

    - by wuha
    Hi, my dataframe looks like:

        Store,Dept,Date,Sales
        1,1,2010-02-05,245
        1,1,2010-02-12,449
        1,1,2010-02-19,455
        1,1,2010-02-26,154
        1,1,2010-03-05,29
        1,1,2010-03-12,239
        1,1,2010-03-19,264

    Simply, I need to add another column called '_id' as the concatenation of Store, Dept and Date, like "1_1_2010-02-05". I assumed I could do it with df['id'] = df['Store'] + '_' + df['Dept'] + '_' + df['Date'], but it turned out not to work. Similarly, I also need to add a new column as the log of Sales; I tried df['logSales'] = math.log(df['Sales']), and again it did not work.
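    Both attempts fail for type reasons: Store and Dept are integer columns, so + means numeric addition rather than string concatenation, and math.log expects a scalar, not a whole Series. A hedged sketch of the usual fixes (the CSV file name is hypothetical):

        import numpy as np
        import pandas as pd

        df = pd.read_csv('sales.csv')  # file holding the rows shown above

        # cast the numeric columns to str so + becomes element-wise concatenation
        df['_id'] = df['Store'].astype(str) + '_' + df['Dept'].astype(str) + '_' + df['Date']

        # np.log is vectorized over a Series, unlike math.log
        df['logSales'] = np.log(df['Sales'])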

    Read the article

  • MTD Expression on a single column - SSRS

    - by Eric
    I need a bit of help here. I have been unable to create a 'Month To Date' expression for a single column in SSRS. I tested the following expression from a similar question in the forum, but it gives me a squiggly line below the variable 'd':

        =IIF(Fields!CreateDate.Value >= DateAdd(d,-7,Today()), Sum(Fields!Sales.Value), 0)

    If I run it, of course I get an error telling me that 'd' is not declared. ;) I changed it to ...DateAdd("d",-7,Today()), Sum(Fields!Sales.Value)... following the example, and the squiggly goes below the brackets of "Today()", and needless to say it is still not working. I tried a DateAdd(mm..DateDiff... and still nothing. My report has the following columns:

        Country | CustomerName | Sales | InvNotProcessed | Open Order | Orders | TotalbyCust

    What I need is to show the new MTD sales only in the column named "Sales", while the other columns show the rest of the query, which should stay open as some orders may take quite a while to manufacture and invoice. The last column sums the totals of all other columns. Any help will be much appreciated. Regards, Eric
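    SSRS expressions are VB.NET, so the interval argument must be either the string "d" or the DateInterval.Day enum, and the Sum() needs to wrap the IIF so the comparison runs per detail row. A hedged sketch of a true month-to-date version (DateSerial builds the first day of the current month):

        =Sum(IIF(Fields!CreateDate.Value >= DateSerial(Year(Today()), Month(Today()), 1),
                 Fields!Sales.Value, 0))

    For a rolling last-7-days window, the same shape works with DateAdd(DateInterval.Day, -7, Today()) as the cutoff.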

    Read the article

  • RESTful web service, PUTting an unnamed resource?

    - by James L
    I have a back-end service that creates unique identifiers for resources. The general idea is that resources are saved and versioned, so you can perform:

        GET http://service/sales/targets/7818181919/latest

    or

        GET http://service/sales/targets/7818181919/4

    for version 4, and so on. My question is about the most correct way to upload these resources in the first place. How about:

        PUT http://service/sales/targets/

    returning

        303 See Other /service/sales/targets/

    It seems a little wrong, as you should PUT and GET from exactly the same place using a resource-oriented interface, but I can't think of a better option. Any ideas?
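    A hedged sketch of the common convention (payload and identifiers are hypothetical): when the server chooses the identifier, POST to the collection and return 201 Created with the new resource's URI in the Location header, keeping PUT for URIs the client already knows:

        POST http://service/sales/targets HTTP/1.1
        Content-Type: application/json

        {"q1": 50000}

        HTTP/1.1 201 Created
        Location: http://service/sales/targets/7818181919/1

    This keeps GET and PUT symmetric on the versioned URIs while the collection endpoint handles creation.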

    Read the article

  • troubles creating a List of doubles from a list of objects

    - by Michel
    Hi, I have a list of objects. Each object has a property 'Sales', which is a string. Now I want to create a list of doubles from the values of all the objects' 'Sales' properties. I tried this:

        var tmp = from n in e.Result
                  select new { Convert.ToDouble(n.Sales) };

    but this gives me this error:

        Error 106: Invalid anonymous type member declarator. Anonymous type members must be declared with a member assignment, simple name or member access.
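    The compiler complains because a method call like Convert.ToDouble(n.Sales) cannot name an anonymous-type member by itself. Since the goal is a List<double>, no anonymous type is needed at all; a minimal sketch:

        var doubles = (from n in e.Result
                       select Convert.ToDouble(n.Sales)).ToList();   // List<double>

        // if an anonymous type really is wanted, give the member an explicit name:
        var tmp = from n in e.Result
                  select new { Sales = Convert.ToDouble(n.Sales) };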

    Read the article

  • nonparametric regression method using R

    - by user1782652
    I need to find the driver variables for unit sales and their impact on sales. My data is such that the errors do not follow a normal distribution, and unit sales does not follow any particular statistical distribution either. Given these conditions, it is difficult for me to use simple linear regression or a GLM. Can any of you please suggest a nonparametric regression technique which I can use in R to model the relationship? Thanks,
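    One option is a generalized additive model, which fits smooth, nonparametric functions of each driver without assuming a distributional form for the relationship. A hedged R sketch (df, sales and the driver column names are hypothetical):

        # install.packages("mgcv") if needed
        library(mgcv)

        fit <- gam(sales ~ s(price) + s(promo_spend), data = df)  # s() = smooth term
        summary(fit)            # significance and shape complexity of each driver
        plot(fit, pages = 1)    # visualize each driver's estimated effect on sales

    Tree ensembles such as randomForest are another distribution-free alternative when the interpretability of the smooths matters less.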

    Read the article

  • SQL SERVER – Subquery or Join – Various Options – SQL Server Engine knows the Best

    - by pinaldave
    This is a follow-up post to my earlier article SQL SERVER – Convert IN to EXISTS – Performance Talk; after reading all the comments I have received, I felt that I could write more on the same subject to clear a few things up. First let us run the following four queries; all of them return exactly the same resultset.

        USE AdventureWorks
        GO
        -- use of =
        SELECT *
        FROM HumanResources.Employee E
        WHERE E.EmployeeID = (
            SELECT EA.EmployeeID
            FROM HumanResources.EmployeeAddress EA
            WHERE EA.EmployeeID = E.EmployeeID)
        GO
        -- use of IN
        SELECT *
        FROM HumanResources.Employee E
        WHERE E.EmployeeID IN (
            SELECT EA.EmployeeID
            FROM HumanResources.EmployeeAddress EA
            WHERE EA.EmployeeID = E.EmployeeID)
        GO
        -- use of EXISTS
        SELECT *
        FROM HumanResources.Employee E
        WHERE EXISTS (
            SELECT EA.EmployeeID
            FROM HumanResources.EmployeeAddress EA
            WHERE EA.EmployeeID = E.EmployeeID)
        GO
        -- use of JOIN
        SELECT *
        FROM HumanResources.Employee E
        INNER JOIN HumanResources.EmployeeAddress EA ON E.EmployeeID = EA.EmployeeID
        GO

    Let us compare the execution plans of the queries listed above. Click on the image to see a larger version. It is quite clear from the execution plans that in the case of IN, EXISTS and JOIN, the SQL Server engine is smart enough to figure out that the optimal plan for this query is a Merge Join, and executes it. However, in the case of the Equal (=) operator, SQL Server is forced to use a Nested Loop, testing each result of the inner query against the outer query, which cuts performance. Please note that I am not suggesting that Nested Loop is bad or that Merge Join is better; this can very well vary with your machine and the amount of resources available on your computer. When I see the Equal (=) operator used in a query like the one above, I usually recommend checking whether IN, EXISTS or JOIN can be used instead. As I said, this can very much vary between systems. What is your take on the above query? I believe the SQL Server engine is usually pretty smart at figuring out the ideal execution plan and using it.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Joins, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • [Visual Studio Extension Of The Day] Test Scribe for Visual Studio Ultimate 2010 and Test Professional 2010

    - by Hosam Kamel
    Test Scribe is a documentation power tool designed to construct documents directly from TFS test plan and test run artifacts, for purposes such as discussion and reporting.

    Known Issues/Limitations:
    - Customizing the generated report by changing the template, adding comments, including attachments, etc. is not supported.
    - While opening a test plan summary document in Office 2007, if you get the warning "The file Test Plan Summary cannot be opened because there are problems with the contents" (with details: "The file is corrupt and cannot be opened"), click 'OK', then click 'Yes' to recover the contents of the document. This will then open the document in Office 2007. The same problem is not found in Office 2010.
    - Generated documents are stored by default in the "My Documents" folder; the output path of the generated report cannot be modified.
    - Exporting Word documents for individual test suites or test cases in a test plan is not supported.

    Download it from the Visual Studio Extension Manager. Originally posted at "Hosam Kamel | Developer & Platform Evangelist" http://blogs.msdn.com/hkamel

    Read the article

  • Estimating compression - Compression Advisor

    - by lsarecz
    Since version 11g of Oracle Database, OLTP databases can also be compressed efficiently with the Advanced Compression feature. Not only does this cut the volume of stored data to a half or even a quarter, but database performance can also improve whenever the system is I/O bound (and it usually is). Exactly how much compression to expect from introducing Advanced Compression can be estimated very well with the Compression Advisor tool. It can estimate not only the OLTP compression ratio but, from version 11gR2 onwards, also the HCC compression ratio available with Exadata Database Machine, Pillar Axiom and ZFS Storage. Estimating HCC compression requires only an 11gR2 database; the special-purpose hardware (Exadata, Pillar, ZFS) is not needed. The Compression Advisor is actually exposed through the DBMS_COMPRESSION package. The package has six constants for selecting the desired compression levels:

        Constant               Type    Value  Description
        COMP_NOCOMPRESS        NUMBER  1      No compression
        COMP_FOR_OLTP          NUMBER  2      OLTP compression
        COMP_FOR_QUERY_HIGH    NUMBER  4      High compression level for query operations
        COMP_FOR_QUERY_LOW     NUMBER  8      Low compression level for query operations
        COMP_FOR_ARCHIVE_HIGH  NUMBER  16     High compression level for archive operations
        COMP_FOR_ARCHIVE_LOW   NUMBER  32     Low compression level for archive operations

    The GET_COMPRESSION_RATIO stored procedure analyzes the table to be compressed. It always analyzes a single table, or optionally one of its partitions, by making a copy of the table in a tablespace designated/created for this purpose. If the analysis is run for several compression levels at once, it makes that many copies of the table. For a good approximation (+-5%), each table/partition should have at least 1 million rows. In 11gR1 the GET_RATIO procedure of the DBMS_COMP_ADVISOR package served this purpose, but it did not yet support HCC estimation. It is also worth trying the formatting tool published on Tyler Muth's blog, which turns the Compression Advisor output into an easily readable format. Finally, let me summarize what the Advanced Compression option actually contains, since it is often unclear to users what they have to pay for:

        - Data Guard Network Compression
        - Data Pump Compression (COMPRESSION=METADATA_ONLY does not require the Advanced Compression option)
        - Multiple RMAN Compression Levels (RMAN DEFAULT COMPRESS does not require the Advanced Compression option)
        - OLTP Table Compression
        - SecureFiles Compression and Deduplication

    Based on this, in RMAN's case for example the default compression level (BZIP2) can be used free of charge, while the new ZLIB algorithm requires the Advanced Compression option. ZLIB uses the CPU more efficiently, i.e. it is considerably faster, but achieves a lower compression ratio.
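    A hedged PL/SQL sketch of calling the advisor for OLTP compression (the 11gR2 parameter list is reproduced from memory, and SCRATCH_TBS / SH.SALES are placeholders):

        SET SERVEROUTPUT ON
        DECLARE
          l_blkcnt_cmp    PLS_INTEGER;
          l_blkcnt_uncmp  PLS_INTEGER;
          l_row_cmp       PLS_INTEGER;
          l_row_uncmp     PLS_INTEGER;
          l_cmp_ratio     NUMBER;
          l_comptype_str  VARCHAR2(100);
        BEGIN
          DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
            scratchtbsname => 'SCRATCH_TBS',   -- tablespace set aside for the copies
            ownname        => 'SH',
            tabname        => 'SALES',
            partname       => NULL,
            comptype       => DBMS_COMPRESSION.COMP_FOR_OLTP,
            blkcnt_cmp     => l_blkcnt_cmp,
            blkcnt_uncmp   => l_blkcnt_uncmp,
            row_cmp        => l_row_cmp,
            row_uncmp      => l_row_uncmp,
            cmp_ratio      => l_cmp_ratio,
            comptype_str   => l_comptype_str);
          DBMS_OUTPUT.PUT_LINE('Estimated compression ratio: ' || l_cmp_ratio ||
                               ' (' || l_comptype_str || ')');
        END;
        /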

    Read the article

  • Column order can matter

    - by Dave Ballantyne
    Ordinarily, the column order of a SQL statement does not matter:

        select a,b,c from table

    will produce the same execution plan as

        select c,b,a from table

    However, sometimes it can make a difference. Consider this statement (maxdop is used to make a simpler plan and has no impact on the main point):

        select SalesOrderID,
               CustomerID,
               OrderDate,
               ROW_NUMBER() over (Partition By CustomerId order by OrderDate asc) as RownAsc,
               ROW_NUMBER() over (Partition By CustomerId order by OrderDate Desc) as RownDesc
        from Sales.SalesOrderHeader
        order by CustomerID, OrderDate
        option (maxdop 1)

    If you look at the execution plan, you will see three sorts: one for RownAsc, one for RownDesc and a final one for the ORDER BY clause. Sorting is an expensive operation and one that should be avoided if possible. So with this in mind, it may come as some surprise that the optimizer does not re-order operations to group them together when the incoming data is in a similar (if not exactly the same) sorted sequence. A simple change to swap the RownAsc and RownDesc columns produces this statement:

        select SalesOrderID,
               CustomerID,
               OrderDate,
               ROW_NUMBER() over (Partition By CustomerId order by OrderDate Desc) as RownDesc,
               ROW_NUMBER() over (Partition By CustomerId order by OrderDate asc) as RownAsc
        from Sales.SalesOrderHeader
        order by CustomerID, OrderDate
        option (maxdop 1)

    This will result in a different and more efficient query plan with one less sort. The optimizer, although unable to automatically re-order operations, HAS taken advantage of the data ordering when it is as required. This is well worth taking advantage of if you have different sorting requirements in one statement. Try grouping the functions that require the same order together and save yourself a few extra sorts.

    Read the article

  • Is this form of cloaking likely to be penalised?

    - by Flo
    I'm looking to create a website which is considerably javascript-heavy, built with backbone.js, with most content being passed as JSON and loaded via backbone. I just need some advice or opinions on the likelihood of my website being penalised for using the method of serving plain HTML (text, images, everything) to search engine bots and a js front-end version to normal users. This is my basic plan for my site: I plan on having the first request to any page be html, which will only give about 1/4 of the page, and thereafter load the last 3/4 with backbone js. Therefore non-javascript users get a 'bit' of the experience. Once that new user has visited and been detected to have js, a cookie is saved on their machine and requests thereafter will be AJAX only. Example:

        If (AJAX || HasJSCookie) {
            // Pass JSON
        }

    Search-engine-served content: that entire experience of loading via AJAX will be stripped if, for example, a google bot is detected; the same content will be served, but all html. I thought about just allowing search engines to index the first 1/4 of content, but as I'm concerned about inner links and picking up every bit of content, I thought it would be better to give search engines the entire content. I plan to do this by checking against a list of user agents to know whether it's a bot or not.

        If (Bot) {
            // serve plain html
        }

    In addition, I plan to make clean URLs for the entire website despite it being full AJAX, so providing AJAX content to www.example.com/#/page and normal html to www.example.com/page is kind of out of the question. I would rather avoid the practice of using # when technologies such as HTML5 pushState are around. So my question is really just asking the opinion of the masses: is it likely that my website will be penalised? And would you suggest an alternative which avoids the 'noscript' method?

    Read the article

  • Handling Indirection and keeping layers of method calls, objects, and even xml files straight

    - by Cervo
    How do you keep everything straight as you trace deeply into a piece of software, through multiple method calls, object constructors, object factories, and even Spring wiring? I find that 4 or 5 method calls are easy to keep in my head, but once you are going 8 or 9 calls deep it gets hard to keep track of everything. Are there strategies for keeping everything straight? In particular, I might be looking for how to do task x, but then as I trace down (or up) I lose track of that goal; or I find multiple layers need changes, but then I lose track of which changes as I trace all the way down. Or I have tentative plans that I find out are not valid, but then during the tracing I forget that the plan is invalid and try to consider the same plan all over again, killing time... Is there software that might be able to help out? grep and even Eclipse can help me do the actual tracing from a call to its definition, but I'm more worried about keeping track of everything, including the de-facto plan for what has to change (which might vary as you go down/up and realize the prior plan was poor). In the past I have dealt with a few big methods that you could trace and pretty much figure out what is going on within a few calls. But now there are dozens of really tiny methods, many just a single call to another method/constructor, and it is hard to keep track of them all.

    Read the article

  • JDK 7 Feature Complete Milestone Reached

    - by Henrik Ståhl
    The JDK 7 project has reached Feature Complete (FC). This means that development and QA have finished all planned feature and test development work in the release and are moving the focus to testing and bug fixing on all supported JDK 7 platforms. This is a major step towards JDK 7 General Availability (GA) and implies that we are tracking close to the plan published on openjdk.java.net. (The original plan was FC on 12/16. We hit this less than a week late, but verifying that everything was done in time took a couple of weeks due to the intervening holidays.) The definition of the FC milestone allows for exceptions to be integrated later. There are very few such exceptions in the project, the most prominent being updated JAXP/JAXB/JAX-WS and integration of the enhanced JMX agent from JRockit. Our project management does not expect the exceptions to have any negative impact on the release plan. The project may still be delayed if the Expert Groups for the JSRs included in Java SE 7 (203, 292, 334, 336) decide to introduce changes which cannot be accomodated within the existing schedule. Apart from that caveat, Oracle remains confident with the published plan.

    Read the article

  • Data Management Business Continuity Planning

    Business Continuity Governance

    In order to ensure data continuity, an organization needs to know how to handle a data or network emergency, because all systems have the potential to fail.

    Data Continuity Checklist:
    - Disaster Recovery Plan/Policy
    - Backups
    - Redundancy
    - Trained Staff

    Business Continuity Policies

    In order to protect data in case of any emergency, a company needs to put in place a disaster recovery plan and policies that can be executed by IT staff to ensure the continuity of the existing data and/or limit the amount of data that is not contiguous. A disaster recovery plan is a comprehensive statement of consistent actions to be taken before, during and after a disaster, according to Geoffrey H. Wold. He also states that the primary objective of disaster recovery planning is to protect the organization in the event that all or parts of its operations and/or computer services are rendered unusable. Furthermore, companies can mandate through policies that IT must maintain redundant hardware in case of any hardware failures, and redundant network connectivity in case the primary internet service provider goes down. Additionally, they can require that all staff be trained on the disaster recovery policy to ensure that all parties involved are knowledgeable enough to execute the recovery plan.

    Business Continuity Procedures

    Business continuity procedures vary from organization to organization; however, there are standard procedures that most organizations should follow.

    Standard Business Continuity Procedures:
    - Back up data, and test the backups to ensure that they work
    - Hire knowledgeable and trainable staff
    - Offer training on new and existing systems
    - Regularly monitor, test, maintain, and upgrade existing system hardware and applications
    - Maintain redundancy regarding all data and critical business functionality

    Read the article

  • Disaster Recovery Example

    Previously, I used to work for a small internet company that sells dental plans online. Our primary focus concerning disaster prevention and recovery was on our corporate website and private intranet site. We had a multiphase disaster recovery plan that included data redundancy, load balancing, and off-site monitoring. Data redundancy is a key aspect of our disaster recovery plan. The first phase of this is to replicate our data to multiple database servers and schedule daily backups of the databases that are stored off site. The next phase is the file replication of data amongst our web servers, which are also backed up daily by our colocation provider. In addition to the files located on the servers, files are also stored locally on development machines, and again backed up using version control software. Load balancing is another key aspect of our disaster recovery plan. Load balancing offers many benefits for our system: better performance, load distribution and increased availability. With our servers behind a load balancer, our system has the ability to accept multiple requests simultaneously because the load is split between multiple servers, and if one server is slow or experiencing a failure, the traffic is diverted amongst the other servers connected to the load balancer, allowing the failing server to come back online. The final key to our disaster recovery plan is off-site monitoring that notifies all IT staff of any outages or errors on the main website encountered by the monitor. Messages are sent by email, voicemail, and SMS. According to Disasterrecovery.org, disaster recovery planning is how companies successfully manage crises with minimal cost and effort and maximum speed, compared to others that are forced to make decisions out of desperation when disasters occur. In addition, SunGard stated in 2009 that the first step in disaster recovery planning is to analyze company risks and factor in fixed costs for things like hardware, software, staffing and utilities, as well as indirect costs, such as floor space, power protection, physical and information security, and management. Availability requirements also need to be determined per application and system, as well as the strategies for recovery.

    Read the article

  • Join Us!! Live Webinar: Using UPK for Testing

    - by Di Seghposs
    Create Manual Test Scripts 50% Faster with Oracle User Productivity Kit
    Thursday, March 29, 2012, 11:00 am - 12:00 pm ET
    Click here to register now for this informative webinar.

    Oracle UPK enhances the testing phase of the implementation lifecycle by reducing test plan creation time, improving accuracy, and providing the foundation for reusable training documentation, application simulations, and end-user performance support: all critical assets to support an enterprise application implementation. With Oracle UPK:
    - Reduce manual test plan development time: accelerate the testing cycle by significantly reducing the time required to create the test plan.
    - Improve test plan accuracy: capture test steps automatically using Oracle UPK and import those steps directly into any of these testing suites, eliminating many of the errors that occur when writing manual tests.
    - Create the foundation for reusable assets: recorded simulations can be used in other lifecycle phases of the project, such as knowledge transfer for training and support.

    With its integration with Oracle Application Testing Suite, IBM Rational, and HP Quality Center, Oracle UPK allows you to deploy high-quality applications quickly and effectively by providing a consistent, repeatable process for gathering requirements, planning and scheduling tests, analyzing results, and managing issues. Join this live webinar and learn how to decrease your time to deployment and enhance your testing plans today!

    Read the article

  • Problem measuring N times the execution time of a code block

    - by Nazgulled
    EDIT: I just found my problem after writing this long post explaining every little detail... If someone can give me a good answer on what I'm doing wrong and how I can get the execution time in seconds (using a float with 5 decimal places or so), I'll mark that as accepted. Hint: the problem was in how I interpreted the clock_gettime() man page.

    Hi, let's say I have a function named myOperation that I need to measure the execution time of. To measure it, I'm using clock_gettime() as was recommended here in one of the comments. My teacher recommends us to measure it N times so we can get an average, standard deviation and median for the final report. He also recommends us to execute myOperation M times instead of just once. If myOperation is a very fast operation, measuring it M times allows us to get a sense of the "real time" it takes, because the clock being used might not have the required precision to measure such an operation. So, executing myOperation only one time or M times really depends on whether the operation itself takes long enough for the clock precision we are using. I'm having trouble dealing with that M-times execution. Increasing M decreases (a lot) the final average value, which doesn't make sense to me. It's like this: on average you take 3 to 5 seconds to travel from point A to B. But then you go from A to B and back to A 5 times (which makes it 10 times, because A to B is the same as B to A) and you measure that. Then you divide by 10, and the average you get is supposed to be the same average you take traveling from point A to B, which is 3 to 5 seconds. This is what I want my code to do, but it's not working. If I keep increasing the number of times I go from A to B and back to A, the average gets lower and lower each time; it makes no sense to me. Enough theory, here's my code:

        #include <stdio.h>
        #include <time.h>

        #define MEASUREMENTS 1
        #define OPERATIONS   1

        typedef struct timespec TimeClock;

        TimeClock diffTimeClock(TimeClock start, TimeClock end) {
            TimeClock aux;

            if((end.tv_nsec - start.tv_nsec) < 0) {
                aux.tv_sec = end.tv_sec - start.tv_sec - 1;
                aux.tv_nsec = 1E9 + end.tv_nsec - start.tv_nsec;
            } else {
                aux.tv_sec = end.tv_sec - start.tv_sec;
                aux.tv_nsec = end.tv_nsec - start.tv_nsec;
            }

            return aux;
        }

        int main(void) {
            TimeClock sTime, eTime, dTime;
            int i, j;

            for(i = 0; i < MEASUREMENTS; i++) {
                printf(" » MEASURE %02d\n", i+1);

                clock_gettime(CLOCK_REALTIME, &sTime);

                for(j = 0; j < OPERATIONS; j++) {
                    myOperation();
                }

                clock_gettime(CLOCK_REALTIME, &eTime);

                dTime = diffTimeClock(sTime, eTime);

                printf(" - NSEC (TOTAL): %ld\n", dTime.tv_nsec);
                printf(" - NSEC (OP): %ld\n\n", dTime.tv_nsec / OPERATIONS);
            }

            return 0;
        }

    Notes: the diffTimeClock function above is from this blog post. I replaced my real operation with myOperation() because it doesn't make any sense to post my real functions, as I would have to post long blocks of code; you can easily code a myOperation() with whatever you like to compile the code if you wish. For OPERATIONS = 1 the results are:

        » MEASURE 01
        - NSEC (TOTAL): 27456580
        - NSEC (OP): 27456580

    For OPERATIONS = 100 the results are:

        » MEASURE 01
        - NSEC (TOTAL): 218929736
        - NSEC (OP): 2189297

    For OPERATIONS = 1000 the results are:

        » MEASURE 01
        - NSEC (TOTAL): 862834890
        - NSEC (OP): 862834

    For OPERATIONS = 10000 the results are:

        » MEASURE 01
        - NSEC (TOTAL): 574133641
        - NSEC (OP): 57413

    Now, I'm not a math wiz, far from it actually, but this doesn't make any sense to me whatsoever. I've already talked about this with a friend who's on this project with me, and he also can't understand the differences. I don't understand why the value gets lower and lower when I increase OPERATIONS. The operation itself should take the same time (on average, of course, not the exact same time), no matter how many times I execute it. You could tell me that this actually depends on the operation itself, the data being read, and that some data could already be in the cache and bla bla, but I don't think that's the problem. In my case, myOperation is reading 5000 lines of text from a CSV file, separating the values by ; and inserting those values into a data structure. For each iteration, I'm destroying the data structure and initializing it again. Now that I think of it, I also think that there's a problem measuring time with clock_gettime(); maybe I'm not using it right. I mean, look at the last example, where OPERATIONS = 10000. The total time it took was 574133641 ns, which would be roughly 0.5 s; that's impossible, as it took a couple of minutes and I couldn't stand looking at the screen waiting, so I went to eat something.
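    The hint in the EDIT is the key: diffTimeClock() returns both a tv_sec and a tv_nsec field, but the printf calls above only ever look at tv_nsec, so whole seconds are silently dropped (which is why longer runs can appear "faster"). A minimal sketch of the fix, combining both fields into a floating-point seconds value:

        double toSeconds(TimeClock t) {
            /* tv_sec holds whole seconds, tv_nsec only the sub-second remainder */
            return (double)t.tv_sec + (double)t.tv_nsec / 1E9;
        }

        /* ... after dTime = diffTimeClock(sTime, eTime); ... */
        printf(" - SEC (TOTAL): %.5f\n", toSeconds(dTime));
        printf(" - SEC (OP):    %.5f\n", toSeconds(dTime) / OPERATIONS);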

    Read the article

  • Oracle's Global Single Schema

    - by david.butler(at)oracle.com
    Maximizing business process efficiencies in a heterogeneous environment is very difficult. The difficulty stems from the fact that the various applications across the Information Technology (IT) landscape employ different integration standards, different message-passing strategies, and different workflow engines. Vendors such as Oracle and others are delivering tools to help IT organizations manage the complexities introduced by these differences. But the one remaining intractable problem impacting efficient operations is the fact that these applications have different definitions for the same business data. Business data is your business information codified for computer programs to use. A good data model will represent the way your organization does business. The computer applications your organization deploys to improve operational efficiency are built to operate on the business data organized into this schema. If the schema does not represent how you do business, the applications on that schema cannot provide the features you need to achieve the desired efficiencies. Business processes span these applications. Data problems break these processes, rendering them far less efficient than they need to be to achieve organization goals. Thus, the expected return on the investment in these applications is never realized. The success of all business processes depends on the availability of accurate master data. Clearly, the solution to this problem is to consolidate all the master data an organization uses to run its business, then clean it up, augment it, govern it, and connect it back to the applications that need it. Until now, this obvious solution has been difficult to achieve because no one had defined a data model sufficiently broad, deep and flexible enough to support transaction processing on all key business entities and serve as a master superset to all other operational data models deployed in heterogeneous IT environments. Today, the situation has changed. Oracle has created an operational data model (aka schema) that can support accurate and consistent master data across heterogeneous IT systems. This is foundational for providing a way to consolidate and integrate master data without having to replace investments in existing applications. This Global Single Schema (GSS) represents a revolutionary breakthrough that allows for true master data consolidation. Oracle has deep knowledge of applications dating back to the early 1990s. It developed applications in the areas of Supply Chain Management (SCM), Product Lifecycle Management (PLM), Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Human Capital Management (HCM), Financials and Manufacturing. In addition, Oracle applications were delivered for key industries such as Communications, Financial Services, Retail, Public Sector, High Tech Manufacturing (HTM) and more. Expertise in all these areas drove requirements for GSS. The following figure illustrates Oracle's unique position that enabled the creation of the Global Single Schema. [Figure: GSS Requirements Gathering] GSS defines all the key business entities and attributes including Customers, Contacts, Suppliers, Accounts, Products, Services, Materials, Employees, Installed Base, Sites, Assets, and Inventory, to name just a few. In addition, Oracle delivers GSS pre-integrated with a wide variety of operational applications.

    Business Process Automation

    EBusiness is about maximizing operational efficiency. At the highest level, these 'operations' span all that you do as an organization. The following figure illustrates some of these high-level business processes. [Figure: Enterprise Business Processes] Supplies are procured. Assets are maintained. Materials are stored. Inventory is accumulated. Products and Services are engineered, produced and sold. Customers are serviced. And across this entire spectrum, Employees do the procuring, supporting, engineering, producing, selling and servicing. Not shown, but not to be overlooked, are the accounting and the financial processes associated with all this procuring, manufacturing, and selling activity. Supporting all these applications is the master data. When this data is fragmented and inconsistent, the business processes fail and inefficiencies multiply. But imagine having all the data under these operational business processes in one place.

    - The same accurate and timely customer data will be provided to all your operational applications from the call center to the point of sale.
    - The same accurate and timely supplier data will be provided to all your operational applications from supply chain planning to procurement.
    - The same accurate and timely product information will be available to all your operational applications from demand chain planning to marketing.

    You would have a single version of the truth about your assets, financial information, customers, suppliers, employees, products and services to support your business automation processes as they flow across your business applications. All company and partner personnel will access the same exact data entity across all your channels and across all your lines of business. Oracle's Global Single Schema enables this vision of a single version of the truth across the heterogeneous operational applications supporting the entire enterprise.

    Global Single Schema

    Oracle's Global Single Schema organizes hundreds of thousands of attributes into 165 major schema objects supporting over 180 business application modules. It is designed for international operations and extensibility. The schema is delivered with a full set of public Application Programming Interfaces (APIs) and an Integration Repository with modern Service Oriented Architecture interfaces to make data available as a service (DaaS) to business processes and enable operations in heterogeneous IT environments.

    - Key tables can be extended with unlimited numbers of additional attributes and attribute groups for maximum flexibility. This enables model extensions that reflect business entities unique to your organization's operations.
    - The schema is multi-organization enabled so data manipulation can be controlled along organizational boundaries.
    - It uses variable-byte Unicode to support over 31 languages.
    - The schema encodes flexible date and flexible address formats for easy localization.

    No matter how complex your business is, Oracle's Global Single Schema can hold your business objects and support your global operations. Oracle's Global Single Schema identifies and defines the business objects an enterprise needs within the context of its business operations. The interrelationships between the business objects are also contained within the GSS data model. Their presence expresses fundamental business rules for the interaction between business entities. The following figure illustrates some of these connections. [Figure: Interconnected Business Entities] Interconnected business processes require interconnected business data. No other MDM vendor has this capability. Everyone else has either one entity they can master or separate, disconnected models for various business entities. Higher-level integrations are made available, but that is a weak architectural alternative to data-level integration in this critically important aspect of Master Data Management.

    Read the article
