Search Results

Search found 2103 results on 85 pages for 'jane sales'.

Page 19 of 85

  • Gartner: Android leapfrogs Linux and Windows Mobile

    LinuxDevices: "Android has overtaken Windows Mobile and Linux for fourth place in smartphone OS market share with 9.6 percent, says Gartner. The worldwide study of first quarter smartphone sales showed a 707 per cent year-on-year increase in Android sales..."

    Read the article

  • SEO Content Writing - A Flourishing Industry

    SEO content writers are in huge demand these days, largely because of the growing volume of sales that are generated online. The need for original content that can be marketed to customers will remain, because such content not only helps to increase conversions but also helps to attract customers through the various search engines. You might find that certain pages rank a lot better than others simply because of the kind of content on them.

    Read the article

  • Listen To The Oracle Xsigo Webcast Replays

    - by Cinzia Mascanzoni
    For product strategy, sales plays, steps to resell, sales benefits and resources, listen to the webcast replays: Xsigo Systems VAD Update: Understanding the Xsigo Channel Model & Product Strategies (November 13, 2012) Replay; Xsigo Systems Partner Update: Get Ready to Sell Xsigo Systems Products With Oracle (November 15, 2012) Replay

    Read the article

  • How to unmangle PDF format into a usable text or spreadsheet document?

    - by Chuck
    Upon requesting some daily/hourly sales data from a coworker who is responsible for such requests, I was given a series of PDF files. The point of sale program that is used, for some reason, answers requests for this type of information in the form of PDF files. The issue: The PDF files look to be in a format that should easily be copied and pasted into a spreadsheet. There are three columns that look to be neatly organized across two pages. When copy/pasting the first page, all three columns from the PDF's first page are dumped into a single column consisting of the Date followed by the Hours for the transactions on that day. The end of this Date/Time information is followed by all of the Total Sales values that should be attached to a Date and Time of the transaction. (NOTE: There are no duplicated Dates in the Date column; i.e., multiple transactions for a day only have one yyyy/mm/dd listed for the first row but not the following rows.) While it was a huge pain, it was possible to, in about four or five steps, get the single column of data broken out into three columns that matched the PDF. The second page of the PDF file, when attempting to copy/paste into a spreadsheet, creates a single column with the first third of the cells being the Dates from the PDF, the second third of the cells being the Hours of the transactions and the final third of the cells being filled with the Total Sales. After the copy/paste there is no way to figure out which Hours belong to which Dates or Total Sales due to the lack of the duplicated Dates in the Date column as mentioned above. My PDF-fu is next to non-existent. I've just now started to work with PDF editors and some www.convertmyPDFforfree.com websites, so far, with absolutely nothing remotely coming anywhere near usable output. (Both methods have so far done nothing but produce blank documents.) Before I go back and pester my co-worker into figuring out a way to create a report in some other format than PDF, is there any method by which to take the data that looks to be formatted correctly in a PDF and copy/paste it into a spreadsheet that will look the same? I appreciate any help that can be made available. The sales data isn't so sensitive that I couldn't part with a bit to let somebody actually see what it is that needs to be dealt with, just let me know. The PDFs are less than 100 KB each, so sending them shouldn't be a burden to any interested party.

    Read the article

  • SQL Server, how to join a table in a "rotated" format (returning columns instead of rows)?

    - by Joshua Carmody
    Sorry for the lame title; my descriptive skills are poor today. In a nutshell, I have a query similar to the following: SELECT P.LAST_NAME, P.FIRST_NAME, D.DEMO_GROUP FROM PERSON P JOIN PERSON_DEMOGRAPHIC PD ON PD.PERSON_ID = P.PERSON_ID JOIN DEMOGRAPHIC D ON D.DEMOGRAPHIC_ID = PD.DEMOGRAPHIC_ID This returns output like this: LAST_NAME FIRST_NAME DEMO_GROUP --------------------------------------------- Johnson Bob Male Smith Jane Female Smith Jane Teacher Beeblebrox Zaphod Male Beeblebrox Zaphod Alien Beeblebrox Zaphod Politician I would prefer the output be similar to the following: LAST_NAME FIRST_NAME Male Female Teacher Alien Politician --------------------------------------------------------------------------------------------------------- Johnson Bob 1 0 0 0 0 Smith Jane 0 1 1 0 0 Beeblebrox Zaphod 1 0 0 1 1 The number of rows in the DEMOGRAPHIC table varies, so I can't say with certainty how many columns I need. The query needs to be flexible. Yes, it would be trivial to do this in code. But this query is one piece of a complicated set of stored procedures, views, and reporting services, many of which are outside my sphere of influence. I need to produce this output inside the database to avoid breaking the system. Any ideas? This is MS SQL Server 2005, by the way. Thanks.
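
    One way to get that shape on SQL Server 2005 is the PIVOT operator. The sketch below is illustrative rather than definitive: the demo groups are hard-coded, and since the question needs a flexible column list, a production version would build the IN (...) list from the DEMOGRAPHIC table and run the statement with sp_executesql:

      -- Hedged sketch: pivot columns hard-coded; build them dynamically for a flexible version.
      SELECT LAST_NAME, FIRST_NAME,
             ISNULL([Male], 0)       AS Male,
             ISNULL([Female], 0)     AS Female,
             ISNULL([Teacher], 0)    AS Teacher,
             ISNULL([Alien], 0)      AS Alien,
             ISNULL([Politician], 0) AS Politician
      FROM (SELECT P.LAST_NAME, P.FIRST_NAME, D.DEMO_GROUP, 1 AS FLAG
            FROM PERSON P
            JOIN PERSON_DEMOGRAPHIC PD ON PD.PERSON_ID = P.PERSON_ID
            JOIN DEMOGRAPHIC D ON D.DEMOGRAPHIC_ID = PD.DEMOGRAPHIC_ID) SRC
      PIVOT (MAX(FLAG) FOR DEMO_GROUP
             IN ([Male], [Female], [Teacher], [Alien], [Politician])) PVT;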

    Read the article

  • Filter entities that match all pairs

    - by Jon
    I have an entity (let's say Person) with a set of arbitrary attributes with a known subset of values. I need to search for all of these entities that match all my filter conditions. For example, my table structures look like this: Person: id | name 1 | John Doe 2 | Jane Roe 3 | John Smith Attribute: id | attr_name 1 | Sex 2 | Eye Color ValidValue: id | attr_id | value_name 1 | 1 | Male 2 | 1 | Female 3 | 2 | Blue 4 | 2 | Green 5 | 2 | Brown PersonAttributes id | person_id | attr_id | value_id 1 | 1 | 1 | 1 2 | 1 | 2 | 3 3 | 2 | 1 | 2 4 | 2 | 2 | 4 5 | 3 | 1 | 1 6 | 3 | 2 | 4 In JPA, I have entities built for all of these tables. What I'd like to do is perform a search for all entities matching a given set of attribute-value pairs. For instance, I'd like to be able to find all males (John Doe and John Smith), all people with green eyes (Jane Roe or John Smith), or all females with green eyes (Jane Roe). I see that I can already take advantage of the fact that I only really need to match on value_id, since that's already unique and tied to the attr_id. But where can I go from there?

    Read the article

  • How can I get a distinct list of elements in a hierarchical query?

    - by RenderIn
    I have a database table with people identified by a name, a job and a city. I have a second table that contains a hierarchical representation of every job in the company in every city. Suppose I have 3 people in the people table: [name(PK),title,city] Jim, Salesman, Houston Jane, Associate Marketer, Chicago Bill, Cashier, New York And I have thousands of job type/location combinations in the job table, a sample of which follows. You can see the hierarchical relationship since parent_title is a foreign key to title: [title,city,pay,parent_title] Salesman, Houston, $50000, CEO Cashier, Houston, $25000 CEO, USA, $1000000 Associate Marketer, Chicago, $75000 Senior Marketer, Chicago, $125000 ..... The problem I'm having is that my Person table has a composite key, so I don't know how to structure the START WITH part of my query so that it starts with each of the three jobs in the cities I specified. I can execute three separate queries to get what I want, but this doesn't scale well, e.g.: select * from jobs start with city = (select city from people where name = 'Bill') and title = (select title from people where name = 'Bill') connect by prior parent_title = title UNION select * from jobs start with city = (select city from people where name = 'Jim') and title = (select title from people where name = 'Jim') connect by prior parent_title = title UNION select * from jobs start with city = (select city from people where name = 'Jane') and title = (select title from people where name = 'Jane') connect by prior parent_title = title How else can I get a distinct list (or I could wrap it with a distinct if not possible) of all the jobs which are above the three people I specified?
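
    A possible consolidation (a sketch only, assuming the people of interest are picked out by name as in the question): drive every starting row from the people table in a single hierarchical query, so no per-person pair of subqueries is needed. If your Oracle version rejects the multi-column IN inside START WITH, an EXISTS correlated on title and city does the same job:

      SELECT DISTINCT j.*   -- DISTINCT collapses jobs reached from more than one starting person
      FROM jobs j
      START WITH (j.title, j.city) IN (SELECT p.title, p.city
                                       FROM people p
                                       WHERE p.name IN ('Bill', 'Jim', 'Jane'))
      CONNECT BY PRIOR j.parent_title = j.title;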

    Read the article

  • sqlite3 JOIN, GROUP_CONCAT using distinct with custom separator

    - by aiwilliams
    Given a table of "events" where each event may be associated with zero or more "speakers" and zero or more "terms", those records associated with the events through join tables, I need to produce a table of all events with a column in each row which represents the list of "speaker_names" and "term_names" associated with each event. However, when I run my query, I have duplication in the speaker_names and term_names values, since the join tables produce a row per association for each of the speakers and terms of the events: 1|Soccer|Bobby|Ball 2|Baseball|Bobby - Bobby - Bobby|Ball - Bat - Helmets 3|Football|Bobby - Jane - Bobby - Jane|Ball - Ball - Helmets - Helmets The group_concat aggregate function has the ability to use 'distinct', which removes the duplication, though sadly it does not support that alongside the custom separator, which I really need. I am left with these results: 1|Soccer|Bobby|Ball 2|Baseball|Bobby|Ball,Bat,Helmets 3|Football|Bobby,Jane|Ball,Helmets My question is this: Is there a way I can form the query or change the data structures in order to get my desired results? Keep in mind this is a sqlite3 query I need, and I cannot add custom C aggregate functions, as this is for an Android deployment. I have created a gist which makes it easy for you to test a possible solution: https://gist.github.com/4072840
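
    One workaround that keeps the custom separator (a sketch; the table and column names are illustrative guesses, not taken from the linked gist): aggregate speakers and terms in their own derived tables before joining them to events, so neither group_concat ever sees rows multiplied by the other join:

      SELECT e.id, e.name, s.speaker_names, t.term_names
      FROM events e
      LEFT JOIN (SELECT es.event_id,
                        group_concat(sp.name, ' - ') AS speaker_names
                 FROM event_speakers es
                 JOIN speakers sp ON sp.id = es.speaker_id
                 GROUP BY es.event_id) s ON s.event_id = e.id
      LEFT JOIN (SELECT et.event_id,
                        group_concat(tr.name, ' - ') AS term_names
                 FROM event_terms et
                 JOIN terms tr ON tr.id = et.term_id
                 GROUP BY et.event_id) t ON t.event_id = e.id;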

    Read the article

  • Match entities fulfilling filter (strict superset of search)

    - by Jon
    I have an entity (let's say Person) with a set of arbitrary attributes with a known subset of values. I need to search for all of these entities that match all my filter conditions. That is, given a set of Attributes A, I need to find all people that have a set of Attributes that are a superset of A. For example, my table structures look like this: Person: id | name 1 | John Doe 2 | Jane Roe 3 | John Smith Attribute: id | attr_name 1 | Sex 2 | Eye Color ValidValue: id | attr_id | value_name 1 | 1 | Male 2 | 1 | Female 3 | 2 | Blue 4 | 2 | Green 5 | 2 | Brown PersonAttributes id | person_id | attr_id | value_id 1 | 1 | 1 | 1 2 | 1 | 2 | 3 3 | 2 | 1 | 2 4 | 2 | 2 | 4 5 | 3 | 1 | 1 6 | 3 | 2 | 4 In JPA, I have entities built for all of these tables. What I'd like to do is perform a search for all entities matching a given set of attribute-value pairs. For instance, I'd like to be able to find all males (John Doe and John Smith), all people with green eyes (Jane Roe or John Smith), or all females with green eyes (Jane Roe). I see that I can already take advantage of the fact that I only really need to match on value_id, since that's already unique and tied to the attr_id. But where can I go from there? I've been trying to do something like the following, given that the ValidValue is unique in all cases: select distinct p from Person p join p.personAttributes a where a.value IN (:values) Then I've tried putting my set of required values in as "values", but that gives me errors no matter how I try to structure that. I also have to get a little more complicated, as follows, but at this point I'd be happy with solving the first problem cleanly. However, if it's possible, the Attribute table actually has a field for default value: id | attr_name | default_value 1 | Sex | 1 2 | Eye Color | 5 If the value you're searching on happens to be the default value, I want it to return any people that have no explicit value set for that attribute, because in the application logic, that means they inherit the default value. Again, I'm more concerned about the primary question, but if someone who can help with that also has some idea of how to do this one, I'd be extremely grateful.
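
    In plain SQL against the tables above, the usual shape for this kind of "must have every pair" filter is relational division: keep only the wanted value ids, group by person, and require that all of them are present. A sketch (the :value_ids and :n placeholders are illustrative; a JPQL version keeps the same join, GROUP BY and HAVING shape):

      SELECT p.id, p.name
      FROM Person p
      JOIN PersonAttributes pa ON pa.person_id = p.id
      WHERE pa.value_id IN (:value_ids)          -- the required value ids
      GROUP BY p.id, p.name
      HAVING COUNT(DISTINCT pa.value_id) = :n;   -- :n = number of required pairs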

    Read the article

  • Formatting the parent and child nodes of a Treeview that is populated by a XML file

    - by Marina
    Hello Everyone, I'm very new to XML so I hope I'm not asking any silly question here. I'm currently working on populating a treeview from an XML file that is not hierarchically structured. In the XML file that I was given, the child and parent nodes are defined within the attributes of the item element. How would I be able to utilize the attributes in order for the treeview to populate in the right hierarchical order? (For example, Mary Jane should be a child node of Peter Smith.) At present all names are under one another. <root> <item parent_id="0" id="1"><content><name>Peter Smith</name></content></item> <item parent_id="1" id="2"><content><name>Mary Jane</name></content></item> <item parent_id="1" id="7"><content><name>Lucy Lu</name></content></item> <item parent_id="2" id="3"><content><name>Informatics Team</name></content></item> <item parent_id="3" id="4"><content><name>Sandy Chu</name></content></item> <item parent_id="4" id="5"><content><name>John Smith</name></content></item> <item parent_id="5" id="6"><content><name>Jane Smith</name></content></item> </root> Thank you for all of your help, Marina

    Read the article

  • Updating a specific key/value inside of an array field with MongoDB

    - by Jesta
    As a preface, I've been working with MongoDB for about a week now, so this may turn out to be a pretty simple answer. I have data already stored in my collection, we will call this collection content, as it contains articles, news, etc. Each of these articles contains another array called author which has all of the author's information (Address, Phone, Title, etc). The Goal - I am trying to create a query that will update the author's address on every article that the specific author exists in, and only the specified author block (not others that exist within the array). Sort of a "Global Update" to a specific author that affects his/her information on every piece of content that exists. Here is an example of what the content with the author looks like. { "_id" : ObjectId("4c1a5a948ead0e4d09010000"), "authors" : [ { "user_id" : null, "slug" : "joe-somebody", "display_name" : "Joe Somebody", "display_title" : "Contributing Writer", "display_company_name" : null, "email" : null, "phone" : null, "fax" : null, "address" : null, "address2" : null, "city" : null, "state" : null, "zip" : null, "country" : null, "image" : null, "url" : null, "blurb" : null }, { "user_id" : null, "slug" : "jane-somebody", "display_name" : "Jane Somebody", "display_title" : "Editor", "display_company_name" : null, "email" : null, "phone" : null, "fax" : null, "address" : null, "address2" : null, "city" : null, "state" : null, "zip" : null, "country" : null, "image" : null, "url" : null, "blurb" : null }, ], "tags" : [ "tag1", "tag2", "tag3" ], "title" : "Title of the Article" } I can find every article that this author has created by running the following command: db.content.find({authors: {$elemMatch: {slug: 'joe-somebody'}}}); So theoretically I should be able to update the authors record for the slug joe-somebody but not jane-somebody (the 2nd author), I am just unsure exactly how you reach in and update every record for that author. I thought I was on the right track, and here's what I've tried. db.content.update( {authors: {$elemMatch: {slug: 'joe-somebody'} } }, {$set: {address: '1234 Avenue Rd.'} }, false, true ); I just believe there's something I am missing in the $set statement to specify the correct author and point inside of the correct array. Any ideas? Update: I've also tried this now: db.content.update( {authors: {$elemMatch: {slug: 'joe-somebody'} } }, {$set: {'authors.$.address': '1234 Avenue Rd.'} }, false, true );

    Read the article

  • Joins in single-table queries

    - by Rob Farley
    Tables are only metadata. They don’t store data. I’ve written something about this before, but I want to take a viewpoint of this idea around the topic of joins, especially since it’s the topic for T-SQL Tuesday this month. Hosted this time by Sebastian Meine (@sqlity), who has a whole series on joins this month. Good for him – it’s a great topic. In that last post I discussed the fact that we write queries against tables, but that the engine turns it into a plan against indexes. My point wasn’t simply that a table is actually just a Clustered Index (or heap, which I consider just a special type of index), but that data access always happens against indexes – never tables – and we should be thinking about the indexes (specifically the non-clustered ones) when we write our queries. I described the scenario of looking up phone numbers, and how it never really occurs to us that there is a master list of phone numbers, because we think in terms of the useful non-clustered indexes that the phone companies provide us, but anyway – that’s not the point of this post. So a table is metadata. It stores information about the names of columns and their data types. Nullability, default values, constraints, triggers – these are all things that define the table, but the data isn’t stored in the table. The data that a table describes is stored in a heap or clustered index, but it goes further than this. All the useful data is going to live in non-clustered indexes. Remember this. It’s important. Stop thinking about tables, and start thinking about indexes. So let’s think about tables as indexes. This applies even in a world created by someone else, who doesn’t have the best indexes in mind for you. I’m sure you don’t need me to explain the Covering Index bit – the fact that if you don’t have sufficient columns “included” in your index, your query plan will either have to do a Lookup, or else it’ll give up using your index and use one that does have everything it needs (even if that means scanning it). If you haven’t seen that before, drop me a line and I’ll run through it with you. Or go and read a post I did a long while ago about the maths involved in that decision. So – what I’m going to tell you is that a Lookup is a join. When I run SELECT CustomerID FROM Sales.SalesOrderHeader WHERE SalesPersonID = 285; against the AdventureWorks2012 database, I get the following plan: I’m sure you can see the join. Don’t look in the query, it’s not there. But you should be able to see the join in the plan. It’s an Inner Join, implemented by a Nested Loop. It’s pulling data in from the Index Seek, and joining that to the results of a Key Lookup. It clearly is – the QO wouldn’t call it that if it wasn’t really one. It behaves exactly like any other Nested Loop (Inner Join) operator, pulling rows from one side and putting a request in from the other. You wouldn’t have a problem accepting it as a join if the query were slightly different, such as SELECT sod.OrderQty FROM Sales.SalesOrderHeader AS soh JOIN Sales.SalesOrderDetail as sod on sod.SalesOrderID = soh.SalesOrderID WHERE soh.SalesPersonID = 285; Amazingly similar, of course. This one is an explicit join; the first example was just as much a join, even though you didn’t actually ask for one. You need to consider this when you’re thinking about your queries. But it gets more interesting. Consider this query: SELECT SalesOrderID FROM Sales.SalesOrderHeader WHERE SalesPersonID = 276 AND CustomerID = 29522; It doesn’t look like there’s a join here either, but look at the plan.
That’s not some Lookup in action – that’s a proper Merge Join. The Query Optimizer has worked out that it can get the data it needs by looking in two separate indexes and then doing a Merge Join on the data that it gets. Both indexes used are ordered by the column that’s indexed (one on SalesPersonID, one on CustomerID), and then by the CIX key SalesOrderID. Just like when you seek in the phone book to Farley and the Farleys you have are ordered by FirstName, these seek operations return the data ordered by the next field. This order is SalesOrderID, even though you didn’t explicitly put that column in the index definition. The result is two datasets that are ordered by SalesOrderID, making them very mergeable. Another example is the simple query SELECT CustomerID FROM Sales.SalesOrderHeader WHERE SalesPersonID = 276; This one prefers a Hash Match to a standard lookup even! This isn’t just ordinary index intersection, this is something else again! Just like before, we could imagine it better with two whole tables, but we shouldn’t try to distinguish between joining two tables and joining two indexes. The Query Optimizer can see (using basic maths) that it’s worth doing these particular operations using these two less-than-ideal indexes (because of course, the best indexes would be on both columns – a composite such as (SalesPersonID, CustomerID) – and it would have the SalesOrderID column as part of it as the CIX key still). You need to think like this too. Not in terms of excusing single-column indexes like the ones in AdventureWorks2012, but in terms of having a picture about how you’d like your queries to run. If you start to think about what data you need, where it’s coming from, and how it’s going to be used, then you will almost certainly write better queries. …and yes, this would include when you’re dealing with regular joins across multiple tables, not just joins within single-table queries.
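
    To make the closing point concrete, here is a small sketch against AdventureWorks2012 (the index name is made up): with the composite index the post describes, the two-seeks-plus-Merge-Join plan for that last query collapses into a single index seek, and the hidden join disappears.

      -- Illustrative composite index; SalesOrderID comes along for free as the clustered key.
      CREATE INDEX IX_SalesOrderHeader_SalesPerson_Customer
          ON Sales.SalesOrderHeader (SalesPersonID, CustomerID);

      SELECT SalesOrderID
      FROM Sales.SalesOrderHeader
      WHERE SalesPersonID = 276
        AND CustomerID = 29522;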

    Read the article

  • Oracle presents its results for the year

    - by pfolgado
    Oracle has just presented its results for the fourth quarter and for fiscal year FY11. The most significant results are: sales revenues grew 33%, reaching a total of 35.6 billion dollars; new license sales grew 23%; hardware revenues were 4.4 billion dollars; operating income grew 39%; and earnings per share grew 38% to 1.67 dollars. “In Q4, we achieved a 19% new software license growth rate with almost no help from acquisitions,” said Oracle President and CFO, Safra Catz. “This strong organic growth combined with continuously improving operational efficiencies enabled us to deliver a 48% operating margin in the quarter. As our results reflect, we clearly exceeded even our own high expectations for Sun’s business.” “In addition to record setting software sales, our Exadata and Exalogic systems also made a strong contribution to our growth in Q4,” said Oracle President, Mark Hurd. “Today there are more than 1,000 Exadata machines installed worldwide. Our goal is to triple that number in FY12.” “In FY11 Oracle’s database business experienced its fastest growth in a decade,” said Oracle CEO, Larry Ellison. “Over the past few years we added features to the Oracle database for both cloud computing and in-memory databases that led to increased database sales this past year. Lately we’ve been focused on the big business opportunity presented by Big Data.” Oracle Reports Q4 GAAP EPS Up 34% To 62 Cents; Q4 NON-GAAP EPS Up 25% To 75 Cents; Q4 Software New License Sales Up 19%, Q4 Total Revenue Up 13%. Oracle today announced fiscal 2011 Q4 GAAP total revenues were up 13% to $10.8 billion, while non-GAAP total revenues were up 12% to $10.8 billion. Both GAAP and non-GAAP new software license revenues were up 19% to $3.7 billion. Both GAAP and non-GAAP software license updates and product support revenues were up 15% to $4.0 billion. Both GAAP and non-GAAP hardware systems products revenues were down 6% to $1.2 billion. GAAP operating income was up 32% to $4.4 billion, and GAAP operating margin was 40%. Non-GAAP operating income was up 19% to $5.2 billion, and non-GAAP operating margin was 48%. GAAP net income was up 36% to $3.2 billion, while non-GAAP net income was up 27% to $3.9 billion. GAAP earnings per share were $0.62, up 34% compared to last year, while non-GAAP earnings per share were up 25% to $0.75. GAAP operating cash flow on a trailing twelve-month basis was $11.2 billion. For fiscal year 2011, GAAP total revenues were up 33% to $35.6 billion, while non-GAAP total revenues were up 33% to $35.9 billion. Both GAAP and non-GAAP new software license revenues were up 23% to $9.2 billion. GAAP software license updates and product support revenues were up 13% to $14.8 billion, while non-GAAP software license updates and product support revenues were up 13% to $14.9 billion. Both GAAP and non-GAAP hardware systems products revenues were $4.4 billion. GAAP operating income was up 33% to $12.0 billion, and GAAP operating margin was 34%. Non-GAAP operating income was up 27% to $15.9 billion, and non-GAAP operating margin was 44%. GAAP net income was up 39% to $8.5 billion, while non-GAAP net income was up 34% to $11.4 billion. GAAP earnings per share were $1.67, up 38% compared to last year, while non-GAAP earnings per share were up 33% to $2.22.
In addition, Oracle also announced that its Board of Directors declared a quarterly cash dividend of $0.06 per share of outstanding common stock. This dividend will be paid to stockholders of record as of the close of business on July 13, 2011, with a payment date of August 3, 2011.

    Read the article

  • SPARC T4-4 Delivers World Record Performance on Oracle OLAP Perf Version 2 Benchmark

    - by Brian
    Oracle's SPARC T4-4 server delivered world record performance with subsecond response time on the Oracle OLAP Perf Version 2 benchmark using Oracle Database 11g Release 2 running on Oracle Solaris 11. The SPARC T4-4 server achieved throughput of 430,000 cube-queries/hour with an average response time of 0.85 seconds and a median response time of 0.43 seconds. This was achieved by using only 60% of the available CPU resources, leaving plenty of headroom for future growth. The SPARC T4-4 server operated on an Oracle OLAP cube with a 4 billion row fact table of sales data containing 4 dimensions. This represents as many as 90 quintillion aggregate rows (90 followed by 18 zeros). Performance Landscape: Oracle OLAP Perf Version 2 Benchmark, 4 Billion Fact Table Rows. System: SPARC T4-4; Queries/hour: 430,000; Users*: 7,300; Average Response Time: 0.85 sec; Median Response Time: 0.43 sec. (* Users: the supported number of users with a given think time of 60 seconds.) Configuration Summary and Results. Hardware Configuration: SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz; 1 TB memory. Data Storage: 1 x Sun Fire X4275 (using COMSTAR); 2 x Sun Storage F5100 Flash Array (each with 80 FMODs). Redo Storage: 1 x Sun Fire X4275 (using COMSTAR with 8 HDD). Software Configuration: Oracle Solaris 11 11/11; Oracle Database 11g Release 2 (11.2.0.3) with Oracle OLAP option. Benchmark Description: The Oracle OLAP Perf Version 2 benchmark is a workload designed to demonstrate and stress the Oracle OLAP product's core features of fast query, fast update, and rich calculations on a multi-dimensional model to support enhanced Data Warehousing. The bulk of the benchmark entails running a number of concurrent users, each issuing typical multidimensional queries against an Oracle OLAP cube consisting of a number of years of sales data with fully pre-computed aggregations. The cube has four dimensions: time, product, customer, and channel. Each query user issues approximately 150 different queries. One query chain may ask for total sales in a particular region (e.g. South America) for a particular time period (e.g. Q4 of 2010), followed by additional queries which drill down into sales for individual countries (e.g. Chile, Peru, etc.), with further queries drilling down into individual stores, etc. Another query chain may ask for yearly comparisons of total sales for some product category (e.g. major household appliances) and then issue further queries drilling down into particular products (e.g. refrigerators, stoves, etc.), particular regions, particular customers, etc. Results from version 2 of the benchmark are not comparable with version 1. The primary difference is the type of queries along with the query mix. Key Points and Best Practices: Since typical BI users are often likely to issue similar queries, with different constants in the where clauses, setting the init.ora parameter "cursor_sharing" to "force" will provide for additional query throughput and a larger number of potential users. Except for this setting, together with making full use of available memory, out of the box performance for the OLAP Perf workload should provide results similar to what is reported here. For a given number of query users with zero think time, the main measured metrics are the average query response time, the median query response time, and the query throughput. A derived metric is the maximum number of users the system can support achieving the measured response time assuming some non-zero think time.
The calculation of the maximum number of users follows from the well-known response-time law N = (rt + tt) * tp, where rt is the average response time, tt is the think time and tp is the measured throughput. Setting tt to 60 seconds, rt to 0.85 seconds and tp to 119.44 queries/sec (430,000 queries/hour), the above formula shows that the T4-4 server will support 7,300 concurrent users with a think time of 60 seconds and an average response time of 0.85 seconds. For more information see chapter 3 of the book "Quantitative System Performance" cited below. See Also: Quantitative System Performance: Computer System Analysis Using Queueing Network Models, by Edward D. Lazowska, John Zahorjan, G. Scott Graham and Kenneth C. Sevcik; Oracle Database 11g – Oracle OLAP; SPARC T4-4 Server; Oracle Solaris; Oracle Database 11g Release 2. Disclosure Statement: Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 11/2/2012.
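
    As a quick check on the figures quoted above, the response-time law works out to N = (rt + tt) * tp = (0.85 + 60) * 119.44 ≈ 7,268 users, which rounds to the 7,300 concurrent users reported in the performance summary.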

    Read the article

  • Vendors: Partners or Salespeople?

    - by BuckWoody
    I got a great e-mail from a friend that asked about how he could foster a better relationship with his vendors. So many times when you work with a vendor it’s more of a used-car sales experience than a partnership – but you can actually make your vendor more of a partner, as long as you both set some ground-rules at the start. Sit down with your vendor, and have a heart-to-heart talk with them, explain that they won’t win every time, but that you’re willing to work with them in an honest way on both sides. Here’s the advice I sent him verbatim. I hope this post generates lots of comments from both customers and vendors. I don’t expect that you’ve had a great experience with your Microsoft reps, but I happen to work with some of the best sales teams in the business, and our clients tell us that all the time. “The key to this relationship is to keep the audience really small. Ideally there should be one person from your side that is responsible for the relationship, and one from the vendor’s side. Each responsible person should have the authority to make decisions, and to bring in other folks as needed for a given topic, project or decision.   For Microsoft, this is called an “Account Manager” – they aren’t technical, they aren’t sales. They “own” a relationship with a company. They learn what the company does, who does it, and how. They are responsible to understand what the challenges in your company are. While they don’t know the bits and bytes of everything we sell, they know what each thing does, and who to talk to about it. I get a call from an Account Manager every week that has pre-digested an issue at an organization and says to me: “I need you to set up an architectural meeting with their technical staff to get a better read on how we can help with problem X.” I do that and then report back to the Account Manager what we learned.  All through this process there’s the atmosphere of a “team”, not a “sales opportunity” per se. I’ve even recommended that the firm use a rival product, and I’ve never gotten push-back on that decision from my Account Managers.   But that brings up an interesting point. Someone pays an Account Manager and pays me. They expect something in return. At some point, you have to buy something. Not every time, not every situation – sometimes it’s just helping you with what you already bought from us. But the point is that you can’t expect lots of love and never spend any money. That’s the way business works.   Finally, don’t view the vendor as someone with their hand in your pocket – somebody that’s just trying to sell you something and doesn’t care if they ever see you again – unless they deserve it. There are plenty of “love them and leave them” companies out there, and you may have even had this experience with us, but that isn’t the case in the firms I work with. In fact, my customers get a questionnaire that asks them that exact question. “How many times have you seen your account team? Did you like your interaction with them? Can they do better?” My raises, performance reviews and general standing in my group are based on the answers the company gives.  Ask your vendor if they measure their sales and support teams this way – if not, seek another vendor to partner with.   Partnering with someone is a big deal. It involves time and effort on your part, and on the vendor’s part. If either of you isn’t pulling your weight, it just won’t work. 
You have every right to expect them to treat you as a partner, and they have the same right for your side.”

    Read the article

  • Seizing the Moment with Mobility

    - by Divya Malik
    Empowering people to work where they want to work is becoming more critical now with the consumerisation of technology. Employees are bringing their own devices to the workplace and expecting to be productive wherever they are. Sales people welcome the ability to run their critical business applications where they can be most effective which is typically on the road and when they are still with the customer. Oracle has invested many years of research in understanding customer's Mobile requirements. “The keys to building the best user experience were building in a lot of flexibility in ways to support sales, and being useful,” said Arin Bhowmick, Director, CRM, for the Applications UX team. “We did that by talking to and analyzing the needs of a lot of people in different roles.” The team studied real-life sales teams. “We wanted to study salespeople in context with their work,” Bhowmick said. “We studied all user types in the CRM world because we wanted to build a user interface and user experience that would cater to sales representatives, marketing managers, sales managers, and more. Not only did we do studies in our labs, but also we did studies in the field and in mobile environments because salespeople are always on the go.” Here is a recent post from Hernan Capdevila, Vice President, Oracle Fusion Apps which was featured on the Oracle Applications Blog.  Mobile devices are forcing a paradigm shift in the workplace – they’re changing the way businesses can do business and the type of cultures they can nurture. As our customers talk about their mobile needs, we hear them saying they want instant-on access to enterprise data so workers can be more effective at their jobs anywhere, anytime. They also are interested in being more cost effective from an IT point of view. The mobile revolution – with the idea of BYOD (bring your own device) – has added an interesting dynamic because previously IT was driving the employee device strategy and ecosystem. That's been turned on its head with the consumerization of IT. Now employees are figuring out how to use their personal devices for work purposes and IT has to figure out how to adapt. Blurring the Lines between Work and Personal Life My vision of where businesses will be five years from now is that our work lives and personal lives will be more interwoven together. In turn, enterprises will have to determine how to make employees’ work lives fit more into the fabric of their personal lives. And personal devices like smartphones are going to drive significant business value because they let us accomplish things very incrementally. I can be sitting on a train or in a taxi and be productive. At the end of any meeting, I can capture ideas and tasks or follow up with people in real time. Mobile devices enable this notion of seizing the moment – capitalizing on opportunities that might otherwise have slipped away because we're not connected. For the industry shapers out there, this is game changing. The lean and agile workforce is definitely the future. This notion of the board sitting down with the executive team to lay out strategic objectives for a three- to five-year plan, bringing in HR to determine how they're going to staff the strategic activities, kicking off the execution, and then revisiting the plan in three to five years to create another three- to five-year plan is yesterday's model. Businesses that continue to approach innovating in that way are in the dinosaur age. 
Today it's about incremental planning and incremental execution, which requires a lot of cohesion and synthesis within the workforce. There needs to be this interweaving notion within the workforce about how ideas cascade down, how people engage, how they stay connected, and how insights are shared. How to Survive and Thrive in Today’s Marketplace The notion of Facebook isn’t new. We lived it pre-Internet days with America Online and Prodigy – Facebook is just the renaissance of these services in a more viral and pervasive way. And given the trajectory of the consumerization of IT with people bringing their personal tooling to work, the enterprise has no option but to adapt. The sooner that businesses realize this from a top-down point of view the sooner that they will be able to really drive significant innovation and adapt to the marketplace. There are a small number of companies right now (I think it's closer to 20% rather than 80%, but the number is expanding) that are able to really innovate in this incremental marketplace. So from a competitive point of view, there's no choice but to be social and stay connected. By far the majority of users on Facebook and LinkedIn are mobile users – people on iPhones, smartphones, Android phones, and tablets. It's not the couch people, right? It's the on-the-go people – those people at the coffee shops. Usually when you're sitting at your desk on a big desktop computer, typically you have better things to do than to be on Facebook. This is a topic I'm extremely passionate about because I think mobile devices are game changing. Mobility delivers significant value to businesses – it also brings dramatic simplification from a functional point of view and transforms our work life experience. Hernan Capdevila Vice President, Oracle Applications Development

    Read the article

  • Eloqua Experience 2013: Mystique, Modern Marketing and Masterful Engagement

    - by Mike Stiles
    The following is a guest post from Erick Mott, a social business leader at Oracle Eloqua. There’s a growing gap between 20th century marketing and a modern marketing way of doing business. I can’t think of a better example of modern marketing in action than what more than 2,000 people experienced in San Francisco at #EE13; customer-obsession, multichannel content, and real-time engagement all coming together at one extraordinary event. This was my first Eloqua Experience as a new Oracle Eloqua employee. In weeks prior, I heard about the mystique but didn’t know what to expect. What I’ve come to understand with more clarity is everything we do revolves around customer success, and we operate and educate at all times with these five tenets in mind: 1. Targeting: Really Know Your Buyer 2. Engagement: Create a 1:1 Relationship 3. Conversion: Visualize Guided Thinking 4. Analysis: Learn What’s Working 5. Marketing Technology: Enable and Extend the Cloud Product News from Eloqua Experience 2013 We made some announcements that John Stetic, VP of Products, Oracle Eloqua covers in this brief ‘Modern Marketing Minute’ video recorded after Wednesday’s keynote; summarized below, too: Oracle Eloqua AdFocus: While understanding the impact of a specific marketing channel was formerly relegated to marketers’ wish lists, the channels we now focus on are digital, social, and mobile. AdFocus gives marketers a single platform to dynamically create, manage and measure display ads alongside owned and earned media. AdFocus enables marketers to target only key accounts or prospects you want to reach with display ads, as well as provide creative content or personalized ad copy based on their persona and activities. Oracle Eloqua Profiler: The details of what we now know about customers have expanded into a universal customer profile, which can be used to create highly targeted segments. Marketers now can take data that’s not even stored in Eloqua to help targeted and score prospects for a complete, multichannel view of the customer. Profiler gives sales reps one, detailed view of the prospect to extend views beyond Oracle Eloqua asset activity (emails, forms, page views) to any external assets stored in Oracle Eloqua. Marketing Resource Management: New capabilities create more secure and controlled access to marketing resources and data. New integrations provide greater insight into campaign resources and management through a central marketing calendar and simplify resource management. Integrated Sales and Marketing Funnel: An integrated sales and marketing funnel view gives marketing and sales users, cross-functional teams, and executive management a consistent and clear view of pipeline performance. It also quickly provides users with historical metrics across different time spans and conditions. Eloqua AppCloud: More than 20 new AppCloud partners have been added to the community, which now includes 100+ apps. Eloqua AppCloud now provides modern marketers with an even broader range of marketing applications that help expand and enrich sales and marketing efforts; easily accessible in the Topliners Community. Social Capabilities: Recent integration between Oracle Eloqua and Oracle Social Relationship Management (SRM) deliver a comprehensive, scalable and integrated modern marketing solution. New capabilities include better tracking of social activities for a more complete customer profile. Engage Facebook custom audiences with AdFocus to deliver ads and meaningful experiences through trusted social networks. 
Biggest and Best Eloqua Experience. There’s a lot of talk in the industry about the Marketing Cloud. At Oracle Eloqua, we have been on a mission of delivering the most advanced and integrated modern marketing technology on the planet. It’s not just a concept but reality with proven execution, as seen first-hand this week in San Francisco. In this video, Kevin Akeroyd, SVP of Oracle Eloqua, provides some highlights of what made this year’s Eloqua Experience exceptional, including Steve Woods’ presentation about the journey of modern marketers and Andrea Ward’s conversation with Vince Gilligan, creator of the Breaking Bad television series. The 2013 Markie Awards: The Oracle Eloqua Marketing Cloud was best exemplified for me as 19 Markies were awarded to customers for their exceptional creativity and results as modern marketers. Wow, what a night to remember with so many committed and talented people working to create an extraordinary experience! To learn more about how to become a modern marketer, check out these resources. We look forward to seeing you next year at Eloqua Experience. More on Erick: 20 years of experience at Oracle, Ektron, Sitecore, Lyris, Habeas, Nokia, creatorbase, Mark Monitor, Cisco Systems, GlobalFluency, Sun Microsystems, Philips NV, Elm Products and CBS TV. Patent holder with agency, Fortune 500, media, and startup company expertise. @mikestiles

    Read the article

  • Using OpenCV in QTCreator (linking problem)

    - by Jane
    Greetings! I have a problem linking the simplest test program in QTCreator: CODE: #include <QtCore/QCoreApplication> #include <cv.h> #include<highgui.h> #include <cxcore.hpp> using namespace cv; int _tmain(int argc, _TCHAR* argv[]) { cv::Mat M(7,7,CV_32FC2,Scalar(1,3)); return 0; } .pro file: QT -= gui TARGET = testopencv CONFIG += console CONFIG -= app_bundle INCLUDEPATH += C:/OpenCV2_1/include/opencv TEMPLATE = app LIBS += C:/OpenCV2_1/lib/cxcore210d.lib \ C:/OpenCV2_1/lib/cv210d.lib \ C:/OpenCV2_1/lib/highgui210d.lib\ C:/OpenCV2_1/lib/cvaux210d.lib SOURCES += main.cpp I've tried to use -L and -l like LIBS+= -LC:/OpenCV2_1/lib -lcxcored and a .pri file QMAKE_LIBDIR += C:/OpenCV2_1/lib/Debug LIBS += -lcxcore210d \ -lcv210d \ -lhighgui210d The errors are like debug/main.o:C:\griskin\test\app\testopencv/../../../../OpenCV2_1/include/opencv/cxcore.hpp:97: undefined reference to cv::format(char const*, ...)' Could anyone help me? Thanks! In Visual Studio it works but I need it to work in QTCreator.

    Read the article

  • Can I turn off context menu scrolling in VS2010?

    - by Jane McDowell
    When I right-click in the middle of a code editor window in Visual Studio 2010 RTM, a context menu appears. This takes up about a fourth of the height of the screen but doesn't show all options. Instead it scrolls up and down when you move the pointer to the top or bottom of the menu. If I click near the top or bottom of the screen, the menu is normal and doesn't scroll. Can I turn this behavior off? It's stupid. You can't even scroll using the mouse wheel. EDIT: I reckon this might just be a bug - I've found a few.

    Read the article

  • Dojo: dojo onblur events

    - by Jane Wilkie
    Hi guys, I have a form set up with Dojo 1.5. I am using a dijit.form.ComboBox and a dijit.form.TextBox. The ComboBox has values like "car", "bike", "motorcycle" and the TextBox is meant to be an adjective to the ComboBox. So it doesn't matter what is in the ComboBox, but if the ComboBox does have a value then something MUST be entered in the TextBox. Optionally, if nothing is in the ComboBox, then nothing can be in the TextBox and that is just fine. In fact, if nothing is in the ComboBox then the TextBox MUST be empty. In regular coding I would just use an onBlur event on the text box to go to a function that checks to see if the ComboBox has a value. I see in Dojo that this doesn't work... Code example is below... Vehicle: <input dojoType="dijit.form.ComboBox" store="xvarStore" value="" searchAttr="name" name="vehicle_1" id="vehicle_1" /> Descriptor: <input type="text" dojoType="dijit.form.TextBox" value="" class=lighttext style="width:350px;height:19px" id="filter_value_1" name="filter_value_1" /> My initial attempt was to add an onBlur within the Descriptor's <input> tag but discovered that that doesn't work. How does Dojo handle this? Is it via a dojo.connect parameter? Even though in the example above the ComboBox has an id of "vehicle_1" and the text box has an id of "filter_value_1", there can be numerous comboboxes and textboxes numbering sequentially upward (vehicle_2, vehicle_3, etc.). Any advice or links to resources would be greatly appreciated. Janie

    Read the article

  • sscanf + c99 not working on some platforms?

    - by Jane
    When I compile a simple Hello World! program that uses the sscanf function on my local Debian lenny x64, it works. But when I upload the same program to the server running CentOS x86, it will not work. If I do not use sscanf, then the program works on both computers. gcc -std=c99 -O2 -pipe -m32 If I compile it with sscanf but without -std=c99, then it works on both computers. gcc -O2 -pipe -m32 What is the problem with sscanf and c99 on CentOS x86? I thought that compiling with the -m32 flag would work on all Linuxes? (I have limited access to the CentOS server, so I do not have access to error messages.)

    Read the article

  • XML Schema: xs:any processcontent="skip" but still returns error

    - by Jane Doe
    I wanted to embed HTML formatting, and so I did <xs:element name="boobie"> <xs:complexType mixed="true"> <xs:sequence> <xs:any namespace="http://www.w3.org/1999/xhtml" minOccurs="0" maxOccurs="unbounded" processContent="skip"/> </xs:sequence> </xs:complexType> </xs:element> However, when I put an li tag (the HTML list-item element) inside the XML file (inside the boobie tag), it generates an error saying that it is unexpected. What is wrong with this? Is the only way to put an HTML tag inside the XML file to use CDATA?

    Read the article

  • getnameinfo prototype asks for sockaddr, not sockaddr_in?

    - by Jane
    The getnameinfo prototype asks for sockaddr but I have only seen examples using sockaddr_in. Can this example be re-written for sockaddr? sin_family becomes sa_family but what about sin_port and sin_addr? How are they included in sa_data? struct sockaddr{ unsigned short sa_family; char sa_data[14]; }; struct sockaddr_in{ short sin_family; unsigned short sin_port; struct in_addr sin_addr; char sin_zero[8]; }; struct sockaddr_in sin; memset(&sin, 0, sizeof(sin)); sin.sin_family = AF_INET; sin.sin_addr.s_addr = inet_addr(IPvar); sin.sin_port = 0; // If 0, port is chosen by system getnameinfo( (struct sockaddr *)&sin, sizeof(sin), buffervar, sizeof(buffervar), NULL, 0, 0);

    Read the article

  • Are fopen/fread/fgets PID-safe in C?

    - by Jane
    Various users are browsing through a website 100% programmed in C (CGI). Each webpage uses fopen/fgets/fread to read common data (like navigation bars) from files. Would each call to fopen/fgets/fread interfere with each other if various people are browsing the same page? If so, how can this be solved in C? (This is a Linux server, compiling is done with gcc and this is for a CGI website programmed in C.) Example: FILE *DATAFILE = fopen(PATH, "r"); if ( DATAFILE != NULL ) { while ( fgets( LINE, BUFFER, DATAFILE ) ) { /* do something */ } }

    Read the article
