Search Results

Search found 27001 results on 1081 pages for 'michael palmeter (engineered systems product management)'.


  • Count problem in SQL when I want results from different tables

    - by Nicklas
    ALTER PROCEDURE GetProducts @CategoryID INT AS SELECT COUNT(tblReview.GroupID) AS ReviewCount, COUNT(tblComment.GroupID) AS CommentCount, Product.GroupID, MAX(Product.ProductID) AS ProductID, AVG(Product.Price) AS Price, MAX (Product.Year) AS Year, MAX (Product.Name) AS Name, AVG(tblReview.Grade) AS Grade FROM tblReview, tblComment, Product WHERE (Product.CategoryID = @CategoryID) GROUP BY Product.GroupID HAVING COUNT(distinct Product.GroupID) = 1
    This is what the tables look like:
    Product: ProductID, Name, Year, Price, Size, GroupID
    tblReview: ReviewID, Description, GroupID, Grade
    tblComment: CommentID, Description, GroupID
    GroupID is the name_year of a Product, e.g. Nike_2010. One product can come in different sizes, for example:
    ProductID | Name   | Year | Price | Size | GroupID
    1         | Nike   | 2010 | 50    | 8    | Nike_2010
    2         | Nike   | 2010 | 50    | 9    | Nike_2010
    3         | Nike   | 2010 | 50    | 10   | Nike_2010
    4         | Adidas | 2009 | 45    | 8    | Adidas_2009
    5         | Adidas | 2009 | 45    | 9    | Adidas_2009
    6         | Adidas | 2009 | 45    | 10   | Adidas_2009
    I don't get the right counts from tblReview and tblComment. If I add a review to Nike size 8 and one review to Nike size 10, I want a count of 2 when I list the products by distinct GroupID. Right now the review count and comment count come back equal, and both are wrong. I use a DataList to show all the products with a distinct/unique GroupID, and I want each entry to show: Name: Nike, Year: 2010, (All Sizes), x Reviews, x Comments, x AVG Grade. That is, the review count, comment count, and average grade over all products sharing the same GroupID; the average already works great.
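    One way to avoid the inflated counts (a sketch only, reusing the table and column names from the question and not tested against the real schema) is to drop the cross join of tblReview and tblComment and count each child table independently with correlated subqueries keyed on GroupID:
    ALTER PROCEDURE GetProducts @CategoryID INT AS
    SELECT
        p.GroupID,
        MAX(p.ProductID) AS ProductID,
        AVG(p.Price)     AS Price,
        MAX(p.Year)      AS Year,
        MAX(p.Name)      AS Name,
        -- Counting each child table on its own prevents one join from multiplying the other
        (SELECT COUNT(*)     FROM tblReview  AS r WHERE r.GroupID = p.GroupID) AS ReviewCount,
        (SELECT COUNT(*)     FROM tblComment AS c WHERE c.GroupID = p.GroupID) AS CommentCount,
        (SELECT AVG(r.Grade) FROM tblReview  AS r WHERE r.GroupID = p.GroupID) AS Grade
    FROM Product AS p
    WHERE p.CategoryID = @CategoryID
    GROUP BY p.GroupID;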

    Read the article

  • Help Me With This MS-Access Query

    - by yae
    I have 2 tables: "products" and "pieces".
    PRODUCTS: idProd, product, price
    PIECES: id, idProdMain, idProdChild, quant
    idProdMain and idProdChild are related to the "products" table. Note also that one product can have several pieces, and one product can itself be a piece. A product's price equals the sum of quantity * price over all of its pieces. The "products" table contains all products. EXAMPLE:
    TABLE PRODUCTS (idProd - product - price)
    1 - Computer - 300€
    2 - Hard Disk - 100€
    3 - Memory - 50€
    4 - Main Board - 100€
    5 - Software - 50€
    6 - CDroms 100 un. - 30€
    TABLE PIECES (id - idProdMain - idProdChild - Quant.)
    1 - 1 - 2 - 1
    2 - 1 - 3 - 2
    3 - 1 - 4 - 1
    WHAT I NEED? I need to update the price of the main product when the price of a child product (piece) changes. Following the previous example, if I change the price of the product "Memory" (which is a piece too) to 60€, then the product "Computer" must change its price to 320€. How can I do this using queries? I have already tried this to obtain the price of the main product, but it does not work; the query returns no value: SELECT Sum(products.price*pieces.quant) AS Expr1 FROM products LEFT JOIN pieces ON (products.idProd = pieces.idProdChild) AND (products.idProd = pieces.idProdChild) AND (products.idProd = pieces.idProdMain) WHERE (((pieces.idProdMain)=5)); MORE INFO: The table "products" contains all the products for sale in the shop. The table "pieces" keeps track of compound products and records which products are their children. An example of a compound product is a computer: it is composed of other products (motherboard, hard disk, memory, cpu, etc.).
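    One possible starting point (a sketch only, using the column names above; written for Access SQL but not tested there) is to join pieces to products on the child id only, filter on the parent id, and group by the parent, rather than putting both the child and parent columns into the join condition:
    SELECT pieces.idProdMain, Sum(products.price * pieces.quant) AS NewPrice
    FROM pieces INNER JOIN products ON products.idProd = pieces.idProdChild
    WHERE pieces.idProdMain = 1
    GROUP BY pieces.idProdMain;
    The NewPrice value would then need to be written back to the parent row in a second step (from code or a separate update query), since Access generally refuses an UPDATE that reads from an aggregated query.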

    Read the article

  • PUT-ing a form to update a row, but I can't find the id. Where is it?

    - by montooner
    How should I be passing in the ID? Error: Couldn't find Product without an ID Form: <% form_for :product, @product, :url => { :action => :update } do |f| %> <%= f.error_messages %> <p> <%= f.label :names %><br /> <%= f.text_field :names %> </p> <p> <%= f.submit 'Update' %> </p> <% end %> Controller (for /products/edit/1 view): def edit @product = Product.find(params[:id]) end Controller (to change the db): def update @product = Product.find(params[:id]) respond_to do |format| if @product.update_attributes(params[:product]) format.html { redirect_to(@product, :notice => 'Product was successfully updated.') } format.xml { head :ok } else format.html { render :action => "edit" } format.xml { render :xml => @product.errors, :status => :unprocessable_entity } end end end

    Read the article

  • Distinct() to return List<> returning Duplicates

    - by KDM
    I have a list of Filters that are passed into a web service. I iterate over the collection, run a LINQ query, and add the results to a list of products, but when I do a GroupBy and Distinct() it doesn't remove the duplicates. I am using an IEnumerable because Distinct() converts the result to an IEnumerable. If you know how to construct this better and make my function return a List<Product>, that would be appreciated, thanks. Here is my code in C#: if (Tab == "All-Items") { List<Product> temp = new List<Product>(); List<Product> Products2 = new List<Product>(); foreach (Filter filter in Filters) { List<Product> products = (from p in db.Products where p.Discontinued == false && p.DepartmentId == qDepartment.Id join f in db.Filters on p.Id equals f.ProductId join x in db.ProductImages on p.Id equals x.ProductId where x.Dimension == "180X180" && f.Name == filter.Name /*Filter*/ select new Product { Id = p.Id, Title = p.Title, ShortDescription = p.ShortDescription, Brand = p.Brand, Model = p.Model, Image = x.Path, FriendlyUrl = p.FriendlyUrl, SellPrice = p.SellPrice, DiscountPercentage = p.DiscountPercentage, Votes = p.Votes, TotalRating = p.TotalRating }).ToList<Product>(); foreach (Product p in products) { temp.Add(p); } IEnumerable temp2 = temp.GroupBy(x => x.Id).Distinct(); IEnumerator e = temp.GetEnumerator(); while (e.MoveNext()) { Product c = e.Current as Product; Products2.Add(c); } } pf.Products = Products2;// return type must be List<Product> }

    Read the article

  • How to properly manage a complex DB structure?

    - by errr
    Let's say you have several systems using the same DB, each using several schemas (sometimes the same as the others). The structure of these schemas is very big and complicated. Now, how could you possibly manage such a schema structure? Obviously using some sort of "configuration" - the simplest would be SQL scripts, but a more reasonable solution would be XMLs which can be easily converted into SQL, or some other readable solution (for example, JPA's XMLs or Annotations). This solution, though, causes a problem: you can't really tell if your configuration matches the structure of the DB schemas exactly. You can't say whether those two are synchronized. Why wouldn't they be? Well, in such a big structure there are going to be many changes, and you won't always remember to save/commit your configuration after you've altered the schemas; or maybe you did save/commit it, but in the end didn't alter anything in the schemas and forgot to undo the changes to the configuration. More than that, another problem (not caused by the configuration, but not addressed by it either) is versioning. I don't see any good way of managing the DB schema versions (say our last alteration makes 3 systems crash - not good; how do we "roll back"?). Any thoughts? thx.

    Read the article

  • $PATH issues with OSX Lion

    - by Mikey
    I'm having some issues with running mysql from terminal: macmini:~ michael$ which mysql /Applications/XAMPP/xamppfiles/bin/mysql macmini:~ michael$ mysql -bash: /usr/local/mysql/bin/mysql: No such file or directory I had a previous installation at /usr/local/mysql/bin/mysql which no longer exists. My path variable is as follows: macmini:~ michael$ echo $PATH /usr/bin:/bin:/usr/sbin:/sbin:/Applications/XAMPP/xamppfiles/bin/:/usr/local/bin:/usr/X11/bin:/usr/local/MacGPG2/bin:/usr/texbin Dropping to root seems to function correctly: macmini:~ michael$ sudo bash Password: bash-3.2# mysql Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 66 Server version: 5.1.44 Source distribution Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. mysql> I seem to have found the issue - but I'm not sure how to change or remove this alias macmini:~ michael$ type -a mysql mysql is aliased to `/usr/local/mysql/bin/mysql' mysql is /Applications/XAMPP/xamppfiles/bin/mysql mysql is /Applications/XAMPP/xamppfiles/bin/mysql

    Read the article

  • The new workflow management of Oracle's Hyperion Planning: Define more details with Planning Unit Hierarchies and Promotional Paths

    - by Alexandra Georgescu
    After having been almost unchanged for several years, starting with the 11.1.2 release of Oracle's Hyperion Planning the Process Management has not only got a new name: "Approvals" now offers the possibility to further split Planning Units (comprised of a unique Scenario-Version-Entity combination) into more detailed combinations along additional secondary dimensions, a so-called Planning Unit Hierarchy, and also to pre-define a path of planners, reviewers and approvers, called a Promotional Path. I'd like to introduce you to the changes and enhancements in this new process management and arouse your curiosity for checking out more details on it. One reason for using the former process management in Planning was to limit data entry rights to one person at a time based on the assignment of a planning unit. So the lowest level of granularity for this assignment was, for a given Scenario-Version combination, the individual entity. Even if in many cases one person wasn't responsible for all data being entered into that entity, but for only part of it, it was not possible to split the ownership along another additional dimension, for example by assigning ownership to different accounts at the same time. By defining a so-called Planning Unit Hierarchy (PUH) in Approvals this gap is now closed. To complement this, new Shared Services roles for Planning have been created in order to manage the setup and use of Approvals. The Approvals Administrator consists of the following roles:
    Approvals Ownership Assigner, who assigns owners and reviewers to planning units for which Write access is assigned (including Planner responsibilities).
    Approvals Supervisor, who stops and starts planning units and takes any action on planning units for which Write access is assigned.
    Approvals Process Designer, who can modify planning unit hierarchy secondary dimensions and entity members for which Write access is assigned, can also modify scenarios and versions that are assigned to planning unit hierarchies, and can edit validation rules on data forms for which access is assigned (this includes Planner and Ownership Assigner responsibilities as well).
    Setup of a Planning Unit Hierarchy is done under the Administration menu, by selecting Approvals, then Planning Unit Hierarchy. Here you create new PUHs or edit existing ones. After providing a name and an optional description, a pre-selection of entities can be made for which the PUH will be defined. Available options are:
    All, which pre-selects all entities to be included for the definitions on the subsequent tabs.
    None, where manual entity selections will be made subsequently.
    Custom, which offers the selection of an ancestor and the relative generations that should be included for further definitions.
    Finally a pattern needs to be selected, which will determine the general flow of ownership:
    Free-form uses the flow/assignment of ownerships according to Planning releases prior to 11.1.2.
    In Bottom-up, data input is done at the leaf member level. Ownership follows the hierarchy of approval along the entity dimension, including refinements using a secondary dimension in the PUH, amended by defined additional reviewers in the promotional path.
    Distributed uses data input at the leaf level, while ownership starts at the top level and then is distributed down the organizational hierarchy (entities). After ownership reaches the lower levels, budgets are submitted back to the top through the approval process. 
Proceeding to the next step, a secondary dimension and the respective members from that dimension might now be selected, in order to create more detailed combinations underneath each entity. After selecting the Dimension and a Parent Member, the definition of a Relative Generation below this member assists in populating the field for Selected Members, while the Count column shows the number of selected members. For refining this list, you might click on the icon right beside the selected member field and use the check-boxes in the appearing list for deselecting members. -------------------------------------------------------------------------------------------------------- TIP: In order to reduce maintenance of the PUH due to changes in the dimensions included (members added, moved or removed), you should consider dynamically linking those dimensions in the PUH with the dimension hierarchies in the planning application. For secondary dimensions this is done using the check-boxes in the Auto Include column. For the primary dimension, the respective selection criteria are applied by right-clicking the name of an entity activated as a planning unit, then selecting an item from the shown list of include or exclude options (children, descendants, etc.). In any case, in order to apply dimension changes impacting the PUH, a synchronization must be run. Whether this is really necessary is shown on the first screen after selecting from the menu Administration, then Approvals, then Planning Unit Hierarchy: under Synchronized you find the statuses Yes, No or Locked, where the last one indicates that another user is currently changing or synchronizing the PUH. Select one of the unsynchronized PUHs (status No) and click the Synchronize option in order to execute. -------------------------------------------------------------------------------------------------------- In the next step owners and reviewers are assigned to the PUH. Using the icons with the magnifying glass right beside the columns for Owner and Reviewer, the respective assignments can be made in the order that you want them to review the planning unit. While it is possible to assign only one owner per entity or combination of entity + member of the secondary dimension, the selection for reviewers might consist of more than one person. The complete Promotional Path, including the defined owners and reviewers for the entity parents, can be shown by clicking the icon. In addition, optional users might be defined to be notified about promotions for a planning unit. -------------------------------------------------------------------------------------------------------- TIP: Reviewers cannot change data, but can only review data according to their data access permissions and reject or promote planning units. -------------------------------------------------------------------------------------------------------- In order to complete your PUH definitions click Finish - this saves the PUH and closes the window. As a final step, before starting the approvals process, you need to assign the PUH to the Scenario-Version combination for which it should be used. From the Administration menu select Approvals, then Scenario and Version Assignment. Expand the PUH in order to see already existing assignments. Under Actions click the add icon and select scenarios and versions to be assigned. If needed, click the remove icon in order to delete entries. After these steps, setup is completed for starting the approvals process. 
Start, stop and control of the approvals process is now done under the Tools menu, then Manage Approvals. The new PUH feature is complemented by various additional settings and features; at least some of them should be mentioned here: Export/Import of PUHs; an Out of Office agent; Validation Rules that change the promotional/approval path if violated (including the use of User-defined Attributes (UDAs)); and various new and helpful reviewer actions with corresponding approval states. About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on his many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Read the article

  • How do I configure postfix starttls

    - by Michael Temeschinko
    I need to install postfix on my webserver because I need to use sendmail for my website. I only need to send mail, not receive or relay. I send with STARTTLS (port 587) via the relay smtp.strato.de. Here is what happens:
    Jul 15 00:02:38 negrita postfix/smtp[7120]: Host offered STARTTLS: [smtp.strato.de]
    Jul 15 00:02:38 negrita postfix/smtp[7120]: C717A181252: to=<[email protected]>, relay=smtp.strato.de[81.169.145.133]:587, delay=0.31, delays=0.09/0/0.16/0.04, dsn=5.7.0, status=bounced (host smtp.strato.de[81.169.145.133] said: 530 5.7.0 Bitte konfigurieren Sie ihr E-Mailprogramm fuer Authentifizierung am SMTP Server, wie auf www.strato.de/email-hilfe beschrieben. - Please configure your mail client for using SMTP Server Authentication (in reply to MAIL FROM command))
    Jul 15 00:02:38 negrita postfix/cleanup[7118]: 29F5F181254: message-id=<20120714220238.29F5F181254@negrita>
    Jul 15 00:02:38 negrita postfix/qmgr[7102]: 29F5F181254: from=<>, size=2548, nrcpt=1 (queue active)
    Jul 15 00:02:38 negrita postfix/bounce[7121]: C717A181252: sender non-delivery notification: 29F5F181254
    Jul 15 00:02:38 negrita postfix/qmgr[7102]: C717A181252: removed
    Jul 15 00:02:39 negrita postfix/local[7122]: 29F5F181254: to=<michael@negrita>, relay=local, delay=1.1, delays=0.04/0/0/1.1, dsn=2.0.0, status=sent (delivered to command: procmail -a "$EXTENSION")
    Jul 15 00:02:39 negrita postfix/qmgr[7102]: 29F5F181254: removed
    Jul 15 08:05:18 negrita postfix/master[1083]: daemon started -- version 2.9.1, configuration /etc/postfix
    Jul 15 08:05:29 negrita postfix/master[1083]: reload -- version 2.9.1, configuration /etc/postfix
    and my config:
    michael@negrita:~$ postconf -n
    biff = no
    config_directory = /etc/postfix
    delay_warning_time = 4h
    home_mailbox = /home/michael/Maildir/
    html_directory = /usr/share/doc/postfix/html
    inet_interfaces = localhost
    mailbox_command = procmail -a "$EXTENSION"
    mailbox_size_limit = 0
    mydomain = example.com
    myhostname = negrita
    mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
    notify_classes = resource, software, protocol
    readme_directory = /usr/share/doc/postfix
    recipient_delimiter = +
    relayhost = [smtp.strato.de]:587
    smtp_sasl_password_maps = hash:/etc/postfix/sasl/passwd
    smtp_tls_enforce_peername = no
    smtp_tls_note_starttls_offer = yes
    smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
    smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
    smtpd_sasl_auth_enable = yes
    smtpd_sasl_security_options = noanonymous
    smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
    smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
    smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
    smtpd_use_tls = yes
    soft_bounce = yes
    The user and password are OK because I can send mail with my Thunderbird. Thanks in advance, mike

    Read the article

  • What do the 4 keyboard input method systems in 10.04 mean?

    - by Android Eve
    I am trying to install another language support (in addition to the default US). Checking that language checkbox in "Install / Remove Languages..." wasn't too difficult. :) But now I want to add keyboard support, too, for that language. Again, I am prompted with a nice listbox with the following 4 options: none ibus lo-gtk th-gtk But I have no idea what these mean. I googled "ubuntu 10.04 keyboard input method system none ibus lo-gtk th-gtk" but all I could find was descriptions of problems, not an actual definition. Could you please point me to a webpage where I can learn about the meanings of these 4 different methods and +'s and -'s of each?

    Read the article

  • Fun with Aggregates

    - by Paul White
    There are interesting things to be learned from even the simplest queries.  For example, imagine you are given the task of writing a query to list AdventureWorks product names where the product has at least one entry in the transaction history table, but fewer than ten. One possible query to meet that specification is: SELECT p.Name FROM Production.Product AS p JOIN Production.TransactionHistory AS th ON p.ProductID = th.ProductID GROUP BY p.ProductID, p.Name HAVING COUNT_BIG(*) < 10; That query correctly returns 23 rows (execution plan and data sample shown below): The execution plan looks a bit different from the written form of the query: the base tables are accessed in reverse order, and the aggregation is performed before the join.  The general idea is to read all rows from the history table, compute the count of rows grouped by ProductID, merge join the results to the Product table on ProductID, and finally filter to only return rows where the count is less than ten. This ‘fully-optimized’ plan has an estimated cost of around 0.33 units.  The reason for the quote marks there is that this plan is not quite as optimal as it could be – surely it would make sense to push the Filter down past the join too?  To answer that, let’s look at some other ways to formulate this query.  This being SQL, there are any number of ways to write logically-equivalent query specifications, so we’ll just look at a couple of interesting ones.  The first query is an attempt to reverse-engineer T-SQL from the optimized query plan shown above.  It joins the result of pre-aggregating the history table to the Product table before filtering: SELECT p.Name FROM ( SELECT th.ProductID, cnt = COUNT_BIG(*) FROM Production.TransactionHistory AS th GROUP BY th.ProductID ) AS q1 JOIN Production.Product AS p ON p.ProductID = q1.ProductID WHERE q1.cnt < 10; Perhaps a little surprisingly, we get a slightly different execution plan: The results are the same (23 rows) but this time the Filter is pushed below the join!  The optimizer chooses nested loops for the join, because the cardinality estimate for rows passing the Filter is a bit low (estimate 1 versus 23 actual), though you can force a merge join with a hint and the Filter still appears below the join.  In yet another variation, the < 10 predicate can be ‘manually pushed’ by specifying it in a HAVING clause in the “q1” sub-query instead of in the WHERE clause as written above. The reason this predicate can be pushed past the join in this query form, but not in the original formulation is simply an optimizer limitation – it does make efforts (primarily during the simplification phase) to encourage logically-equivalent query specifications to produce the same execution plan, but the implementation is not completely comprehensive. Moving on to a second example, the following query specification results from phrasing the requirement as “list the products where there exists fewer than ten correlated rows in the history table”: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) < 10 ); Unfortunately, this query produces an incorrect result (86 rows): The problem is that it lists products with no history rows, though the reasons are interesting.  The COUNT_BIG(*) in the EXISTS clause is a scalar aggregate (meaning there is no GROUP BY clause) and scalar aggregates always produce a value, even when the input is an empty set.  
In the case of the COUNT aggregate, the result of aggregating the empty set is zero (the other standard aggregates produce a NULL).  To make the point really clear, let’s look at product 709, which happens to be one for which no history rows exist: -- Scalar aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709;   -- Vector aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709 GROUP BY th.ProductID; The estimated execution plans for these two statements are almost identical: You might expect the Stream Aggregate to have a Group By for the second statement, but this is not the case.  The query includes an equality comparison to a constant value (709), so all qualified rows are guaranteed to have the same value for ProductID and the Group By is optimized away. In fact there are some minor differences between the two plans (the first is auto-parameterized and qualifies for trivial plan, whereas the second is not auto-parameterized and requires cost-based optimization), but there is nothing to indicate that one is a scalar aggregate and the other is a vector aggregate.  This is something I would like to see exposed in show plan so I suggested it on Connect.  Anyway, the results of running the two queries show the difference at runtime: The scalar aggregate (no GROUP BY) returns a result of zero, whereas the vector aggregate (with a GROUP BY clause) returns nothing at all.  Returning to our EXISTS query, we could ‘fix’ it by changing the HAVING clause to reject rows where the scalar aggregate returns zero: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) BETWEEN 1 AND 9 ); The query now returns the correct 23 rows: Unfortunately, the execution plan is less efficient now – it has an estimated cost of 0.78 compared to 0.33 for the earlier plans.  Let’s try adding a redundant GROUP BY instead of changing the HAVING clause: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY th.ProductID HAVING COUNT_BIG(*) < 10 ); Not only do we now get correct results (23 rows), this is the execution plan: I like to compare that plan to quantum physics: if you don’t find it shocking, you haven’t understood it properly :)  The simple addition of a redundant GROUP BY has resulted in the EXISTS form of the query being transformed into exactly the same optimal plan we found earlier.  What’s more, in SQL Server 2008 and later, we can replace the odd-looking GROUP BY with an explicit GROUP BY on the empty set: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ); I offer that as an alternative because some people find it more intuitive (and it perhaps has more geek value too).  Whichever way you prefer, it’s rather satisfying to note that the result of the sub-query does not exist for a particular correlated value where a vector aggregate is used (the scalar COUNT aggregate always returns a value, even if zero, so it always ‘EXISTS’ regardless which ProductID is logically being evaluated). 
The following query forms also produce the optimal plan and correct results, so long as a vector aggregate is used (you can probably find more equivalent query forms): WHERE Clause SELECT p.Name FROM Production.Product AS p WHERE ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) < 10; APPLY SELECT p.Name FROM Production.Product AS p CROSS APPLY ( SELECT NULL FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ) AS ca (dummy); FROM Clause SELECT q1.Name FROM ( SELECT p.Name, cnt = ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) FROM Production.Product AS p ) AS q1 WHERE q1.cnt < 10; This last example uses SUM(1) instead of COUNT and does not require a vector aggregate…you should be able to work out why :) SELECT q.Name FROM ( SELECT p.Name, cnt = ( SELECT SUM(1) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID ) FROM Production.Product AS p ) AS q WHERE q.cnt < 10; The semantics of SQL aggregates are rather odd in places.  It definitely pays to get to know the rules, and to be careful to check whether your queries are using scalar or vector aggregates.  As we have seen, query plans do not show in which ‘mode’ an aggregate is running and getting it wrong can cause poor performance, wrong results, or both. © 2012 Paul White Twitter: @SQL_Kiwi email: [email protected]
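A quick illustration of the closing point (these two statements are not part of the original post, and assume the same AdventureWorks sample database used throughout): SUM over an empty set returns NULL rather than zero, so for a product with no history rows the correlated sub-query in the SUM(1) form yields NULL, NULL < 10 evaluates to UNKNOWN, and the outer WHERE rejects the row, which is exactly the behaviour the COUNT form needed a vector aggregate to achieve.
SELECT SUM(1) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709; -- NULL for a product with no history
SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709; -- 0 for the same product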

    Read the article

  • Silverlight Cream for November 20, 2011 -- #1169

    - by Dave Campbell
    In this Issue: Andrea Boschin, Michael Crump, Michael Sync, WindowsPhoneGeek, Jesse Liberty, Derik Whittaker, Sumit Dutta, Jeff Blankenburg(-2-), and Beth Massi. Above the Fold: WP7: "Silver VNC 1.0 for Windows Phone "Mango"" Andrea Boschin Metro/WinRT/W8: "Lighting up your C# Metro apps by being a Share Source" Derik Whittaker LightSwitch: "Using the Save and Query Pipeline to “Archive” Deleted Records" Beth Massi Shoutouts: Michael Palermo's latest Desert Mountain Developers is up Michael Washington's latest Visual Studio #LightSwitch Daily is up From SilverlightCream.com: Silver VNC 1.0 for Windows Phone "Mango" Andrea Boschin published the first release of his "Silver VNC" version 1.0 on CodePlex. Check out the video on the blog post to see the capabilities, then go grab it from CodePlex. Fixing a broken toolbox (In Visual Studio 2010 SP1) Not Silverlight or Metro, but near to us all is Visual Studio... read how Michael Crump resolves the 'broken' toolbox that we all get now and then Windows Phone 7 – USB Device Not Recognized Error Michael Sync is looking for ideas about an error he gets any time he updates his phone. Windows Phone Toolkit MultiselectList in depth| Part2: Data Binding WindowsPhoneGeek has up the second part of his tutorial series on the MultiselectList from the Windows Phone Toolkit... this part is about data binding, complete with lots of code, discussion, pictures, and project to download New Mini-Tutorial Video Series Jesse Liberty started a new video series based on his Mango Mini tutorials. They will be on Channel 9, and he has a link on this post to the index. The firs of the series is on animation without code Lighting up your C# Metro apps by being a Share Source Derik Whittaker continues investigating Metro with this post about how to set your app up to share its content with other apps Part 21 - Windows Phone 7 - Toast Push Notification Sumit Dutta has part 21 of his WP7 series up and is talking about Toast Notification by creating a Windows form app for sending notifications to the WP7 app for viewing 31 Days of Mango | Day #6: Motion Jeff Blankenburg's Day 6 in his Mango series is about the Motion class which combines the data we get from the Accelerometer, Compass, and Gyroscope of the last couple days of posts 31 Days of Mango | Day #7: Raw Camera Data In Day 7, Jeff Blankenburg talks about the Camera on the WP7 and how to use the raw data in your own application Using the Save and Query Pipeline to “Archive” Deleted Records Beth Massi's latest LightSwith post is this one on tapping into the Save and Query pipelines to perform some data processing prior to saving or pulling data Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Silverlight Cream for November 25, 2011 -- #1174

    - by Dave Campbell
    In this Issue: Michael Collier, Samidip Basu, Jesse Liberty, Dhananjay Kumar, and Michael Crump.
    Above the Fold: WP7: "31 Days of Mango | Day #16: Isolated Storage Explorer" Samidip Basu; Metro/WinRT/W8: "1360x768x32 Resolution in Windows 8 in VirtualBox" Michael Crump
    Shoutouts: Michael Palermo's latest Desert Mountain Developers is up. Michael Washington's latest Visual Studio #LightSwitch Daily is up. Alex Golesh releases a Silverlight 5-friendly version of his external map manifest file tool: Utility: Extmap Maker v1.1
    From SilverlightCream.com:
    31 Days of Mango | Day #17: Using Windows Azure - Michael Collier has Jeff Blankenburg's Day 17 and is talking about Azure services for your Phone apps... great discussion on this... good diagrams, code, and entire project to download
    31 Days of Mango | Day #16: Isolated Storage Explorer - Samidip Basu has Jeff Blankenburg's 31 Days for Day 16, and is discussing ISO, and the Isolated Storage Explorer which helps peruse ISO either in the emulator or on your device
    Test Driven Development–Testing Private Values - Jesse Liberty's got a post up discussing TDD in his latest Full Stack excerpt wherein he and Jon Galloway are building a Pomodoro timer app. He has a solution for dealing with private member variables and is looking for feedback
    Video on How to work with System Tray Progress Indicator in Windows Phone 7 - Dhananjay Kumar's latest video tutorial is up... covering working with the System Tray Progress Indicator in WP7, as the title says :)
    1360x768x32 Resolution in Windows 8 in VirtualBox - Michael Crump is using a non-standard resolution with the Win8 preview and demonstrates how to make that all work with VirtualBox
    Michael
    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone    MIX10

    Read the article

  • Details on Oracle's Primavera P6 Reporting Database R2

    - by mark.kromer
    Below is a graphic screenshot of our detailed announcement for the new Oracle data warehouse product for Primavera P6 called P6 Reporting Database R2. This DW product includes the ETL, data warehouse star schemas and ODS that you'll need to build an enterprise reporting solution for your projects & portfolios. This product is included on a restricted license basis with the new Primavera P6 Analytics R1 product from Oracle because those Analytics are built in OBIEE based on this data warehouse product.

    Read the article

  • Getting developers and support to work together

    - by Matt Watson
    Agile development has ushered in the norm of rapid iterations and change within products. One of the biggest challenges for agile development is educating the rest of the company. At my last company our biggest challenge was trying to continually train 100 employees in our customer support and training departments. It's easy to write release notes and email them to everyone. But for complex software products, release notes usually aren't detailed enough. You really have to educate your employees on the WHO, WHAT, WHERE, WHY, WHEN of every item. If you don't do this, you end up with customer service people who know less about your product than your users do. Ever call a company and feel like you know more about their product than their customer service people do? Yeah. I'm talking about that problem.
    WHO does the change affect?
    WHAT was the actual change?
    WHERE do I find the change in the product?
    WHY was the change made? (It's hard to support something if you don't know why it was done.)
    WHEN will the change be released?
    One thing I want to stress is the importance of WHY something was done. For customer support people to be really good at their job, they need to understand the product and how people use it. Knowing how to enable a feature is one thing. Knowing why someone would want to enable it is a whole different thing, and the difference in good customer service. Another challenge is getting support people to better test and document potential bugs before escalating them to development. Trying to fix bugs without examples is always fun... NOT. They might as well say "The sky is falling, please fix it!" We need to over-train the support staff about product changes and continually stress how they document and test potential product bugs. You also have to train the sales staff and the marketing team. Then there is updating sales materials, your website, product documentation and other items that are always out of date. Every product release causes this vicious circle of trying to educate the rest of the company about the changes. Do we need to record a simple video explaining the changes and email it to everyone? Maybe we should use a simple online training type app to help with this problem. Ultimately the struggle is taking the time to do the training, but it is time well spent. It may save you a lot of time answering questions and fixing bugs later. How do we efficiently transfer key product knowledge from developers and product owners to the rest of the company? How have you solved these issues at your company?

    Read the article

  • How and why are operating systems bootable from a USB?

    - by user114638
    I've been told to install Ubuntu on my laptop for work in order to learn shell scripting. I've read the best way is to install Ubuntu on a USB stick and partition my HDD. I'm curious how an OS is bootable from a USB stick. Is it literally just a small interface that can be put anywhere? This reminds me of a time I downloaded a game onto my USB stick; when I brought it to my friend's house he told me it would run slowly if I didn't install it and only ran it from the USB. Is this different from running Ubuntu from a USB? Will Ubuntu be slow?

    Read the article

  • Entity Framework with large systems - how to divide models?

    - by jkohlhepp
    I'm working with a SQL Server database with 1000+ tables, another few hundred views, and several thousand stored procedures. We are looking to start using Entity Framework for our newer projects, and we are working on our strategy for doing so. The thing I'm hung up on is how best to split the tables into different models (EDMX or DbContext if we go code first). I can think of a few strategies right off the bat: Split by schema We have our tables split across probably a dozen schemas. We could do one model per schema. This isn't perfect, though, because dbo still ends up being very large, with 500+ tables / views. Another problem is that certain units of work will end up having to do transactions that span multiple models, which adds to complexity, although I assume EF makes this fairly straightforward. Split by intent Instead of worrying about schemas, split the models by intent. So we'll have different models for each application, or project, or module, or screen, depending on how granular we want to get. The problem I see with this is that there are certain tables that inevitably have to be used in every case, such as User or AuditHistory. Do we add those to every model (violates DRY I think), or are those in a separate model that is used by every project? Don't split at all - one giant model This is obviously simple from a development perspective but from my research and my intuition this seems like it could perform terribly, both at design time, compile time, and possibly run time. What is the best practice for using EF against such a large database? Specifically what strategies do people use in designing models against this volume of DB objects? Are there options that I'm not thinking of that work better than what I have above? Also, is this a problem in other ORMs such as NHibernate? If so have they come up with any better solutions than EF?

    Read the article

  • Why do operating systems do low level stuff in C and C++? Why not just C++?

    - by Cole Johnson
    On the Wikipedia page for Windows, it states that Windows is written in Assembly for the bootloader and task switcher, and in C and C++ for kernel routines. This confuses me because, AFAIK, you can call C++ functions from an extern "C"'d block, as C++ is just C with extra features (all of which can be rewritten in C if you wanted to, AFAIK). I can understand using C for the kernel functions so pure C apps can use them (like printf and such), but if they can just be wrapped in an extern "C" block, then why code in C? So my question is: why would a kernel be written in both C and C++ instead of just C++?

    Read the article

  • WebCenter Innovation Award Winners

    - by Michael Snow
    Of course, here on our WebCenter blog, we'd like to highlight and brag about our great WebCenter winners. The 2012 WebCenter Innovation Award Winners:
    University of Louisville. Location: Louisville, KY, USA. Industry: Higher Education. Fusion Middleware Products: WebCenter Portal, WebCenter Content, JDeveloper, WebLogic, Oracle BI, Oracle IdM. The University of Louisville is a state-supported research university building a Statewide Informatics Network to improve public health. The University of Louisville has implemented WebCenter as part of the LOUI (Louisville Informatics Institute) Initiative, a Statewide Informatics Network, which will improve public healthcare and lower cost through the use of novel technology and next-generation analytics, decision support and innovative outcomes-based payment systems.
    ----------
    News Limited. Country/Region: Australia. Industry: News/Media. FMW Products: WebCenter Sites. A single platform running websites for 50% of Australia's newspapers: News Corp is running half of Australia's newspaper websites on this shared platform powered by Oracle WebCenter Sites, and they have overtaken their nearest competitors and are now leading in terms of monthly page impressions. At peak they have over 250 editors on the system publishing in real time. Sites include: www.newsspace.com.au, www.news.com.au, www.theaustralian.com.au and many others.
    ------
    Life Technologies Corp. Country/Region: Carlsbad, CA, USA. Industry: Life Sciences. FMW Products: WebCenter Portal, SOA Suite. Life Technologies Corp. is a global biotechnology tools company dedicated to improving the human condition with innovative life science products. They were awarded an innovation award for their solution utilizing WebCenter Portal for remotely monitoring and repairing biotech instruments. They deployed WebCenter as a portal that accesses Life Technologies' cloud-based service monitoring system, where all customer-deployed instruments can be remotely monitored and proactively repaired. The portal provides alerts from these cloud-based monitoring services directly to the customer and to Life Technologies Field Engineers. The portal also provides insight into the instruments and services customers purchased, for the purpose of analyzing and anticipating future customer needs and creating targeted sales and service programs.
    -----
    China Mobile Jiangsu. China Mobile Jiangsu is one of the biggest subsidiaries of China Mobile. It has over 25,000 employees and 40 million mobile subscribers. Country/Region: Jiangsu, China. Industry: Telecommunications. FMW Products: WebCenter Portal, WebCenter Content, JDeveloper, SOA Suite, IdM. They were awarded an Innovation Award for their new employee platform, powered by WebCenter Portal, which is designed to serve their 25,000+ employees and help them drive collaboration and productivity. The JSMCC (China Mobile Jiangsu) Employee Enterprise Portal and Collaboration Platform is one of China Mobile's most important IT innovation projects. The new platform is designed to serve JSMCC's 25,000+ employees and to help them improve working efficiency, changing their traditional working mode to social ways and encouraging employees in business collaboration and innovation. The solution is built on top of Oracle WebCenter Portal Framework and WebCenter Spaces while also leveraging WebLogic Server, UCM, OID, OAM, SES, IRM and Oracle Database 11g. 
By providing rich collaboration services, knowledge management services, sensitive document protection services, unified user identity management services, unified information search services and personalized information integration capabilities, the working efficiency of JSMCC employees has been greatly improved. Main Functionality: information portal, office automation integration, personal space, group space, team collaboration with Web 2.0 services, unified search engine for multiple data sources, document management and protection, and SSO for multiple platforms.
--------
LADWP – Los Angeles Department of Water and Power. The Los Angeles Department of Water and Power (LADWP) is the largest public utility company in the United States, with over 1.6 million customers. LADWP provides water and power for millions of residential and commercial customers in Southern California. LADWP also bills most of these customers for sanitation services provided by another city department. Country/Region: US – Los Angeles, CA. Industry: Public Utility. FMW Products: WebCenter Portal, WebCenter Content, JDeveloper, SOA Suite, IdM.
The new infrastructure consists of:
Oracle WebCenter Portal, including mobile portal
Oracle WebCenter Content for Content Management and Digital Asset Management (DAM)
Oracle OAM (IDM, OVD, OAM) integrated with AD for enterprise identity management
Oracle Siebel for CRM
Oracle DB
Oracle SOA Suite for integration of various subsystems and back-end systems
The new portal's features include:
Complete graphical redesign based on best practices in UI design for high usability
Customer self service implemented through MyAccount (Bill Pay, Payment History, Bill History, Usage Analysis, Service Request Management)
Financial Assistance Programs (CRM, WebCenter)
Customer Rebate Programs (CRM, WebCenter)
Turn On/Off/Transfer of services (Commercial & Residential)
Outage Reporting
eNotification (SMS, email)
Multilingual (English & Spanish) – using WebCenter multi-language support
Section 508 (ADA) Compliant
Search – using WebCenter SES (Secured Enterprise Search)
Distributed Authorship in WebCenter Content
Mobile Access (any mobile browser)

    Read the article

  • Page Titles - Including gender of a fashion product in page titles?

    - by Cedric
    I need a bit of help to decide whether it is worth including gender in page titles. In the webmaster tools I looked at our search queries that include "women", and they account for 9% of our total search queries for the site. I am wondering if this is the right way to assess the benefit of including "women" or "men" in page titles, since it only looks at existing results already pointing to us. Is there another tool where I can check the actual queries that may not include us in search results? Like Google Insights, maybe? http://www.google.com/insights/search/#q=shoes%2Cshoes%20for%20women&cmpt=q So it looks like 1.1% of searches for "shoes" are also for "shoes for women"; is that correct? As a direct comparison, doing the same analysis on our own search queries, I get 1.8% when comparing "shoes for women" to "shoes". Implementing this automation would probably affect 99% of our site if not more, splitting it into 2 segments (one portion of page titles including "women" and the other including "men"). Will doing so create a massively repetitive keyword throughout the site, hurting SEO? http://support.google.com/webmasters/bin/answer.py?hl=en&answer=35624 (see "Avoid repeated or boilerplate titles.")

    Read the article

  • How can I refactor client side functionality to create a product line-like generic design?

    - by Nupul
    Assume the following situation, similar to that of Stack Overflow: I have a system with a front-end that can perform various manipulations on the data (by sending messages to a REST back-end): posting; editing and deleting; adding labels and tags. We built the first version well modularized, but the need now is to 'evolve' the system, similar to Stack Overflow. My question is how best to separate the commonality and how to incorporate the variability with respect to the following. Commonality: the above 'functionalities' and sending/receiving the data from the server; look and feel (also a variability, as explained below); the HTTP verbs associated with the above actions. Variability: the RESTful URLs where the requests are sent; the text/style of the UI (the commonality is analogous to Stack Overflow - the functionality of upvotes and posting a question remains the same, but the words, the icons, and the look and feel still differ across sites). I think this is entirely a client-side code organization/refactoring issue. I'm heavily using jQuery, JavaScript and Backbone for front-end development. My question is: how should I best isolate these so we can create multiple such variants of the tool we are currently working on?

    Read the article

  • Product Support Webcast for Existing Customers: Getting the Most from My Oracle Support, Tips and Tricks for WebCenter Content

    - by John Klinke
    My Oracle Support (MOS) is the one-stop support solution for WebCenter customers with Oracle Premier Support. Join us for this 1-hour Advisor Webcast "Getting the Most from My Oracle Support, Tips and Tricks for WebCenter Content" on July 11, 2013 at 11:00am Eastern (16:00 UK / 17:00 CET / 8:00am Pacific / 9:00am Mountain). Topics will include:
    - My Oracle Support Search, Advanced Search, and PowerViews
    - Information Centers
    - Latest Patches and Bundle Patches
    - My Oracle Support Community
    - Remote Diagnostic Administration (RDA)
    Make sure to register and mark this date on your calendar. Register here: https://oracleaw.webex.com/oracleaw/onstage/g.php?d=594341268&t=a Once your registration request is approved, you will receive a confirmation email with instructions for joining the webcast on July 11. Past Advisor Webcasts have been recorded and can be viewed by going to the 'archived' tabs on this knowledge base announcement: https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1456204.1 (active support contract required)

    Read the article

  • Fix Online Management - Which SEO Mistakes Should Be Eliminated?

    Search engines observe websites very keenly before ranking them, and it is very obvious when we end up making certain small mistakes. However, many people are not aware that these small mistakes can cause serious damage to their websites. The most important requirement for developers is that they recognize their mistakes, correct them as soon as possible using good online reputation management skills, and remove Google results that act against them.

    Read the article

  • It's All In The Cloud

    - by Natalia Rachelson
    People turned out in droves for Steve Miranda's Apps Cloud General Session. Steve, as engaging as ever, covered our Apps strategy in the cloud and reinforced that Oracle has a complete set of cloud services including:
    • Human Capital Management
    • Talent Management
    • Sales and Marketing
    • Customer Service and Support
    • Financial Management
    • Procurement, Sourcing, and Inventory
    • Project Portfolio Management
    • Governance, Risk, and Compliance
    ... all delivered on top of the Social, Platform, and Common Infrastructure. Steve talked about Fusion being the centerpiece of our Cloud Services. The fact that Fusion is 100 percent standards based is a big, big deal! In addition, our ERP Cloud Service is the most complete cloud service on the market. And email marketing is dead -- social marketing is where the action is. It's also where Oracle is investing heavily from a Sales & Marketing Cloud perspective. Steve covered the strategic acquisitions Oracle has made to enhance our organic Cloud offering. Specifically, Oracle bought RightNow to make our Customer Service and Support Cloud service complete. We also bought Taleo to add Recruiting and Learning capabilities to our Talent Management Cloud. Steve talked about our customers and how they are benefiting from the use of a variety of our Cloud Services. Red Robin is driving lower labor and food costs with Oracle ERP Cloud Service. He used Elizabeth Arden as the profile customer for HCM and Talent Management Service, UBS for HCM and Talent Management Service, and Brocade for Talent Management. All these customers are benefiting from a comprehensive and fully integrated HR platform that aligns compensation with performance and enhances workforce motivation and retention. At the same time, Hitachi Data Systems is using Oracle Taleo Performance Management Cloud to recruit the right competencies, pinpoint areas of improvement, and develop and monitor employee goals to support the global account organization. KLM and Overstock.com are gaining the benefits of Oracle's Customer Service and Support Service from RightNow by better engaging and serving customer needs online and through call centers. And last but not least, Graco and Key Energy are leveraging mobility features and sales forecasting and territory management capabilities within the Oracle Sales and Marketing Service. They expect to gain better visibility to sales information and drive more efficient sales campaigns and empower their sales force with data they need to make sales. Overall, Oracle Apps Cloud Services are enjoying a significant momentum in the marketplace. Steve projected an air of confidence and enthusiasm highlighting Oracle's latest successes with Cloud services.

    Read the article

  • Which E-commerce Platform works well with Flash Product Customization+Social?

    - by Artur
    What's the best platform out there that is flexible enough to easily integrate this? Custom Flash App: I would like customers to: 1 - Select a t-shirt from a gallery of artists. 2 - Customize it (using a Flash tool I created). 3 - Select a t-shirt size. 4 - Order it. All this Flash widget does is generate a JPG on the server; the ecommerce app should assign it to that Order/Customer and add it to their shopping cart. Social Features: customers should also be able to comment on the t-shirts and artist bios. I was thinking of trying WordPress plugins like Shopp or GetShopped or Cart66, then BuddyPress for social features. Or is Magento a better choice? thanks!

    Read the article
