Search Results

Search found 21417 results on 857 pages for 'release date'.


  • Using HBase or Cassandra for a token server

    - by crippy
    I've been trying to figure out how to use HBase/Cassandra for a token system we're re-implementing. I can probably squeeze quite a lot more out of MySQL, but it seems we are clinging to the wrong tool for the task just because we know it well. Eventually we will hit a wall (as happened to us in other areas). Naturally I started looking into possible NoSQL solutions. The prominent ones (at least in terms of buzz) are HBase and Cassandra. The story is more or less like this:
      - A user can send a gift to other users.
      - Each gift has a list of recipients, or is public, in which case it is limited by count or expiration date.
      - For each gift sent we generate a token that uniquely identifies that gift.
      - For each gift we track the list of potential recipients and their current status relating to that gift (accepted, declined, etc.).
      - A user can request to see all his currently pending gifts.
      - A user can request a list of users he has sent a gift to today (used to limit the number of gifts sent).
      - We require the ability to "dump" or "ignore" expired gifts (gifts x days old are considered expired).
    There are some other requirements, but I believe the above covers the essentials. How would I go about modeling that using HBase or Cassandra? Well, the wall was performance. A few tens of millions of records per day over 2 tables, kept for 2 weeks (I wish I could have kept them longer, but there was no way). The response times kept getting slower and slower until eventually we had to start cutting down the number of days we kept data. Caching helps here, but it's not an ideal solution since a big part of the ops are updates. Also, as I hinted in my original post, we use MySQL extensively. We know exactly what it can and can't do, both in naive implementations, followed by native partitioning, and finally by horizontally sharding our dataset at the application level to reside on multiple DB nodes. It can be done, but that's not really what I'm trying to get from this. I asked a very specific question about designing a solution using a NoSQL solution, since it's very hard to find examples of such designs out there. Brainlag, I'm not trying to come off as rude. I actually appreciate a lot that you are the only one who even bothered to respond, but I see it over and over again: people ask questions and others assume they have no idea what they're talking about and give an irrelevant answer. Please ignore RDBMS; the question is about NoSQL.
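    By way of illustration, here is one possible Cassandra-style model for these requirements. This is only a minimal sketch in CQL; the table and column names are my own hypothetical choices, not anything from the original question:

        -- Answers "show me all my pending gifts" and supports per-day expiry:
        -- day-bucketed rows let old data be deleted a day at a time.
        CREATE TABLE gifts_by_recipient (
            recipient_id uuid,
            sent_day     text,   -- e.g. '2012-06-15'
            gift_token   uuid,
            sender_id    uuid,
            status       text,   -- 'pending' | 'accepted' | 'declined'
            PRIMARY KEY (recipient_id, sent_day, gift_token)
        );

        -- Answers "who have I sent gifts to today" (the daily send limit).
        CREATE TABLE gifts_by_sender_day (
            sender_id    uuid,
            sent_day     text,
            recipient_id uuid,
            PRIMARY KEY ((sender_id, sent_day), recipient_id)
        );

    Status changes become cheap single-row writes, which suits an update-heavy workload better than invalidating a cache in front of MySQL.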


  • Sql Table Refactoring Challenge

    I've been working a bit on cleaning up a large table to make it more efficient. I pretty much know what I need to do at this point, but I figured I'd offer up a challenge for my readers, to see if they can catch everything I have as well as to see if I've missed anything. So to that end, I give you my table:

        CREATE TABLE [dbo].[lq_ActivityLog](
            [ID] [bigint] IDENTITY(1,1) NOT NULL,
            [PlacementID] [int] NOT NULL,
            [CreativeID] [int] NOT NULL,
            [PublisherID] [int] NOT NULL,
            [CountryCode] [nvarchar](10) NOT NULL,
            [RequestedZoneID] [int] NOT NULL,
            [AboveFold] [int] NOT NULL,
            [Period] [datetime] NOT NULL,
            [Clicks] [int] NOT NULL,
            [Impressions] [int] NOT NULL,
            CONSTRAINT [PK_lq_ActivityLog2] PRIMARY KEY CLUSTERED (
                [Period] ASC, [PlacementID] ASC, [CreativeID] ASC, [PublisherID] ASC,
                [RequestedZoneID] ASC, [AboveFold] ASC, [CountryCode] ASC)
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    And now some assumptions and additional information:
      - The table currently has 200,000,000 rows.
      - PlacementID ranges from 1 to 5000 and should support at least 50,000.
      - CreativeID ranges from 1 to 5000 and should support at least 50,000.
      - PublisherID ranges from 1 to 500 and should support at least 50,000.
      - CountryCode is a 2-character ISO standard (e.g. US) and there is a country table with an integer ID already. There are < 300 rows.
      - RequestedZoneID ranges from 1 to 100 and should support at least 50,000.
      - AboveFold has values of -1, 0, or 1 only.
      - Period is a date (no time).
      - Clicks range from 0 to 5000.
      - Impressions range from 0 to 5,000,000.
    The table is currently write-mostly. Its primary purpose is to log advertising activity as quickly as possible. Nothing in the rest of the system reads from it except for batch jobs that pull the data into summary tables.
    Design Goals: This table has been in use for about 5 years and has performed very well during that time. The only complaints we have are that it is quite large, and also that there are occasionally timeouts for queries that reference it, particularly when batch jobs are pulling data from it. Any changes should be made with an eye toward keeping write performance optimal while trying to reduce space and improve read performance / eliminate timeouts during read operations.
    Refactor: There are, I suggest to you, some glaringly obvious optimizations that can be made to this table. And I'm sure there are some ninja tweaks known to SQL gurus that would be a big help as well. I'll post my own suggested changes in a follow-up post; for now, feel free to comment with your suggestions.
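    For what it's worth, here is one plausible refactoring along the lines the assumptions invite. This is my sketch, not the author's follow-up post; the column sizing follows directly from the stated ranges:

        CREATE TABLE [dbo].[lq_ActivityLog](
            -- Drop the [ID] bigint IDENTITY: it is not in the clustered key and
            -- nothing reads it, so it only adds 8 bytes per row.
            [PlacementID]     [int]      NOT NULL,  -- must reach 50,000+, so int stays
            [CreativeID]      [int]      NOT NULL,
            [PublisherID]     [int]      NOT NULL,
            [CountryID]       [smallint] NOT NULL,  -- FK to the existing country table (< 300 rows);
                                                    -- replaces nvarchar(10): up to 20 bytes -> 2
            [RequestedZoneID] [int]      NOT NULL,
            [AboveFold]       [smallint] NOT NULL,  -- only -1, 0, 1
            [Period]          [date]     NOT NULL,  -- date only; datetime wastes 5 bytes per row
            [Clicks]          [smallint] NOT NULL,  -- 0 to 5,000 fits in 2 bytes
            [Impressions]     [int]      NOT NULL,  -- 0 to 5,000,000 still needs int
            CONSTRAINT [PK_lq_ActivityLog2] PRIMARY KEY CLUSTERED (
                [Period] ASC, [PlacementID] ASC, [CreativeID] ASC, [PublisherID] ASC,
                [RequestedZoneID] ASC, [AboveFold] ASC, [CountryID] ASC)
        ) ON [PRIMARY]

    Row compression (ALTER TABLE ... REBUILD WITH (DATA_COMPRESSION = ROW)) and partitioning by [Period] would be the next candidates to evaluate for the read timeouts, at the cost of some write overhead.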


  • Design pattern to handle queries using multiple models

    - by coderkane
    I am presented with a dilemma while trying to redesign the class structure for my PHP/MySQL application to make it more elegant and conform to the SOLID principles. The problem goes like this: Let us assume there is an abstract class called Person which has certain properties to define a generic person, such as name, age, date of birth, etc. There are two classes, Student and Teacher, that extend this abstract class. They add their own unique properties to it. I have designed all three classes to include all the operational logic (details of which are not relevant in the context of the question). Now I need to create views/reports/data grids which contain details from multiple classes. For example, say, a list of all students doing projects in Chemistry mentored by a teacher whose name is the parameter to the query. This is just one example of a view; there are many different views in the application which use data from 3-4 tables, and each of them has multiple input parameters. Considering this particular example, I have written the relevant query using JOIN and the results are as expected and proper. Now here is the dilemma: keeping in mind the single responsibility principle, where should I keep this query? It does not belong to either the Student class, or the Teacher class, or any other class currently present.
      a) Should I create a new class, say a DataView class, design it as an MVC pattern, and keep the query there? What about the other views? How do they fit into this architecture?
      b) Should I not keep the query in code at all, and make it a DB view?
      c) Am I completely wrong in my approach? If so, what is the right approach?
    My considerations are as follows:
      a) It should be easy to add new views later on if a requirement comes up, without having to copy-paste-modify code.
      b) I would like to make it as loosely coupled as possible, so that minor DB structure changes do not break it.
    I did Google searches on report design and OOP report generators, but all the results seem to focus on the visual design of the report rather than fetching the data. I have already taken care of the visual aspect of the report using MVC with HTML templates. I am sure this is a very fundamental problem with a known solution, but I am somehow not able to find it (maybe I am searching with the wrong keywords).
    Edit 1: Modified the title to make it more relevant.
    Edit 2: The accepted answer got me thinking in the right direction and helped me identify my design flaws, which eventually led me to find this question and the solution on Stack Overflow, which gave me the detailed answer to clear the confusion.
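    As an illustration of option (a), each view could get its own small query class that depends only on the connection and its input parameters. A hedged sketch; the class, table, and column names here are hypothetical, not from the question:

        <?php
        // Hypothetical sketch: one query class per report/view, so neither
        // Student nor Teacher has to own cross-model queries.
        class StudentsByMentorReport
        {
            private $db;

            public function __construct(PDO $db)
            {
                $this->db = $db;
            }

            // Returns plain rows for the (already MVC-templated) view layer.
            public function fetch($teacherName, $subject)
            {
                $sql = "SELECT s.name AS student, t.name AS teacher, p.title
                        FROM students s
                        JOIN projects p ON p.student_id = s.id
                        JOIN teachers t ON t.id = p.mentor_id
                        WHERE t.name = :teacher AND p.subject = :subject";
                $stmt = $this->db->prepare($sql);
                $stmt->execute(array(':teacher' => $teacherName, ':subject' => $subject));
                return $stmt->fetchAll(PDO::FETCH_ASSOC);
            }
        }

    A schema change then touches only the affected report classes, which addresses consideration (b).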


  • JDK bug migration milestone: JIRA now the system of record

    - by darcy
    I'm pleased to announce that the OpenJDK bug database migration project has reached a significant milestone: the JDK has switched from the legacy Sun "bugtraq" system to a new internal JIRA instance as the system of record for our bug tracking. This completes the initial phase of the previously described plan of getting OpenJDK onto an externally visible and writable bug tracker. The identities contained in the current system include recognized OpenJDK contributors. The bug migration effort to date has been sizable in multiple dimensions. There are around 140,000 distinct issues imported into the JDK project of the JIRA instance, nearly 165,000 if backport issues to track multiple-release information are included. Separately, the Code Tools OpenJDK project has its own JIRA project populated with several thousand existing bugs. Once the OpenJDK JIRA instance is externalized, approved OpenJDK projects will be able to request the creation of a JIRA project for issue tracking. There are many differences in the schema used to model bugs between the legacy bug system and the schema for the new JIRA projects. We've favored simplifications to the existing system where possible and, after much discussion, we've settled on five main states for the OpenJDK JIRA projects:
      - New
      - Open
      - In progress
      - Resolved
      - Closed
    The Open and In-progress states can have a substate Understanding field set to track whether an issue has its "Cause Known" or "Fix understood". In the Closed state, a Verification field can indicate whether a fix has been verified, unverified, or has failed. At the moment, there will be very little externally visible difference between JIRA for OpenJDK and the legacy system it replaces. One difference is that bug numbers for newly filed issues in the JIRA JDK project will be 8000000 and above. If you are working with JDK Hg repositories, update any local copies of jcheck to the latest version, which recognizes this expanded bug range. (The bug numbers of existing issues have been preserved on the import into JIRA.) Relatively soon, we plan for the pages published on bugs.sun.com to be generated from information in JIRA rather than in the legacy system. When this occurs, there will be some differences in the page display, and the terminology used will be revised to reflect JIRA usage, such as referring to the "component/subcomponent" of an issue rather than its "category". The exact timing of this transition will be announced when it is known. We don't currently have a firm timeline for externalization of the JIRA system. Updates will be provided as they become available. However, that is unlikely to happen before JavaOne next week!


  • Java Embedded Releases

    - by Tori Wieldt
    Oracle today announced a new product in its Java Platform, Micro Edition (Java ME) product portfolio: Oracle Java ME Embedded 3.2, a complete client Java runtime optimized for resource-constrained, connected, embedded systems. Oracle is also releasing Oracle Java Wireless Client 3.2 and Oracle Java ME Software Development Kit (SDK) 3.2, and announced Oracle Java Embedded Suite 7.0 for larger embedded devices, providing a middleware stack for embedded systems. Small is the new big!
    Introducing Oracle Java ME Embedded 3.2: Oracle Java ME Embedded 3.2 is designed and optimized to meet the unique requirements of small embedded, low-power devices such as microcontrollers and other resource-constrained hardware without screens or user interfaces. These include:
      - On-the-fly application downloads and updates
      - Remote operation, often in challenging environments
      - Ability to add new capabilities without impacting the existing functions
      - Support for hardware with as little as 130 kB RAM and 350 kB ROM
    Oracle Java Wireless Client 3.2: Oracle Java Wireless Client 3.2 is built around an optimized Java ME implementation that delivers a feature-rich application environment for mass-market mobile devices. This new release:
      - Leverages standard JSRs, Oracle optimizations/APIs and a flexible porting layer for device-specific customizations, which are tuned to device/chipset requirements
      - Supports advanced tooling functions, such as memory and network monitoring and on-device tooling
      - Offers new support for dual SIM functionality, which is highly useful for mass-market devices supported by multiple carriers with multiple phone connections
    Oracle Java ME SDK 3.2: Oracle Java ME SDK 3.2 provides a complete development environment for both Oracle Java ME Embedded 3.2 and Oracle Java Wireless Client 3.2. Available for download from OTN, the latest version includes:
      - Small embedded device support
      - In-field and remote administration and debugging
      - Java ME SDK plug-ins for Eclipse and the NetBeans Integrated Development Environment (IDE), enabling more application development environments for Java ME developers
      - A new device skin creator that developers can use to generate custom device skins for testing their applications
    Oracle Java Embedded Suite 7.0: The Oracle Java Embedded Suite is a new packaged solution from Oracle (including Java DB, GlassFish for Embedded Suite, the Jersey Web Services Framework, and the Oracle Java SE Embedded 7 platform), created to provide value-added services for collecting, managing, and transmitting data to and from other embedded devices. The Oracle Java Embedded Suite is a complete device-to-data-center solution subset for embedded systems.
    See Java ME and Java Embedded in Action: Java ME and Java Embedded technologies will be showcased for developers at JavaOne 2012 in over 60 conference sessions and BOFs, as well as in the JavaOne Exhibition Hall. For business decision makers, the new event Java Embedded @ JavaOne is where you can learn more about Java Embedded technologies and solutions.


  • 4.8M wasn't enough so we went for 5.055M tpmc with Unbreakable Enterprise Kernel r2 :-)

    - by wcoekaer
    We released a new set of benchmark results today. One is an update to the TPC-C result from a few months ago, where we had just over 4.8M tpmC at $0.98/tpmC; we have now updated it to 5.05M tpmC at $0.89/tpmC. The other is related to Java middleware performance. You can find the press release here. Now, I don't want to talk about the actual relevance of the benchmark numbers, as I am not on the benchmark team. I want to talk about why these numbers and these efforts, unrelated to what they mean to your workload, matter to customers. The actual benchmark effort is a very big, long, expensive undertaking where many groups work together as a big virtual team. Having the virtual team be within a single company of course helps tremendously... We already start with a very big server setup: tons of storage, many disks, lots of RAM, lots of CPUs, cores, threads, and large database setups. Getting the whole setup going to start tuning is, by itself, no easy task, but then the real fun starts with tuning the system for optimal performance -and- stability. A benchmark is not just revving an engine at high RPM; it's actually hitting the circuit. The tests require long runs, and require surviving availability tests, such as surviving crashes -and- recovery under load. In the TPC-C example, the x4800 system had 4 TB of RAM, 160 threads (8 sockets, hyperthreaded, 10 cores/socket), tons of storage attached, and tons of LUNs visible to the OS, flash storage and non-flash storage... many things at high scale that all have to be perfectly synchronized. During this process we find bugs, we fix bugs, we find performance issues, we fix performance issues, we find interesting potential features to investigate for the future, we start new development projects for future releases, and all of this goes back into the products. As more and more customers are running Oracle Linux on larger and larger, faster and faster, more mission-critical, higher-availability databases..., these things are just absolutely critical. Unrelated to anyone's specific opinion about TPC-C or TPC-H or SPECjEnterprise etc., there is a ton of effort there that the customer benefits from. All this work makes Oracle Linux and/or Oracle Solaris better platforms. Whether it's faster, more stable, more scalable, or more resilient: it helps. Another point that I always like to reiterate around UEK and UEK2: we have our kernel source git repository online, with the complete changelog of the mainline kernel plus our changes. It is easy to pull, easy to dissect, easy to know what went in when, why, and where. No need to log into a website and manually click through pages to hopefully discover changes or patches. No need to untar two tarballs and run a diff.
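    In practice that means the usual git workflow applies. A quick sketch; the repository URL below is illustrative only, the actual location is published on oss.oracle.com:

        # Clone the UEK kernel source tree (URL illustrative)
        git clone git://oss.oracle.com/git/linux-uek.git
        cd linux-uek

        # Full changelog: mainline history plus Oracle's changes
        git log --oneline

        # What changed in a given subsystem, and when
        git log --stat -- drivers/block/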


  • Alfa AWUS036H USB wireless adapter not recognized

    - by GFiasco
    The Alfa AWUS036H USB wireless adapter will not be recognized by my netbook (Ubuntu 14.04, Asus X201E). As I understand it, the drivers should already be built into this version of Linux, but I tried a make/make install of the latest Realtek drivers (as mentioned on How do I install drivers for the Alfa AWUS036H USB wireless adapter?) and it didn't work. I then followed the advice of this thread (ALFA AWUS036NH driver) and did a make/make install of the most up-to-date backport of the drivers, but that didn't work either. At this point I tried a series of commands from this thread (http://ubuntuforums.org/showthread.php?t=2187780) in an attempt to identify the problem, but at no point could I get the laptop to recognize the USB adapter. I have also troubleshot the USB cable itself, tried both the USB 2.0 and 3.0 ports on the laptop, have never received an error message regarding a need to update the firmware, and have seemingly successfully installed all manner of variations of Realtek drivers which were supposed to make the adapter work. (I also tried to delete/clean up after each install, in the hope I wasn't making things worse.) Not sure what I should do next. Please let me know if I need to post any more information. Thanks very much for your help.
    EDIT: Before inserting the Alfa USB adapter:

        :~$ lsusb
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 004: ID 0bda:570c Realtek Semiconductor Corp.
        Bus 001 Device 026: ID 13d3:3393 IMC Networks
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 003 Device 002: ID 03f0:3112 Hewlett-Packard
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    After inserting the Alfa USB adapter (USB 3.0 port, no change):

        :~$ lsusb
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 004: ID 0bda:570c Realtek Semiconductor Corp.
        Bus 001 Device 026: ID 13d3:3393 IMC Networks
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 003 Device 002: ID 03f0:3112 Hewlett-Packard
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    Ran tail -f /var/log/syslog, inserted the device, no recognition (the last entry is dated 16:17:01, an hour ago). Going to check on another Ubuntu 14.04 laptop and a Windows XP desktop; I'll update after. Thanks for your help to this point.
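    For reference, the AWUS036H is built on Realtek's RTL8187 chipset, which the in-kernel rtl8187 driver should handle. A few hedged checks one could run (the commands are standard; output will of course vary):

        # Kernel messages right after plugging the adapter in
        dmesg | tail -n 20

        # Is the in-kernel driver present and loaded?
        modinfo rtl8187
        lsmod | grep rtl8187

        # Load it manually if necessary
        sudo modprobe rtl8187

        # If the adapter enumerates, a wireless interface should appear
        iwconfig

    Since the adapter never shows up in lsusb at all, though, the failure here looks like it is at the USB enumeration level rather than the driver level.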


  • Saddling your mountain lion with JDeveloper

    - by Blueberry Coder
    Last October, Apple released Java Update 2012-006. This patch brought the Apple-provided JDK for OS X Lion v10.7 and OS X Mountain Lion v10.8 to version 1.6.0_37. At the same time, it disabled the Apple Java plugins and removed the Java Preferences panel that enabled users to manage the various Java releases on their computer. On the Windows and Linux platforms, JDeveloper 11g R1 has been certified to run on Java 7 since patch set 5. This is not the case on OS X. (The above is not a typo. Apple's OS for personal computers is now known as OS X; the « Mac » prefix was dropped with the 10.8 release. And it's pronounced « Oh-Ess-Ten », by the way. Yes, I am a nitpicker. I know...) Please note that JDeveloper 11g R2 is not certified either. On any platform. It will generally work, but there are known issues with ADF Mobile. Personally, I would recommend waiting for 12c before going to JDK 7. Now, suppose you have installed Oracle's JDK 7 on your Mac. JDeveloper will not run on it. It will not even install. Susan and I discovered this the hard way while setting up the ADF Mobile hands-on lab we ran at the UKOUG 2012 conference. The lab was a great success nevertheless, attracting nearly a hundred delegates. It was great to see the interest ADF Mobile already generates, especially among PL/SQL developers and DBAs. But what did we do to make it work? While Java Update 2012-006 removed the Java Preferences panel, it left in place OS X's command-line Java infrastructure. Thus, it is possible to invoke the Apple JDK 6 to start the JDeveloper installer. Suppose your user is named « Fred », and that the JDeveloper installer is on your desktop. You can execute the following command in a terminal window (on a single line) to start the installer:

        /usr/libexec/java_home --version 1.6.0 --exec java -jar /Users/Fred/Desktop/jdevstudio11116install.jar

    The JDeveloper installer, being provided a valid JDK reference, will set up the IDE and embedded WebLogic Server instance accordingly. Clever engineering at its finest!
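    If you want to confirm what java_home can resolve before running the installer, the same launcher will list every installed JVM (standard flags on OS X; output varies by machine):

        # List all installed JVMs the system knows about
        /usr/libexec/java_home -V

        # Print the home directory of the matching 1.6 JVM
        /usr/libexec/java_home --version 1.6.0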


  • Enabling EUS support in OUD 11gR2 using command line interface

    - by Sylvain Duloutre
    Enterprise User Security (EUS) allows Oracle Database to use users and roles stored in LDAP for authentication and authorization. Since the 11gR2 release, OUD natively supports EUS. EUS can be easily configured during OUD setup, and ODSM (the graphical admin console) can also be used to enable EUS for a new suffix. However, enabling EUS for a new suffix using the command line interface is currently not documented, so here is the procedure. Let's assume that EUS support was enabled during initial setup, and let o=example be the new suffix I want to use to store enterprise users. The following sequence of commands must be applied for each new suffix:

        // Create a local database holding EUS context info
        dsconfig create-workflow-element --set base-dn:cn=OracleContext,o=example --set enabled:true --type db-local-backend --element-name exampleContext -n

        // Add a workflow element in the call path to generate on the fly attributes required by EUS
        dsconfig create-workflow-element --set enabled:true --type eus-context --element-name eusContext --set next-workflow-element:exampleContext -n

        // Add the context to a workflow for routing
        dsconfig create-workflow --set base-dn:cn=OracleContext,o=example --set enabled:true --set workflow-element:eusContext --workflow-name exampleContext_workflow -n

        // Add the new workflow to the appropriate network group
        dsconfig set-network-group-prop --group-name network-group --add workflow:exampleContext_workflow -n

        // Create the local database for o=example
        dsconfig create-workflow-element --set base-dn:o=example --set enabled:true --type db-local-backend --element-name example -n

        // Create a workflow element in the call path to the user data to generate on the fly attributes expected by EUS
        dsconfig create-workflow-element --set enabled:true --set eus-realm:o=example --set next-workflow-element:example --type eus --element-name eusWfe

        // Add the db to a workflow for routing
        dsconfig create-workflow --set base-dn:o=example --set enabled:true --set workflow-element:eusWfe --workflow-name example_workflow -n

        // Add the new workflow to the appropriate network group
        dsconfig set-network-group-prop --group-name network-group --add workflow:example_workflow -n

        // Add the appropriate ACIs for EUS
        dsconfig set-access-control-handler-prop \
            --add global-aci:'(target="ldap:///o=example")(targetattr="authpassword")(version 3.0; acl "EUS reads authpassword"; allow (read,search,compare) userdn="ldap:///??sub?(&(objectclass=orclservice)(objectclass=orcldbserver))";)'

        dsconfig set-access-control-handler-prop \
            --add global-aci:'(target="ldap:///o=example")(targetattr="orclaccountstatusevent")(version 3.0; acl "EUS writes orclaccountstatusenabled"; allow (write) userdn="ldap:///??sub?(&(objectclass=orclservice)(objectclass=orcldbserver))";)'

    Last but not least, you must adapt the content of the ${OUD}/config/EUS/eusData.ldif file with your suffix value, then import it into OUD.


  • eSTEP Newsletter for October 2013 now available

    - by uwes
    Dear Partners,
    We would like to let you know that the October '13 issue of our Newsletter is now available. The issue contains information on the following topics:
      - Oracle Open World Summary
      - Oracle Cloud: Oracle Engineered Systems
      - Oracle Database and Middleware
      - Oracle Applications and Software as a Service
      - Oracle Industries
      - Oracle Partners and the "Internet of Things"
      - JavaOne News
      - MySQL News
      - Corporate News
      - Create Your HR Strategic Vision at Oracle HCM World
      - Oracle Database Protection Redefined
      - A Preview: Oracle Database Backup Logging Recovery Appliance
      - Oracle closed Tekelec acquisition
      - Congratulations to ORACLE TEAM USA!
    Tech section:
      - Oracle's SPARC M6
      - Oracle SuperCluster M6-32 - Oracle's Most Scalable Engineered System
      - Oracle Multitenant on SPARC Servers and Oracle Solaris
      - Oracle Database 12c Enterprise Edition: Plug into the Cloud
      - Oracle In-Memory Database Cache
      - Oracle Virtual Compute Appliance
      - New Benchmark-Results published (Sept. 2013)
      - Video Interview: Elasticity, the Biggest Challenge Facing Data Centers Today
    Tech blog:
      - Announcing New Sun Storage 2500-M2 Drives
      - SPARC Product Line Update
      - ZFS RAID Calculator v6
      - What ships with ODA X3-2?
      - Tech Article: Oracle Multitenant on SPARC Servers and Oracle Solaris
      - New release of Sun Rack II capacity calculator available
      - Announcing: Oracle Solaris Cluster Product Bulletin, September 2013
    Learning & events:
      - Planned TechCasts: Quarterly Partner Update
      - Live Webcast: Simplify and Accelerate Oracle Database deployment with Oracle VM Templates
      - Join us for OTN's Virtual Developer Day - Harnessing the Power of Oracle WebLogic and Oracle Coherence
      - Learn from OOW 2013 what is going on in Virtualization
      - How to Implementing Early Arriving Facts in ODI, Part I and Part II: Proof of Concept Overview
      - Multi-Factor Authentication in Oracle WebLogic: using multi-factor authentication to protect web applications deployed on Oracle WebLogic
      - If Virtualization Is Free, It Can't Be Any Good—Right?
      - Looking beyond System/HW: SOA and User Interfaces, overcoming the challenges to developing user interfaces in a service-oriented architecture
    References:
      - Vodafone Romania Improves Business Agility and Customer Satisfaction, with 10x Faster Business Intelligence Delivery and 12x Faster Processing
      - Emirates Integrated Telecommunications Captures 47% Market Share in a Competitive Market, Thanks to 24/7 Availability
      - Home Credit and Finance Bank Accelerates Getting New Banking Products to Market
    Extra:
      - A Conversation with Java Champion Johan Vos
    You can find the Newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the PIN below to get access.
        URL: http://launch.oracle.com/
        PIN: eSTEP_2011
    Previously published Newsletters can be found under the Archived Newsletters section, and more useful information under the Events, Download and Links tabs. Feel free to explore, and any feedback is appreciated to help us improve the service and information we deliver.
    Thanks and best regards,
    Partner HW Enablement EMEA


  • But what version is the database now?

    - by BuckWoody
    When you upgrade your system to SQL Server 2008 R2, you'll know that the instance is at that version by using the standard commands like SELECT @@VERSION or EXEC xp_msver. My system came back with this info when I typed those:

        Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (Intel X86)
        Apr  2 2010 15:53:02
        Copyright (c) Microsoft Corporation
        Developer Edition on Windows NT 6.0 <X86> (Build 6002: Service Pack 2) (Hypervisor)

        Index  Name                  Internal_Value  Character_Value
        1      ProductName           NULL            Microsoft SQL Server
        2      ProductVersion        655410          10.50.1600.1
        3      Language              1033            English (United States)
        4      Platform              NULL            NT INTEL X86
        5      Comments              NULL            SQL
        6      CompanyName           NULL            Microsoft Corporation
        7      FileDescription       NULL            SQL Server Windows NT
        8      FileVersion           NULL            2009.0100.1600.01 ((KJ_RTM).100402-1540 )
        9      InternalName          NULL            SQLSERVR
        10     LegalCopyright        NULL            Microsoft Corp. All rights reserved.
        11     LegalTrademarks       NULL            Microsoft SQL Server is a registered trademark of Microsoft Corporation.
        12     OriginalFilename      NULL            SQLSERVR.EXE
        13     PrivateBuild          NULL            NULL
        14     SpecialBuild          104857601       NULL
        15     WindowsVersion        393347078       6.0 (6002)
        16     ProcessorCount        1               1
        17     ProcessorActiveMask   1               1
        18     ProcessorType         586             PROCESSOR_INTEL_PENTIUM
        19     PhysicalMemory        2047            2047 (2146934784)
        20     Product ID            NULL            NULL

    But a database's properties are separate from the instance's. After an upgrade, you always want to make sure that the compatibility level (which has much to do with how NULLs and other objects are treated) is at what you expect. For the most part, as long as the application can handle it, I set my compatibility levels to the latest version. For SQL Server 2008, that was "10.0" or "10". You can do this with the ALTER DATABASE command, or you can just right-click the database and select "Properties" and then "Database Options" in SQL Server Management Studio. To check the database compatibility level, I use this query:

        SELECT name, cmptlevel FROM sys.sysdatabases

    When I did that this morning, I saw that the databases (all of them) were at 10.0, not 10.5 like the instance. That's expected: we didn't revise the database format up with the instance for this particular release. Didn't want to catch you by surprise on that. While your databases should be at the "proper" level for your situation, you can't rely on the compatibility level to indicate the instance level. More info on the ALTER DATABASE command in SQL Server 2008 R2 is here: http://technet.microsoft.com/en-us/library/bb510680(SQL.105).aspx
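    As a side note, sysdatabases is a SQL Server 2000-era compatibility view; on 2005 and later the same check can be written against sys.databases, and the level can be changed in T-SQL as well (the database name below is just an example):

        -- Check compatibility level per database (2005+ catalog view)
        SELECT name, compatibility_level FROM sys.databases;

        -- Raise one database to the SQL Server 2008 level (100)
        ALTER DATABASE MyDatabase SET COMPATIBILITY_LEVEL = 100;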


  • Big Companies Influence Retail in 2010

    - by David Dorf
    From a retail industry perspective, 2010 will go down as the year mobile went mainstream, the economy recovered from the crash, and Facebook surpassed Google as the most influential online property. While the economy certainly had the biggest impact on the retail industry, a few big companies also exerted influence. Here's a rundown and a look back at 2010:
    Apple -- Steve Jobs and company continued to lead the mobile pack. Consumers are using their iPhones to shop, retailers are using the iPod Touch for mobile checkout, and both are embracing the iPad as the next wave of technology. (The Next Technology from Apple; Mobile Platforms in Retail; Apple Stores, Touch2Systems, and the iPad)
    Google -- Not to be outdone, Google's Android platform grew faster than Apple's, plus it supports QR codes natively and will probably beat Apple to NFC. Google Checkout, Product Search, and Boutiques.com continue to impact the e-commerce scene. (Google Leverages Like.com)
    Facebook -- While the movie The Social Network certainly made Facebook a household name, Connect, Places, and seeing the "like" button all over the Web really pushed Facebook everywhere. 2010 set the foundations for f-commerce. (Facebook Participatory Promotions; Crowd Savers; What's the value of a Facebook fan?; Step Aside Google; Leveraging Social Networks for Retail; Social Shopping at Nine West)
    Groupon -- This newcomer executed on a simple concept flawlessly, making them the fastest company to reach $1B in revenue. (See the chart from Silicon Alley Insider.) Google's offer of $5-6B wasn't enough, so now they are raising an additional $1B in funding, presumably to buy up all the copycats across the globe. (Changing the Way We Shop)
    Amazon -- As if leading the e-commerce charge wasn't enough, Amazon shook things up with its purchase of Woot and the release of its Price Checker mobile app. It continues to push boundaries with the Kindle, and doesn't seem worried about the iPad at all. (You Can't Win on Price; Amazon Looks at Your Social Graph)
    eBay -- Acquiring Skype didn't exactly work out, but eBay's purchases of PayPal and RedLaser are driving the company forward. They are still a major force. (Bump the Bill)
    Oracle, SAP, HP, IBM, and Cisco left their marks on the retail industry as well with various acquisitions and CxO shake-ups. We'll just have to wait and see what 2011 brings next.


  • ATG Live Webcast Nov. 29th: Endeca "Evolutionizes" E-Business Suite

    - by Bill Sawyer
    If you have ever wanted any of the following within Oracle E-Business Suite:
      - Complete Data View
      - Advanced Searching Across Organizations and Flexfields
      - Advanced Visualization including Charts, Metrics, and Cross Tabs
      - Guided Navigation
    then you might want to attend this webcast to learn more about Oracle Endeca's integration with Oracle E-Business Suite. Oracle Endeca includes an unstructured data correlation and analytics engine, together with catalog search and guided navigation capabilities. This webcast focuses on the details behind Oracle Endeca's integration with Oracle E-Business Suite and demonstrates how you can extend the use of Oracle Endeca into other areas of Oracle E-Business Suite.
    Date: Thursday, November 29, 2012
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenter: Osama Elkady, Senior Director
    Webcast Registration Link (preregistration is optional but encouraged)
    To hear the audio feed:
        Domestic Participant Dial-In Number: 877-697-8128
        International Participant Dial-In Number: 706-634-9568
        Additional International Dial-In Numbers Link
        Dial-In Passcode: 103192
    To see the presentation, the direct access web conference details are:
        Website URL: https://ouweb.webex.com
        Meeting Number: 595335921
    If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay, and you can find our archive of past webcasts and training here. If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.


  • DTLoggedExec 1.1.2008.4 Released!

    - by Davide Mauri
    Today I've released the latest version of my DTExec replacement tool, DTLoggedExec. The main changes are the following:
      - Used a new strategy for version numbers; they now follow the pattern Major.Minor.TargetSQLServerVersion.Revision.
      - Added support for Auto Configurations.
      - Fixed a bug that reported an incorrect number of errors and warnings to Log Providers.
      - Fixed a bug that prevented correct casting of values when using the /Set and /Param options.
      - Errors and warnings are now counted more precisely.
      - Updated database and log import scripts to categorize logs by projects and sections, e.g. Project: MyBIProject; Sections: Staging, Datawarehouse.
      - Removed unused report stored procedures from the database.
      - Updated samples: 12 samples are now available to show ALL DTLoggedExec features.
      - From this version on, only SSIS 2008 will be supported.
    http://dtloggedexec.codeplex.com/releases/view/62218
    It's useful to say something more about a couple of specific points:
    From this version on, only SSIS 2008 will be supported. Yes, Integration Services 2005 is not supported anymore. The latest version capable of running SSIS 2005 packages is 1.0.0.2.
    Updated database and log import scripts to categorize logs by projects and sections. When you import a log file, you can now assign it to a project and to a section of that project. This makes it easier to gather statistical information for an entire project or a subsection of it, and it also allows you to store logged data of packages belonging to different projects in the same database.
    Updated samples. A complete set of samples that shows how to use all DTLoggedExec features is now shipped with the product. Enjoy!
    Added support for Auto Configurations. This point will have a post of its own, since it's quite important and is by far the biggest new feature introduced in this release. To explain it in a few words: you don't need to waste time with complex DTS configuration files or options, since a package will configure itself automatically. You just need to write a single statement as a parameter for DTLoggedExec. This feature can simplify deployment *a lot* :)
    In the next few days I'll write the mentioned post on Auto Configurations and update the documentation available on the DTLoggedExec website: http://dtloggedexec.davidemauri.it/MainPage.ashx


  • MatheMagics - Guess My Age Method 1

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved
    MatheMagic - Guess My Age - Method 1
    The Mathemagician stands on the stage and asks an adult to do the following:
      - Do the next few steps on your calculator, or the calculator in your phone, or even on a piece of paper.
      - Do it silently! Don't tell me the results until I ask for them directly.
      - Compute a single-digit multiple of 9 - any one of 9, 18, 27, all the way to 81, will do.
      - Now multiply your age by 10.
      - Subtract the multiple of 9 from this number.
      - Tell me the result.
    Notice that I don't know which multiple of 9 you subtracted from 10 times your age. I will nonetheless immediately tell you what your age is. How do I do this? Let's do the algebra: 10X - 9Y = 10X - 10Y + Y = 10(X - Y) + Y. Now remember, you asked an adult, so his/her age is a two-digit number (maybe even 3 digits), thus reducing 10 times the age by a single-digit multiple of nine still leaves a positive result; the lowest it can be is 100 - 81, which yields 19. Now make two numbers out of the result: the last digit, and the number before it. That number is X - Y, the age minus the single digit you selected; the last digit is that very single digit. This is always so, regardless of the digit you selected. So... add this digit to the other number and you get back the age! Q.E.D.
    Example: I am 76 years old, and here is what happens when I do the steps:
      76 x 10 = 760
      760 - 18 = 742, made of 74 and 2. My age is 74 + 2 = 76.
      760 - 81 = 679, made of 67 and 9. My age is 67 + 9 = 76.
    A note to the socially aware mathemagician: it is safer to do it with a man. The chances of a veracious answer are much, much higher! The trick may be performed on any 2- or 3-digit number, not just one's age, but if you want to know your date's age, it's a good way to elicit it.
    That's All Folks
    PS: for more ageless "Age" mathemagics go to www.mgsltns.com/games.htm and also here: http://geekswithblogs.net/PointsToShare/archive/2011/11/15/mathemagics---guess-my-age---method-2.aspx
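    Written out as a formula (X is the age, Y the single digit whose multiple of 9 was subtracted), the identity behind the trick is:

        \[
        10X - 9Y \;=\; 10X - 10Y + Y \;=\; 10(X - Y) + Y, \qquad 1 \le Y \le 9,
        \]

    so the last digit of the result is exactly Y, the remaining digits form X - Y, and adding the two pieces gives (X - Y) + Y = X, the age.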


  • Why is my dual-boot Ubuntu partition showing up as a peripheral "root.disk"?

    - by Don
    I recently installed Ubuntu 12.04, which I had been booting from a USB key, as a dual-boot on my machine running Windows 7. From what I had read online while researching, I was prepared to have to shrink the Windows partition and all that. But I never had to: it really was just a few clicks here and there and it was installed. I'm still pretty confused about it, but whatever, it worked, and the two peacefully coexist on my machine, and I have broken things to fix before I worry about fixing unbroken things. So yesterday I got it in my head to look at my partitions (I was considering making an all-new partition to install the Windows 8 Release Preview). What I saw confused me. Here's a screenshot of the disk utility. At this moment, there is nothing connected to my computer, and nothing in any of the optical drives/ports/card readers/etc. Can you help me figure out what's going on here? "Don's Machine" is, I believe, my Windows partition; that's the name I assigned my machine from Windows Explorer. PQSERVICE is, from what I can find online, also Windows', but having to do with backup. And SYSTEM REQUIRED, if I browse it in Ubuntu, is definitely something to do with booting, and I believe it is also Windows'. According to the sizes shown, those three together should use up my 500 GB HD. Then further down, as a "peripheral device", it lists that 31 GB disk. This is obviously my Ubuntu (Model: Linux Loop: root.disk), but why is it showing up as a peripheral? So, to sum up those questions and to add some more random ones I had:
      - Why is Ubuntu showing up as a peripheral device?
      - If the Windows sections take up all 500 GB, where does Ubuntu live?
      - If I renamed the disk partitions, would my life become a nightmare (seriously, can I safely rename them)?
      - Why didn't I have to resize the Windows partition in the first place?
      - Would giving Ubuntu more space improve its performance (it hangs a lot)?
      - Is it possible to have a partition for each OS (Windows 7 & 8, Ubuntu), a partition for files, and a separate partition for backups? Is this towards the good or bad idea end of the spectrum?
    @Elfy, would that explain why it keeps hanging? I guess I'll back up my files, rip it out, and reinstall it correctly later on today.


  • A starting point for Use Cases and User Stories

    - by Mike Benkovich
    Originally posted on: http://geekswithblogs.net/benko/archive/2013/07/23/a-starting-point-for-use-cases-and-user-stories.aspx
    Software is a challenging business and is rife with opportunities to go wrong. Over the years a number of methodologies have evolved to help make sure that things go right. In an effort to contribute to this I've created a list of user stories that I think should be included and sometimes are just assumed. Note this is a work in progress, so I'm looking for your feedback. I'm curious what you would add or change in my list.
      - As a DBA I am working with a normalized data model that reflects an agreed-upon logical model for the system
      - As a DBA I am using consistent names for my fields which match the naming standards of my organization
      - As a DBA my model supports simple CRUD operations against all the entities
      - As an Application Architect the UI has been validated against the business requirements and a complete set of user stories has been created
      - As an Application Architect the database model has been validated against the UI
      - As an Application Architect we have a logical business model that describes all the known and/or expected usage of the system during the software's expected lifecycle
      - As an Application Architect we have a deployment diagram that describes how the application components will be deployed
      - As an Application Architect we have a navigation diagram that describes the typical application flow
      - As an Application Architect we have identified points of interaction which describe how the UI interacts with the services and the data storage
      - As an Application Architect we have identified external systems which may now or in the future use the data of this application and have adapted the logical model to include these interactions
      - As an Application Architect we have identified existing systems and tools that can be extended and/or reused to help this application achieve its business goals
      - As a Project Manager all team members understand the goals of each release and iteration as they are planned
      - As a Project Manager all team members understand their role and the roles of others
      - As a Project Manager we have support of the business to do the right thing even if it is not the expedient thing
      - As a Test/QA Analyst we have created a simulation environment for testing the system which does not use sensitive data and accurately reflects the scenarios of all the data that will be supported by the system
      - As a Test/QA Analyst we have identified the matrix of supported clients used to access the system, including the likely browsers, mobile devices, and other interfaces to work with the application
      - As a Test/QA Analyst we have created exit criteria for each user story that match the requirements of the business story that was used to create them
      - As a Test/QA Analyst we have access to a test environment that is isolated from production and staging environments
      - As a Test/QA Analyst we have a way to reset the environment so we can rerun tests when a new version of the software becomes available
      - As a Test/QA Analyst I am able to automate portions of the test process
    Thoughts? -mike


  • Should EICAR be updated to test the revision of Antivirus system?

    - by makerofthings7
    I'm posting this here since programmers write viruses, and AV software. They also have the best knowledge of heuristics and how AV systems work (cloaking, etc.). The EICAR test file was used to functionally test an antivirus system. As it stands today, almost every AV system will flag EICAR as being a "test" virus. For more information on this historic test virus, please click here. Currently the EICAR test file is only good for testing the presence of an AV solution, but it doesn't check for engine file or DAT file up-to-dateness. In other words, why do a functional test of a system that could have definition files that are more than 10 years old? With the increase of zero-day threats it doesn't make much sense to functionally test your system using EICAR alone. That being said, I think EICAR needs to be updated/modified to be an effective test that works in conjunction with an AV management solution. This question is about real-world testing without using live viruses, which was the intent of the original EICAR. To that end, I'm proposing a new EICAR file format with the appendage of an XML blob that would conditionally cause the antivirus engine to respond:

        X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-EXTENDED-ANTIVIRUS-TEST-FILE!$H+H*
        <?xml version="1.0"?>
        <engine-valid-from>2010-1-1Z</engine-valid-from>
        <signature-valid-from>2010-1-1Z</signature-valid-from>
        <authkey>MyTestKeyHere</authkey>

    In this sample, the antivirus engine would only alert on the EICAR file if both the signature and the engine file are equal to or newer than the valid-from dates. Also, there is a passcode that restricts the usage of EICAR to the system administrator. If you have a background in "Test Driven Design" (TDD) for software, you may get that all I'm doing is applying the principles of TDD to my infrastructure. Based on your experience and contacts, how can I make this idea happen?
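    Purely for illustration, generating such a file on a test machine would be trivial; note that the extended format below is this question's hypothetical proposal, not a recognized standard:

        # Write the proposed extended EICAR test file (hypothetical format)
        printf '%s\n' \
          'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-EXTENDED-ANTIVIRUS-TEST-FILE!$H+H*' \
          '<?xml version="1.0"?>' \
          '<engine-valid-from>2010-1-1Z</engine-valid-from>' \
          '<signature-valid-from>2010-1-1Z</signature-valid-from>' \
          '<authkey>MyTestKeyHere</authkey>' > eicar-extended.txt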


  • EBS Accounts Payables Customer Advisory

    - by cwarticki
    Blogging to let you know of an important set of Oracle Payables patches that were released for R12.1 customers. Accounts Payable Customer Advisory:
    Dear Valued Oracle Support Customer,
    Since the release of R12.1.3, a number of recommended Payables patches have been made available as standalone patches, to help address important business process incidents. Adoption of these patches is highly recommended. To further facilitate adoption of these Payables patches, Oracle has consolidated them into a single Recommended Patch Collection (RPC). The RPC is a collection of recommended Payables patches created with the following goals in mind:
      - Stability: Help address issues that are identified by Oracle Development and Oracle Software Support that may interfere with the normal completion of important business processes such as period close.
      - Root Cause Fixes: Help make available root cause fixes for data integrity issues that may delay period close, normal invoice flow and other business actions.
      - Compact: Keep the file footprint as small as possible to help facilitate the install process and minimize testing.
      - Granular: A collection of patches based on functional area that allows customers to apply, based on their individual needs and goals, all three RPCs at once or in phases.
    Payables:
      - The new AP RPC (14273383:R12.AP.B) has all data corruption root cause fixes known to date plus tons of other crucial fixes (Note: 1397581.1).
      - Companion must-have RPCs:
          Note: 1481221.1: R12.1: Payments Recommended Patch Collection (IBY RPC), August 2012
          Note: 1481235.1: R12.1: E-Business Tax Recommended Patch Collection (ZX RPC), August 2012
          Note: 1481222.1: R12.1: Sub Ledger Accounting (SLA) Recommended Patch Collection (XLA RPC), August 2012
      - This time we beat the system far harder in testing and it held up remarkably well. We could not trigger any data corruption events in the Invoice Cancel/Discard flow (the #1 generator), nor could we cause orphan events in the system. Therefore this is very good code.
    Financials:
      - ALL FIN modules now have RPCs; the full listing is in (Note: 954704.1).


  • Enterprise Manager 12c: New DSS Demos Available

    - by Javier Puerta
    Enterprise Manager Cloud Control 12c Application Replay Demo Now Available!
    We are pleased to announce the availability of the Oracle Application Replay demo that showcases some of the key capabilities of performing realistic, production-scale testing of your web and packaged Oracle applications. This demo specifically focuses on capturing production web traffic from an E-Business Suite application and replaying the captured workload on a test E-Business Suite application to assess the impact of an application infrastructure change on the workload. The target audiences are application developers, quality assurance teams, IT managers and production control staff that deal in day-to-day change management activities and troubleshooting of production environments.
    Demo highlights:
      - Enterprise Manager 12c workflows for capturing application workload
      - Seamless integration of Application Replay with Real User Experience Insight for application workload capture
      - Enterprise Manager 12c centralized workflows for replaying captured application workloads in a test environment
      - Demonstrates how to minimize risk when deploying a complex E-Business Suite application infrastructure change
      - Rich reporting capability for performance analysis and problem detection
    User Experience Monitoring with Enterprise Manager Cloud Control 12c and Real User Experience Insight 12R1 Now Available!
    We are pleased to announce the availability of the Oracle Real User Experience Insight demo that showcases some of the key capabilities of user experience monitoring. This demo specifically focuses on business reporting, integrated performance diagnostics, tracking of customer journeys through RUEI's user-flow tracking capabilities, and its Key Performance Indicators tracking and configuration.
    Demo highlights:
      - Application-centric dashboard
      - Integration with Oracle Enterprise Manager 12c: JVMD, ADP and BTM
      - Session diagnostics and user session replay
      - Monitoring through "Key Performance Indicators" (KPI): create alerts/incidents
      - FUSION Application centric dashboards & integrated BI
    Oracle Enterprise Manager Cloud Control 12c: Database Management Packs demo upgrade
    DSS is pleased to announce an upgrade to the Oracle Enterprise Manager Cloud Control 12c: Database Management Packs demo. While retaining the content from the initial release of the demo (Diagnostic and Tuning Packs, Test Data Management and Data Masking, and Real Application Testing), the demo now includes a new Data Masking for Real Application Testing scenario.
    Demo features:
      - Diagnostic and Tuning Packs
      - SQL Performance Analyzer
      - Database Replay
      - Data Masking
      - Masking Real Application Testing workloads
      - Testing pending Optimizer statistics
      - Test Data Management


  • Nvidia GT218 repository drivers don't work

    - by user1042840
    I upgraded all packages with the sudo apt-get upgrade command on my Ubuntu 10.04 box, and I have Ubuntu 12.04 (3.2.0-29-generic-pae) now. I have two monitors and the following GPU:

        01:00.0 VGA compatible controller: NVIDIA Corporation GT218 [NVS 300] (rev a2)

    After upgrading to 12.04, I somehow lost my previous setup with one common workspace stretched across two monitors. When Ubuntu starts, only one monitor is on. I can see the message on the active monitor: "Not optimum mode. Recommended mode: 1680x1050 60Hz". I used Nvidia proprietary drivers on 10.04, but now jockey-text --list shows:

        xorg:nvidia_current - NVIDIA accelerated graphics driver (Proprietary, Disabled, Not in use)
        xorg:nvidia_current_updates - NVIDIA accelerated graphics driver (post-release updates) (Proprietary, Enabled, Not in use)

    When I run sudo nvidia-settings, it says "You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run `nvidia-xconfig` as root), and restart the X server." I typed nvidia-xconfig and rebooted, but jockey-text --list says the same after the reboot: Not in use. The same with nvidia-current: Enabled but Not in use. I also tried nvidia-173, but I ended up in a tty immediately at startup, so I removed it. I used to have some problems with Nvidia proprietary drivers on 10.04 (I had to put paths to EDID files in /etc/X11/xorg.conf explicitly), but the resolution was as recommended and both monitors were working. If I understand correctly, nouveau drivers are used now by default, because the resolution is still quite high, definitely not 800x600. xrandr showed:

        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 320 x 400, current 1600 x 1200, maximum 1600 x 1200
        default connected 1600x1200+0+0 0mm x 0mm
           1600x1200       66.0*
           1280x1024       76.0
           1024x768        76.0
           800x600         73.0
           640x480         73.0
           640x400          0.0
           320x400          0.0
           1680x1050_60.00 (0x4f)  146.2MHz
             h: width 1680 start 1784 end 1960 total 2240 skew 0 clock 65.3KHz
             v: height 1050 start 1053 end 1059 total 1089 clock 60.0Hz

    However, colors seem a bit faded and blurry with the nouveau drivers, the mouse cursor is invisible if it's placed inside the Firefox window, and only one monitor is working. I like open source, and if it's possible I'd prefer to use the nouveau drivers, but a few things should be fixed. I'm also curious why the nvidia-current drivers from the repository don't work now. I read it has something to do with the new X server in Ubuntu 12.04; is that true? How can I get it back to work?


  • Announcing Key Functional White Papers for SIM and ReIM

    - by Oracle Retail Documentation Team
    Oracle Retail has published two new documents on My Oracle Support (https://support.oracle.com) that provide partners and retailers with deeper functional information about two products: Oracle Retail Store Inventory Management (SIM) and Oracle Retail Invoice Matching (ReIM).
    Oracle Retail Store Inventory Management Item Configuration White Paper (Doc ID 1507221.1)
    There is functionality within the Store Inventory Management system related to item configuration that spans multiple concepts that apply to the application as a whole, rather than to a specific area. This white paper covers numerous topics around item configuration, including:
      - Item Transaction Levels
      - Item Long Description
      - Pack Size
      - Standard Unit of Measure
      - Standard Unit of Measure Conversion
      - Pack Items
      - Simple Pack Conversion Items (Notional Packs)
      - Ranging Items
      - Item Status
      - Non-Sellable Items
      - Type-2 Item Recognition
      - UPC-E Barcodes
      - Non-Inventory Items
      - Consignment and Concession Items
      - Quick Response Codes
    Oracle Retail Invoice Matching Financial Transactions (Doc ID 1500209.1)
    This document explains the financial transactions that are posted by Oracle Retail Invoice Matching (ReIM). The scope of the document is limited to ReIM transactions only; it does not explain Retail Merchandising System (RMS), Finance, or Accounts Receivable transactions. ReIM follows the double-entry accounting standard, which works by recording the debit and credit of each financial transaction belonging to each party involved. Each transaction means a profit to one account (debit) and a loss to another account (credit). Full invoice match processing is completed in ReIM, with payment recommendations communicated to Oracle Accounts Payable. ReIM matches merchandise orders and receipts against merchandise invoices, performing automated and manual matching, as well as discrepancy-resolution processing. Matched invoices are posted to interface staging tables specifying the amount and date to pay, vendor, site ID, General Ledger Chart of Accounts (GL CoA) information, and payment terms. Other payables documents, including debit memos, credit memos and credit notes, are also interfaced to Accounts Payable through the ReIM staging tables (IM_AP_STAGE_HEAD and IM_AP_STAGE_DETAIL). For information about how ReIM engages in this processing, see the latest Oracle Retail Invoice Matching Operations Guide. Certain ReIM transactions are not interfaced to Oracle Payables, but instead are interfaced to Oracle General Ledger through the IM_FINANCIAL_STAGE table. When analyzing transactions posted through the staging tables, retailers should note the transaction type, Standard/Credit, as well as the sign in the amount field. Technically, a negative sign on a credit transaction changes the transaction to a debit entry, and vice versa. This document is concerned with the financial meaning of the transactions, and will avoid a discussion of negative numbers in T-charts.

    Read the article

  • Ubuntu 10.10 not recognizing external hard drive

    - by sr71
    Installed Ubuntu 10.10 on a hard drive by itself (no Windows). All updates are up to date. Everything works fine except when I plug in a 320 GB Toshiba external hard drive: it is not recognized. When I plug in an 8 GB flash drive, it is recognized with no problem.

    What do I mean by "not recognized"? First, how do you know it was not recognized? You can check with the command cat /proc/partitions in a terminal, with and without the HDD attached. If you see some difference, it's OK. If the difference is something like sda1 (sd + letter + number), then there is a partition on it; maybe it just can't be handled by Ubuntu (no filesystem on it, for example). If the difference is only something like sda (sd + letter, but no number after it), then the drive itself is detected but no partition has been created on it. You can also check the kernel messages with the dmesg command in the terminal. If there is no disk/partition/anything new in /proc/partitions, there can be a USB-level problem, and you can issue the lsusb command, as was suggested before my answer. It really is a matter of what you mean by "not recognized".

    @Marco Ceppi @Oli @Pitto By "not recognized" I meant that when I plug in an 8 GB flash drive, an icon immediately appears on the desktop indicating that a flash drive is plugged in, and I can click on it and view its files. It also shows up in Nautilus, the default file manager. When I plug in the 320 GB Toshiba external hard drive, I get no indication on the desktop or in the file manager (thus "not recognized"). When I run the command "cat/proc/partitions/" I get the error message Could not open location 'file:///home/bob/cat/proc/partitions', with or without the external hard drive attached. With the dmesg command I get about 10 pages of info with no mention of any disk or partition. When I run the lsusb command I get the following:

        bob@bob-desktop:~$ lsusb
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 003: ID 04b3:3003 IBM Corp. Rapid Access III Keyboard
        Bus 003 Device 002: ID 04b3:3004 IBM Corp. Media Access Pro Keyboard
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        bob@bob-desktop:~$
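
    For reference, here is the check from the answer above as a minimal shell sketch. Note that cat /proc/partitions has to be run in a terminal; entering it in the file manager's location bar is exactly what produces the Could not open location 'file:///...' error quoted above:

        # Compare the partition list before and after plugging the drive in.
        cat /proc/partitions > /tmp/parts-before
        # ... now plug in the external drive and wait a few seconds ...
        cat /proc/partitions > /tmp/parts-after
        diff /tmp/parts-before /tmp/parts-after   # a new sdb / sdb1 line means the disk was detected
        dmesg | tail -n 30                        # kernel messages from the hot-plug event
        lsusb                                     # the Toshiba drive should appear as a new Bus/Device entry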

    Read the article

  • Segmentation and Targeting: Your Tools for Personalizing the Online Customer Experience

    - by Christie Flanagan
    In order to deliver the kind of personalized and engaging online experiences that customers expect today, look to segmentation and targeting. Segmentation is the practice of dividing your site visitors into distinct groups based on shared characteristics or behavior: for example, a segment may consist of site visitors who have visited pages related to a certain product type, or of visitors within the same age group or geographic area. The idea is that those within a segment are more likely to have common needs, problems, or interests that can be served by your business. Targeting is the process by which the most relevant content, whether an article, promotion, or other piece of content, is delivered to your visitors based on their segment membership.

    Segmentation and targeting are used to drive greater engagement on your web presence by delivering content to your site visitors that is tailored to their interests, behavior, or other attributes. You may have a number of different goals for your segmentation and targeting efforts:

        Up-sell or cross-sell to your customers
        Conduct A/B testing on your offers and creative
        Offer discounts, promotions, or other incentives for the time and duration that you specify
        Make it easier to find relevant information about products and services
        Create a premium content model

    There are two different approaches you can take toward segmentation and targeting for your online customer experience initiatives. The first is a more manual process, in which marketers determine which segments to create and which content to target to those segments. The benefit of this approach is that it gives marketers a high level of control over the whole process, which works well when you have a thorough understanding of your segments and of which content is most likely to serve their needs. Tools for marketer-managed segmentation and targeting are often built right into your WEM platform, as they are with Oracle WebCenter Sites. The downside is that the more segments and content you have, the more time-consuming and complicated it can be to manage manually.

    The second approach relies on predictive intelligence to automate the segmentation and targeting process, which allows optimization to occur in real time. This approach helps reduce the burden of manual segmentation and targeting and can surface new insights into segments that you may never have thought of on your own. It also gives you the capability to quickly test new offers and promotions on your site. Predictive segmentation and targeting can be achieved by using Oracle WebCenter Sites and Oracle Real-Time Decisions together.

    Get a taste for how Oracle WebCenter Sites and Oracle Real-Time Decisions combine to deliver powerful capabilities for predictive segmentation and targeting by watching this on-demand webcast introducing Oracle WebCenter Sites 11g or by reading IDC's take on the latest release of Oracle's web experience management solution. Be sure to return to the Oracle WebCenter blog on Thursday for a closer look at how to optimize the online customer experience using these two products together.

    Read the article

  • The Internet Key Wave MW833UP is not recognized in Ubuntu

    - by gio900
    I can't use my Onda MW833UP... :( Any advice? Here is something that someone else may understand. The distribution:

        ~$: lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 11.10
        Release:        11.10
        Codename:       oneiric

    The device:

        ~$: lsusb
        Bus 001 Device 005: ID 1ee8:0012

    The kernel messages:

        ~$: dmesg
        [   22.709475] cdc_acm 1-1:1.0: ttyACM0: USB ACM device
        [   22.714856] usbcore: registered new interface driver cdc_acm
        [   22.714866] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
        [   23.520490] ieee80211 phy0: wl_ops_bss_info_changed: arp filtering: enabled true, count 1 (implement)
        [   24.244530] usbcore: registered new interface driver usbserial
        [   24.244575] USB Serial support registered for generic
        [   24.244673] usbcore: registered new interface driver usbserial_generic
        [   24.244681] usbserial: USB Serial Driver core
        [   24.265879] USB Serial support registered for GSM modem (1-port)
        [   24.285680] usbcore: registered new interface driver option
        [   24.285691] option: v0.7.2:USB Driver for GSM modems
        [   24.425878] EXT4-fs (sda9): re-mounted. Opts: errors=remount-ro,commit=600
        [   24.736540] EXT4-fs (sda8): re-mounted. Opts: commit=600
        [   35.705796] Easy slow down manager: checking for SABI support.
        [   35.706002] Easy slow down manager: SABI is supported (f5189)
        [   36.060099] usbcore: deregistering interface driver uvcvideo
        [  139.508061] CE: hpet increased min_delta_ns to 20113 nsec
        [ 6798.378917] usb 1-1: USB disconnect, device number 5
        [ 6809.108232] usb 1-1: new high speed USB device number 6 using ehci_hcd
        [ 6809.242692] scsi5 : usb-storage 1-1:1.0
        [ 6810.241257] scsi 5:0:0:0: CD-ROM    Onda    Datacard CD-ROM  0001 PQ: 0 ANSI: 0
        [ 6810.241841] scsi 5:0:0:1: Direct-Access    Onda    Storage  0001 PQ: 0 ANSI: 0
        [ 6810.271410] sr0: scsi3-mmc drive: 0x/0x caddy
        [ 6810.272099] sr 5:0:0:0: Attached scsi CD-ROM sr0
        [ 6810.272852] sr 5:0:0:0: Attached scsi generic sg1 type 5
        [ 6810.279954] sd 5:0:0:1: [sdb] Attached SCSI removable disk
        [ 6810.281210] sd 5:0:0:1: Attached scsi generic sg2 type 0
        [ 6810.380591] sr0: CDROM (ioctl) error, command: Xpwrite, Read disk info 51 00 00 00 00 00 00 00 02 00
        [ 6810.380617] sr: Sense Key : Hardware Error [current]
        [ 6810.380625] sr: Add. Sense: No additional sense information
        [ 6810.613937] sr0: CDROM (ioctl) error, command: Xpwrite, Read disk info 51 00 00 00 00 00 00 00 02 00
        [ 6810.613972] sr: Sense Key : Hardware Error [current]
        [ 6810.613984] sr: Add. Sense: No additional sense information
        [ 6810.673716] usb 1-1: USB disconnect, device number 6
        [ 6815.572142] usb 1-1: new high speed USB device number 7 using ehci_hcd
        [ 6815.706828] cdc_acm 1-1:1.0: ttyACM0: USB ACM device

    The last 3 lines are where I inserted the Internet key, then reconnected it. The usb-devices output:

        T:  Bus=01 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#=  7 Spd=480 MxCh= 0
        D:  Ver= 2.00 Cls=ef(misc ) Sub=02 Prot=01 MxPS=64 #Cfgs=  1
        P:  Vendor=1ee8 ProdID=0012 Rev=00.01
        S:  Manufacturer=Onda
        S:  Product=MW833UP
        S:  SerialNumber=9230B35D870F9CB7AE684EACC5C12BE5EC33B26E
        C:  #Ifs= 2 Cfg#= 1 Atr=a0 MxPwr=500mA
        I:  If#= 0 Alt= 0 #EPs= 1 Cls=02(commc) Sub=02 Prot=01 Driver=cdc_acm
        I:  If#= 1 Alt= 0 #EPs= 2 Cls=0a(data ) Sub=00 Prot=00 Driver=cdc_acm

    Then there is /dev/ttyACM0. That is about everything that might be meaningful about what happens when the key is connected to the USB port...
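
    The dmesg output above shows the usual ZeroCD behaviour: the key first attaches as a fake CD-ROM plus USB storage, throws CDROM (ioctl) errors, then disconnects and re-attaches as a cdc_acm modem on /dev/ttyACM0. A hedged sketch of one common way forward, assuming this 1ee8:0012 device behaves like other ZeroCD dongles; the usb_modeswitch eject option below is a guess for this particular model, not a verified recipe:

        # Install the mode-switching tool and a simple dialer.
        sudo apt-get install usb-modeswitch wvdial
        # If the key is stuck in CD-ROM mode, try the standard eject sequence
        # so it switches to modem mode (-K is an assumption for this device).
        sudo usb_modeswitch -v 1ee8 -p 0012 -K
        # Once /dev/ttyACM0 exists, probe it and write a baseline dialer config.
        sudo wvdialconf /etc/wvdial.conf
        # Edit /etc/wvdial.conf with the operator's APN / dial string, then:
        sudo wvdial

    Since cdc_acm already claims the device in the log above, it may even be enough to skip the mode switch entirely and just point NetworkManager or wvdial at /dev/ttyACM0.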

    Read the article

< Previous Page | 680 681 682 683 684 685 686 687 688 689 690 691  | Next Page >