Search Results

Search found 4396 results on 176 pages for 'low poly'.


  • Los Angeles Department of Building & Safety Lowers Customer Service Costs with Oracle WebCenter

    - by Kellsey Ruppel
    Register Now for this Webcast. The Los Angeles Department of Building & Safety (LADBS) is one of the largest construction permitting departments in the country, serving over 350,000 walk-in and 530,000 phone customers, and issuing over 110,000 permits worth $3 billion every year. LADBS needed a way to migrate walk-in and phone transactions to customer self-service, so they turned to Oracle WebCenter and teamed with Oracle Partner 3Di to deliver a customer self-service portal that lowers the cost of their customer service operation while increasing customer satisfaction. Attend this Webcast to learn how Oracle WebCenter has allowed LADBS to: deliver a state-of-the-art customer self-service portal; reduce traffic on high-cost, low-satisfaction customer service channels; and integrate business workflows and legacy applications. Wednesday, November 14, 2012, 10 a.m. PT / 1 p.m. ET. Presented by: Giovani Dacumos, Director of Systems, Los Angeles Department of Building & Safety; Jing Reyes, Applications Development Group Manager, Los Angeles Department of Building & Safety; Rajiv Desai, CEO, 3Di; and Sheetal Paranjpye, Project Manager, 3Di.

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the most exciting areas, and one that has seen tremendous growth in the last few years, is Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are: Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard); Additional Storage Options for Snap Clone (includes support for the database feature CloneDB); Improved Rapid Start Kits; Extensible Metering and Chargeback; and Miscellaneous Enhancements. 1. Comprehensive Database Service Catalog. Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the cloud computing space, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are: Service Catalogs: Defining Standardized Database Service, and High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]. EM12c has come with an out-of-the-box service catalog and self service portal since release 1, providing customers the following benefits: present a collection of standardized database service definitions; define standardized pools of hardware and software for provisioning; role-based access to cater to different classes of users; automated procedures to provision the predefined database definitions; setup of chargeback plans based on service tiers and database configuration sizes; etc. Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration: standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites); the standby databases can be single instance, RAC, or RAC One Node databases; multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software; the standby databases can be in either mount or read-only (requires the Active Data Guard option) mode; all database versions from 10g to 12c are supported (as certified with EM 12c); all three protection modes can be used - Maximum Availability, Maximum Performance, and Maximum Protection; and log apply can be set to sync or async along with the required apply lag. The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combination from the table below (all supported in EM 12cR4):

        Primary | Standby [1 or more]
        SI      | -
        SI      | SI
        RAC     | -
        RAC     | SI
        RAC     | RAC
        RON     | -
        RON     | RON

    where RON = RAC One Node, which is supported via custom post-scripts in the service template. A sample service catalog would look like the image below.
    Here we have defined 4 service levels, deployed across 2 data centers, with 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c. 2. Additional Storage Options for Snap Clone. In my previous blog posts, I have described the snap clone feature in detail. Essentially, it provides a storage-agnostic, self-service, rapid, and space-efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%) while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued a dual solution approach: hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all the low-level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low-level actions. This delivers the benefits of database thin cloning without requiring you to drastically change your infrastructure or IT's operating style. In release 4, we expand the scope of options supported by snap clone with the addition of database CloneDB. While CloneDB is not a new feature - it was first introduced in the 11.2.0.2 patchset - it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the snap clone feature. For more information on CloneDB, I highly recommend reading the following sources: the blog by Tim Hall, Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2, and the Oracle OpenWorld presentation by CERN, Efficient Database Cloning using Direct NFS and CloneDB. The advantages of the new CloneDB integration with EM12c Snap Clone are: space and time savings; ease of setup - no additional software is required other than the Oracle database binary; works on all platforms; reduced dependence on storage administrators; a cloning process fully orchestrated by EM12c and delivered to developers/DBAs/QA testers via the self service portal; use of dNFS to deliver better performance, availability, and scalability over kernel NFS; and the complete lifecycle of the clones managed by EM12c - performance, configuration, etc. 3. Improved Rapid Start Kits. DBaaS deployments tend to be complex, and their setup requires a series of steps, typically performed across different users and different UIs. The Rapid Start Kit provides a single-command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS).
    One command creates all the cloud artifacts like Roles, Administrators, Credentials, Database Profiles, PaaS Infrastructure Zone, Database Pools and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and the use of RMAN image backups. The Rapid Start Kit in reality is a simple emcli script which takes a set of XML files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to set up PDBaaS end-to-end. The kit works both for Oracle's engineered systems like Exadata, SuperCluster, etc. and on commodity hardware. One can draw a parallel to the Exadata One Command script, which likewise takes a set of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software. Steps to use the kit: The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup. It can be run from this default location or from any server which has the emcli client installed. For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py. For Exadata, special integration is provided to reduce the number of inputs even further; the script to use for this scenario is dbaas/setup/exadata_cloud_setup.py. The database_cloud_setup.py script takes two inputs. Cloud boundary XML: this file defines the cloud topology in terms of the zones and pools, along with host names, Oracle home locations, or container database names that would be used as infrastructure for provisioning database services (this file is optional in the case of Exadata, as the boundary is well known via the Exadata system target available in EM). Input XML: this file captures inputs for users, roles, profiles, service templates, etc. - essentially, all inputs required to define the DB services and other settings of the self service portal. Once all the XML files have been prepared, invoke the script as follows for PDBaaS:

        emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml

    The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request databases from the portal. More information is available in the Rapid Start Kit chapter of the Cloud Administration Guide. 4. Extensible Metering and Chargeback. Last but not least, Metering and Chargeback in release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to: extend chargeback to any target type managed in EM; promote any metric in EM as a chargeback entity; extend the list of charge items via metric or configuration extensions; and model abstract entities like number of backup requests, job executions, support requests, etc. A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and assign cost centers, all via the command line. More information is available in the Chargeback API chapter of the Cloud Administration Guide. 5. Miscellaneous Enhancements. There are other miscellaneous, yet important, enhancements that are worth a mention.
    Most of these have been requested by customers like you. They are: Custom naming of DB services - self service users can provide custom names for DB SID, DB service, schemas, and tablespaces, and every custom name is validated for uniqueness in EM. 'Create like' of service templates - creating variants of a service template is now only a click away; this is vital when you publish service templates to represent different database sizes or service levels. Profile viewer - view the details of a profile (datafiles, control files, snapshot ids, export/import files, etc.) prior to its selection in the service template. Cleanup automation for failed and successful requests - a single emcli command cleans up all remnant artifacts of a failed request; cleanup can be performed on a per-request basis or for the entire pool, and as an extension you can also delete successful requests. Improved delete-user workflow - allows administrators to reassign cloud resources to another user or delete all of them. Support for multiple tablespaces for Schema as a Service - in addition to multiple schemas, users can also specify multiple tablespaces per request. I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck! References: Cloud Management Page on OTN; Cloud Administration Guide [Documentation]. -- Adeesh Fulay (@adeeshf)

    Read the article

  • Your Experience Platform

    - by David Dorf
    Crosstalk once again exceeded my expectations, improving upon last year's conference in terms of venue, knowledge sharing, and entertainment.  It's great to see the Oracle Retail family continue to grow, especially outside the US.  I had a great time talking to retailers, analysts, press, and colleagues from around the world. Because the economy, demographics, technology, etc. are constantly changing, retailers must always be evolving their business to capture the next market.  But it takes guts to change something that appears to be working, and it takes a bit of luck to get the timing right.  To a large extent, innovation is about "guts and luck." To help retailers innovate, Oracle Retail provides all the necessary software to create Your Experience Platform.  There is no "Oracle Experience Platform," as each retailer needs something different to deliver on their brand promise.  We provide the actionable insight, optimized operations, and connected interactions, but it's still up to the retailer to make it theirs. One such retailer is Masters, a home improvement retailer in Australia formed through a partnership between Woolworths and Lowe's.  Woolworths is an established retailer in Australia, so they are already close to their customers and able to understand their needs.  In Australia, 74% of dwellings are detached houses, and the population continues to "move up" into bigger and bigger homes. Masters is using Oracle Retail's software to create the experience platform that will deliver on their brand promise, which includes everyday low prices, a wide range of products, smarter self-service, and an inviting store environment.  The Oracle Retail software provides the foundation that allows them to rapidly deliver on this promise -- Masters is engineered for success.

    Read the article

  • How can I set external monitor as default?

    - by iJeeves
    I have connected an external monitor to my laptop through HDMI. Currently, either my desktop gets extended to the external monitor (at native resolution), or I get a low resolution on both when I choose "Same image in both". How can I ensure that the external monitor is used by default and the laptop monitor just blanks? I generated the xorg.conf file by doing: X -configure. The following is the content of the xorg.conf.new file generated in my user folder. Should I copy this anywhere? Should I edit the contents?

        Section "ServerLayout"
            Identifier     "X.org Configured"
            Screen      0  "Screen0" 0 0
            InputDevice    "Mouse0" "CorePointer"
            InputDevice    "Keyboard0" "CoreKeyboard"
        EndSection

        Section "Files"
            ModulePath   "/usr/lib/xorg/modules"
            FontPath     "/usr/share/fonts/X11/misc"
            FontPath     "/usr/share/fonts/X11/cyrillic"
            FontPath     "/usr/share/fonts/X11/100dpi/:unscaled"
            FontPath     "/usr/share/fonts/X11/75dpi/:unscaled"
            FontPath     "/usr/share/fonts/X11/Type1"
            FontPath     "/usr/share/fonts/X11/100dpi"
            FontPath     "/usr/share/fonts/X11/75dpi"
            FontPath     "/var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType"
            FontPath     "built-ins"
        EndSection

        Section "Module"
            Load  "glx"
            Load  "dri2"
            Load  "record"
            Load  "extmod"
            Load  "dbe"
            Load  "dri"
        EndSection

        Section "InputDevice"
            Identifier  "Keyboard0"
            Driver      "kbd"
        EndSection

        Section "InputDevice"
            Identifier  "Mouse0"
            Driver      "mouse"
            Option      "Protocol" "auto"
            Option      "Device" "/dev/input/mice"
            Option      "ZAxisMapping" "4 5 6 7"
        EndSection

        Section "Monitor"
            Identifier   "Monitor0"
            VendorName   "Monitor Vendor"
            ModelName    "Monitor Model"
        EndSection

        Section "Device"
            ### Available Driver options are:-
            ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
            ### <string>: "String", <freq>: "<f> Hz/kHz/MHz",
            ### <percent>: "<f>%"
            ### [arg]: arg optional
            #Option     "NoAccel"        # [<bool>]
            #Option     "SWcursor"       # [<bool>]
            #Option     "ColorKey"       # <i>
            #Option     "CacheLines"     # <i>
            #Option     "Dac6Bit"        # [<bool>]
            #Option     "DRI"            # [<bool>]
            #Option     "NoDDC"          # [<bool>]
            #Option     "ShowCache"      # [<bool>]
            #Option     "XvMCSurfaces"   # <i>
            #Option     "PageFlip"       # [<bool>]
            Identifier  "Card0"
            Driver      "intel"
            BusID       "PCI:0:2:0"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device     "Card0"
            Monitor    "Monitor0"
            SubSection "Display"
                Viewport   0 0
                Depth     1
            EndSubSection
            SubSection "Display"
                Viewport   0 0
                Depth     4
            EndSubSection
            SubSection "Display"
                Viewport   0 0
                Depth     8
            EndSubSection
            SubSection "Display"
                Viewport   0 0
                Depth     15
            EndSubSection
            SubSection "Display"
                Viewport   0 0
                Depth     16
            EndSubSection
            SubSection "Display"
                Viewport   0 0
                Depth     24
            EndSubSection
        EndSection
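
    In case it helps frame an answer: the runtime behaviour I want is roughly what the following xrandr sketch does (the output names LVDS1 and HDMI1 are from my machine and will likely differ on yours):

        # make the HDMI monitor the primary output at its native resolution,
        # and blank the laptop panel entirely
        xrandr --output HDMI1 --auto --primary --output LVDS1 --off

    What I don't know is how to make this the default at every boot.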

    Read the article

  • Rosegarden plugin list is empty

    - by Wes
    I've just installed Rosegarden through apt. I've also installed JACK and a low-latency kernel through a PPA, https://launchpad.net/~abogani/+archive/ppa. I've started the JACK server, and through the control panel I can see it's wiring things through to Rosegarden. I've also tried installing qsynth and dssi, both through apt. However, I can't see any plugins in the synth plugin list, so I'm unable to test whether this works. I've tried launching qsynth before Rosegarden, and I've tried a few other things, but I just can't see any plugins. Does anyone know how to get this to work? I'm using Ubuntu 11.04 or 11.10, I think. The commands I've run, in order:

        sudo apt-get install rosegarden -o APT::Install-Suggests=true
        synaptic
        sudo apt-get install synaptic
        synaptic
        sudo synaptic
        synaptic
        sudo apt-add-repository ppa:abogani/ppa
        sudo apt-get install linux-lowlatency
        sudo apt-get install dssi
        sudo apt-get install alsa-firmware-loaders alsa-tools alsa-tools-gui alsa-firmware
        sudo apt-get install alsa-firmware-loaders alsa-tools alsa-tools-gui
        sudo apt-get install blop caps cmt fil-plugins rev-plugins swh-plugins tap-plugins
        sudo apt-get install blepvco mcp-plugins omins

    Read the article

  • Should I use procedural animation?

    - by user712092
    I have started to make a fantasy 3D FPS swordplay game and I want to add animations. I don't want to animate everything by hand because it would take a lot of time, so I decided to use procedural animation. I would certainly use IK (starting with simply reaching for an object with a hand). I also expect procedural generation to leave fewer animations to do by hand (I can blend animations, etc.). I also want a planner for animation, which would simplify complex animations: those which can be split into a sequence - run and then jump, jump and then roll - or which are separable - legs running while the torso swings a sword. I want, for example, a character to chop the head off a big troll. If the troll crouches, the character would just chop his head off; if it is standing, he would climb onto the troll. I know that I would have to describe the state ("troll is low", "troll is high", "chop troll head", ...), which would imply which regions the animation operates in (if there is a gap between them, the character would jump), which would in turn imply where the character can place his legs and hands, or would select a predefined animation. My main goal is simplicity of coding, but I want my game to look cool too. Is it worthwhile to use procedural animation, or does it cause more trouble than it solves? (There can be a lot of twiddling...) I am using Blender Game Engine (therefore Python for scripting, and Bullet Physics). A sketch of the kind of IK building block I mean follows below.
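
    This is a minimal analytic two-bone IK solver in Python (the language I'd script BGE with); the 2D simplification and the bone lengths are my own assumptions, not BGE API calls:

        import math

        def two_bone_ik(l1, l2, tx, ty):
            """Return (shoulder, elbow) angles in radians so that a planar
            chain of bone lengths l1, l2 rooted at the origin reaches (tx, ty)."""
            d = math.hypot(tx, ty)
            d = min(d, l1 + l2 - 1e-9)  # clamp unreachable targets to full extension
            # law of cosines gives the elbow bend
            cos_elbow = (d * d - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
            elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
            # shoulder: aim at the target, corrected for the bent elbow
            shoulder = math.atan2(ty, tx) - math.atan2(
                l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
            return shoulder, elbow

        # e.g. an arm with two bones of length 1.0 reaching for a point
        print(two_bone_ik(1.0, 1.0, 1.2, 0.8))

    Blending this with canned animations (run, jump) would then be a separate layering problem.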

    Read the article

  • Desktop forgets theme?

    - by Marcelo Cantos
    I am running Ubuntu in VirtualBox (on a Windows 7 host). Several times now, the top-level menu bar, the task bar - and seemingly every system dialog - have forgotten the out-of-the-box "Ambiance" theme they conformed to when I first installed the system. Window captions still preserve the theme, but pretty much nothing else does. I have searched high and low on Google for assistance with this problem. Everything I've found suggests either running some gconf reset or deleting .gconf*, .gnome*, and other similar directories. I have followed all this advice and nothing works. I still get a boring Windows-95-style gray 3D look and feel. On previous occasions, after much messing around I've given up and rebooted the VM instance, and been pleasantly surprised to see the original "Ambiance" theme restored throughout the UI, but invariably it disappears again some time later, usually after a reboot, so I can never figure out what I did that broke it. Here's a sample from Ubuntu's site of what I want it to look like. And here's a screenshot of my system as it currently looks. Also note that my GNOME Terminals normally have a nice purple semi-translucent look, and as can be seen from the screenshot, they are now just a solid matte white. This last time (just yesterday), trying numerous combinations of all the usual tricks and rebooting several times hasn't fixed it, so here I am on SU wondering: how do I recover the out-of-the-box theme for my GNOME/Ubuntu desktop, noting that blowing away all config files - as suggested in many places online - fails to achieve this? It might help to know that it seems to fail either after I resize the VM instance, forcing the Ubuntu desktop to resize itself, or after I play around with Compiz settings. I haven't been able to figure out which of these it is, and it could be neither. Given the amount of pain I have had to go through to get things back to normal (and given that I am at a loss as to how to do so), it has proven difficult to definitively isolate the cause.
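
    For the record, the flavour of reset I keep attempting looks like this (I am assuming these are still the right gconf keys for the GNOME 2 / Ambiance era - that assumption may be exactly what's wrong):

        # force the theme keys back to Ambiance
        gconftool-2 --set /desktop/gnome/interface/gtk_theme --type string "Ambiance"
        gconftool-2 --set /apps/metacity/general/theme --type string "Ambiance"
        # or unset the interface settings entirely and let defaults reapply
        gconftool-2 --recursive-unset /desktop/gnome/interface

    Neither variant sticks any better than deleting the dot-directories did.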

    Read the article

  • Google music search: a better way to listen.

    - by anirudha
    Anyone who wants to listen to music online either pays a music store for the privilege or puts up with the bad, low-quality experience on YouTube - which is often illegal anyway, because the uploader has no right to upload the track, and there is no guarantee they won't overlay their own ads or degrade the quality. Forget YouTube and the rest: Google music search is much better. Just go there, search for the song by movie name or title, click, and listen. The quality is much better than the others - and the music itself is not hosted by Google; the results they show come from other websites. One thing does feel wrong in Google music search: if I search for "sajda", it never shows me results for "sadka", even though in everyday use the words sound the same and a song might be spelled either way. It would be better if they showed a "Did you mean 'sadka'?" link when I search for "sajda", just as many online bookstores show related keywords when you search for a book, and related products when you visit a product page. All in all, it is the better option for hassle-free, good-quality listening.

    Read the article

  • .NET development on Macs

    - by Jeff
    I posted the “exciting” conclusion of my laptop trade-ins and issues on my personal blog. The links, in chronological order, are posted below. While those posts have all of the details about performance and software used, I wanted to comment on why I like using Macs in the first place. It started in 2006 when Apple released the first Intel-based Mac. As someone with a professional video past, I had been using Macs on and off since college (1995 graduate), so I was never terribly religious about any particular platform. I’m still not, but until recently, it was staggering how crappy PCs were. They were all plastic, disposable, commodity crap. I could never justify buying a PowerBook because I was a Microsoft stack guy. When Apple went Intel, they removed that barrier. They also didn’t screw around with selling to the low end (though the plastic MacBooks bordered on that), so even the base machines were pretty well equipped. Every Mac I’ve had, I’ve used for three years. Other than that first one, I’ve also sold each one, for quite a bit of money. Things have changed quite a bit, mostly within the last year. I’m actually relieved, because Apple needs competition at the high end. Other manufacturers are finally understanding the importance of industrial design. For me, I’ll stick with Macs for now, because I’m invested in OS X apps like Aperture and the Mac versions of Adobe products. As a Microsoft developer, it doesn’t even matter though… with Parallels, I Cmd-Tab and I’m in Windows. So after three and a half years with a wonderful 17” MBP and upgraded SSD, it was time to get something lighter and smaller (traveling light is critical with a toddler), and I eventually ended up with a 13” MacBook Air, with the i7 and 8 gig upgrades, and I love it. At home I “dock” it to a Thunderbolt Display. A new laptop .NET development on a Retina MacBook Pro with Windows 8 Returning my MacBook Pro with Retina display .NET development on a MacBook Air with Windows 8

    Read the article

  • Organising data access for dependency injection

    - by IanAWP
    In our company we have a relatively long history of database-backed applications, but have only just begun experimenting with dependency injection. I am looking for advice about how to convert our existing data access pattern into one more suited to dependency injection. Some specific questions: Do you create one access object per table (given that a table represents an entity collection)? One interface per table? All of these would need the low-level data access object to be injected, right? And if there are dozens of tables, wouldn't that make the composition root a nightmare? Would you instead have a single interface that defines things like GetCustomer(), GetOrder(), etc.? If I take the example of Entity Framework, I would have one container that exposes an object for each table, but that container doesn't implement any interface itself, so it doesn't seem compatible with DI. What we do now, in case it helps: we normally manage data access through a generic data layer which exposes CRUD/transaction capabilities and has provider-specific subclasses which handle the creation of IDbConnection, IDbCommand, etc. Actual table access uses Table classes that perform the CRUD operations associated with a particular table and accept/return the domain objects that the rest of the system deals with. These Table classes expose only static methods, and utilise a static DataAccess singleton instantiated from a config file.
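
    To anchor the first option, here is a minimal sketch of one repository interface per entity, with the low-level access object injected - all the type names (IDataAccess, Customer, etc.) are hypothetical stand-ins for our real classes, not any particular library:

        // Stand-in for our generic low-level data layer.
        public interface IDataAccess
        {
            T QuerySingle<T>(string sql, object args);
            void Execute(string sql, object args);
        }

        public class Customer { public int Id; public string Name; }

        // One narrow interface per entity collection...
        public interface ICustomerRepository
        {
            Customer GetById(int id);
            void Save(Customer customer);
        }

        // ...and one implementation that receives the data layer via its constructor.
        public class SqlCustomerRepository : ICustomerRepository
        {
            private readonly IDataAccess _db;
            public SqlCustomerRepository(IDataAccess db) { _db = db; }

            public Customer GetById(int id)
            {
                return _db.QuerySingle<Customer>(
                    "SELECT Id, Name FROM Customers WHERE Id = @Id", new { Id = id });
            }

            public void Save(Customer customer)
            {
                _db.Execute("UPDATE Customers SET Name = @Name WHERE Id = @Id",
                            new { customer.Id, customer.Name });
            }
        }

    The composition root then registers dozens of such pairs, which is exactly the bloat I'm worried about.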

    Read the article

  • Comparing the Performance of Visual Studio's Web Reference to a Custom Class

    As developers, we all make assumptions when programming. Perhaps the biggest assumption we make is that the libraries and tools that ship with the .NET Framework are the best way to accomplish a given task. For example, most developers assume that using ASP.NET's Membership system is the best way to manage user accounts in a website (rather than rolling your own user account store). Similarly, creating a Web Reference to communicate with a web service auto-generates a proxy class, which handles the low-level details of invoking the web service, serializing parameters, and so on. Recently a client made us question one of our fundamental assumptions about the .NET Framework and web services by asking, "Why should we use the proxy class created by Visual Studio to connect to a web service?" In this particular project we were calling a web service to retrieve data, which was then sorted, formatted slightly, and displayed in a web page. The client hypothesized that it would be more efficient to invoke the web service directly via the HttpWebRequest class, retrieve the XML output, populate an XmlDocument object, then use XSLT to output the result to HTML. Surely that would be faster than using Visual Studio's auto-generated proxy class, right? Prior to this request, we had never considered rolling our own proxy class; we had always taken advantage of the proxy classes Visual Studio auto-generated for us. Could these auto-generated proxy classes be inefficient? Would retrieving and parsing the web service's XML directly be more efficient? The only way to know for sure was to test our client's hypothesis. Read More >
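
    For reference, the hand-rolled approach under test looks roughly like the sketch below - the service URL, stylesheet name, and the use of a plain GET are placeholders, error handling is omitted, and it assumes it runs inside an ASP.NET page (hence Response and Server):

        using System.Net;
        using System.Xml;
        using System.Xml.Xsl;

        // Invoke the web service directly, bypassing the generated proxy class.
        HttpWebRequest request =
            (HttpWebRequest)WebRequest.Create("http://example.com/Service.asmx/GetData");
        request.Method = "GET";

        using (WebResponse response = request.GetResponse())
        {
            // Parse the raw XML payload returned by the service.
            XmlDocument doc = new XmlDocument();
            doc.Load(response.GetResponseStream());

            // Transform straight to HTML for the page.
            XslCompiledTransform xslt = new XslCompiledTransform();
            xslt.Load(Server.MapPath("FormatResults.xslt"));   // hypothetical stylesheet
            xslt.Transform(doc, null, Response.Output);
        }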

    Read the article

  • ORA-600 Troubleshooting

    - by [email protected]
    Have you observed an ORA-600 or ORA-07445 reported in your alert log? The ORA-600 error is the generic internal error number for Oracle program exceptions. It indicates that a process has encountered a low-level, unexpected condition. The ORA-600 error statement includes a list of arguments in square brackets: ORA-600 "internal error code, arguments: [%s], [%s], [%s], [%s], [%s]". The first argument is the internal message number or character string. This argument and the database version number are critical in identifying the root cause and the potential impact to your system. The remaining arguments in the ORA-600 error text supply further information (e.g. values of internal variables). Looking for the best way to diagnose? There is an ORA-600 Troubleshooter Tool available in My Oracle Support. This tool leads you to applicable content in My Oracle Support; you can investigate the problem using the argument data from the error message, or pull out the first 10 or 15 stack pointers from the associated trace file to match against known bugs. Note 153788.1 ORA-600/ORA-7445 Troubleshooter. Note 1082674.1 A Video To Demonstrate The Usage Of The ORA-600/ORA-7445 Lookup Tool [Video]. Also, take a quick look at the Master Note for Diagnosing ORA-600 (MasterNoteORA600.docx) for some tips on diagnosing.

    Read the article

  • ATI Catalyst driver 12.8 is not using hardware acceleration on Precise

    - by Jack Wright
    I've been using Ubuntu and ATI Catalyst for years. On my clean install of Ubuntu 12.04 I've noticed that Catalyst 12.6 and then 12.8 are not actually using my HD5750 GPU for hardware acceleration - high CPU usage, zero GPU load. Everything installed correctly with no hassles; fglrxinfo and vainfo are correct as per this HowTo for Precise. I have an Ubuntu 10.04 installation with Catalyst 12.6 on the same hardware which does use the GPU - low CPU usage, high GPU load when transcoding video files or playing video content. The VA-API drivers are not installed on the 10.04 build; they are not mentioned in this HowTo for Lucid. fgl_glxgears frame rates on Precise are a fifth of the rates on Lucid.

        LUCID
        jw@Kworld:~$ fgl_glxgears
        Using GLX_SGIX_pbuffer
        16867 frames in 5.0 seconds = 3373.400 FPS
        12523 frames in 5.0 seconds = 2504.600 FPS
        13763 frames in 5.0 seconds = 2752.600 FPS

        PRECISE
        jw@NewWorld12:~$ fgl_glxgears
        Using GLX_SGIX_pbuffer
        12905 frames in 5.0 seconds = 2581.000 FPS
        3230 frames in 5.0 seconds = 646.000 FPS
        517 frames in 5.0 seconds = 103.400 FPS
        518 frames in 5.0 seconds = 103.600 FPS
        6489 frames in 5.0 seconds = 1297.800 FPS

    This is glxgears running fullscreen. In Lucid (10.04) I can't see the gears - they are spinning so fast - but in Precise (12.04) they are really sluggish. Has anyone else noticed a problem like this? Cheers, Jack.

    Read the article

  • User Acceptance Testing Defect Classification when developing for an outside client

    - by DannyC
    I am involved in a large development project in which we (a very small start-up) are developing for an outside client (a very large company). We recently received their first output from UAT testing of a fairly small iteration, which listed 12 'defects' triaged into three categories: Low, Medium and High. The issue we have is whether everything in this list should be recorded as a 'defect' - some of the issues they found would be better described as refinements or even 'nice-to-haves', and some we think are not defects at all. The client's QA lead says that it is standard for them to label every issue they identify as a defect; however, we are a bit uncomfortable about this. While the relationship is good we don't see a huge problem, but we are concerned that, if the relationship suffers in the future, these lists of 'defects' could prove costly for us. We don't want to come across as being difficult, or as taking things too personally, and we are happy to make all of the changes identified. Still, we are a bit concerned, especially as there is an uneven power balance at play in our relationship. Are we being paranoid here? Or could we be setting ourselves up for problems down the line by agreeing to this classification?

    Read the article

  • An Oracle Event for Your Facility & Equipment Maintenance Staff

    - by Mark Rosenberg
    The 7th Annual Oracle Maintenance Summit will take place February 4-6, 2013 at the Hyatt Regency San Francisco. This year, the Maintenance Summit will be one of the major pillars of the larger Oracle Value Chain Summit. What makes this event different from the other events hosted by Oracle and the PeopleSoft community's various user groups is that it is specifically meant to provide a venue for the facility and equipment maintenance community to talk about all things related to maintenance.  Maintenance Planners, Maintenance Schedulers, Vice Presidents and Directors of Physical Plant, Operations Managers, Craft Supervisors, IT management, and IT analysts typically attend this event and find it to be a very valuable experience. The Maintenance pillar will provide the same atmosphere and opportunity to hear from PeopleSoft Maintenance Management customers, Oracle Product Strategy, and partners as in past years.  For more information, you can access the registration website for the Value Chain Summit. For existing PeopleSoft Maintenance Management customers: if you are interested in participating in the PeopleSoft Maintenance Management Focus Group, in which Oracle discusses product roadmap topics with the community of customers who have licensed the PeopleSoft Maintenance Management application, please contact [email protected], [email protected], or [email protected]. The Focus Group will meet on February 7th, and attendance is by invitation only. We look forward to seeing you in San Francisco! P.S.  The Early Bird registration fee is $195. Register before December 31 to take advantage of this introductory low price, as the registration fee will go up to $295 after that date.

    Read the article

  • Have unit test generators helped you when working with legacy code?

    - by Duncan Bayne
    I am looking at a small (~70 kLOC including generated code) C# (.NET 4.0, some Silverlight) codebase that has very low test coverage. The code itself works, in that it has passed user acceptance testing, but it is brittle and in some areas not very well factored. I would like to add solid unit test coverage around the legacy code using the usual suspects (NMock, NUnit, StatLight for the Silverlight bits). My normal approach is to start working through the project, unit testing and refactoring, until I am satisfied with the state of the code. I've done this many times in the past, and it's worked well. However, this time I'm thinking of using a test generator (in particular Pex) to create the test framework, then manually fleshing it out. My question is: have you used unit test generators in the past when commencing work on a legacy codebase, and if so, would you recommend them? My fear is that the generated tests will miss the semantic nuances of the codebase, leading to the dreaded situation of having tests for the sake of the coverage metric, rather than tests which clearly express the intended behaviour in code.
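
    For contrast, the kind of hand-written characterization test I would normally start with looks like the sketch below - the class under test and the expected value are hypothetical, with the expectation captured by running the legacy code rather than derived from a spec:

        using NUnit.Framework;

        // Stand-in for a legacy class; in the real codebase this already exists.
        public class InvoiceCalculator
        {
            public decimal Total(int quantity, decimal unitPrice)
            {
                return quantity * unitPrice;
            }
        }

        [TestFixture]
        public class InvoiceCalculatorCharacterizationTests
        {
            // Pins current behaviour before refactoring begins.
            [Test]
            public void Total_ForKnownLegacyInput_MatchesCurrentBehaviour()
            {
                var calculator = new InvoiceCalculator();
                Assert.AreEqual(29.97m, calculator.Total(3, 9.99m));
            }
        }

    A generator can emit hundreds of these, but it can only pin what the code does, not what it was meant to do - which is exactly the nuance I'm worried about losing.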

    Read the article

  • Javascript: Machine Constants Applicable?

    - by DavidB2013
    I write numerical routines for students of science and engineering (although they are freely available for use by anybody else as well) and am wondering how to properly use machine constants in a JavaScript program, or if they are even applicable. For example, say I am writing a program in C++ that numerically computes the roots of the following equation: exp(-0.7x) + sin(3x) - 1.2x + 0.3546 = 0. A root-finding routine should be able to compute roots to within the machine epsilon. In C++, this value is specified by the language: DBL_EPSILON. C++ also specifies the smallest and largest values that can be held by a float or double variable. However, how does this translate to JavaScript? Since a JavaScript program runs in a web browser, I don't know what kind of computer will run the program, and JavaScript does not have corresponding predefined values for these quantities. How can I implement my own versions of these constants so that my programs compute results to as much accuracy as the computer running the web browser allows? My first draft is to simply copy over the literal constants from C++: FLT_MIN: 1.17549435082229e-038, FLT_MAX: 3.40282346638529e+038, DBL_EPSILON: 2.2204460492503131e-16. I am also willing to write small code blocks that compute these values on each machine the program runs on. That way, a supercomputer might compute results to a higher accuracy than an old low-end PC. BUT, I don't know if such a routine would actually reflect the machine it runs on, in which case I would be wasting my time. Does anybody here know how to compute and use (in JavaScript) values that correspond to machine constants in a compiled language? Is it worth my time to write small routines in JavaScript that compute DBL_EPSILON, FLT_MIN, FLT_MAX, etc. for use in numerical routines? Or am I better off simply assigning literal constants that come straight from C++ on a standard Windows PC?
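
    A minimal sketch of the runtime computation I have in mind - with the caveat (which may itself answer the question) that JavaScript numbers are defined as IEEE 754 doubles, so a conforming engine should give the same answers everywhere:

        // Machine epsilon: the smallest eps with 1 + eps > 1 in floating point.
        function computeEpsilon() {
            var eps = 1.0;
            while (1.0 + eps / 2 > 1.0) {
                eps /= 2;
            }
            return eps;  // 2.220446049250313e-16 for IEEE 754 doubles
        }

        // Smallest positive representable value (a subnormal).
        function computeMinPositive() {
            var x = 1.0;
            while (x / 2 > 0) {
                x /= 2;
            }
            return x;    // 5e-324 for IEEE 754 doubles
        }

        // The language also predefines the extremes:
        // Number.MAX_VALUE  (~1.7976931348623157e+308, cf. DBL_MAX)
        // Number.MIN_VALUE  (5e-324, the smallest positive value, not DBL_MIN)

    There are no float-width (FLT_*) analogues, since JavaScript has no single-precision type.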

    Read the article

  • In the days of modern computing, in 'typical business apps' - why does performance matter?

    - by Prog
    This may seem like an odd question to some of you. I'm a hobbyist Java programmer. I have developed several games, an AI program that creates music, another program for painting, and similar stuff. This is to tell you that I have experience in programming, but not in professional development of business applications. I see a lot of talk on this site about performance. People often debate what would be the most efficient algorithm in C# to perform a task, or why Python is slow and Java is faster, etc. What I'm trying to understand is: why does this matter? There are specific areas of computing where I see why performance matters: games, where tens of thousands of computations happen every second in a constant update loop, or low-level systems which other programs rely on, such as OSs and VMs. But for the normal, typical high-level business app, why does performance matter? I can understand why it used to matter, decades ago. Computers were much slower and had much less memory, so you had to think carefully about these things. But today, we have so much memory to spare and computers are so fast: does it actually matter if a particular Java algorithm is O(n^2)? Will it actually make a difference for the end users of this typical business app? When you press a GUI button in a typical business app, and behind the scenes it invokes an O(n^2) algorithm, in these days of modern computing - do you actually feel the inefficiency? My question is split in two: In practice, does performance matter today in a typical, normal business program? If it does, please give me real-world examples of places in such an application where performance and optimizations are important.
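
    To make the question concrete, this is the sort of toy comparison I have in mind (Java, since that's what I know; the timings are machine-dependent and purely illustrative):

        import java.util.ArrayList;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;

        public class DuplicateCheck {
            // O(n^2): compare every pair.
            static boolean hasDuplicateQuadratic(List<Integer> xs) {
                for (int i = 0; i < xs.size(); i++)
                    for (int j = i + 1; j < xs.size(); j++)
                        if (xs.get(i).equals(xs.get(j))) return true;
                return false;
            }

            // O(n): one pass with a hash set.
            static boolean hasDuplicateLinear(List<Integer> xs) {
                Set<Integer> seen = new HashSet<Integer>();
                for (Integer x : xs)
                    if (!seen.add(x)) return true;  // add() returns false if already present
                return false;
            }

            public static void main(String[] args) {
                List<Integer> xs = new ArrayList<Integer>();
                for (int i = 0; i < 50000; i++) xs.add(i);  // no duplicates: worst case

                long t0 = System.nanoTime();
                hasDuplicateQuadratic(xs);  // ~1.25 billion comparisons
                long t1 = System.nanoTime();
                hasDuplicateLinear(xs);
                long t2 = System.nanoTime();
                System.out.printf("quadratic: %d ms, linear: %d ms%n",
                        (t1 - t0) / 1000000, (t2 - t1) / 1000000);
            }
        }

    When the user's list has 50 items, the difference is invisible; when it quietly grows to 50,000, one button press goes from milliseconds to seconds - which is, I suppose, half an answer to my own question.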

    Read the article

  • Seeking a C/C++ OBJ geometry read/write that does not modify the representation

    - by Blake Senftner
    I am seeking a means to read and write OBJ geometry files with logic that does not modify the geometry representation, i.e. read the geometry, immediately write it, and a diff of the source OBJ against the one just written will be identical. Every OBJ-writing utility I've been able to find online fails this test. I am writing small command line tools to modify my OBJ geometries, and I need to write my results, not just read the geometry for rendering purposes. Simply needing to write the geometry knocks out 95% of the OBJ libraries on the web. Also, many of the popular libraries modify the geometry representation. For example, Nate Robins' GLUT distribution includes the GLM library, which both converts quads to triangles and reverses the topology (face ordering) of the geometry. It's still the same geometry, but if your tool chain expects a given topology, such as for rigging or morph targets, then GLM is useless. I'm not rendering in these tools, so dependencies like OpenGL or GLUT make no sense. And god forbid, do not "optimize" the geometry! The redundant vertices are there on purpose, to stay cache-friendly on our weird little low-memory mobile devices.
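
    The shape of what I'm after, as a minimal C++ sketch (my own, not an existing library): keep every source line verbatim, parse vertices off to the side, and echo the untouched lines back on write.

        #include <iostream>
        #include <sstream>
        #include <string>
        #include <vector>

        struct ObjFile {
            std::vector<std::string> lines;   // exact text of every input line
            std::vector<double> vx, vy, vz;   // parsed "v" records, for editing tools
            std::vector<std::size_t> vline;   // index of the line each vertex came from
        };

        ObjFile readObj(std::istream& in) {
            ObjFile obj;
            std::string line;
            while (std::getline(in, line)) {
                if (line.compare(0, 2, "v ") == 0) {
                    std::istringstream ss(line.substr(2));
                    double x, y, z;
                    if (ss >> x >> y >> z) {
                        obj.vx.push_back(x); obj.vy.push_back(y); obj.vz.push_back(z);
                        obj.vline.push_back(obj.lines.size());
                    }
                }
                obj.lines.push_back(line);    // faces, groups, comments pass through untouched
            }
            return obj;
        }

        // Byte-identical round trip, assuming the source ends with a newline.
        void writeObj(const ObjFile& obj, std::ostream& out) {
            for (const std::string& l : obj.lines) out << l << '\n';
        }

    A tool that edits a vertex would then rewrite just that one line (via vline) and leave everything else - winding, quads, redundant vertices - exactly as found.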

    Read the article

  • What are the best Microsoft Certifications to start with?

    - by emragins
    Background: I have a bachelor's in math and a certification in C++ from 2007. Since then I've spent a lot of time working with Python and C#, and have started going through the ASP.NET certification materials. I'm starting to realize that the certification is going to take longer than anticipated, and I'm not sure I want to spend the next 4-5 months studying before I have it completed. Most of my resume shows teaching/tutoring experience with some low-level administration thrown in. Question: If I want to get a programming position, which certifications would be best to start with? What would be the quickest and easiest to obtain, yet represent value to my employer? Are certifications even the way to go? If not, what would you suggest? Update: I have several programs that I show off when I can (mostly games), and I'm about 75% through a C# application I hope to have done in the next week. Since most employers simply ask for a resume and not samples, what would be the best way to present this work to them?

    Read the article

  • How will my Electronic Engineering degree be received in the Canadian Game Development market? [closed]

    - by Harikawashi
    I have an Electronic Engineering with Computer Science degree from a reputable South African university. The EE with CS degree is basically Electronic Engineering with some of the high-voltage subjects thrown out and replaced with computer science subjects - mostly quite theoretical, and not in too much depth. I went on to earn a Master's degree in Digital Signal Processing, focussing on speech recognition in educational applications. I have always loved programming: I taught myself QBASIC when I was in primary school, learned Java at school, did some low-level C at university, and taught myself C# and Python while doing my postgraduate degree. C# is currently my strong suit; I think I am pretty capable with it. I have two years' work experience in Namibia, working as a consulting electrical engineer (no software content whatsoever) and also developing C# desktop applications for the company I work for. I would like to move to Canada next year and work in the game development industry as a programmer or software engineer. My particular interests lean towards the more mathematical applications, like game and physics engines, or statistical disciplines like artificial intelligence. However, these are passions, not areas in which I have any work experience. So the question: how well will my BEng EE&CS and MScEng be received in the game industry, seeing as it's not a pure software degree and I have no official software development work experience?

    Read the article

  • Microsoft IntelliMouse episodic pauses

    - by Rob Hills
    I have a Microsoft IntelliMouse connected via USB to a computer (directly, NOT via hub) currently running Ubuntu 11.10, but this problem also existed before we upgraded from 10.10. Every now and then (apparently randomly) the computer "pauses" for anything up to a few seconds. This usually occurs after a mouse movement, and during the pause the computer is completely unresponsive to mouse or keyboard. lsusb shows:

        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 002: ID 0409:0058 NEC Corp. HighSpeed Hub
        Bus 001 Device 004: ID 05e3:0605 Genesys Logic, Inc. USB 2.0 Hub [ednet]
        Bus 003 Device 013: ID 045e:001e Microsoft Corp. IntelliMouse Explorer
        Bus 001 Device 005: ID 04a9:1097 Canon, Inc. PIXMA iP5000
        Bus 001 Device 006: ID 0a5c:200a Broadcom Corp. Bluetooth dongle
        Bus 001 Device 007: ID 0911:1c57 Philips Speech Processing
        Bus 001 Device 008: ID 04a9:2219 Canon, Inc. CanoScan 9950F

    so the mouse appears to be correctly identified. Syslog episodically shows the following sequence:

        Jan 15 11:48:32 kayes-computer kernel: [10588.512036] usb 3-1: USB disconnect, device number 10
        Jan 15 11:48:33 kayes-computer kernel: [10589.248026] usb 3-1: new low speed USB device number 11 using uhci_hcd
        Jan 15 11:48:33 kayes-computer mtp-probe: checking bus 3, device 11: "/sys/devices/pci0000:00/0000:00:1d.1/usb3/3-1"
        Jan 15 11:48:33 kayes-computer kernel: [10589.448596] input: Microsoft Microsoft IntelliMouse® Explorer as /devices/pci0000:00/0000:00:1d.1/usb3/3-1/3-1:1.0/input/input11
        Jan 15 11:48:33 kayes-computer kernel: [10589.448706] generic-usb 0003:045E:001E.000B: input,hidraw0: USB HID v1.00 Mouse [Microsoft Microsoft IntelliMouse® Explorer] on usb-0000:00:1d.1-1/input0
        Jan 15 11:48:33 kayes-computer mtp-probe: bus: 3, device: 11 was not an MTP device

    though I can't confirm if these are directly associated with the "pauses". Any thoughts on what might be causing this, or what else I can do to diagnose the problem?

    Read the article

  • Oracle JDK 7u10 released with new security features

    - by Henrik Stahl
    A few days ago, we released JRE and JDK 7 update 10. This release adds support for the following new platforms: Windows 8 on x86-64 (note that Modern UI, a.k.a. Metro, mode is not supported); Internet Explorer 10 on Windows 8; and Mac OS X 10.8 (Mountain Lion). This release also introduces new features that provide enhanced security for Java applet and Web Start applications, specifically: The Java runtime tracks whether it is updated to the latest security baseline; if you try to execute an unsigned applet with an outdated version of Java, a warning dialog will prompt you to update before running the applet. The Java runtime includes a hardcoded best-before date, on the assumption that a new version will be released before that date; if the client has not been able to check for an update prior to this date, the Java runtime will assume that it is insecure and start warning the user prior to executing any applets. The Java control panel now includes an option to set the desired security level on a low-medium-high-very high scale, as well as an option to disable Java applets and Web Start entirely; this level controls things such as whether the Java runtime is allowed to execute unsigned code, and if so, what type of warning will be displayed to the user. More details on the security settings can be found in the documentation. See below for a sample screenshot. The new updates of the JRE and the JDK are available via OTN. To learn more about the release, please visit the release notes.
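
    For unattended rollouts, the same control panel setting can also be pinned in the Java deployment configuration. A sketch, assuming the deployment.security.level property documented for this release (verify the exact key and allowed values against the 7u10 documentation before relying on it):

        # deployment.properties (per-user, or pointed to system-wide via deployment.config)
        deployment.security.level=HIGH
        # optional: prevent users from changing it in the control panel
        deployment.security.level.locked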

    Read the article

  • Internet Explorer will not open Office files

    - by geekrutherford
    An issue was brought to my attention today at work where certain users were unable to open Office files (specifically Excel) from Internet Explorer 7. The user would click a button which simply generated an inline JS call to open a pop-up pointing to the .xlsx file on the server. IE would open the pop-up, and shortly thereafter the pop-up would disappear without the file ever opening. I tweaked the security settings in the user's browser: added the site to the list of trusted sites and lowered the security settings to Medium-Low. This allowed IE to at least prompt with the Save or Open message, but clicking Open resulted in "Internet Explorer Could Not Open the Site...". Perturbed, I retreated to Geek Central (aka my desk) and modified my application so that instead of simply pointing the browser at the file, it now used Response.TransmitFile() to stream it to the browser. I thought to myself, "This is perfect, it has to work!!!" Alas, no luck. Bewildered and confused, I returned to the lone user's computer and started looking around the various IE options. I stumbled upon "Clear SSL State" under the "Content" tab, which appears to clear out all SSL certificates on the client, forcing it to refresh. Doing this, in concert with resetting the security levels for all zones back to their defaults, seemed to do the trick.
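
    For reference, the streaming variant looked roughly like the sketch below (the path and file name are placeholders for the real ones):

        // Inside the ASP.NET page handling the button's pop-up request.
        string path = Server.MapPath("~/Reports/report.xlsx");   // hypothetical path

        Response.Clear();
        Response.ContentType =
            "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
        Response.AddHeader("Content-Disposition", "attachment; filename=report.xlsx");
        Response.TransmitFile(path);   // streams the file without buffering it all in memory
        Response.End();

    Worth noting: over SSL, IE also needs cache headers that permit the file to be written to disk, or the Open/Save handoff to Office can fail - consistent with the Clear SSL State fix above.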

    Read the article

  • The Science Behind Salty Airline Food

    - by Jason Fitzpatrick
    In this collection, artist Signe Emma combines a scientific overview of the role salt plays in airline food with electron microscope scans of salt crystals arranged to look like views from an airplane - a rather clever and visually stunning way to deliver the message. Attached to the collection is this explanation of why airlines load their snacks and meals with salt: White noise consists of a random collection of sounds at different frequencies, and scientists have demonstrated that it is capable of diminishing the taste of salt. Under low-pressure conditions, higher taste and odour thresholds for flavourings are generally observed. At 30,000 feet the cabin humidity drops by 15%, and the lowered air pressure forces bodily fluids upwards. With less humidity, people have less moisture in their throats, which slows the transport of odours to the brain's smell and taste receptors. That means that if a meal is to taste the same up in the air as on the ground, it needs 30% extra salt. To combat this double assault on our sense of taste, the airlines boost the salt content to compensate. For more neat microscope scans posing as high-altitude photographs, hit up the link below.

    Read the article
