Search Results

Search found 28875 results on 1155 pages for 'oracle event'.


  • Latency Matters

    - by Frederic P
    A lot of interest in low latencies has been expressed within the financial services segment, most especially in stock trading applications, where every millisecond directly influences the profitability of the trader. These days, much of the trading is executed by software applications that are trained to respond to each other almost instantaneously. In fact, you could say that we are in an arms race where traders are using any and all options to cut down on the delay in executing transactions, even by moving physically closer to the trading venue. The Solaris OS network stack has traditionally been engineered for high throughput, at the expense of higher latencies. Knowledge of the tuning parameters to redress the imbalance is critical for applications that are latency sensitive. In this blog we present how to further configure a default Oracle Solaris 10 installation to reduce network latency. Many parameters can in fact be altered, but the most effective ones are intr_blank_time and intr_blank_packets. These parameters affect on-board network throughput and latency on Solaris systems. If interrupt blanking is disabled, packets are processed by the driver as soon as they arrive, resulting in higher network throughput and lower latency, but with higher CPU utilization. With interrupt blanking disabled, processor utilization can be as high as 80–90% in some high-load web server environments. If interrupt blanking is enabled, packets are processed when the interrupt is issued. Enabling interrupt blanking can result in reduced processor utilization and network throughput, but higher network latency. Both parameters should be set at the same time. You can set them by using the ndd command as follows:

    # ndd -set /dev/eri intr_blank_time 0
    # ndd -set /dev/eri intr_blank_packets 0

    To make the settings persistent, add them to the /etc/system file as follows:

    set eri:intr_blank_time 0
    set eri:intr_blank_packets 0

    The value of the interrupt blanking parameter is a trade-off between network throughput and processor utilization. If higher processor utilization is acceptable for achieving higher network throughput, then disable interrupt blanking. If lower processor utilization is preferred and higher network latency is the penalty, then enable interrupt blanking. Our experience at ISV Engineering is that, under controlled experiments, the above settings reduce network latency by at least 50%; on a two-socket 3GHz Sun Fire X4170 M2 running Solaris 10 Update 9, they improved ping-pong latency from 60µs to 25-30µs with the on-board NIC.
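    Before and after applying the change, you can read the current values back with ndd's get mode (shown here for the eri driver used above; substitute your own NIC's driver name):

    # ndd -get /dev/eri intr_blank_time
    # ndd -get /dev/eri intr_blank_packets

    A value of 0 for both confirms that interrupt blanking is disabled.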

    Read the article

  • Raspberry Pi and Java SE: A Platform for the Masses

    - by Jim Connors
    One of the more exciting developments in the embedded systems world has been the announcement and availability of the Raspberry Pi, a very capable computer that is no bigger than a credit card. At $35 US, initial demand for the device was so significant that very long back orders quickly ensued. After months of patiently waiting, mine finally arrived. Those initial growing pains appear to have been fixed, so availability now should be much more reasonable. At a very high level, here are some of the important specs:
    - Broadcom BCM2835 System on a Chip (SoC)
    - ARM1176JZFS, with floating point, running at 700MHz
    - Videocore 4 GPU capable of BluRay-quality playback
    - 256MB RAM
    - 2 USB ports and Ethernet
    - Boots from an SD card
    - Linux distributions (e.g. Debian) available
    So what's taking place with respect to the Java platform and the Raspberry Pi? A Java SE Embedded binary suitable for the Raspberry Pi is available for download (ARM v6/7) here. Note, this is based on the armel architecture, a variety of ARM that supports floating point through a compatibility library; it operates on more platforms, but can hamper performance. In order to use this Java SE binary, select the available Debian distribution for your Raspberry Pi. The more recent Raspbian distribution is based on the armhf (hard float) architecture, which provides for more efficient hardware-based floating point operations. However, armhf is not binary compatible with armel (a quick way to check which one your image uses is sketched below). As of the writing of this blog, Java SE Embedded binaries are not yet publicly available for the armhf-based Raspbian distro, but as mentioned in Henrik Stahl's blog, an armhf release is in the works. As demonstrated at the just-completed JavaOne 2012 San Francisco event, the graphics processing unit inside the Raspberry Pi is very capable indeed, and makes it an excellent candidate for JavaFX. As such, plans also call for a Pi-optimized version of JavaFX in a future release. A thriving community around the Raspberry Pi has developed at light speed, and as evidenced by the packed attendance at Pi-specific sessions at JavaOne 2012, the interest in Java for this platform is following suit. So stay tuned for more developments...
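    If you are unsure which ABI a given Debian image uses before downloading a Java SE Embedded binary, one quick check on the Pi itself (my suggestion, not from the original post) is:

    $ dpkg --print-architecture

    This prints armel on the older Debian images and armhf on Raspbian, telling you which binary to choose once both are available.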

    Read the article

  • Synchronizing ODSEE and OUD

    - by Etienne Remillon
    When it comes to synchronizing between ODSEE and OUD, what are the best options? A couple of options are available:
    - Use an internal OUD capability called the Replication Gateway
    - Use our synchronization tool, Directory Integration Platform (DIP), part of Oracle Directory Services Plus
    - Manual export and import
    Let's check the pros and cons of each method. The Replication Gateway is the natural, out-of-the-box solution to perform the task. We created it as a feature of OUD because it works at our replication protocol level. The gateway performs the required adaptation between ODSEE's replication protocol and OUD's. The benefit of doing this is that it provides strong consistency between the two types of directories. It fully leverages the conflict management implemented in the replication protocols to ensure that changes are applied in a coherent and ordered manner. It does not require specific modifications on existing ODSEE production instances, such as turning on the "retro changelog". Changes are propagated at near replication speed in both directions. The Replication Gateway can also synchronize information that is stored internally in the directory server, such as "xxxxx" account locking managed at the ODSEE server level and not via the nsyyyy attribute. The Replication Gateway does not require any specific tools or installation-specific procedure; it is managed like other OUD components, with monitoring and configuration via the standard console. What the Replication Gateway does not do is remapping or transformation of data between ODSEE and OUD. Using Directory Integration Platform as an external component to OUD brings flexibility in remapping and transformations between ODSEE and OUD. There is, however, a price to pay for using DIP to perform the synchronization task. You will have to turn on the retro changelog to get access to changes on the ODSEE side (this will impact disk usage, CPU usage and performance, which could be a serious challenge for your existing ODSEE environment if you have not provisioned additional hardware and instances). You will also not benefit from conflict resolution management, and this may have to be addressed at the application level, which is not always possible to implement. Using export and import seems very simple, but this methodology cannot ensure a highly available deployment with up-to-date entries on both sides. It can be used if full HA with up-to-date data is not needed (during synchronization time). It is often used if data cleaning needs to take place, to avoid polluting a new environment with old, unnecessary data.

    Read the article

  • Portal Server comparisons / TCoO

    - by Scott
    We have a client who is looking to incorporate Oracle Portal into our next release. I'm newer to this team, and the team is currently working with Apache, so whichever portal server we choose will likely involve a bit of a learning curve. Is there any comparison (not marketing) out there that discusses the differences between the servers and/or their total cost of ownership? With 5 developers, installing RAD becomes expensive, a cost I assume they'd wish to pass on to us with the change to Oracle Portal and WebSphere.

    Read the article

  • SmartView 11.1.2.2.103 - Support for MS Office 64 added

    - by THE
    (thanks to Nancy, who shared this with me)
    New for Smart View v11.1.2.2.103, Patch 14362638: Microsoft Office 64-bit is now supported. In this release, Smart View supports the 64-bit version of Microsoft Office. If you use 64-bit Office, please note the following:
    - Oracle provides separate Smart View installation files for 64-bit and 32-bit Office systems: smartview-x64.exe is the file for 64-bit Office installations, and smartview.exe is the file for 32-bit Office installations.
    - The 64-bit version of Smart View pertains only to the 64-bit version of Microsoft Office and not to the version of the operating system. Customers with 64-bit operating systems and the 32-bit version of Microsoft Office should install the 32-bit version of Smart View.
    - You cannot install the 64-bit version of Smart View from EPM Workspace (13530466).
    - Although Planning Offline is supported for 64-bit operating systems, it is not supported for 64-bit Smart View installations. If you use Planning Offline with Smart View, you must use the 32-bit version of Smart View and the 32-bit version of Microsoft Office.
    - In 64-bit versions of Excel 2010 SP1, the presence of Smart View functions may cause Excel to terminate abruptly and may prevent the Copy Data Point and Paste Data Point functions from working. This is a Microsoft issue, and a service request has been filed with Microsoft. Workaround: until Microsoft provides a fix, use the 32-bit version of Smart View. (13606492)
    - The Smart View function migration utility is not supported on 64-bit Office. (14342207)

    Read the article

  • MySQL Connect Keynotes and Presentations Available Online

    - by Bertrand Matthelié
    Following the tremendous success of MySQL Connect, you can now watch some of the keynotes online:
    - The State of the Dolphin - by Oracle Chief Corporate Architect Edward Screven and MySQL Vice President of Engineering Tomas Ulin
    - MySQL Perspectives - featuring power users of MySQL who share their experiences and perspectives: Jeremy Cole, DBA Team Manager, Twitter; Daniel Austin, Chief Architect, PayPal; Ash Kanagat, IT Director, and Shivinder Singh, Database Architect, Verizon Wireless
    You can also access slides from a number of MySQL Connect presentations in the Content Catalog. Missing ones will be added shortly (provided the speakers consented to it). Enjoy!

    Read the article

  • Do I need DELL OpenManage to generate snmp traps on RAID degradation?

    - by jishi
    I need to set up surveillance on all our servers to spot any RAID degradation in time. However, not all of our servers have OpenManage installed, and since they are in production I don't like the idea of installing it on them. Therefore: is it necessary to have OpenManage installed in order to get an event-log entry for any degradation of the RAID? Because if I get an event-log entry, I can send an SNMP trap, if I understand it correctly. I thought it was the driver that was responsible for the event logging, but on a machine that recently had a degradation, I can't seem to find any log event for it.

    Read the article

  • Can't save data for a member in a data form

    - by RahulS
    Implied sharing is an old thing; everyone knows the reasons for it and the solutions, but still, a little theory: with Essbase implied sharing, some members are shared even if you do not explicitly set them as shared. These members are implied shared members. When an implied share relationship is created, each implied member assumes the other member's value. Essbase assumes (or implies) a shared member relationship in these situations:
    1. A parent has only one child
    2. A parent has only one child that consolidates to the parent
    In a Planning form that contains members with an implied sharing relationship, when a value is added for the parent, the child assumes the same value after the form is saved. Likewise, if a value is added for the child, the parent usually assumes the same value after the form is saved. For example, when a calculation script or load rule populates an implied share member, the other implied share member assumes the value of the member populated by the calculation script or load rule. The last value calculated or imported takes precedence. The result is the same whether you refer to the parent or the child as a variable in a calculation script. For more information have a look at: http://docs.oracle.com/cd/E17236_01/epm.1112/hp_admin_11122/ch14s11.html
    Now, the issue we are going to talk about: we lose data on save even when the parent is dynamic calc and has a single child. Consider a dynamic calc parent with a single child (SingleStoreChild). If we design the form by selecting all members below the parent, the child appears before the parent in the data form; this is by design - whenever you make a selection using commands to select all members below a parent, the children will always appear before the parent. Enter data and save it; now change the way the members were selected, and the data is gone. Why this behavior?
    1. Data from a Planning data form passes to Essbase row by row.
    2. In the data form the child member appears before the parent.
    3. First, data goes to Essbase for the child (SingleStoreChild).
    4. Then, when Planning passes the data for the parent, there was #Missing or no data,
    5. and it overwrites the child's data with #Missing.
    PS: As we know, dynamic calc members are calculated on the fly and are not allocated any memory in Essbase; here the parent was dynamic calc and was pointing to the same storage as the child in the background, so when Planning passed data to Essbase for the second row, it updated the child with missing data. (A little confusing - let me know if you need more explanation.) As one of the solutions, just change the order of appearance of the parent and the child.
    Cheers..!!!
    Rahul S.
    https://www.facebook.com/pages/HyperionPlanning/117320818374228

    Read the article

  • Windows Service SearchIndexer.exe Crashes on Indexing

    - by Josh Jay
    Relevant specs:
    - Windows 7 Professional 64-bit SP1
    - Outlook 2010 Version 14.0.7116.5000 (32-bit)
    Original symptom: In Outlook, I attempted to search for an email but nothing ever returned, and the indicator kept going as if it were still searching.
    Attempted resolutions: I investigated the search options and with some research noticed the Windows service "Windows Search" (SearchIndexer.exe) was not running. I attempted to start it but I receive this error message: "Windows could not start the Windows Search service on Local Computer. Error 1067: The process terminated unexpectedly."
    The Event Viewer gives this error entry:
    Log Name: Application
    Source: Application Error
    Date: 6/3/2014 11:02:05 AM
    Event ID: 1000
    Task Category: (100)
    Level: Error
    Keywords: Classic
    User: N/A
    Computer: ***REMOVED FOR POST***
    Description:
    Faulting application name: SearchIndexer.exe, version: 7.0.7601.17610, time stamp: 0x4dc0d019
    Faulting module name: KERNELBASE.dll, version: 6.1.7601.18229, time stamp: 0x51fb1677
    Exception code: 0xc0000005
    Fault offset: 0x000000000000940d
    Faulting process id: 0x6a0
    Faulting application start time: 0x01cf7f3cc83757c6
    Faulting application path: C:\Windows\system32\SearchIndexer.exe
    Faulting module path: C:\Windows\system32\KERNELBASE.dll
    Report Id: 06424160-eb30-11e3-9555-843a4b07b336
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Application Error" />
        <EventID Qualifiers="0">1000</EventID>
        <Level>2</Level>
        <Task>100</Task>
        <Keywords>0x80000000000000</Keywords>
        <TimeCreated SystemTime="2014-06-03T15:02:05.000000000Z" />
        <EventRecordID>602923</EventRecordID>
        <Channel>Application</Channel>
        <Computer>M6700-12011.ncaa.org</Computer>
        <Security />
      </System>
      <EventData>
        <Data>SearchIndexer.exe</Data>
        <Data>7.0.7601.17610</Data>
        <Data>4dc0d019</Data>
        <Data>KERNELBASE.dll</Data>
        <Data>6.1.7601.18229</Data>
        <Data>51fb1677</Data>
        <Data>c0000005</Data>
        <Data>000000000000940d</Data>
        <Data>6a0</Data>
        <Data>01cf7f3cc83757c6</Data>
        <Data>C:\Windows\system32\SearchIndexer.exe</Data>
        <Data>C:\Windows\system32\KERNELBASE.dll</Data>
        <Data>06424160-eb30-11e3-9555-843a4b07b336</Data>
      </EventData>
    </Event>
    The regular Windows search (from the Start menu) works fine, and if I reboot the machine the service starts up OK, but as soon as indexing kicks off after the machine has idled for long enough, it crashes (same Event Viewer entry). We also tried the Microsoft utility, to no avail. Has anyone seen this issue before?

    Read the article

  • 48hrs in Cambridge.

    - by Fatherjack
    In just over 2 weeks something pretty big in the SQL Server Community in the UK is taking place. We are going to witness the first SQL Saturday on these shores. The event is running in Cambridge, the home of the SQL Cambs user group and the chapter leader there (Mark Broadbent) is the lead on the SQL Saturday event too. Mark and his team are making final preparations and looking forward to this event getting started with the Pre-Con day on Friday 7th Sept. They have 3 great sessions from Buck Woody, Jen Stirrup and Mark Rasmussen for those lucky enough to be able to attend on the Friday. There are over 30 speakers providing 4 tracks of sessions on the Saturday so there will be plenty to interest and inform anyone working with SQL Server, take a look at all the sessions on the schedule. In addition to all of this you will be able to spend some quality time talking to all the other attendees, sponsors and PASS representatives to make the most of your time there. If you haven’t registered yet then head over to http://sqlcambs.org.uk/ and get your name down to attend this milestone event.

    Read the article

  • Analytics - Where do my drop offs go?

    - by BadCash
    I have a website set up with Google Analytics (through the WordPress plugin "Google Analytics for WordPress" by Joost de Valk). When I check the Visitors Flow in Google Analytics, it shows something like this:
    (home) - 43% drop-offs
    /page-2/ - 10% drop-offs
    ... etc ...
    I have also set up events for external links. The main goal of the website is to drive traffic to my Android app on Google Play, so I have a couple of different links to it that are all set up as events. Everything seems to be working; my events show up when I go to Content - Events in Google Analytics. However, it seems to me that some percentage of the users reported as "drop-offs" have in fact clicked one of the external links, but there's no information about the reason for those drop-offs in the Visitors Flow chart. I can of course check each specific event category and event action and set "other" to Content/Page, which (I guess) shows the number of visitors who triggered a specific event on a specific page. It just seems like such a complicated way of going about this! So, is there a way to get a more detailed picture, including events, in the Visitors Flow chart? Something like:
    (home) - 43% drop-offs
    Event Action: "Google Play"=50%, "Youtube"=10%, (not set)=40%

    Read the article

  • Saddling your mountain lion with JDeveloper

    - by Blueberry Coder
    Last October, Apple released Java Update 2012-006. This patch brought the Apple-provided JDK for OS X Lion v10.7 and OS X Mountain Lion v10.8 to version 1.6.0_37. At the same time, it disabled the Apple Java plugins and removed the Java Preferences panel that enabled users to manage the various Java releases on their computer. On the Windows and Linux platforms, JDeveloper 11g R1 has been certified to run on Java 7 since patch set 5. This is not the case on OS X. (The above is not a typo. Apple's OS for personal computers is now known as OS X; the « Mac » prefix was dropped with the 10.8 release. And it's pronounced « Oh-Ess-Ten », by the way. Yes, I am a nitpicker. I know...) Please note JDeveloper 11g R2 is not certified either. On any platform. It will generally work, but there are known issues with ADF Mobile. Personally, I would recommend waiting for 12c before going to JDK 7. Now, suppose you have installed Oracle's JDK 7 on your Mac. JDeveloper will not run on it. It will not even install. Susan and I discovered this the hard way while setting up the ADF Mobile hands-on lab we ran at the UKOUG 2012 conference. The lab was a great success nevertheless, attracting nearly a hundred delegates. It was great to see the interest ADF Mobile already generates, especially among PL/SQL developers and DBAs. But what did we do to make it work? While Java Update 2012-006 removed the Java Preferences panel, it left OS X's command-line Java infrastructure in place. Thus, it is possible to invoke the Apple JDK 6 to start the JDeveloper installer. Suppose your user is named « Fred », and the JDeveloper installer is on your desktop. You can execute the following command in a terminal window (on a single line) to start the installer:

    /usr/libexec/java_home --version 1.6.0 --exec java -jar /Users/Fred/Desktop/jdevstudio11116install.jar

    The JDeveloper installer, being provided a valid JDK reference, will set up the IDE and the embedded WebLogic Server instance accordingly. Clever engineering at its finest!
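    As a quick sanity check (my own suggestion, not from the original post), you can first list the JDKs that the command-line infrastructure still knows about:

    $ /usr/libexec/java_home -V

    The -V flag prints every matching Java Virtual Machine with its version and path, so you can confirm a 1.6.0 JDK is still present after the update.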

    Read the article

  • Hour-long shutdown duration "shutting down hyper-v virtual machine management service"

    - by icelava
    I have a Windows 2008 R2 server that is a Hyper-V host (Dell PowerEdge T300). Today for the first time I encountered an odd situation: I lost connection with one of the guest machines, but logging on physically it seems the guest OS is still running, just no longer contactable via the network. I tried to shut down the guest machine (Windows XP) but it would not shut down, getting stuck on a "Not responding" dialog box that cannot be dismissed. I used the Hyper-V management console to reset the machine and it could not get out of the resetting state. I tried to save another Windows 2003 guest machine, and it would not progress with its Saving state (0%). The other running Windows 2003 guest was stuck at the logon dialog. My first suspicion was that one of this week's Windows update patches (10 Nov 2011), which was still pending a system restart, may have something to do with it. Well, since I could not do anything with Hyper-V, I proceeded with the Windows Update restart, and now it has been stuck half an hour at "Shutting down Hyper-V Virtual Machine Management service". Prior to restarting I did not observe any hard disk errors reported in the system event log; I doubt it is a disk-related condition. Shall I force a hard reboot?
    UPDATE: OK, so I left it hanging over an hour while attending to other matters, and thankfully the host cleanly restarted. I can operate the guest machines fine now. Phew. Hyper-V must have been crawling for some reason. The VMs have been observed to become slow in the past when the host has been up for a long duration (two weeks to a month), but never this slow. I would love to know what types of performance monitoring items I can observe to give a hint why this can happen.
    UPDATE 2012-02-13: In the months since, Hyper-V has stalled into this state another two times. It appears randomly and without any error event logs to hint at what is causing it to enter this "drunkard" state - just a Hyper-V management service timeout:
    Log Name: System
    Source: Service Control Manager
    Date: 13/2/2012 9:16:48 AM
    Event ID: 7043
    Task Category: None
    Level: Error
    Keywords: Classic
    User: N/A
    Computer: elune
    Description: The Hyper-V Virtual Machine Management service did not shut down properly after receiving a preshutdown control.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Service Control Manager" Guid="{555908d1-a6d7-4695-8e1e-26931d2012f4}" EventSourceName="Service Control Manager" />
        <EventID Qualifiers="49152">7043</EventID>
        <Version>0</Version>
        <Level>2</Level>
        <Task>0</Task>
        <Opcode>0</Opcode>
        <Keywords>0x8080000000000000</Keywords>
        <TimeCreated SystemTime="2012-02-13T01:16:48.882901900Z" />
        <EventRecordID>567844</EventRecordID>
        <Correlation />
        <Execution ProcessID="764" ThreadID="8484" />
        <Channel>System</Channel>
        <Computer>elune</Computer>
        <Security />
      </System>
      <EventData>
        <Data Name="param1">Hyper-V Virtual Machine Management</Data>
      </EventData>
    </Event>
    The only way out of it is to restart the system.

    Read the article

  • Big Companies Influence Retail in 2010

    - by David Dorf
    From a retail industry perspective, 2010 will go down as the year mobile went mainstream, the economy recovered from the crash, and Facebook surpassed Google as the most influential online property. While the economy certainly had the biggest impact on the retail industry, a few big companies also exerted influence. Here's a rundown and a look back at 2010:
    Apple -- Steve Jobs and company continued to lead the mobile pack. Consumers are using their iPhones to shop, retailers are using the iPod Touch for mobile checkout, and both are embracing the iPad as the next wave of technology. (Related posts: The Next Technology from Apple; Mobile Platforms in Retail; Apple Stores, Touch2Systems, and the iPad)
    Google -- Not to be outdone, Google's Android platform grew faster than Apple's, plus they support QRCodes natively and will probably beat Apple to NFC. Google Checkout, Product Search, and Boutiques.com continue to impact the e-commerce scene. (Related post: Google Leverages Like.com)
    Facebook -- While the movie The Social Network certainly made Facebook a household name, Connect, Places, and seeing the "like" button all over the Web really pushed Facebook everywhere. 2010 set the foundations for f-commerce. (Related posts: Facebook Participatory Promotions; Crowd Savers; What's the value of a Facebook fan?; Step Aside Google; Leveraging Social Networks for Retail; Social Shopping at Nine West)
    Groupon -- This newcomer executed on a simple concept flawlessly, making them the fastest company to reach $1B in revenue. (See the cool chart from Silicon Alley Insider.) Google's offer of $5-6B wasn't enough, so now they are raising an additional $1B in funding, presumably to buy up all the copycats across the globe. (Related post: Changing the Way We Shop)
    Amazon -- As if leading the e-commerce charge wasn't enough, Amazon shook things up with their purchase of Woot and the release of their Price Checker mobile app. They continue to push boundaries with Kindle, and don't seem worried about the iPad at all. (Related posts: You Can't Win on Price; Amazon Looks at Your Social Graph)
    eBay -- Acquiring Skype didn't exactly work out, but eBay's purchases of PayPal and RedLaser are driving the company forward. They are still a major force. (Related post: Bump the Bill)
    Oracle, SAP, HP, IBM, and Cisco left their marks on the retail industry as well, with various acquisitions and CxO shake-ups. We'll just have to wait and see what 2011 brings next.

    Read the article

  • Does IIS Sometimes Allocate More Worker Processes Than Configured?

    - by Paul Williams
    We have an IIS 7.5 web service on Windows Server 2008 that handles WCF requests from C# clients. This service is configured to have Maximum Worker Processes = 1, so it is not a web garden. IIS is set up to recycle the application pool at the same time every day (3 AM). I am trying to debug gnarly connection issues, so I wanted to be sure the application pool was not recycling itself. I configured the pool to log an event when it recycles. To my surprise, I see the following entries in the System event log:
    Level: Information, Date/Time: 3/23/2012 3:00:00 AM, Source: WAS, Event ID: 5076
    A worker process with process id of '6636' serving application pool 'MyAppPool' has requested a recycle because it reached its scheduled recycle time.
    Level: Information, Date/Time: 3/23/2012 2:59:39 AM, Source: WAS, Event ID: 5076
    A worker process with process id of '9364' serving application pool 'MyAppPool' has requested a recycle because it reached its scheduled recycle time.
    IIS is correctly recycling the application pool at 3 AM. However, I do not understand why I would be getting two recycle events in the log within a few seconds of each other. The maximum number of processes is 1. Does IIS sometimes allocate multiple processes for an application pool that is specified as having 1 process?
    -- edit --
    I connected at about 4 PM today and only saw 1 w3wp.exe process. There are no other event log entries that would indicate a crash.

    Read the article

  • Compiling for T4

    - by Darryl Gove
    I've recently had quite a few queries about compiling for T4 based systems, so it's probably a good time to review what I consider to be the best practices.
    - Always use the latest compiler. Being in the compiler team, this is bound to be something I'd recommend. But the serious points are that (a) every release the tools get better and better, so you are going to be much more effective using the latest release, (b) every release we improve the generated code, so you will see things get better, and (c) old releases cannot know about new hardware.
    - Always use optimisation. You should use at least -O to get some amount of optimisation. -xO4 is typically even better, as this will add within-file inlining.
    - Always generate debug information, using -g. This allows the tools to attribute information to lines of source, which is particularly important when profiling an application.
    - The default target of -xtarget=generic is often sufficient. This setting is designed to produce a binary that runs well across all supported platforms. If the binary is going to be deployed on only a subset of architectures, then it is possible to produce a binary that only uses the instructions supported on those architectures, which may lead to some performance gains. I've previously discussed which chips support which architectures, and I'd recommend that you take a look at the chart that goes with the discussion.
    - Crossfile optimisation (-xipo) can be very useful, particularly when the hot source code is distributed across multiple source files. If you're allowed to have something as geeky as a favourite compiler optimisation, then this is mine!
    - Profile feedback (-xprofile=[collect: | use:]) will help the compiler make the best code layout decisions, and is particularly effective with crossfile optimisations. But what makes this optimisation really useful is that codes dominated by branch instructions don't typically improve much with "traditional" compiler optimisation, but often do respond well to being built with profile feedback. (A sketch of the two-pass build is shown after this list.)
    - The macro flag -fast aims to provide a one-stop "give me a fast application" flag. This usually gives a best performing binary, but with a few caveats: it assumes the build platform is also the deployment platform, it enables floating point optimisations, and it makes some relatively weak assumptions about pointer aliasing. It's worth investigating.
    - The SPARC64 processor, T3, and T4 implement floating point multiply-accumulate instructions. These can substantially improve floating point performance. To generate them, the compiler needs the flag -fma=fused and also an architecture that supports the instruction (at least -xarch=sparcfmaf).
    The most critical advice is that anyone doing performance work should profile their application. I cannot overstate how important it is to look at where the time is going in order to determine what can be done to improve it. I also presented at Oracle OpenWorld on this topic, so it might be helpful to review those slides.
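    Putting several of these flags together, a minimal sketch of a profile-feedback build might look like this (the application name, source files and training input are placeholders of mine, not from the original post):

    $ cc -xO4 -g -xtarget=generic -xipo -xprofile=collect:./app.profile -o app main.c util.c
    $ ./app < training.dat
    $ cc -xO4 -g -xtarget=generic -xipo -xprofile=use:./app.profile -o app main.c util.c

    The first compile instruments the binary, the training run populates ./app.profile with branch and call data, and the second compile uses that data for code layout decisions.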

    Read the article

  • Use an audio/video file from a Linux laptop via USB to be played by Magic Sing ET-23H

    - by AisIceEyes
    I am one of the technical directors of a regular karaoke contest event. Due to a tight budget, we are using what one of the sponsors provides: a Magic Sing ET-23H. The video output of the Magic Sing ET-23H is broadcast on two big screens shown to the audience and event attendees. When a contestant provides a karaoke video, the video file is put on a readable USB flash drive attached to the USB input of the Magic Sing ET-23H. What really bugs me is that the device's interface is also broadcast on the big screen video feeds: the interface for choosing the video file is seen on both screens watched by the audience. I will post a picture of the Magic Sing ET-23H's USB input in the comments (if my less-than-10 reputation allows me). I always bring my laptop, an Acer AS5742-7653, to the event. I use it for tallying the judges' scores and for playing audio files from contestants who did not provide a karaoke video. I personally use different Linux distros, but I almost always use my Ubuntu Studio 12.04.3 64-bit partition during the event. My question is this: is there a way I can share a temporary video/audio file directly from my laptop to the Magic Sing ET-23H so that it can broadcast both the video and the audio? Something like Windows Avisynth AVS files, VirtualDub's temporary AVI files, or ffplay (from ffmpeg). I have researched the matter somewhat and found links on SuperUser.com, though I can only provide them in the comments if my reputation allows. I have a hunch it is possible, but I have not fully understood whether the device has other ways to broadcast video and audio besides its USB input. Any help with my current predicament is highly appreciated. Thank you. PS: Since I need at least 10 reputation to post more than 2 links and also to post images, I will try to post the images and links in the comments (if my below-10 reputation allows).

    Read the article

  • Seven Accounting Changes for 2010

    - by Theresa Hickman
    I read a very interesting article called Seven Accounting Changes That Will Affect Your 2010 Annual Report from SmartPros that nicely summarized how 2010 annual financial statements will be impacted. Here's a Reader's Digest version of the changes:
    1. Changes to revenue recognition if you sell bundled products with multiple deliverables. Old rule: you needed to objectively establish the "fair value" of each bundled item. So if you sold a dishwasher plus installation and could not establish the fair value of the installation, you might have to delay recognizing revenue of the dishwasher days or weeks later until it was installed. New rule (ASU 2009-13): "objective" proof of each service or good is no longer required; you can simply estimate the selling price of the installation and warranty. So the dishwasher vendor can recognize the dishwasher revenue immediately at the point of sale without waiting a few weeks for the installation. Then they can recognize the estimated value of the installation after it is complete.
    2. Changes to revenue recognition for devices with embedded software. Old rule: hardware devices with embedded software, such as the iPhone, had to follow stringent software revrec rules. This forced Apple to recognize iPhone revenues over two years, the period of time that software updates were provided. New rule (ASU 2009-14): software revrec rules no longer apply to these devices with embedded software; these devices can now follow ASU 2009-13. This allows vendors, such as Apple, to recognize revenue sooner.
    3. Fair value disclosures: companies (both public and private) now need to spend extra time gathering, summarizing, and disclosing information about items measured at fair value, such as significant transfers in and out of Level 1 (quoted market price), Level 2 (valuation based on observable markets), and Level 3 (valuations based on internal information).
    4. Consolidation of variable interest entities (a.k.a. special purpose entities): consolidation rules for variable interest entities now require a qualitative, not quantitative, analysis to determine the primary beneficiary. Instead of simply looking at the percentage of voting interests, the primary beneficiary could have less than the majority interests as long as it has the power to direct the activities and absorb any losses.
    5. XBRL: starting in June 2011, all U.S. public companies are required to file financial statements to the SEC using XBRL. Note: Oracle supports XBRL reporting.
    6. Non-GAAP financial disclosures: companies that report non-GAAP measures of performance, such as EBITDA, in SEC filings have more flexibility. The new interpretations can be found here: http://www.sec.gov/divisions/corpfin/guidance/nongaapinterp.htm.
    7. Loss contingencies disclosures: companies should expect additional scrutiny of their loss disclosures, such as those from litigation losses, in their annual financial statements. The SEC wants more disclosures about loss contingencies sooner, instead of after the cases are settled.

    Read the article

  • JSR 355 Final Release, and moves JCP to version 2.9

    - by heathervc
    JSR 355, JCP EC Merge, passed the JCP EC Final Approval Ballot on 13 August 2012, with 14 Yes votes and 1 abstain (1 member did not vote) on the SE/EE EC, and 12 Yes votes (2 members were not eligible to vote) on the ME EC. JSR 355 posted a Final Release this week, moving the JCP program version to JCP 2.9. The transition to a merged EC will happen after the 2012 EC elections, as defined in Appendix B of the JCP (pasted below), and the EC will operate under the new EC Standing Rules.
    "In the previous version (2.8) of this Process Document there were two separate Executive Committees, one for Java ME and one for Java SE and Java EE combined. The single Executive Committee described in this version of the Process Document will be implemented through the following process:
    - The 2012 annual elections will be held as defined in JCP 2.8, but candidates will be informed that if they are elected their term will be for only a single year, since all candidates must stand for re-election in 2013.
    - Immediately after the 2012 election the two ECs will be merged. Oracle and IBM's second seats will be eliminated, resulting in a single EC with 30 members. All subsequent JSR ballots (even for in-progress JSRs) will then be voted on by the merged EC.
    - For the 2013 annual elections three Ratified and two Elected Seats will be eliminated, thereby reducing the EC to 25 members. All 25 seats will be up for re-election in 2013. Members elected in 2013 will be ranked to determine whether their initial term will be one or two years. The 50% of Ratified and 50% of Elected members who receive the most votes will serve an initial two-year term, while all others will serve an initial one-year term.
    - All members elected in 2014 and subsequently will serve a two-year term.
    For clarity, note that the provisions specified in this version of the Process Document regarding a merged EC will apply to subsequent ballots on all existing JSRs, whether or not the Spec Leads of those JSRs chose to adopt this version of the Process Document in its entirety."
    <end of Appendix>
    Also of note: the materials and minutes from the July EC meeting and the June EC meeting are now available. Following the July EC meeting, Samsung and SK Telecom lost their EC seats. The June EC meeting also had a public portion; the audio from that portion is now posted online. For Spec Leads there is also the recording of the EG Nominations call.

    Read the article

  • Small script to look for Project Replication actions that have failed

    - by Trond Strømme
    Today, when looking at a couple of projects on a ZFS 7320 Storage Appliance, I noticed that one project's replication action had failed; as I hadn't checked the Recent Alerts log yet, I was not aware of this. I decided to write a small script to check if there were others that had failed. Nothing fancy: just loop through all projects, look at each project's replication child, compare the values of the last_sync and last_try properties, and print the result if they're not equal. (There are probably more sensible ways of doing this, but at least it involves me getting the chance to put on my headphones and doing just a little bit of coding.)

    script
    // this script will locate failed project-level replication.
    // it will look at the sync times for 'last_sync' and 'last_try'
    // and compare these; if they deviate you should investigate.
    // NOTE! this code is offered 'as is'. Run at your own risk;
    // it will probably work as intended, but in no way can I
    // (or Oracle) be held responsible if your server starts behaving
    // like a three year old kid in a candy store.. (not that mine do,
    // they are very well behaved boys...)
    run('configuration');
    run('storage');
    printf('Host: %s, pool: %s\n', get('owner'), get('pool'));
    run('cd /');
    run('shares');
    proj = list();
    printf("total projects: %d\n", proj.length);
    // just for project-level replication
    for (i = 0; i < proj.length; i++) {
        run('select ' + proj[i]);
        run('replication');
        // get all replication actions
        preps = list();
        for (j = 0; j < preps.length; j++) {
            run('select ' + preps[j]);
            last_sync = get('last_sync');
            last_try = get('last_try');
            // printf("target %s\n", get('target')); // why the flip does this not get the proper name?
            if (!(last_sync.valueOf() === last_try.valueOf())) {
                printf("sync has failed for %s %s\n", proj[i], get('target'));
            } else {
                // printf("OK %s %s\n", proj[i], get('target'));
            }
            run('done'); // done with the replication action
        }
        run('done');
        run('done');
    }
    printf("finished\n");

    For more on how to run or test the script, please look at my previous post. Sample output:

    Host: elb1sn01, pool: exalogic
    total projects: 45
    sync has failed for ACSExalogicSystem cb3a24fe-ad60-c90f-d15d-adaafd595639
    finished
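    In case it helps: one common way to run such a script (an assumption on my part; the previous post mentioned above describes the author's own method) is to save it to a file that begins with the keyword script and ends with a line containing a single period, then pipe it to the appliance over ssh:

    $ ssh root@my-appliance < check_repl.txt

    Here my-appliance and check_repl.txt are placeholders; the trailing period is what tells the appliance CLI to execute the script block.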

    Read the article

  • Solaris 11

    - by user9154181
    Oracle has a strict policy about not discussing product features until they appear in a shipping product. Now that Solaris 11 is publicly available, it is time to catch up. I will shortly be posting articles on a variety of new developments in the Solaris linkers and related bits:
    - 64-bit Archives: After 40+ years of Unix, the archive file format has run out of room. The ar and link-editor (ld) commands have been enhanced to allow archives to grow past their previous 32-bit limits.
    - Guidance: The link-editor is now willing and able to tell you how to alter your link lines in order to build better objects.
    - Stub Objects: This is one of the bigger projects I've undertaken since joining the Solaris group. Stub objects are shared objects, built entirely from mapfiles, that supply the same linking interface as the real object while containing no code or data. You can link to them, but cannot use them at runtime. It was pretty simple to add this ability to the link-editor, but the changes to the OSnet in order to apply them to building Solaris were massive. I discuss how we came to invent stub objects, how we apply them to build the OSnet in a more parallel and scalable manner, and the follow-on opportunities that have emerged from the new stub proto area we created to hold them.
    - The elffile Utility: A new standard Solaris utility, elffile is a variant of the file utility, focused exclusively on linker-related files. elffile is of particular value for examining archives, as it allows you to find out what is inside them without having to first extract the archive members into temporary files. (A quick sketch of its use follows below.)
    This release has been a long time coming. I joined the Solaris group in late 2005, and this will be my first FCS. From a user perspective, Solaris 11 is probably the biggest change to Solaris since Solaris 2.0. Solaris 11 polishes the ground breaking features from Solaris 10 (DTrace, FMA, ZFS, Zones), and uses them to add a powerful new packaging system, numerous other enhancements and features, along with a huge modernization effort. I'm excited to see it go out into the world. I hope you enjoy using it as much as we did creating it. Software is never done. On to the next one...
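    As a small illustration of that last item (a sketch of mine, not from the post; mylib.a is a placeholder archive name, and the exact output format may differ):

    $ elffile /usr/lib/libc.so.1
    $ elffile mylib.a

    The first form reports on a single ELF object; the second describes every member of the archive in place, without extracting anything into temporary files.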

    Read the article

  • Building a Solaris 11 repository without network connection

    - by user12611852
    Solaris 11 has been released and is a fantastic new iteration of Oracle's rock solid, enterprise operating system. One of the great new features is the repository-based Image Packaging System. IPS not only introduces new cloud-based package installation services, it is also integrated with our zones, boot environment and ZFS file systems to provide a safe, easy and fast way to perform system updates.
    My customers typically don't have network access and, in fact, can't connect to any network until they have "Authority to connect." It's useful, however, to build up a Solaris 11 system with additional software using the new Image Packaging System and a locally stored repository. The Solaris 11 documentation describes how to create a locally stored repository, with full explanations of what the commands do. I'm simply providing the quick and dirty steps. The easiest way is to download the ISO image, burn it to a DVD and insert it into your DVD drive. Then, as root:

    pkg set-publisher -G '*' -g file:///cdrom/sol11repo_full/repo solaris

    Now you can install software using the GUI package manager or the pkg commands. If you would like something more permanent (or don't have a DVD drive), however, it takes a little more work:
    1. After installing Solaris 11, download (on another system perhaps) the two files that make up the Solaris 11 repository from our download site.
    2. Sneaker-net the files to your Solaris 11 system.
    3. Unzip and cat the two files together to create one large ISO image. The file is about 6.9 GB in size.
    4. Then:

    zfs create rpool/export/repoSolaris11
    zfs set atime=off rpool/export/repoSolaris11
    zfs set compression=on rpool/export/repoSolaris11    (saves some space)
    lofiadm -a sol-11-1111-repo-full.iso /dev/lofi/1
    mount -F hsfs /dev/lofi/1 /mnt

    You could stop here and set the publisher to point to the /mnt/repo location; however, this mount will not be persistent across reboots. Copy the repository from the mounted ISO image to a permanent, on-disk location:

    rsync -aP /mnt/repo /export/repoSolaris11
    pkgrepo -s /export/repoSolaris11 refresh
    pkg set-publisher -G '*' -g /export/repoSolaris11/repo solaris

    You now have a locally installed repository for adding additional software packages to Solaris 11. The documentation also takes you through publishing your repository on the network so that others can access it.
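    As a quick verification before pointing other systems at the repository (my own habit rather than a step from the original post), you can ask pkgrepo and pkg what they see:

    pkgrepo info -s /export/repoSolaris11/repo
    pkg publisher

    The first command should report the solaris publisher with its package count and status; the second should list the file-based origin you just configured.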

    Read the article

  • The Minimalist Approach to Content Governance - Create Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick.
    In this installment of our Minimalist Approach to Content Governance, we finally get to the fun part of the content creation process! Once the content requester has addressed the items outlined in the Request Phase, it is time to set up and begin the production of content. For this to be done correctly, it is important that the content be assigned appropriate workflow and security information. As in our prior phase, let's take a look at what can be done to streamline this process, as contributors are focused on getting information to their end users as quickly as possible. This often means that details around how to ensure that the materials are properly managed can be overlooked, but fortunately there are some techniques that leverage our content management system's native capabilities to automatically take care of some of the details.
    1. Determine Access
    Why - Even if content does not need to be restricted for security reasons, it is helpful to apply access rights so that the content ends up being visible only to the users it relates to. This will greatly improve the user experience. For instance, if your team is working on a group project, many of your fellow company employees do not need to see the content that is being worked on for that project.
    How - Make use of native content features that allow propagation of security and metadata from parent folders within your content system that have been set up for your particular effort. This makes it painless to enforce security as well as metadata policies for even the most unorganized users. The default settings at a parent level can be set once the content creation request has been accepted and a location in the content management system is assigned for your specific project.
    Impact - Users can find information with less effort, as they will only be exposed to what they need for their work, and they can leverage advanced search features to take advantage of metadata assigned to content. The combination of default security and metadata will also help in running reports against the content in the Manage and Retire stages that we will discuss in the next two posts.
    2. Assign Workflow (optional, depending on the nature of the content)
    Why - Every case for workflow is going to be a bit different, but it generally involves ensuring that content conforms to management, legal and/or editorial requirements.
    How - Oracle's Universal Content Management offers two ways of workflowing content without much effort. Workflow can be applied to content based on Criteria acting on metadata, or explicitly assigned to content with a Basic workflow.
    Impact - Any content that needs additional attention before release is addressed, allowing users to comment and version until a suitable result is reached.
    By using inheritance from parent folders within the content management system, content can automatically be given the right security, metadata and workflow information for a particular project's content. This relieves management teams and content contributors of the burden of doing this for every piece of content. We will cover more about the Manage phase of the content lifecycle in our next installment.

    Read the article

  • Solaris will continue to support Intel's Xeon processors; its lead reveals the first details of the next update

    Solaris will continue to support Intel's Xeon processors. The head of the platform at Oracle reveals the first details of the next update. While visiting Paris, the person in charge of Solaris at Oracle - Joost Pronk - confirmed that the OS, « at the heart of the strategy for the new engineered systems (Exadata, Exalogic and SPARC SuperCluster...), from the disks all the way up to the applications », would continue to be developed to remain compatible with both SPARC and Intel processors. « No matter what you are told, or what you read or hear elsewhere, I am telling you: Solaris will support SPARC and Intel Xeons », assures the...

    Read the article

  • How to set the initial component focus

    - by frank.nimphius
    In ADF Faces, you use the af:document tag's initialFocusId to define the initial component focus. For this, specify the id property value of the component that you want to put the initial focus on. Identifiers are relative to the component, and must account for NamingContainers. You can use a single colon to start the search from the root, or multiple colons to move up through the NamingContainers - "::" will pop out of the component's naming container and begin the search from there, and ":::" will pop out of two naming containers and begin the search from there. Alternatively, you can add the naming container IDs as a prefix to the component id, e.g. nc1:nc2:comp1.
    http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e12419/tagdoc/af_document.html
    To set the initial focus to a component located in a page fragment that is exposed through an ADF region, keep in mind that the ADF Faces region - af:region - is a naming container too. To address an input text field with the id "it1" in an ADF region exposed by an af:region tag with the id "r1", you use the following reference in af:document:
    <af:document id="d1" initialFocusId="r1:0:it1">
    Note the "0" index in the client id. Also, make sure the input text component has its clientComponent property set to true, as otherwise no client component exists to put the focus on.

    Read the article
