Search Results

Search found 6031 results on 242 pages for 'imaginary numbers'.

Page 183/242 | < Previous Page | 179 180 181 182 183 184 185 186 187 188 189 190  | Next Page >

  • ATG Live Webcast Nov. 29th: Endeca "Evolutionizes" E-Business Suite

    - by Bill Sawyer
    If you have ever wanted any of the following within Oracle E-Business Suite:

    - Complete Data View
    - Advanced Searching Across Organizations and Flexfields
    - Advanced Visualization including Charts, Metrics, and Cross Tabs
    - Guided Navigation

    then you might want to attend this webcast to learn more about Oracle Endeca's integration with Oracle E-Business Suite. Oracle Endeca includes an unstructured data correlation and analytics engine, together with catalog search and guided navigation capabilities. This webcast focuses on the details behind Oracle Endeca's integration with Oracle E-Business Suite and demonstrates how you can extend the use of Oracle Endeca into other areas of Oracle E-Business Suite.

    Date: Thursday, November 29, 2012
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenter: Osama Elkady, Senior Director

    Webcast Registration Link (Preregistration is optional but encouraged)

    To hear the audio feed:
        Domestic Participant Dial-In Number: 877-697-8128
        International Participant Dial-In Number: 706-634-9568
        Additional International Dial-In Numbers Link
        Dial-In Passcode: 103192

    To see the presentation, the Direct Access Web Conference details are:
        Website URL: https://ouweb.webex.com
        Meeting Number: 595335921

    If you miss this webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay, and you can find our archive of past webcasts and training here. If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • Lenovo W520 back usb port not working

    - by jaudette
    The USB port on the back of my laptop (on the right side, viewed from the user's perspective) is not working. Does anybody know if this port can be made to work, and what its port number is? Here is my lsusb output, in case it helps:

        % lsusb
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 003 Device 002: ID 046d:c01e Logitech, Inc. MX518 Optical Mouse
        Bus 003 Device 003: ID 046d:c318 Logitech, Inc. Illuminated Keyboard
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 003: ID 0765:5001 X-Rite, Inc. Huey PRO Colorimeter
        Bus 001 Device 004: ID 147e:2016 Upek Biometric Touchchip/Touchstrip Fingerprint Sensor
        Bus 001 Device 005: ID 0a5c:217f Broadcom Corp. Bluetooth Controller
        Bus 001 Device 006: ID 04f2:b217 Chicony Electronics Co., Ltd Lenovo Integrated Camera (0.3MP)
        Bus 002 Device 003: ID 17ef:1003 Lenovo Integrated Smart Card Reader

    I am running 12.10, upgraded from 12.04, but it did not work in 12.04 either. The two USB ports on the left work just fine.

    EDIT: I just updated my BIOS from 1.32 to 1.39; no change in behaviour. The port does not even power up my devices.

    EDIT 2: Booted up Windows, and the port is working there. I went into the Device Manager and looked at the USB settings. I found my USB drive on Port 2, Hub 3; I just don't know how that relates to the bus and device numbers in Linux. In Windows, the smart card reader was on the USB hub located at port 1, hub 2; the fingerprint reader and Bluetooth were on the USB hub located at port 1, hub 1.

    EDIT 3: Went and looked at this SO post. Tried to watch my kern.log file with tail -f /var/log/kern.log. Got some activity when plugging/removing devices in other ports, but nothing happens when connecting a device to that port. It really looks disabled. Looked at /sys/bus/usb/devices/usb1/power/control (and likewise for usb2-usb4) and they are all set to auto. As expected, usb4 has version 3.00; the others (usb1-usb3) are 2.00.
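    For anyone digging further, a possible next step is to check the port topology and hotplug events directly. This is only a sketch of the usual diagnostics, not a known fix for this machine, and the bus picked below is illustrative -- match it against your own lsusb -t output:

        # Show the physical port topology, not just bus/device numbers
        lsusb -t

        # Watch for kernel hotplug events while plugging into the dead port;
        # total silence suggests the port is disabled below the OS (EC/BIOS)
        udevadm monitor --kernel --subsystem-match=usb

        # Check that ports on the suspect controller accept new devices
        cat /sys/bus/usb/devices/usb2/authorized_default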

    Read the article

  • Oracle GoldenGate 11gR2 Event Marker System

    - by Doug Reid
    Oracle GoldenGate 11gR2 includes a number of refinements to the Event Marker system. Using event markers enables GoldenGate processes to take a defined action based on an event in the data stream. This feature within Oracle GoldenGate simplifies methods to embed specific custom processing in the areas of error handling, alerts, and notification. The event marker system effectively allows DML-driven workflows to be created within GoldenGate and enables customers to craft non-standard processing based on special events. There are a number of supported event actions, including trace, log, checkpoint before, suspend, abort, and several others. With 11gR2, events can now be triggered by DDL operations, and variables can be passed in and out of the system to shell scripts. Some good use cases for this feature are:

    - Automatic switchover to the secondary system during planned outages
    - Better monitoring over source systems' performance and automated switchover to the standby system in case of an outage with the primary system
    - Automatic switchover from initial load to changed data movement
    - Automatic synchronization of any type of batch processing taking place on both the source and target databases for database consistency
    - Automatic stoppage of the Delivery module to allow end-of-day reporting
    - Finding, tracking, and reporting on transactions that are of interest, including the ones that do not have primary keys or transaction record numbers

    If you would like to see a demo, please visit our youtube channel (http://youtube.com/oraclegoldengate). To learn more about the new features of Oracle GoldenGate 11gR2 and to ask questions of the PM team, please join us on September 12th at 8am or 10am PST for our live webcast. Click here to register.
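    To give a flavor of how event markers are wired up in practice, here is a minimal parameter-file sketch. The schema, table, and column names are invented for the example, and the exact EVENTACTIONS grammar should be checked against the 11gR2 reference documentation:

        -- Replicat sketch: when an end-of-day marker row arrives in a
        -- control table, log a message and suspend the process so that
        -- end-of-day reporting can run against a quiet target.
        MAP src.JOB_CONTROL, TARGET tgt.JOB_CONTROL,
        FILTER (@STREQ (job_status, 'EOD')),
        EVENTACTIONS (LOG INFO, SUSPEND);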

    Read the article

  • Announcing Key Functional White Papers for SIM and ReIM

    - by Oracle Retail Documentation Team
    Oracle Retail has published two new documents on My Oracle Support (https://support.oracle.com) that provide partners and retailers with deeper functional information about two products: Oracle Retail Store Inventory Management (SIM) and Oracle Retail Invoice Matching.

    Oracle Retail Store Inventory Management Item Configuration White Paper (Doc ID 1507221.1)

    There is functionality within the Store Inventory Management system related to item configuration that spans multiple concepts and applies to the application as a whole rather than to a specific area. This white paper covers numerous topics around item configuration, including:

    - Item Transaction Levels
    - Item Long Description
    - Pack Size
    - Standard Unit of Measure
    - Standard Unit of Measure Conversion
    - Pack Items
    - Simple Pack Conversion Items (Notional Packs)
    - Ranging Items
    - Item Status
    - Non-Sellable Items
    - Type-2 Item Recognition
    - UPC-E Barcodes
    - Non-Inventory Items
    - Consignment and Concession Items
    - Quick Response Codes

    Oracle Retail Invoice Matching Financial Transactions (Doc ID 1500209.1)

    This document explains the financial transactions that are posted by Oracle Retail Invoice Matching (ReIM). The scope of the document is limited to ReIM transactions only; it does not explain Retail Merchandising System (RMS), Finance, or Accounts Receivable transactions. ReIM follows the double-entry accounting standard, which works by recording the debit and credit of each financial transaction belonging to each party involved: each transaction means a profit to one account (debit) and a loss to another account (credit). Full invoice match processing is completed in ReIM, with payment recommendations communicated to Oracle Accounts Payable. ReIM matches merchandise orders and receipts against merchandise invoices, performing automated and manual matching as well as discrepancy-resolution processing. Matched invoices are posted to interface staging tables specifying the amount and date to pay, vendor, site ID, General Ledger Chart of Accounts (GL CoA) information, and payment terms. Other payables documents, including debit memos, credit memos, and credit notes, are also interfaced to Accounts Payable through the ReIM staging tables (IM_AP_STAGE_HEAD and IM_AP_STAGE_DETAIL). For information about how ReIM engages in this processing, see the latest Oracle Retail Invoice Matching Operations Guide. Certain ReIM transactions are not interfaced to Oracle Payables, but instead are interfaced to Oracle General Ledger through the IM_FINANCIAL_STAGE table. When analyzing transactions posted through the staging tables, retailers should note the transaction type (Standard/Credit) as well as the sign of the amount field. Technically, a negative sign on a credit transaction changes the transaction to a debit entry, and vice versa. This document is concerned with the financial meaning of the transactions, and avoids a discussion of negative numbers in T-charts.
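    For readers who end up inspecting the staging data themselves, the sign convention above can be illustrated with a short query. This is only a sketch: the table name comes from the document, but the column names here are invented, and the real layout is in the ReIM Operations Guide:

        -- Hypothetical columns: tran_type ('Standard' or 'Credit') and amount.
        -- Per the note above, a negative amount posts to the opposite side
        -- of whatever the stated transaction type suggests.
        SELECT tran_type,
               amount,
               CASE WHEN amount < 0 THEN 'posts to opposite side'
                    ELSE 'posts as stated type' END AS posting_note
          FROM im_financial_stage;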

    Read the article

  • Does the method of adjustment matter, or just the final calibration?

    - by Steve
    A company produces software (and hardware) that is used both to perform automatic adjustments on electronic test equipment and to perform calibrations of the same equipment. The results of the calibrations are put onto a certificate of calibration that is sent to the customer along with the equipment. This calibration certificate states various conditions of the calibration, such as what hardware (models/serial numbers) and software (version) was used to perform the calibration, as well as things like environmental conditions.

    Assuming that the software used to produce the data listed on the certificate of calibration must have gone through a test/release process and must be considered "released" software -- does this also mean that the software used for adjustment must be released? I believe that the method (software, environmental conditions, etc.) used or present during adjustment doesn't matter; all that really matters is the end result of the calibration, the conditions present during the calibration, and whether or not the equipment was within specification.

    The real question I'm hoping to get answered: is there a reputable source (e.g. NIST or somewhere similar) that addresses this question? (I have searched...)

    The thinking is that during high-volume production runs, the "unreleased" system can be used to perform adjustments, as long as a released system is used to perform the calibrations, since the time required to perform the adjustments is much longer than the calibration. This unreleased system will eventually be released for use, but currently is not.

    Also, please note that there is a distinction between "adjustment" and "calibration". The definition from the BIPM International Vocabulary of Metrology, 2.39:

        Operation that, under specified conditions, in a first step, establishes a relation between the quantity values with measurement uncertainties provided by measurement standards and corresponding indications with associated measurement uncertainties (of the calibrated instrument or secondary standard) and, in a second step, uses this information to establish a relation for obtaining a measurement result from an indication.

    Followed by NOTE 2 (emphasis in original text):

        Calibration should not be confused with adjustment of a measuring system, often mistakenly called "self-calibration", nor with verification of calibration.

    As a side note, I'm not sure why this got downvoted. It's about software and its use before and after release. I believe there is a best practice that can be applied, and this is (hopefully) not primarily opinion-based.

    Read the article

  • ATG Live Webcast Dec. 6th: Minimizing EBS Maintenance Downtimes

    - by Bill Sawyer
    This webcast provides an overview of the plans and decisions you can make, and the actions you can take, that will help you minimize maintenance downtimes for your E-Business Suite instances. It is targeted to system administrators, DBAs, developers, and implementers. This session, led by Elke Phelps, Senior Principal Product Manager, and Santiago Bastidas, Principal Product Manager, will cover best practices, tools, utilities, and tasks to minimize your maintenance downtimes during the four key maintenance phases. Topics will include:

    - Pre-Patching: Reviewing the list of patches and analyzing their impact
    - Patching Trials: Testing the patch prior to actual production deployment
    - Patch Deployment: Applying patches to your system
    - Post-Patching Analysis: Validating the patch application

    Date: Thursday, December 6, 2012
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenters: Elke Phelps, Senior Principal Product Manager
                Santiago Bastidas, Principal Product Manager

    Webcast Registration Link (Preregistration is optional but encouraged)

    To hear the audio feed:
        Domestic Participant Dial-In Number: 877-697-8128
        International Participant Dial-In Number: 706-634-9568
        Additional International Dial-In Numbers Link
        Dial-In Passcode: 103200

    To see the presentation, the Direct Access Web Conference details are:
        Website URL: https://ouweb.webex.com
        Meeting Number: 595757500

    If you miss this webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay, and you can find our archive of past webcasts and training here. If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • IPS Package Groups

    - by Alan_Solaris_RE
    IPS group packages consist solely of dependencies on other packages that make up a logical grouping of software. These are similar to, but not the equivalent of, Solaris 10 metaclusters. The main difference is that metaclusters are nested subsets ranging from a minimal install to nearly all packages on the media. Group packages have no such hierarchy: they can overlap other groups, or be completely disjoint sets. A group dependency is set this way in an IPS package manifest file:

        depend fmri=full/pkg/name type=group

    Current Solaris Groups

    Solaris currently has 4 system groups defined. These are used for different types of installation, and are included in the XML manifest files used by the various Solaris installers:

        Package Name                       | Summary                                    | Description                                                | Default Installation For
        group/system/solaris-desktop       | Oracle Solaris Desktop                     | Provides an Oracle Solaris desktop environment             | Live Media
        group/system/solaris-large-server  | Oracle Solaris Large Server                | Provides an Oracle Solaris large server environment        | Text Installer
        group/system/solaris-small-server  | Oracle Solaris Small Server                | Provides a useful command-line Oracle Solaris environment  | Zones
        group/system/solaris-auto-install  | Oracle Solaris Automated Installer Client  | Provides an Oracle Solaris Automated Installer client      | Automated Installer

    There are also several "feature" groups such as AMP and GNU Developer Tools. These are provided for convenience, but are not used directly by any installers.

    Retrieving Group Package Information

    A listing of all current groups can be found with the command:

        pkg info -r group/*

    A listing of all the packages in a group can be obtained with:

        pkg contents -o fmri -H -rt depend -a type=group groupname

    An example:

        $ pkg contents -o fmri -H -rt depend -a type=group solaris-desktop
        archiver/gnu-tar
        audio/audio-utilities
        codec/flac
        codec/libtheora
        codec/ogg-vorbis
        codec/speex
        communication/im/pidgin
        etc.

    You can determine which package group is currently installed on your system:

        $ pkg list group/system/\*

    Output would look like:

        NAME (PUBLISHER)                  VERSION                  IFO
        group/system/solaris-desktop      0.5.11-0.175.0.0.0.0.0   i--

    Note that there are no version numbers associated with a group package dependency. The package version that best fits the system will be used, based on other dependencies such as what is listed in incorporation files.

    Installing a Group

    To install a group, simply use the group package name as you would any other package:

        $ pkg install solaris-small-server

    If you want to exclude a package from installing, you can use the --reject flag:

        $ pkg install --reject audio/audio-utilities solaris-desktop

    Creating Your Own Group

    To create your own group package, you can follow the pkg(5) documentation on how to create a package, and use this action for each package that is part of your group (see the sketch below):

        depend fmri=full/pkg/name type=group
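    Putting the pieces together, a complete group manifest might look like the following minimal sketch. The group name, version, and member packages are invented for the example; pkg(5) describes the full set of actions a publishable manifest needs:

        set name=pkg.fmri value=group/site/site-tools@1.0
        set name=pkg.summary value="Example group of site administration tools"
        set name=pkg.description value="Everything our site expects on every host"
        depend fmri=archiver/gnu-tar type=group
        depend fmri=editor/vim type=group
        depend fmri=web/curl type=group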

    Read the article

  • Nvidia driver overscan issue second monitor via dvi-d cable

    - by benmichael
    OK, I know that I have a bit of a bizarre setup, but here goes. I have an old laptop, an HP Pavilion 6000. The graphics card in there is a GeForce 7150M. The monitor connection is an old 18-pin. The external monitor I use is a Samsung SyncMaster 2333. Don't ask me why, but this monitor only has a DVI-D connection (yes, I have searched it). So I have the monitor plugged into the laptop. If I use any of the Nvidia proprietary drivers and try to set the resolution to 1920x1080 (the monitor's native resolution), I get a massive overscan issue. Over the years I have tried to get this to work, tinkering with my xorg.conf to death. I have also tried this on every Ubuntu since 10.04, on all the corresponding Lubuntus, and on all the Linux Mints since Lisa. Exact same issue. I have even tried it in WinDoze, and it works perfectly there (although I did get the error once, but was unable to reproduce it). Using the open-source drivers it works perfectly if, and only if, I switch off the laptop monitor (this makes no difference with the Nvidia drivers). I would have happily gone on using the open-source drivers, except that since upgrading to Lubuntu 12.10, the open-source drivers make my monitor completely hazy and show the same overscan issue until I (through the haze, only because I know where things are) go to the monitor settings, activate the laptop's monitor, then deactivate it, and suddenly it comes right. I have to do this every time. So I have to find a way to fix one of them, and I may as well tackle the proprietary drivers, hence this overlong question. Amidst other things, I have tried nvidia-settings, but because the monitor is connected via the old 18-pin, it is detected as a VGA monitor and I get no overscan-correction options. I have tried custom modelines (although there are always more of those to try), I have tried using xrandr, and I have tried all the FlatPanelOptions. What I have not tried is a Gentoo build, as I no longer have time for that installation, but up to about three years ago, when I ran Gentoo exclusively, I did not have this issue. Below is a link to an image with a red highlight around the portion of the screen visible to me; the numbers around it are the numbers of pixels which are cut off. This does seem to drift a few pixels every now and then. Thanks in advance. Nvidia driver issue image

    Read the article

  • How to get lookahead symbol when constructing LR(1) NFA for parser?

    - by greenoldman
    I am reading an explanation (from the awesome "Parsing Techniques" by D. Grune and C.J.H. Jacobs; p. 292 in the 2nd edition) of how to construct an LR(1) parser, and I am at the stage of building the initial NFA. What I don't understand is how to get/compute a lookahead symbol. Here is the example grammar from the book:

        S -> E
        E -> E - T
        E -> T
        T -> ( E )
        T -> n

    n is a terminal. The "weird" transitions for me are in this sequence:

        1) S -> . E          eof
        2) E -> . E - T      eof
        3) E -> . E - T      -
        4) E -> E . - T      -
        5) E -> E - . T      -

    (Note: in the above table, the state numbers are in front and the lookahead symbol is at the end.)

    What puzzles me is that the transition from (4) to (5) means reading the - token, right? So how is it that - is still a lookahead symbol, and, more importantly, why is eof no longer a lookahead symbol? After all, in an input such as n - n eof there is only one - symbol. My naive thinking tells me (5) should be written as:

        5) E -> E - . T      - eof

    And another thing: n is a terminal. Why is it not used at all as a lookahead symbol? I mean, we expect to see - or (, which is OK, but does the absence of n mean we are sure it won't appear in the input?

    Update: after more reading I am only more confused ;-) I.e. what is really a lookahead? Because I see such a state as (p. 292, 2nd column, 2nd row):

        E -> E . - T      eof

    The lookahead says eof but the incoming input says -. Isn't that a contradiction? And it is not only in this book.
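    For anyone puzzling over the same page, the standard LR(1) closure rule may help frame the question; this is a sketch in generic LR(1) notation, not necessarily the book's exact presentation. For an item [A -> α . B β, a], closure adds an item [B -> . γ, b] for every production B -> γ and every b in FIRST(βa), so a new item's lookahead is computed from what follows B, not inherited unchanged:

        [S -> . E, eof]
          closure with B = E, β = ε:      add [E -> . E - T, eof]   (FIRST(ε eof) = {eof})
          closure again on the nested E,
          this time with β = "- T":       add [E -> . E - T, -]     (FIRST(- T eof) = {-})

    Shifting a symbol never changes an item's lookahead, which is why (5) keeps - after the - is read; and the eof variant is not lost but lives in a parallel item of the same construction, [E -> E - . T, eof], derived from item (2).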

    Read the article

  • Cone of Uncertainty in classic and agile projects

    - by DigiMortal
    David Starr from Scrum.org gave an interesting session at TechEd Europe 2012 -- Implementing Scrum Using Team Foundation Server 2012. One of the interesting things for me was what the Cone of Uncertainty looks like in agile projects (or how agile methodologies distort the cone we know from waterfall projects). This posting illustrates two cones: one for the waterfall world and one for the agile world.

    Cone of Uncertainty

    The Cone of Uncertainty was introduced to the software development community by Steve McConnell, and it visualizes how accurate our estimates are over a project timeline. Here is the Cone of Uncertainty when we deal with waterfall and Big Design Up-Front (BDUF).

    Cone of Uncertainty. Taken from the MSDN Library page Estimating.

    The closer we are to the project's end, the more accurate our estimates are. When the project ends, we know exactly how much time every task took. As we can see, the cone is wide at the point where we usually have to give our estimates -- somewhere between Initial Project Concept and Requirements Complete. Don't ask me why Initial Project Concept is the stage where some companies give their best estimates -- they just do it every time and don't learn a thing later. This cone is inevitable for software development, and the agile methodologies that try to make the software world better are also able to change the cone.

    Cone of Uncertainty in agile projects

    Agile methodologies usually try to avoid BDUF, waterfalls, and other things that make all our mistakes highly expensive. Of course, we are not the only ones who make mistakes -- don't forget our dear customers either. Agile methodologies treat development as creative work and focus on making it better. One main trick is to focus on small and short iterations. What does that mean? We are estimating functionalities that are easier for us to understand and implement, and therefore our estimates are more accurate. As we move from a few big iterations to many small iterations, we also distort and slice the Cone of Uncertainty. This is how the cone looks when agile methodologies are used.

    Cone of Uncertainty in agile projects. We have more cones to live with, but they are way smaller.

    I don't have any numbers to put here because I didn't find any, but this "chart" should still give you the point: more, smaller iterations cause more, but way smaller, cones of uncertainty. We can handle these small uncertainties because the steps we take to complete small tasks are more predictable and don't often grow over our heads.

    One more note. Consider that both of the charts given in this posting describe exactly the same phase of the same project -- just the uncertainties are different.

    Read the article

  • Still prompted for a password after adding SSH public key to a server

    - by Nathan Arthur
    I'm attempting to set up a git repository on my DreamHost web server by following the "Setup: For the Impatient" instructions here. I'm having difficulty setting up public key access to the server. After successfully creating my public key, I ran the following command:

        cat ~/.ssh/[MY KEY].pub | ssh [USER]@[MACHINE] "mkdir ~/.ssh; cat >> ~/.ssh/authorized_keys"

    ...replacing the appropriate placeholders with the correct values. Everything seemed to go through fine. The server asked for my password and, as far as I can tell, executed the command. There is indeed a ~/.ssh/authorized_keys file on the server.

    The problem: when I try to SSH into the server, it still asks for my password. My understanding is that it shouldn't be asking for my password anymore. What am I missing?

    EDIT: SSH -v log:

        Macbook:~ michaeleckert$ ssh -v [USER]@[SERVER URL]
        OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: /etc/ssh_config line 20: Applying options for *
        debug1: /etc/ssh_config line 53: Applying options for *
        debug1: Connecting to [SERVER URL] [[SERVER IP]] port 22.
        debug1: Connection established.
        debug1: identity file /Users/michaeleckert/.ssh/id_rsa type -1
        debug1: identity file /Users/michaeleckert/.ssh/id_rsa-cert type -1
        debug1: identity file /Users/michaeleckert/.ssh/id_dsa type -1
        debug1: identity file /Users/michaeleckert/.ssh/id_dsa-cert type -1
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_6.2
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.5p1 Debian-6+squeeze3
        debug1: match: OpenSSH_5.5p1 Debian-6+squeeze3 pat OpenSSH_5*
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Server host key: RSA [STRING OF NUMBERS AND LETTERS SEPARATED BY SEMI-COLONS]
        debug1: Host '[SERVER URL]' is known and matches the RSA host key.
        debug1: Found key in /Users/michaeleckert/.ssh/known_hosts:2
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Trying private key: /Users/michaeleckert/.ssh/id_rsa
        debug1: Trying private key: /Users/michaeleckert/.ssh/id_dsa
        debug1: Next authentication method: password
        [USER]@[SERVER URL]'s password:
        debug1: Authentication succeeded (password).
        Authenticated to [SERVER URL] ([[SERVER IP]]:22).
        debug1: channel 0: new [client-session]
        debug1: Requesting [email protected]
        debug1: Entering interactive session.
        debug1: Sending environment.
        debug1: Sending env LANG = en_US.UTF-8
        Welcome to [SERVER URL]
        Any malicious and/or unauthorized activity is strictly forbidden. All activity may be logged by DreamHost Web Hosting.
        Last login: Sun Nov 3 12:04:21 2013 from [MY IP]
        [[SERVER NAME]]$
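    Two things stand out in that log and are worth checking; this is a sketch of the usual suspects under default sshd settings, not a certain diagnosis. First, ssh only tried the default identity files id_rsa and id_dsa, so a key pair with a custom name is never offered unless you point ssh at it. Second, sshd silently skips authorized_keys when the home directory or ~/.ssh is group- or world-writable:

        # Offer the custom-named key explicitly (or add an IdentityFile
        # entry for this host in ~/.ssh/config)
        ssh -i ~/.ssh/[MY KEY] [USER]@[MACHINE]

        # On the server: tighten the permissions that make sshd ignore
        # authorized_keys
        chmod go-w ~
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys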

    Read the article

  • core.* files eating up server space (~50MB)

    - by skytreader
    I'm renting server space from someone and, upon logging in to my control panel after quite some time, noticed an abnormal spike (~50MB) in disk usage. Upon investigating, I found a lot of core.* files scattered around my public_html directory. Each one is more than 5MB in size but no more than 6MB. The * part is all numbers (in regex terms, that should be core\.\d+). I downloaded one and checked the contents. There was a lot of balderdash characters (NUL mostly, but also a scattering of ETB, ETX, STX), but there's this block of readable text which says:

        This text is part of the internal format of your mail folder, and is not a real message. It is created automatically by the mail system software. If deleted, important folder data will be lost, and it will be re-created with the data reset to initial values.

    Pretty self-explanatory. A few blocks above that text are some more readable messages that look like logs but are sandwiched between non-printable characters. I've extracted some below:

        Scan not valid for mh mailboxes
        Bogus character 0x%x in news state
        Can't rewrite news state %.80s
        Error closing backup news state %.80s
        No state for newsgroup %.80s found

    Now, a few concerns:

    Am I under attack? The messages seem to be about my webmail, but I don't use my personal webmail that much -- only for a vanity email address and an inbox for an outdated comments system. However, lately I seem to notice a spike in the spam for my vanity mail. (Note: the comments system is covered by a captcha, but every now and then some get through. My vanity email has a spam filter, but it isn't as good as I'd like.)

    Next, if this is a feature, can I turn it off? Is it advisable to? I've only 150MB, so you see why I'm fretting over a 50MB spike.

    Some final details: my only server-side scripts are in PHP. The directory which accumulated the greatest number of these core files is the one containing the WordPress-managed subdomain of my site. I manage my server through cPanel. Lastly, I decided to delete these files, and after some checking nothing seems amiss in my websites or my mail. They were indeed the ones responsible for the ~50MB spike, as my disk space usage is back to expected levels.
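    For anyone who finds the same debris: core.* files are core dumps, written when a process crashes, and the mailbox text inside these ones points at a mail-handling process rather than an intrusion. A quick, hedged way to confirm which binary is crashing (the file name below is illustrative, and inspecting further with gdb requires the crashed binary):

        # Ask what program the core was dumped from
        file core.1234567
        # typical output: "ELF ... core file ... from 'imapd'"

        # Stop your own shell and cron jobs from writing core dumps
        ulimit -c 0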

    Read the article

  • ATG Live Webcast Dec. 13th: EBS Future Directions: Deployment and System Administration

    - by Bill Sawyer
    This webcast provides an overview of the improvements to Oracle E-Business Suite deployment and system administration that are planned for the upcoming EBS 12.2 release. It is targeted to system administrators, DBAs, developers, and implementers. This webcast, led by Max Arderius, Manager Applications Technology Group, compares existing deployment and system administration tools for EBS 12.0 and 12.1 with the upcoming functionality planned for EBS 12.2. This was a very popular session at OpenWorld 2012, and I am pleased to bring it to the ATG Live Webcast series. This session will cover:

    - Understanding the Oracle E-Business Suite 12.2 Architecture
    - Installing & Upgrading EBS 12.2
    - Online Patching in EBS 12.2
    - Cloning in EBS 12.2

    Date: Thursday, December 13, 2012
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenter: Max Arderius, Manager Applications Technology Group

    Webcast Registration Link (Preregistration is optional but encouraged)

    To hear the audio feed:
        Domestic Participant Dial-In Number: 877-697-8128
        International Participant Dial-In Number: 706-634-9568
        Additional International Dial-In Numbers Link
        Dial-In Passcode: 103194

    To see the presentation, the Direct Access Web Conference details are:
        Website URL: https://ouweb.webex.com
        Meeting Number: 593672805

    If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay. And, you can find our archive of our past webcasts and training here. If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • What does path finding in internet routing do and how is it different from A*?

    - by alan2here
    Note: If you don't understand this question, then feel free to ask for clarification in the comments instead of voting down; it might be that this question needs some more work at the moment. I've been directed here from the Stack Exchange chat room Root Access because my question didn't fit on Super User.

    In many respects, path-finding algorithms like A* are very similar to internet routing. For example: a node in an A* path-finding system can search for a path through edges between other nodes; a router that's part of the internet can search for a route through cables between other routers. In the case of A*, open and closed lists are kept by the system as a whole, separately from any individual node, as well as each node being able to temporarily store a state involving several numbers.

    Routers on the internet seem to have remarkable properties, as I understand it:

    - They are very performant.
    - New nodes can be added at any time that use a free address from a finite (not tree-like) address space.
    - It's real routing, like A*; there's never any doubling back, for example.
    - Similar IP addresses don't have to be geographically nearby.
    - The network reacts quickly to changes to the network's shape, for example if a line is down.
    - Routers share information, and it takes time for new IPs to be registered everywhere, but presumably every router doesn't have to store a list of all the addresses each of its directions leads most directly to.

    I'm looking for a basic, general, high-level description of the algorithm's workings from the point of view of an individual router. Does anyone have one?

    I presume public internet routers don't use A*, as the overheads would be too large and it would scale too poorly. I also presume there is a single method worldwide, because it seems as if it must involve a lot of transferring data to update and communicate a reasonable amount of state between neighboring routers. For example, perhaps the amount of data that needs to be stored in each router scales logarithmically with the number of routers that exist worldwide, the detail and reliability of the routing is reduced over increasing distances, there is increasing backtracking involved in parts of the network that are less geographically uniform, or maybe each router really does perform an A*-style search, temporarily maintaining open and closed lists when a packet arrives.
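    By way of a concrete contrast with A* (hedged, since real deployments mix several protocols): interior protocols such as RIP use distance-vector routing, where each router keeps only a cost and next hop per destination and repeatedly relaxes its table against neighbors' advertisements (Bellman-Ford); OSPF instead floods link-state information and runs Dijkstra locally; and BGP selects paths between networks by policy. A minimal distance-vector sketch, with invented types and ignoring split horizon, timeouts, and count-to-infinity:

        import java.util.HashMap;
        import java.util.Map;

        /** One routing-table entry: total cost and next hop toward a destination. */
        record Route(int cost, String nextHop) {}

        class Router {
            final Map<String, Route> table = new HashMap<>();

            /** Bellman-Ford relaxation against one neighbor's advertised table.
             *  linkCost is the cost of our direct link to that neighbor. */
            boolean onAdvertisement(String neighbor, int linkCost,
                                    Map<String, Integer> advertised) {
                boolean changed = false;
                for (var e : advertised.entrySet()) {
                    int viaNeighbor = linkCost + e.getValue();
                    Route current = table.get(e.getKey());
                    if (current == null || viaNeighbor < current.cost()) {
                        // Cheaper path found: route this destination via the neighbor
                        table.put(e.getKey(), new Route(viaNeighbor, neighbor));
                        changed = true;
                    }
                }
                return changed; // if true, re-advertise our table to our neighbors
            }
        }

    Note there is no global open/closed list anywhere: each router holds only its own table, which is part of why the scheme scales and why convergence after a topology change takes time.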

    Read the article

  • PASS: FY10 Actuals Posted

    - by Bill Graziano
    Earlier this year we published preliminary fiscal year 2010 financials to the Governance page on the PASS web site. Please remember that FY10 runs from July 1st, 2009 through June 30th, 2010 and includes the November 2009 Summit. We do our fiscal year this way so that the Summit falls earlier in the fiscal year. The financials we had posted were P&L numbers at the portfolio level. Prior to this we had posted our detailed budget, but only posted the auditor's report at the end of each year. Today we updated our published financials to include:

    - Pre-audit actuals from FY10 at the same level as our budget. The document has both actuals and budget for FY10 side by side. This is over 20 pages of detailed financial information covering hundreds of line items.
    - A letter describing key differences between our budget and actuals. I walked through each line item where the difference was greater than $25,000 and explained what happened and why.
    - An updated financial graph going back to 2003 that now includes FY10.

    This update should "close the loop" on our financials. You can now start with the published budget and compare it to the finished financials at the same level of detail. We also plan to publish the auditor's report when that is completed -- as we do every year.

    Overall I'm very happy with how FY10 turned out. Keep in mind that this was the November 2009 Summit, so we were still facing economic challenges. With all that, we were roughly break-even, showing a $15,000 profit on $3.9 million of revenue. I didn't find anything shocking in reviewing our actual vs. budget, but there were a few things that needed explanation. You can see those in the letter on the governance page.

    Please keep in mind that these are the actuals from our operating financials. The auditor may have us make adjustments for depreciation or other financial transactions. We may also account for certain transactions differently for tax purposes than we do for financial reporting purposes. I feel these financial statements give you the clearest picture of how our organization spends its money.

    We were late publishing these this year. We were working through some tax issues, and that delayed our ability to file our final tax forms, which delayed this process. In hindsight I should have published these documents as soon as we had them and not waited for the tax issues. We'll do this better in the future.

    And on a final note, you don't need to log in to view these documents. If you have any questions you can post them here. If we get more than a few questions, we may see about creating some forums for financial issues on the PASS web site.

    Read the article

  • More elegant way to avoid hard coding the format of a a CSV file?

    - by dsollen
    I know this is a trivial issue, but I just feel this can be more elegant. I need to write/read data files for my program; let's say they are CSV for now. I can implement the format as I see fit, but I may need to change that format later. The simplest thing to do is something like:

        out.write(foo.getValue() + "," + bar.getMinValue() + "," + fi.toString());

    This is easy to write, but is obviously guilty of hard-coding and the general "magic number" issue. The format is hard-coded, figuring out the file format requires parsing the code, and changing the format requires changing multiple methods. I could instead have constants specifying the location where I want each variable saved in the CSV file, removing some of the magic numbers, and then save/load via an array indexed by those constants:

        int FOO_LOCATION = 0;
        int BAR_MIN_VAL_LOCATION = 1;
        int FI_LOCATION = 2;
        int NUM_ARGUMENTS = 3;

        String[] outputArguments = new String[NUM_ARGUMENTS];
        outputArguments[FOO_LOCATION] = foo.getValue();
        outputArguments[BAR_MIN_VAL_LOCATION] = bar.getMinValue();
        outputArguments[FI_LOCATION] = fi.toString();
        writeAsCSV(outputArguments);

    But this is extremely verbose and still a bit ugly. It makes it easy to see the format of the existing CSV and to swap the location of variables within the file, but if I decide to add an extra value to the CSV, I need to not only add a new constant but also modify the read and write methods to add the logic that actually saves/reads the argument from the array; I still have to hunt down every method using these variables and change them by hand!

    If I use Java enums I can clean this up slightly, but the real issue is still present. Short of some sort of functional programming (and Java's inner classes are too ugly to be considered functional), I still have no obvious way of clearly expressing which variable is associated with each constant, short of writing (and maintaining) it in the read/write methods. For instance, I still need to write somewhere that FOO_LOCATION specifies the location of foo.getValue(). It seems as if there should be a prettier, easier-to-maintain manner of approaching this.

    Incidentally, I'm working in Java at the moment; however, I am interested conceptually in the design approach regardless of language. Some library in Java that does all the work for me is definitely welcome (though it may prove more hassle to get permission to add it to the codebase than to just write something by hand quickly), but what I'm really asking is more about how to write elegant code if you had to do this by hand.
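    One pattern worth considering, sketched here in the asker's Java setting with invented names: let a single enum own both the column order and the per-column extraction, so adding a field means touching exactly one constant rather than three scattered places:

        import java.util.function.Function;
        import java.util.stream.Collectors;
        import java.util.stream.Stream;

        class DataRow { String foo; int barMin; String fi; }

        // Declaration order *is* the CSV column order; each constant knows how
        // to pull its own value out of a row, so the write method never changes.
        enum CsvColumn {
            FOO(r -> r.foo),
            BAR_MIN_VAL(r -> String.valueOf(r.barMin)),
            FI(r -> r.fi);

            final Function<DataRow, String> extract;
            CsvColumn(Function<DataRow, String> extract) { this.extract = extract; }

            static String toCsvLine(DataRow row) {
                return Stream.of(values())
                             .map(c -> c.extract.apply(row))
                             .collect(Collectors.joining(","));
            }
        }

    Reading is the mirror image: split each line on commas and index the parts by each constant's ordinal(). This keeps the "which constant belongs to which variable" knowledge in one place, which is exactly the maintenance pain described above.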

    Read the article

  • In-Store Tracking Gets a Little Harder

    - by David Dorf
    Remember how Nordstrom was tracking shopper movements within their stores using the unique number, called a MAC, emitted by the WiFi radio in smartphones? The phones didn't need to connect to the network, only have their WiFi enabled, as most people do by default. They did this, presumably, to track shoppers' paths to purchase and better understand traffic patterns. Although there were signs explaining this at the entrances, people didn't like the notion of being tracked. (Never mind that there are cameras in the ceiling watching them.) Nordstrom stopped the program.

    To address this concern, the Future of Privacy Forum, a Washington think tank, created Smart Store Privacy, a do-not-track service that allows consumers to register their MAC address in much the same way people register their phone numbers in the national do-not-call list. A group of companies agreed to respect consumers' wishes and ignore smartphones listed in the database. The database includes Bluetooth identifiers as well. Of course, you could simply turn your Bluetooth and WiFi off when shopping as well.

    Most know that Apple prefers to use BLE beacons to contact and track smartphones within their stores. This feature extends the typical online experience to also work in physical stores. By identifying themselves, shoppers can expect a more tailored shopping experience, much like what we've come to expect from Amazon's website, with product recommendations and offers that are (usually) relevant.

    But the upcoming release of iOS 8 is purported to have a new feature that randomizes the WiFi MAC address of smartphones during the "probing" phase. That is, before connecting to the WiFi network, a random MAC number is used so as to keep the smartphone's real MAC address secret. Unless you actually connect to the store's WiFi, they won't recognize the MAC address.

    The details on this are still sketchy, but if the random MAC is consistent for a short period, retailers will still be able to track movements anonymously, though they won't recognize repeat visitors. That may be sufficient for traffic analytics, but it will stymie targeted marketing. In the case of marketing, using iBeacons with opt-in permission from consumers will be the way forward.

    There is always a battle between utility and privacy, so I expect many more changes in this area. Incidentally, if you'd like to see where beacons are being used, this site tracks them around the world.

    Read the article

  • Fans running very fast on MacBook Pro 8.1 ubuntu 12.04

    - by Tomasz Kacprzak
    I installed Ubuntu 12.04 on a MacBook Pro 8.1, and one of the first things I noticed was that the fans were starting to spin very fast every few minutes for 10-30 seconds and then going back to normal. That was happening even without any processor load, when completely idle. The fans were usually spinning at 4000 RPM and made much noise. The computer was not getting hotter than usual. When running OS X Lion there was no noise at all, fans almost all the time at 2000 RPM.

    I spent some time on it and found out that Precise uses a daemon to control the temperature, called macfanctld. You can use /etc/macfanctld.conf to set the configuration. I found out that the high fan speed is not due to the temperature actually getting hot, but because there are two sensors which report wrong numbers (you can check that using the 'sensors' command):

        TW0P: +129.0°C
        TCTD: +256.0°C
        TCFC: +0.0°C
        TMBS: +0.0°C

    or by setting the macfanctld log level to 2:

        Speed: 4992, *AVG: 56.9C, TC0P: 50.2C, TG0P: 51.5C, Sensors: TB0T:34 TB1T:34 TB2T:33 TC0C:58 TC0D:56 TC0E:59 TC0F:60 TC0P:50 TC1C:58 TC2C:58 TC3C:58 TC4C:57 TCFC:0 TCGC:57 TCSA:53 TCTD:256 TG0D:52 TG0P:52 THSP:42 TM0S:64 TMBS:0 TP0P:54 TPCD:60 TW0P:129 Th1H:51 Th2H:48 Tm0P:40 Ts0P:32 Ts0S:43

    Moreover, TCTD was randomly jumping between temperatures of 0 and 256, so this may be the reason for the unjustified random fan speeds. macfanctld takes an average of the sensors, including the values above, so the actual AVG temperature used to control the fans is wrong, usually biased up -- hence the high RPM and noise.

    The workaround is to use an option in macfanctld.conf which allows you to ignore the malfunctioning sensors:

        exclude: 13 16 21 24

    After reboot the reported temperatures are usually normal and the fans work at reasonable speeds. I tested the response of the fans to heavy processor load by asking MATLAB to invert a 10000x10000 matrix: the AVG temperature jumped to 63 degrees and the fan to a maximum of 6200 RPM, then both came back to normal. So I think it is safe so far.

    There is an expired bug about the failing sensor readings, which it may be good to reopen: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/955538

    My question would be: does anyone know what the failing sensors do, and is there any danger in excluding them? Maybe there is some better solution to this problem?
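    For reference, the workaround sits alongside the daemon's other knobs in /etc/macfanctld.conf; a hedged sketch, assuming the stock option names (values are illustrative, and the sensor indices are specific to this machine's sensor list):

        # /etc/macfanctld.conf
        fan_min: 2000          # floor, matching the OS X idle speed
        log_level: 2           # log the sensor list to see which indices misread
        exclude: 13 16 21 24   # ignore the TW0P/TCTD-style sensors reporting junk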

    Read the article

  • ATG Live Webcast Nov. 8th: Advanced Management of EBS with Oracle Enterprise Manager

    - by Bill Sawyer
    The task of managing and monitoring Oracle E-Business Suite environments can be very challenging. The Application Management Pack plug-in is part of Oracle Enterprise Manager 12c Application Management Suite for Oracle E-Business Suite. The Application Management Pack plug-in is designed to monitor and manage all the different technologies that constitute Oracle E-Business Suite applications, including midtier, configuration, host, and database management -- to name just a few. Customers that have implemented Oracle Enterprise Manager have experienced dramatic improvements in system visibility, diagnostic capability, and administrator productivity. This webcast will highlight the key features and benefits of Oracle Enterprise Manager, the latest version of the Oracle Application Management Suite for Oracle E-Business Suite.

    Advanced Management of Oracle E-Business Suite with Oracle Enterprise Manager

    Date: Thursday, November 8, 2012
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenters: Angelo Rosado, Principal Product Manager, E-Business Suite ATG
                Lauren Cohn, Principal Curriculum Developer, E-Business Suite ATG

    Webcast Registration Link (Preregistration is optional but encouraged)

    To hear the audio feed:
        Domestic Participant Dial-In Number: 877-697-8128
        International Participant Dial-In Number: 706-634-9568
        Additional International Dial-In Numbers Link
        Dial-In Passcode: 103191

    To see the presentation, the Direct Access Web Conference details are:
        Website URL: https://ouweb.webex.com
        Meeting Number: 591460967

    If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay. And, you can find our archive of our past webcasts and training here. If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • Working for Web using open-source Technologies

    - by anirudha
    As Web developers, we all have our own dream of building a great web application. A great application is built on strong discipline and best practices during development, so that we can make modifications more easily in the future if we want to. User feedback also matters, because users tell us what they want or expect from the application we work on day and night. Sometimes they report a nice story, an experience, or a problem they hit with our application. That matters, because they use our software and are part of the process of future development, or of the next version of the application.

    The Web has a good property here: web applications can be updated as soon as possible. With desktop applications, clients run into a number of troubles just trying to use them. For a start, installation never goes right on every system, and big companies spend large amounts of money troubleshooting the problems users have with their software.

    Web applications are a nicer model: there is no installation trouble, everyone has the same experience, and if something goes wrong a patch comes soon, with no waiting for a new version. Chrome is a desktop application (a browser), but it updates itself automatically, so there is no trouble for the user in getting the next version.

    Web application development the Microsoft way has its own rules, patterns, and practices for making better applications in less time. The technologies I want to show you here are some great open-source examples: MySQL, jQuery, and ASP.NET MVC, a framework based on ASP.NET, the server-side technology.

    Before going to the next step, here is the list of software you need in order to fully follow this tutorial:

    - Visual Web Developer 2010 Express Edition
    - MySQL [open-source RDBMS]
    - jQuery [open-source JavaScript library]

    You pay nothing to get any of this software.

    Visual Web Developer can be obtained from Microsoft.com/Express; if you are a student or Web developer, you are eligible to get Visual Studio Professional and many other great pieces of software from Microsoft through their DreamSpark or WebsiteSpark programmes.

    MySQL is a great relational database management system, freely available from MySQL.com. As a database monitoring tool you can use MySQL Workbench, which can be obtained free from the official MySQL website, or one of the many other free tools available for beginning development with MySQL.

    jQuery is a great library for making JavaScript development easier and faster. You can obtain jQuery from jQuery.com, its official website.

    Read the article

  • Determining whether a visitor reached two different pages in one visit

    - by Shaun
    I have a funnel that I would like to track. Tracking this funnel won't work with the default "goal funnel" tracking in Google due to the fact that I am mixing events and pageviews. As such, I've created a series of reports:

    - Visits to demo pages - an inclusion filter on "Page".
    - Triggers an Event on these pages - an inclusion filter on "Page" and "Event Category".
    - Does not bounce - an inclusion filter on "Page" and an exclusion filter on "Exit Page" for these same pages.
    - Reach our storefront - ??
    - Purchase something - an inclusion filter on "Page" and a report that shows "Transactions".

    At a basic level, I need to track users who reached demo pages, then reached any page on our store. Intuitively, I created a segment, used two inclusive "Page" filters (one for the demo pages and one for any page in our store), and combined them with an "AND" operator. I thought this was working until I tried to do the same thing in a dashboard widget and on a custom report. When I tried the same thing in those areas, I got zero results. I figured this might be because widgets and custom report filters function differently from segment filters (the options are different for all of them), so I tried applying my "demo page && store page" segment to a report that gave me a general page list. All I saw was a list of the specific pages. I tried simplifying things by creating a custom report that showed all visits to store pages, then applied a segment that filtered for users who visited demo pages. This got me the same numbers as my "demo page && store page" segment, but showed a list of demo pages. This has led me to believe that the "demo page && store page" segment and the "demo segment && store report" approaches behave the same functionally. However, this experience has left me questioning whether they're giving me what I want. Are these methods showing me all users who reached both sets of pages? Is there a better/easier/more standard way of doing this, aside from looking at visitor flow reports? I'm trying to avoid a combination of custom variables/events and the horizontal funnel approach, since it would consume a large number of our limited goals and seems more complicated than is necessary for tracking this funnel.

    Read the article

  • 2012 EC Election Ballot open; Meet the Candidates Call tomorrow

    - by heathervc
    The JCP Executive Committee (EC) Election ballot is now open, and all of the candidates' nomination materials are now available on JCP.org -- note that two new candidates were nominated late last week: Liferay and North Sixty-One. It is shaping up to be an exciting election this year!

    The ratified candidates are: Cinterion, Credit Suisse, Fujitsu and HP.
    The elected candidates are (9 candidates, 2 open seats): Cisco Systems, CloudBees, Giuseppe Dell'Abate, Liferay, London Java Community, MoroccoJUG, North Sixty-One, Software AG, and Zero Turnaround.

    Tomorrow, 18 October, we will hold an open teleconference for the Java Community to meet the candidates and ask questions regarding their nominations. We hope you will be able to participate in the call. Should the time be inconvenient, a recording will be made available for download, and candidate questions may be posted on this blog entry or sent to [email protected].

    Topic: Meet the EC Candidates
    Date: Thursday, October 18, 2012
    Time: 9:30 am, Pacific Daylight Time (San Francisco, GMT-07:00)
    Meeting Number: 807 818 225
    Meeting Password: MeetEC

    To join the online meeting (now from mobile devices):
    1. Go to https://jcp.webex.com/jcp/j.php?ED=186721592&UID=0&PW=NMmUzNjY5ZTMw&RT=MiM0
    2. If requested, enter your name and email address.
    3. If a password is required, enter the meeting password: MeetEC
    4. Click "Join".

    To view in other time zones or languages, please click the link:
    https://jcp.webex.com/jcp/j.php?ED=186721592&UID=0&PW=NMmUzNjY5ZTMw&ORT=MiM0

    To join the audio conference only:
    +1 (866) 682-4770
    Outside the US: global access numbers https://www.intercallonline.com/portlets/scheduling/viewNumbers/listNumbersByCode.do?confCode=6279803 or +1 (408) 774-4073
    Conference code: 9454597
    Security code: JCPEC (52732)

    For assistance:
    1. Go to https://jcp.webex.com/jcp/mc
    2. On the left navigation bar, click "Support".

    Read the article

  • Making CopySourceAsHtml add-on work with VS2010

    - by DigiMortal
    As there are still bloggers who use the CopySourceAsHtml add-on for Visual Studio to get syntax-highlighted code into their blog posts, and there is no guidance on the CSAH site for making it work with Visual Studio 2010, I will give my guidance here. Almost all code in this blog is syntax-highlighted by this add-on (read more in my post Visual Studio add-in: CopySourceAsHTML). The last version of CSAH is available for VS2008, but it is easy to make it work with VS2010. Just follow these steps:

    1. Close VS2010 if it is open.
    2. Go to the folder MyDocuments\Visual Studio 2010.
    3. Move to the AddIns subfolder (create it if there is no such subfolder).
    4. Create a file called CopySourceAsHtml.AddIn and open it in a text editor.
    5. Paste the following XML into the editor:

        <?xml version="1.0" encoding="utf-8" standalone="no"?>
        <Extensibility xmlns="http://schemas.microsoft.com/AutomationExtensibility">
          <HostApplication>
            <Name>Microsoft Visual Studio Macros</Name>
            <Version>10.0</Version>
          </HostApplication>
          <HostApplication>
            <Name>Microsoft Visual Studio</Name>
            <Version>10.0</Version>
          </HostApplication>
          <Addin>
            <FriendlyName>CopySourceAsHtml</FriendlyName>
            <Description>Adds support to Microsoft Visual Studio 2010 for copying source code, syntax highlighting, and line numbers as HTML.</Description>
            <Assembly>JTLeigh.Tools.Development.CopySourceAsHtml, Version=3.0.3215.1, Culture=neutral, PublicKeyToken=bb2a58bdc03d2e14, processorArchitecture=MSIL</Assembly>
            <FullClassName>JTLeigh.Tools.Development.CopySourceAsHtml.Connect</FullClassName>
            <LoadBehavior>1</LoadBehavior>
            <CommandPreload>0</CommandPreload>
            <CommandLineSafe>0</CommandLineSafe>
          </Addin>
        </Extensibility>

    6. Save the file and close it.
    7. Run VS2010 and activate the add-on if it is not activated yet.

    That's it. If you are a heavy user of CSAH, then I recommend bookmarking this post. :)

    Read the article

  • AS3 - At exactly 23 empty alpha channels, images below stop drawing

    - by user46851
    I noticed, while trying to draw large numbers of circles, that occasionally there would be some kind of visual bug where some circles wouldn't draw properly. Well, I narrowed it down, and have noticed that if there are 23 or more objects with an alpha value of 00 on the same spot, then the objects below don't draw. It appears to be on a pixel-by-pixel basis, since parts of the image still draw. Originally, this problem was noticed with a class that inherited Sprite. It was confirmed to also be a problem with Sprites, and now Bitmaps, too. If anyone can find a lower-level class than Bitmap which doesn't have this problem, please speak up so we can try to find the origin of the problem.

    I prepared a small test class that demonstrates what I mean. You can change the integer value passed to Test() in init() in order to see the three tests I came up with to clearly show the problem. Is there any kind of workaround, or is this just a limit that I have to work with? Has anyone experienced this before? Is it possible I'm doing something wrong, despite the bare-bones implementation?

        package {
            import flash.display.Sprite;
            import flash.events.Event;
            import flash.display.Bitmap;
            import flash.display.BitmapData;

            public class Main extends Sprite {

                public function Main():void {
                    if (stage) init();
                    else addEventListener(Event.ADDED_TO_STAGE, init);
                }

                private function init(e:Event = null):void {
                    removeEventListener(Event.ADDED_TO_STAGE, init);
                    // entry point
                    Test(3);
                }

                private function Test(testInt:int):void {
                    if (testInt == 1) {
                        addChild(new Bitmap(new BitmapData(200, 200, true, 0xFFFF0000)));
                        for (var i:int = 0; i < 22; i++) {
                            addChild(new Bitmap(new BitmapData(100, 100, true, 0x00000000)));
                        }
                    }
                    if (testInt == 2) {
                        addChild(new Bitmap(new BitmapData(200, 200, true, 0xFFFF0000)));
                        for (var j:int = 0; j < 23; j++) {
                            addChild(new Bitmap(new BitmapData(100, 100, true, 0x00000000)));
                        }
                    }
                    if (testInt == 3) {
                        addChild(new Bitmap(new BitmapData(200, 200, true, 0xFFFF0000)));
                        for (var k:int = 0; k < 22; k++) {
                            addChild(new Bitmap(new BitmapData(100, 100, true, 0x00000000)));
                        }
                        var M:Bitmap = new Bitmap(new BitmapData(100, 100, true, 0x00000000));
                        M.x += 50;
                        M.y += 50;
                        addChild(M);
                    }
                }
            }
        }

    Read the article
