Search Results

Search found 703 results on 29 pages for 'ashley ray'.

Page 25/29 | < Previous Page | 21 22 23 24 25 26 27 28 29  | Next Page >

  • Ripping CD Audio simultaneously from 2 drives on one PC via USB or PATA - rip accuracy preserved?

    - by Rob
    I'm considering ripping (reading) audio from CDs using 2 drives simultaneously to speed up the process - i.e. 2 discs at a time rather than 1. Are there any issues with achieving maximum rip accuracy? In general I wondered whether people have tried this, and whether the simultaneous streams from both rip activities would overload the host machine and cause packet loss or read retries, resulting in a sub-standard CD-DA audio rip. If it just means each rip is slightly slower (but still faster than doing one rip after another sequentially) while remaining at maximum accuracy, then that is OK for me. I will be using dbPowerAmp to rip the CDs and convert to the FLAC lossless format. Specific examples: there are 2 machines I intend to do it on: A Toshiba NB100 1.6GHz Atom netbook, 2GB RAM, running Windows XP Home with 1 external LG DVD/CD burner and 1 external LG Blu-ray burner attached via USB 2.0, ripping to the machine's 5400rpm internal hard drive. This rips from one CD drive very well, more than adequately; it is a nippy, fast little machine for its specification. A desktop PC running Windows 7 Home Premium with an MSI P4M900M2-L / MS-7255 v2.0 motherboard and 1.86GHz Intel Core 2 Duo E6320, 7200rpm hard drive and 2GB RAM, with an internal LG PATA DVD/CD burner (master) and a Philips DVD/CD burner (slave) on the same PATA bus (perhaps separate buses would be another option to consider here). Thoughts?

    Read the article

  • Identify differences between MP3 files

    - by Thingomy
    I have 2 old, similar directory trees with MP3 files in them. I am happily using tools like diff and rsync to identify and merge the files that are only present on one side, or are identical, but I'm left with a bunch of files that are bitwise different. Running diff over a pair of actually-different files (with the -a flag to force text analysis) produces incomprehensible gibberish. I have listened to files from both sides, and they both seem to play fine (but at nearly 10 minutes per song, listening to each twice, I haven't done many). I suspect the differences are due to some player in the past "enhancing" my collection by messing about with ID3 tags, but I can't be certain. Even if I identify differences in ID3 tags, I would like to confirm that no cosmic ray or file-copy error has damaged any of the files. One method that occurs to me is finding the byte locations of the differences and ignoring all changes in the first ~10kb of each file, but I don't know how to do this. I have on the order of a hundred or so files that differ across the directory tree. I found How to compare mp3, flac audio data in a file, ignoring header data (ID3 tag) etc.? -- but I can't run AllDup because I'm on Linux (it's Windows-only), and from the sounds of it, it would only partially solve my issue anyway.
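
    A rough sketch of one way to do this comparison, assuming the files only carry a standard leading ID3v2 tag and trailing ID3v1 tag (no APE tags or ID3v2 footers): strip the tags, hash what remains, and report the first differing audio byte. The helper name and the use of MD5 are arbitrary choices for illustration, not tied to any tool mentioned above.

        import hashlib, sys

        def audio_bytes(path):
            """Return the file's bytes with a leading ID3v2 tag and trailing ID3v1 tag removed."""
            data = open(path, "rb").read()
            start = 0
            if data[:3] == b"ID3":                    # ID3v2: 10-byte header, syncsafe size in bytes 6..9
                size = 0
                for b in data[6:10]:
                    size = (size << 7) | (b & 0x7F)
                start = 10 + size
            end = len(data)
            if data[-128:-125] == b"TAG":             # ID3v1 occupies the last 128 bytes
                end -= 128
            return data[start:end]

        a, b = audio_bytes(sys.argv[1]), audio_bytes(sys.argv[2])
        print("audio-only md5:", hashlib.md5(a).hexdigest(), "vs", hashlib.md5(b).hexdigest())
        if a != b:
            n = min(len(a), len(b))
            first = next((i for i in range(n) if a[i] != b[i]), n)
            print("first differing audio byte at offset", first, "- sizes", len(a), "and", len(b))

    Identical hashes would suggest the differences are confined to the tags; a differing byte deep inside the audio data would point at real corruption.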

    Read the article

  • Slow boot for OS and external devices

    - by Derek Van Cuyk
    I have been having this problem intermittently, but as of yesterday it has become more consistent. It originally started when I rebooted my PC at home and the OS (Windows 8) sat in a loop, appearing to do nothing while loading. I figured that since this was a new installation, something may have just become corrupted, and I decided to reinstall. So I tried to boot off the thumb drive which had the installation ISO and encountered pretty much the same issue. Same with the DVD drive. So I rebooted once again and left it to load the entire night just to see if it ever would, and sure enough, this morning Windows had finally loaded. Authentication had the same problem, albeit not quite as long (it took about 5 minutes to log in). However, once I was in, everything appeared to be working fine and as quickly as normal, with the exception of when I tried to scan the C drive for errors, which ran unbearably slowly (45 minutes before I left for work and it still had not finished scanning a 64GB SSD). I should mention that I have had this issue before, but never when loading the OS: it previously occurred when trying to install Windows 7 from a different DVD drive than the one I have now. It took me about 3 hours to do it, since I had to wait sometimes 30+ minutes for each step to finish processing. Does anyone have an idea as to what can cause this? I am assuming it is the motherboard, since it is responsible for communication with all the devices I'm having issues with, but I cannot find anyone else who has had a problem like this and don't want to drop more money on a motherboard if it isn't the problem. Hardware:
      Motherboard: Asus M4A78T-E Socket AM3 / AMD 790GX / Hybrid CrossFireX
      Hard Drive: Kingston SSDNow V+180 64GB Micro SATA II 3Gb/s 1.8 Inch Solid State Drive SVP180S2/64G
      Optical Drive: Samsung Blu-ray Combo Internal 12X Readable and DVD-Writable Drive with Lightscribe SH-B123L/BSBP
    Thanks, Derek

    Read the article

  • AVCHD MTS h264 1080p file with choppy playback in Linux

    - by marc
    When I try to play video files from my camera, I see:
      Seems stream 0 codec frame rate differs from container frame rate: 50.00 (50/1) -> 50.00 (50/1)
      Input #0, mpegts, from '00027.MTS':
        Duration: 00:00:38.88, start: 2.884289, bitrate: 16945 kb/s
        Program 1
          Stream #0.0[0x1011]: Video: h264 (High), yuv420p, 1920x1080 [PAR 1:1 DAR 16:9], 50 fps, 50 tbr, 90k tbn, 50 tbc
          Stream #0.1[0x1100]: Audio: ac3, 48000 Hz, stereo, s16, 256 kb/s
    … and on my Linux computer (Ubuntu 12.04) I get choppy playback. It's completely unusable... I tried Totem, VLC and mplayer; the result is always the same issue. I sent the same video file to a friend who has Ubuntu 10.04 to test, and he also has the same issue. He also has Windows 7, and confirms that on Windows the video plays well. I have an Intel® Core™2 CPU 6300 @ 1.86GHz × 2 with a GF 9600 GT, with the closed NVIDIA drivers. This is not an issue of big files playing slowly from an HDD; I have an SSD drive! I spent the last days and nights trying hundreds of commands for ffmpeg, HandBrake and MEncoder... none of them would let me create a file with enough quality. I downloaded a few movies from YouTube in 1080p, and playback worked well without any big pixels or choppiness. I would like the highest possible quality; I will put these files onto a Blu-ray disc, so I don't need to compress them to a smaller size. I just want smooth playback on my Linux box. On Windows, the same file works well.

    Read the article

  • Troubleshooting wireless client-bridged networks between two DD-WRT routers?

    - by KronoS
    I recently purchased a Buffalo N600 wireless router which came with DD-WRT pre-installed. I want to take my old wireless router, a Linksys WRT54GL, also with DD-WRT pre-installed, and use it as a wireless bridge for my HTPC and Blu-ray player in the other room. In other words, I'm trying to connect two WIRED networks via the wireless link between the routers. I followed the instructions from DD-WRT's manual for 'Client Bridged' exactly; however, I'm still not able to connect the two routers correctly when encryption is enabled (WPA2-Personal Mixed), although I am able to connect them when there is NO encryption. I've checked, double-checked, and triple-checked that EVERYTHING is the same on BOTH routers:
    Routers 1 & 2
      Encryption: WPA2-Personal Mixed
      Wireless Mode: G-Only
      Wireless Channel: 6
      Subnet Mask: 255.255.255.0
      Subnet: 192.168.1.0/254
      SSID: Krono$
    Primary Router #1 (Buffalo N600)
      IP Address: 192.168.1.1
      Firewall: Enabled w/ defaults
      DHCP: Enabled as DHCP server
    Secondary Router #2 (Linksys WRT54GL)
      IP Address: 192.168.1.2
      Firewall: Disabled as per DD-WRT instructions
    I'm looking for any configurations that I may have missed, or settings that may need to change in order for this to work.

    Read the article

  • Why do manufacturers not show all hardware power usage?

    - by Drew
    I find it slightly more difficult to build a computer when I do not know how much power each component needs. When selecting a power supply for a computer, it is difficult to know how large a unit to get. You don't want to go too large, for cost and circuit reasons, but you don't want to go too low and not be able to properly run every component. For instance, a graphics card might say "Minimum of a 500 Watt power supply. (Minimum recommended power supply with +12 Volt current rating of 30 Amps.)" But it really needs 360W (12V * 30A). So why don't they just say "Uses 360W max and xxxW peak"? Processors, I have noticed, are good at reporting their power usage, but aside from processors and sometimes graphics cards, power usage is not easily found. What is the power consumed by the Blu-ray / DVD drives? By the HDDs/SSDs? By the mobo? Etc. Why are these questions not easily answered when building a machine?
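
    In the absence of published per-component figures, one common workaround is to add up rough, assumed wattages and leave headroom. A minimal sketch of that arithmetic follows; every number in it is an illustrative assumption, not vendor data for any part discussed above.

        # Illustrative only: the wattage figures below are assumptions, not measured or vendor-quoted values.
        parts = {
            "cpu": 95,             # typical desktop CPU TDP
            "gpu": 150,            # board power for a mid-range card
            "motherboard": 40,
            "hdd_or_ssd": 10,
            "optical_drive": 25,
            "fans_and_misc": 20,
        }
        total = sum(parts.values())
        headroom = 1.3             # ~30% margin for load peaks and the PSU efficiency sweet spot
        print(f"estimated steady draw ~{total} W; a PSU around {int(total * headroom)} W leaves margin")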

    Read the article

  • Important hardware components to avoid bottlenecks/improve speed on a laptop?

    - by joelhaus
    Looking for a powerful general-use (including web development) laptop running Windows. Price points seem to be all over the place; many less powerful machines are priced much higher than machines with better specs. How does one navigate this market? Are there any unpublished/under-publicized specs or bottlenecks you look for? Understanding that hardware improves over time, is there an efficient ratio (or something similar, like the Windows Experience Index) which will indicate how powerful a system is? Thanks in advance! P.S. Here is an example from a laptop released on September 17, 2010. Can anyone pick apart these specs? Is there missing information you would be looking for?
      OS: Win 7
      Display: 16.4" LED backlit
      Processor: Intel Core i7-740QM, 6MB L3 Cache
      RAM: 6GB DDR3 1333MHz (8GB max.)
      Graphics: NVIDIA GeForce GT 425M (1GB of dedicated DDR3)
      HDD: 500GB 7200RPM SATA Hard Drive
      Removable Disc: Blu-ray with DVD±R/RW
      Misc: webcam/mic/speakers/Bluetooth
    (via Sony Vaio VPC-F137FX/B)

    Read the article

  • Denying access to website via htaccess based on http header

    - by neekster
    I've been trying for ages to get this to work and I can't put my finger on it. What I'm trying to do is block access to a site from a number of countries, based on the CF-IPCountry header added by CloudFlare. I figured htaccess was a suitable way to do this. We are running LiteSpeed 4.2.4 on top of DirectAdmin as a control panel. The problem we're having is that the htaccess rule doesn't seem to do anything. Here's the rule we tried:
      SetEnvIf CF-IPCountry AU UnwantedCountry=1
      Order allow,deny
      Deny from env=UnwantedCountry
      Allow from all
    That makes no difference at all; connections are still accepted. Just to check that the rule was at least being processed, I changed Allow from all to Deny from all, and connections were refused. So it appears to be a problem with the variable. Here are the relevant headers that come in with the request:
      Connection: Keep-Alive
      Accept-Encoding: gzip
      CF-Connecting-IP: xx.xx.xx.xx
      CF-IPCountry: AU
      X-Forwarded-For: xx.xx.xx.xx.xx
      CF-RAY: c9062956e2d04b6
      X-Forwarded-Proto: http
      CF-Visitor: {"scheme":"http"}
      Zone-Name: xx.com.au
    Hopefully someone can help me out; this has been driving me nuts for too long. Thanks

    Read the article

  • Extending a home wireless network using two routers running tomato

    - by jalperin
    I have two Asus RT-N16 routers, each flashed with Tomato (actually Tomato USB). UPSTAIRS: Router 'A' (located upstairs) is connected to the internet via the WAN port and connected via a LAN port to a 10/100/1000 switch (Switch A). Several desktops are also attached to Switch A. Router A uses IP 192.168.1.1. DOWNSTAIRS: I've just acquired Router 'B' and set it to IP 192.168.1.2. I have a cable running from Switch A downstairs to another switch (Switch B). A TiVo, a Blu-ray player and a Mac are connected to Switch B. My plan was to connect Router B to Switch B so that I have improved wireless access downstairs. (The wireless signal from Router A gets weak in a number of locations downstairs.) How should I configure Router B so that all devices in the house can see and talk to one another? I know that I need to change DHCP on Router B so that it doesn't cover the same range as DHCP on Router A. Should I be using WDS on the two routers, or is that unnecessary since I already have a wired connection between them? Any other thoughts or suggestions? Thanks! --Jeff

    Read the article

  • File corruption (bad checksums) in large files copied to VMware guest

    - by AllanA
    In setting up a development lab, I've got a desktop system running ESXi 4.1.0 (free license) on SATA RAID 0 (already purchased and configured when I started this job; I'm open to hardware input as it pertains to my problem). Its guests so far include two Windows Server 2008 R2 64-bit VMs and one Ubuntu 10.04 64-bit VM. I'm installing onto the Windows servers. We've been copying over some fairly large files (over a gigabyte) for an installation, hoping to install more quickly from a (virtual) hard drive than from the network or from BD-ROM. The problem is that they keep coming up with different checksums from the originals. The file sizes are the same, but md5sum reports different numbers (and so does the installer, as it refuses to continue when the checksums don't match). I've tried copying directly from the BD-ROM (attaching the guest's optical drive to the host system's physical drive). I've tried copying the large files onto a co-worker's Windows machine from his Blu-ray drive; when I do that, the checksums match. But when I copy from his machine to the VM guest over a network share, the checksums no longer match. Thinking this meant a corrupt destination drive, I deleted it in vSphere and added another freshly created drive. The problem persists. I'm not sure what to try next.

    Read the article

  • Computer makes odd noise. Replace almost every component. Computer still makes odd noise.

    - by ShimmerGeek
    My PC was getting pretty old, 5 years or so, and over the course of its life I replaced the graphics card, HDD and a couple of sticks of RAM; but the PSU, processor, motherboard, fans etc. were all original. A few weeks ago, I started hearing an odd noise. I struggle to describe it; it sounded sort of like the 'click of death' you hear when an HDD may fail, but not quite... (and it was far less irregular). Also, I was sure I heard it once or twice a minute or two after I shut down the PC. This went on very irregularly for a couple of weeks. Some days I would hear no noise at all; others I would hear it often, maybe once every 30 seconds or so. I could find no common denominator - i.e. it did not happen more during gaming or any other intensive use. Anyway, I need my PC to sit some classes over the summer, so I put it in for them to run an HDD stress test and to replace a bunch of the components. I ended up replacing almost everything - the only parts I still have are my Blu-ray drive and graphics card. They said that when they started to run the HDD stress test it failed instantly (they started the test and it immediately said 'Test Complete', so they assumed it was at fault, and put a new HDD in since I was still under warranty with them). I took it home a few hours ago, and I am still hearing the noise!!! Do you guys have any theories? I'm getting a little worried; I can't afford for my PC to suddenly fail during the next month - I have a lot of coursework to do. Any thoughts? Is it possible it could be the fan on the graphics card? I'm confused because it's so irregular. Any help would be much appreciated.

    Read the article

  • So, how is the Oracle HCM Cloud User Experience? In a word, smokin’!

    - by Edith Mireles-Oracle
    By Misha Vaughan, Oracle Applications User Experience. Oracle unveiled its game-changing cloud user experience strategy at Oracle OpenWorld 2013 (remember that?) with a new simplified user interface (UI) paradigm. The Oracle HCM cloud user experience is about light-weight interaction, tailored to the task you are trying to accomplish, on the device you are comfortable working with. A key theme for the Oracle user experience is being able to move from smartphone to tablet to desktop, with all of your data in the cloud. The Oracle HCM Cloud user experience provides designs for better productivity, no matter when and how your employees need to work. Release 8: Oracle recently demonstrated how fast it is moving development forward for our cloud applications, with the availability of release 8. In release 8, users will see expanded simplicity in the HCM cloud user experience, such as filling out a time card and succession planning. Oracle has also expanded its mobile capabilities with task flows for payslips, managing absences, and advanced analytics. In addition, users will see expanded extensibility with the new structures editor for simplified pages, and with the user interface text editor, which allows you to update language throughout the UI from one place. If you don’t like calling people who work for you “employees,” you can use this tool to create a term that is suited to your business. Take a look yourself at what’s available now. What are people saying? Debra Lilley (@debralilley), an Oracle ACE Director who has a long history with Oracle Applications, recently gave her perspective on release 8: “Having had the privilege of seeing a preview of release 8, I am again impressed with the enhancements around simplified UI. Even more so, at a user group event in London this week, an existing Cloud HCM customer speaking publically about his implementation said he was very excited about release 8 as the absence functionality was so superior and simple to use.” In an interview with Lilley for a blog post by Dennis Howlett (@dahowlett), we probably couldn’t have asked for a more even-handed look at the Oracle Applications Cloud and the impact of user experience. Take the time to watch all three videos and get the full picture. In closing, Howlett said: “There is always the caveat that getting from the past to Fusion [from the editor: Fusion is now called the Oracle Applications Cloud] is not quite as simple as may be painted, but the outcomes are much better than anticipated in large measure because the user experience is so much better than what went before.” Herman Slange, Technical Manager with Oracle Applications partner Profource, agrees with that comment. “We use on-premise Financials & HCM for internal use. Having a simple user interface that works on a desktop as well as a tablet for (very) non-technical users is a big relief. Coming from E-Business Suite, there is less training (none) required to access HCM content. From a technical point of view, having the abilities to tailor the simplified UI very easy makes it very efficient for us to adjust to specific customer needs. When we have a conversation about simplified UI, we just hand over a tablet and ask the customer to just use it.
No training and no explanation required.” Finally, in a story by Computer Weekly about Oracle customer BG Group, a natural gas exploration and production company based in the UK and with a presence in 20 countries, the author states: “The new HR platform has proved to be easier and more intuitive for HR staff to use than the previous SAP-based technology.” What’s Next for Oracle’s Applications Cloud User Experiences? This is the question that Steve Miranda, Oracle Executive Vice President, Applications Development, asks the Applications User Experience team, and we’ve been hard at work for some time now on “what’s next.” I can’t say too much about it, but I can tell you that we’ve started talking to customers and partners, under non-disclosure agreements, about user experience concepts that we are working on in order to get their feedback. We recently had a chance to talk about possibilities for the Oracle HCM Cloud user experience at an Oracle HCM Southern California Customer Success Summit. This was a fantastic event, hosted by Shane Bliss and Vance Morossi of the Oracle Client Success Team. We got to use the uber-slick facilities of Allergan, our hosts (of Botox fame), headquartered in Irvine, Calif., with a presence in more than 100 countries. Photo by Misha Vaughan, Oracle Applications User Experience: Vance Morossi, left, and Shane Bliss, of the Oracle Client Success Team, at an Oracle HCM Southern California Customer Success Summit. We were treated to a few really excellent talks around human resources (HR). Alice White, VP Human Resources, discussed Allergan's process for global talent acquisition -- how Allergan has designed and deployed a global process, and global tools, along with Oracle and Cognizant, and is now at the end of a global implementation. She shared a couple of insights about the journey for Allergan: “One of the major areas for improvement was on role clarification within the company.” She said the company is “empowering managers and deputizing them as recruiters. Now it is a global process that is nimble and efficient." Deepak Rammohan, VP Product Management, HCM Cloud, Oracle, also took the stage to talk about pioneering modern HR. He reflected on modern HR problems: getting the right data about the workforce, the importance of getting the right talent as a key strategic initiative, and other workforce insights. "How do we design systems to deal with all of this?” he asked. “Make sure the systems are talent-centric. The next piece is collaborative, engaging, and mobile. A lot of this is influenced by what users see today. The last thing is around insight; insight at the point of decision-making." Rammohan showed off some killer HCM Cloud talent demos focused on simplicity and mobility that his team has been cooking up, and closed with a great line about the nature of modern recruiting: "Recruiting is a team sport." Deepak Rammohan, left, and Jake Kuramoto, both of Oracle, debate the merits of a Google Glass concept demo for recruiters on-the-go. Later, in an expo-style format, the Apps UX team showed several concepts for next-generation HCM Cloud user experiences, including demos shown by Jake Kuramoto (@jkuramoto) of The AppsLab, and Aylin Uysal (@aylinuysal), Director, HCM Cloud user experience. We even hauled out our eye-tracker, a research tool used to show where the eye is looking at a particular screen, thanks to teammate Michael LaDuke.
Dionne Healy, HCM Client Executive, and Aylin Uysal, Director, HCM Cloud user experiences, Oracle, take a look at new HCM Cloud UX concepts. We closed the day with Jeremy Ashley (@jrwashley), VP, Applications User Experience, who brought it all back together by talking about the big picture for applications cloud user experiences. He covered the trends we are paying attention to now, what users will be expecting of their modern enterprise apps, and what Oracle’s design strategy is around these ideas. We closed with an excellent reception hosted by ADP Payroll Services at Bistango. Want to read more? Want to see where our cloud user experience is going next? Read more on the UsableApps web site about our latest design initiative: “Glance, Scan, Commit.” Or catch up on the back story by looking over our Applications Cloud user experience content on the UsableApps web site. You can also find out where we’ll be next at the Events page on UsableApps.

    Read the article

  • Workshops, online content show how Oracle infuses simplicity, mobility, extensibility into user experience

    - by mvaughan
    By Kathy Miedema & Misha Vaughan, Oracle Applications User Experience Oracle has made a huge investment into the user experience of its many different software product families, and recent releases showcase big changes and features that aim to promote end user engagement and efficiency by streamlining navigation and simplifying the user interface. But making Oracle’s enterprise software great-looking and usable doesn’t stop when Oracle products go out the door. The Applications User Experience (UX) team recognizes that our customers may need to customize software to fit their work processes. And that’s why we provide tools such as user experience design patterns to help you maintain the Oracle user experience as you tailor your application to fit your business needs. Often, however, customers may need some context around user experience. How has the Oracle user experience been designed and constructed? Why is a good user experience important for users? How does understanding what goes into the user experience benefit the people who purchase the software for users? There’s a short answer to these questions, and you can read about it on Usable Apps. But truly understanding Oracle’s investment and seeing how it applies across product families occasionally requires a deeper dive into the Oracle user experience, especially if you’re an influencer or decision-maker about Oracle products. To help frame these decisions, the Communications & Outreach team has developed several targeted workshops that explore what Oracle means when it talks about user experience, and provides a roadmap into where the Oracle user experience is going. These workshops require non-disclosure agreements, and have been delivered to Oracle sales folks, Oracle partners, Oracle ACE Directors and ACEs, and a few customers. Some of these audience members have been developers or have a technical background; just as many did not. Here’s a breakdown of the kind of training you can get around the Oracle user experience from the OAUX Communications & Outreach team.For Partners: George Papazzian, Principal, Naviscent with Joyce Ohgi, Oracle Oracle Fusion Applications HCM Pre-Sales Seminar:  In concert with Worldwide Alliances  and  Channels under Applications Partner Enablement Director Jonathan Vinoskey’s guidance, the Applications User Experience team delivers a two-day workshop.  Day one focuses on Oracle Fusion Applications HCM and pre-sales strategy, and Day two focuses on positioning and leveraging Oracle’s investment in the Oracle Fusion Applications user experience.  The next workshops will occur on the following dates: December 4-5, 2013 @ Manchester, UK January 29-30, 2014 @ Reston, Virginia February 2014 @ Guadalajara, Mexico (email: Shannon Whiteman) March 11-12, 2014 @ Dubai, United Arab Emirates April 1-2, 2014 @ Chicago, Illinois Partner Advisory Board: A two-day board meeting in the U.S. and U.K. to discuss four main user experience areas for Oracle Fusion Applications: simplicity, visualization & analytics, mobility, & futures. This event is limited to Oracle Diamond Partners, UX bloggers, and key UX influencers and requires legal documentation.  We will be talking about the Oracle applications UX strategy and roadmap. 
Partner Implementation Training on User Interface: How to Build Great-Looking, Usable Apps: In this two-day, hands-on workshop built around Oracle’s Application Development Framework, learn how to build desktop and mobile user interfaces based on Oracle’s experience with Fusion Applications. This workshop is for partners with a technology background who are looking for ways to tailor Fusion Applications using ADF, or who have built their own custom solutions using ADF. It includes an introduction to UX design patterns and provides tools to build usability-tested UX designs. Nov 5-6, 2013 @ Redwood Shores, CA, USA January 28-29th, 2014 @ Reston, Virginia, USA February 25-26, 2014 @ Guadalajara, Mexico March 9-10, 2014 @ Dubai, United Arab Emirates To register, contact [email protected] Simplified UI Customization & Extensibility: Pilot workshop: We will be reviewing the proposed content for communicating the user experience tool kit available with the next release of Oracle Fusion Applications. Our core focus will be on what toolkit components our system implementors and independent software vendors will need to respond to customer demand, whether they are extending Fusion Applications or building custom applications that will need to leverage the simplified UI. Dec 11th, 2013 @ Reading, UK For information: contact [email protected] Private lab tour and demos: Interested in seeing what’s going on in the Apps UX Labs? If you are headed to the San Francisco Bay Area, let us know. We can arrange a spin through our usability labs at headquarters. OAUX Expo: This open-house forum gives partners a look at what the UX team is working on, and showcases the next-generation user experiences in a demo environment where attendees can see and touch the applications. UX Direct: Use the same methods that Oracle uses to develop its own user experiences. We help you define your users and their needs, and then provide direction on how to tailor the best user experience you can for them. For Customers: Angela Johnston, Gozel Aamoth, Teena Singh, and Yen Chan, Oracle. Lab tours: See demos of soon-to-be-released products, and take a spin on usability research equipment such as our eye-tracker. Watch this video to get an idea of what you’ll see. Get our newsletter: Learn about newly released products and see where you can meet us at user group conferences. Participate in a feedback session: Join a focus group or customer feedback session to get an early look at user experience designs for the next generation of software, and provide your thoughts on how well it will work. Join the OUAB: The Oracle Usability Advisory Board meets several times a year to discuss trends in the workforce and provide direction on user experience designs. UX Direct: Use the same methods that Oracle uses to develop its own user experiences. We help you define your users and their needs, and then provide direction on how to tailor the best user experience you can for them. For Developers (customers, partners, and consultants): Plinio Arbizu, SP Solutions, Richard Bingham, Oracle, Balaji Kamepalli, EiSTechnologies, Praveen Pillalamarri, EiSTechnologies. How to Build Great-Looking, Usable Apps: This workshop is for attendees with a strong technology background who are looking for ways to tailor customer software using ADF. It includes an introduction to UX design patterns and provides tools to build usability-tested UX designs. See above for dates and times.
UX design patterns web site: Cut the length of your project down by months. Use these patterns to build out the task flow you need to develop for your users. The patterns have already been usability-tested and represent the best practices that the Oracle UX research team has found in its studies. UX Direct: Use the same methods that Oracle uses to develop its own user experiences. We help you define your users and their needs, and then provide direction on how to tailor the best user experience you can for them. For Oracle Sales Mike Klein, Jeremy Ashley, Brent White, Oracle Contact your local sales person for more information about the Oracle user experience and the training available from the Applications User Experience Communications & Outreach team. See customer-friendly user experience collateral ranging from the new simplified UI in Oracle Fusion Applications Release 7, to E-Business Suite user experience highlights, to Siebel, PeopleSoft, and JD Edwards user experience highlights.   Receive access to the same pre-sales and implementation training we provide to partners. For Oracle Sales only: Oracle-only training on the Oracle Fusion Applications UX Innovation Sales Kit.

    Read the article

  • GWT Best Practices - MVP

    - by GWTNewbie
    A question for all the GWT gurus out there. I'm a newbie with GWT and am trying to understand the best practices for coding a GWT application. I have gone through "Large scale application development and MVP", based on Ray Ryan's talk at Google I/O 2009, and it has given me a good starting point. I downloaded the sample source code as well for the Contacts application based on the best practices listed. The application I'm trying to develop using GWT is a bit bigger (in terms of the modules involved) compared to the sample "Contacts" application, so I want to split it up into multiple functional areas. I have been reading that having a single entry point in a GWT application is a good idea, and I don't want to dump all the code into one single AppController class and one single RpcService. What would be the best approach in this situation? How would I go about dispatching control to multiple controllers? Is there a way to achieve this using some classes in the GWT framework?

    Read the article

  • solve a classic map-reduce problem with opencl?

    - by liuliu
    I am trying to parallelize a classic map-reduce problem (which parallelizes well with MPI) using OpenCL, namely, the AMD implementation. But the result bothers me. Let me describe the problem briefly first. There are two types of data that flow into the system: the feature set (30 parameters for each) and the sample set (9000+ dimensions for each). It is a classic map-reduce problem in the sense that I need to calculate the score of every feature on every sample (map), and then sum up the overall score for every feature (reduce). There are around 10k features and 30k samples. I tried different ways to solve the problem. First, I tried to decompose the problem by features. The problem is that the score calculation consists of random memory access (picking some of the 9000+ dimensions and doing plus/subtraction calculations). Since I cannot coalesce the memory accesses, it is costly. Then, I tried to decompose the problem by samples. The problem is that, to sum up the overall score, all threads are competing for a few score variables. They keep overwriting the scores, which turn out to be incorrect. (I cannot store the individual scores first and sum them up later because that would require 10k * 30k * 4 bytes.) The first method I tried gives me the same performance as an i7 860 CPU with 8 threads. However, I don't think the problem is unsolvable: it is remarkably similar to the ray tracing problem (where you carry out calculations for millions of rays against millions of triangles). Any ideas?
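
    A minimal NumPy sketch of the reduce side of this, illustrating the usual fix for the race on the score variables: give each chunk of samples its own partial sums and accumulate them afterwards, so nothing is overwritten and the full 10k x 30k score matrix never has to exist at once. The score_block function is a stand-in for the real per-(feature, sample) kernel, and the problem sizes are scaled down; both are assumptions for illustration, not the original OpenCL code.

        import numpy as np

        def score_block(features, samples):
            # Stand-in scoring kernel: the real one does indexed adds/subtracts over ~9000 dims.
            picked = np.arange(features.shape[1])          # pretend these are the picked dimensions
            return features @ samples[:, picked].T         # (features, chunk) block of scores

        F, P, S, D = 1000, 30, 2000, 9000                  # scaled-down feature/sample counts
        features = np.random.rand(F, P).astype(np.float32)
        samples = np.random.rand(S, D).astype(np.float32)

        totals = np.zeros(F, dtype=np.float64)             # one accumulator per feature
        chunk = 256                                        # samples handled per "work-group"
        for start in range(0, S, chunk):
            block = score_block(features, samples[start:start + chunk])   # map
            totals += block.sum(axis=1)                    # local reduce, then accumulate

    On a GPU the same idea becomes per-work-group partial sums in local memory followed by a second reduction pass, which avoids both the global contention and the 10k * 30k intermediate buffer.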

    Read the article

  • Google Contacts API - No Redirection

    - by mecablaze
    Hello there, I am currently working on a contact importer web app (in PHP), so I will be able to grab email addresses from a user's account on Gmail, Yahoo, etc. and use them for my own evil purposes. Just kidding, my web app is very friendly. I thought I would start with Google. I found they have a fantastic little API called the Google Contacts API which lets a programmer like myself access a user's contacts. After a couple of hours of struggling and throwing shitty code together, I ran into a few road-blocks. My main question is this: is there any way that I can have a user provide their username and password for Gmail on my website and have my code retrieve the contacts without that nasty redirection to a Google login page? It kind of ruins the whole flow of my web app. I've looked into AuthSub, and gotten that to work, but of course the catch is that you have to redirect the user to obtain the access token. It looks like OAuth will have this same catch. The one ray of hope I have is the ClientLogin method of authentication. Again, there is a catch: sometimes Google throws you a CAPTCHA instead of the auth token. Again, the user flow is ruined. I've noticed that our good ol' friends over at Twitter have it working just fine. Does anyone know how they do it? Thanks!

    Read the article

  • Multiple model forms with some pre-populated fields

    - by jimbocooper
    Hi! I hope somebody can help me, since I've been stuck on this for a while... I switched to another task, but now that I'm back to the fight I still can't figure out how to get out of the black hole xD. The situation is as follows: let's say I've got a Product model, and then a set of Clients which have the right to submit data for the products they've been subscribed to (Many to Many from Client to Product). Whenever a client is going to submit data, I need to create as many forms as the products he's subscribed to, and pre-populate each one of them with the "product" field, as well as perform a quite simple validation (some optional fields have to be completed if it's the client's first submission). I would like one form "step" for each product submission, so I've tried form wizards... but the problem is that you can't pre-assign values to the forms (this could be worked around at submission time), and they don't allow the validation I need at the end of each step, so that I can check some data before rendering the next step. Then I've tried model formsets, but there's no way to pre-populate the needed fields. I came across some Django plugins, but I'm not confident yet that any of them will make it... Did anybody have a similar problem who can give me a ray of light? Thanks a lot in advance!! :)
    edit: The code I used in the formsets approach is as follows:
      prods = Products.objects.filter(Q(start_date__lte=today) & Q(end_date__gte=today), requester=client)
      num = len(prods)
      PriceSubmissionFormSet = modelformset_factory(PriceSubmission, extra=num)
      formset = PriceSubmissionFormSet(queryset=PriceSubmission.objects.none())
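
    For what it's worth, a minimal sketch of one way to pre-populate the extra forms in that formset, assuming the ForeignKey on PriceSubmission is named product (an assumption; substitute the real field name). Formsets accept an initial list that is applied to the extra (blank) forms, so the product can be filled in per form:

        from django.forms.models import modelformset_factory

        PriceSubmissionFormSet = modelformset_factory(PriceSubmission, extra=len(prods))
        formset = PriceSubmissionFormSet(
            queryset=PriceSubmission.objects.none(),
            initial=[{"product": p.pk} for p in prods],   # one pre-filled blank form per subscribed product
        )

    Rendering one form per page and validating it before showing the next would still need wizard-style plumbing on top, but the pre-population itself does not require it.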

    Read the article

  • Raycasting with tags problems in unity3d

    - by user1855858
    I need some help here. I have a piece of code that searches for unblocked neighbours using raycasts, and I need the ray to register hits only on colliders with the "WP" tag. Both of the iterations show the right results, and so do the skip check and the raycast; the raycast does succeed in hitting something, but when I check what the raycast collided with, no result is shown... Does anyone know what's wrong with this code?
      int flag = 0, flagNeigh = 0;
      for (flag = 0; flag < wayPoints.WPList.Length; flag++) // iteration to seek neighbour nodes
      {
          for (flagNeigh = 0; flagNeigh < wayPoints.WPList.Length; flagNeigh++)
          {
              if (wayPoints.WPList[flag].loc != wayPoints.WPList[flagNeigh].loc) // skip the node's own location
              {
                  if (Physics.Raycast(wayPoints.WPList[flag].loc.position, wayPoints.WPList[flagNeigh].loc.position, out hitted)) // raycasting from each node to every other node
                  {
                      if (hitted.collider.gameObject.CompareTag("WP")) // check if the ray only hit a node
                      {
                          print(flag + " : " + flagNeigh + " : " + wayPoints.WPList[flagNeigh].loc.position); // debugging to see whether the code works or not (this is where the error shows up)
                      }
                  }
              }
          }
      }
    Thanks for the answers, and sorry if my English is bad... ^^

    Read the article

  • use startActivityForResult from non-activity

    - by rayman
    Hi, I have MainActivity, which is an Activity, and another class (a simple Java class), which we'll call SimpleClass. Now I want to run the startActivityForResult command from that class. I thought that I could just pass SimpleClass the MainActivity's context, but the problem is that you can't call context.startActivityForResult(...); so the only way to make SimpleClass use startActivityForResult is to pass the reference to MainActivity as an Activity variable to SimpleClass, something like this. Inside the MainActivity class I create the instance of SimpleClass this way:
      SimpleClass simpleClass = new SimpleClass(MainActivity.this);
    and this is what SimpleClass looks like:
      public class SimpleClass {
          Activity myMainActivity;
          public SimpleClass(Activity mainActivity) {
              super();
              this.myMainActivity = mainActivity;
          }
          ....
          public void someMethod(...) {
              myMainActivity.startActivityForResult(...);
          }
      }
    Now it's working, but isn't there a proper way of doing this? I'm afraid I could have some memory leaks in the future. Thanks. ray.

    Read the article

  • Manage multiple UDP calls

    - by rayman
    Hi all, I would like some advice on this issue: I am using JBoss 5.1.0 and EJB 3.0. I have a system which sends requests via UDP to remote modems and is supposed to wait for an answer from the target modem. The remote modems support only UDP calls, therefore I have to design an asynchronous mechanism (also because I want to query X modems in parallel). This is what I am trying to do: all calls are retrieved from the database, then each call is added as a message to a JMS queue. Let's say I set X MDBs on that queue, so I can work asynchronously. Each MDB sends a UDP request to the IP address (remote modem) parsed from the queue message. So basically each MDB that takes a message sends a UDP request to the remote modem and waits for an answer from that modem. Now here is the BUG: a scenario can happen where an MDB gets an answer, but not from the right modem (the one it requested in the first place). That bad scenario causes two problems: a. the sender which sent the message will wait forever, since the answer never returned to it (it got accepted by another MDB); b. the MDB which received the message is not the right one, and presumably, if it was in "listener" mode, it was supposed to wait for an answer from a different sender (otherwise it wouldn't get any messages). So of course I could handle everything with a RETRY mechanism, so both MDBs (the one that got a message from the wrong sender, and the one that never got its answer) will try again, hoping that next time it will succeed. That is the mechanism; maybe you could tell me if there is any design pattern, or any other effective solution, for this problem? Thanks, ray.
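
    One pattern worth naming here is a single receive loop plus a correlation table: only one component reads replies, and it routes each one to whichever worker is waiting on that modem (keyed by the modem's address, or by a correlation ID carried in the payload if the protocol has one). The sketch below shows the routing idea only; it is a plain-Python illustration under those assumptions, not JMS/EJB code.

        import queue, threading

        class ResponseRouter:
            """Route incoming datagrams to whichever worker is waiting on that modem."""
            def __init__(self):
                self._pending = {}
                self._lock = threading.Lock()

            def expect(self, modem_addr):
                """Called by a worker before it sends its request; returns a queue to wait on."""
                q = queue.Queue(maxsize=1)
                with self._lock:
                    self._pending[modem_addr] = q
                return q

            def on_datagram(self, modem_addr, payload):
                """Called by the single UDP receive loop for every reply that arrives."""
                with self._lock:
                    q = self._pending.pop(modem_addr, None)
                if q is not None:
                    q.put(payload)            # wakes exactly the worker that asked this modem
                # else: late or unsolicited reply; log it and let the retry logic handle it

        router = ResponseRouter()
        waiter = router.expect(("10.0.0.7", 9000))        # worker registers, then sends its UDP request
        router.on_datagram(("10.0.0.7", 9000), b"ACK")    # normally invoked by the receive loop
        print(waiter.get(timeout=5))                      # worker blocks until its own reply arrives

    With this shape a reply can never be consumed by the wrong worker, and a missing reply simply times out and triggers the retry.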

    Read the article

  • Why do my raytraced spheres have dark lines when lit with multiple light sources?

    - by Curyous
    I have a simple raytracer that only works back up to the first intersection. The scene looks OK with two different light sources, but when both lights are in the scene, there are dark shadows where the lit area from one ends, even if in the middle of a lit area from the other light source (particularly noticeable on the green ball). The transition from the 'area lit by both light sources' to the 'area lit by just one light source' seems to be slightly darker than the 'area lit by just one light source'. The code where I'm adding the lighting effects is:
      // trace lights
      for (int l = 0; l < primitives.count; l++) {
          Primitive* p = [primitives objectAtIndex:l];
          if (p.light) {
              Sphere *lightSource = (Sphere *)p;
              // calculate diffuse shading
              Vector3 *light = [[Vector3 alloc] init];
              light.x = lightSource.centre.x - intersectionPoint.x;
              light.y = lightSource.centre.y - intersectionPoint.y;
              light.z = lightSource.centre.z - intersectionPoint.z;
              [light normalize];
              Vector3 *normal = [[primitiveThatWasHit getNormalAt:intersectionPoint] retain];
              if (primitiveThatWasHit.material.diffuse > 0) {
                  float illumination = DOT(normal, light);
                  if (illumination > 0) {
                      float diff = illumination * primitiveThatWasHit.material.diffuse;
                      // add diffuse component to ray color
                      colour.red += diff * primitiveThatWasHit.material.colour.red * lightSource.material.colour.red;
                      colour.blue += diff * primitiveThatWasHit.material.colour.blue * lightSource.material.colour.blue;
                      colour.green += diff * primitiveThatWasHit.material.colour.green * lightSource.material.colour.green;
                  }
              }
              [normal release];
              [light release];
          }
      }
    How can I make it look right?

    Read the article

  • The fastest way to iterate through a collection of objects

    - by Trev
    Hello all, first some background: I have some research code which performs a Monte Carlo simulation. Essentially what happens is I iterate through a collection of objects, compute a number of vectors from their surfaces, and then for each vector I iterate through the collection of objects again to see if the vector hits another object (similar to ray tracing). The pseudocode would look something like this:
      for each object {
          for a number of vectors {
              do some computations
              for each object {
                  check if vector intersects
              }
          }
      }
    As the number of objects can be quite large and the number of rays is even larger, I thought it would be wise to optimise how I iterate through the collection of objects. I created some test code which tests arrays, lists and vectors, and for my first test cases I found that vector iterators were around twice as fast as arrays; however, when I implemented a vector in my code it was somewhat slower than the array I was using before. So I went back to the test code and increased the complexity of the function each loop was calling (a dummy function equivalent to 'check if vector intersects'), and I found that as the complexity of the function increases, the execution-time gap between arrays and vectors shrinks until eventually the array is quicker. Does anyone know why this occurs? It seems strange that execution time inside the loop should affect the outer loop's run time.

    Read the article

  • Why would ASP.NET MVC use session state?

    - by ray247
    Following the ASP.NET team's recommendation to use cache instead of session, we stopped using session when working with the WebForms model over the last few years. So we normally have session state turned off in web.config:
      <sessionState mode="Off" />
    But now, when I'm testing an ASP.NET MVC application with this setting, it throws an error in the SessionStateTempDataProvider class inside the MVC framework asking me to turn on session state; I did, and it worked. Looking at the source, it uses session:
      Dictionary<string, object> tempDataDictionary = httpContext.Session[TempDataSessionStateKey] as Dictionary<string, object>; // line 20 in SessionStateTempDataProvider.cs
    So, why would they use session here? What am I missing? Thanks, Ray.
    ========================================================
    Edit: Sorry, I didn't mean for this post to become a debate on session vs. cache; rather, in the context of ASP.NET MVC, I was just wondering why session is used here. In this Scott Watermasysk blog post he also mentions turning off session as a good practice, so I'm just wondering: do I have to turn it on to use MVC from here on?

    Read the article

  • Return value from Object match

    - by Hito_kun
    I'm by no means JS-fluent, so forgive me if I'm asking some really basic stuff, but I've not been able to find a proper answer to my question. I'm writing my first Node.js (plus the Express framework and Socket.io) app and I'm having some fun setting up the server side of an FB-like messenger (surprise!!!). So, let's say I have this data structure to store online users (this is a JSON array, but I'm not sure it is the best way to do it, or whether I should go with JavaScript objects): [ { "site": 45, "users": [ { "idUser": 5, "idSocket": "qwe87r7w8qwe", "name": "Carlos Ray Norris" }, { "idUser": 6, "idSocket": "v8d9d0fgfs7d", "name": "John Connor" } ] }, { "site": 48, "users": [ { "idUser": 22, "idSocket": "qwe87r7w8qwe", "name": "David Bowie" }, { "idUser": 23, "idSocket": "v8d9d0fgfs7d", "name": "Barack H. Obama" } ] } ] What I want to do is search the array for value x given value y - in this case, retrieving the idSocket knowing the idUser, WITHOUT having to run through all the array values. So I basically have 2 questions: first, what would be the proper way to store online users? And secondly, how do I find values that match the values I already know (find the idSocket that has a given idUser)? I would like a pure JS approach (or one using the tools provided by Node, Socket.io or Express), but if that's not possible then I can look at some jQuery.

    Read the article

  • delete multi-line block of text with internal flag in povray file

    - by Sibo Lin
    I have a POV-Ray file which defines a lot of cylinders and spheres. Sometimes these shapes are defined to have "color@", which makes the file unrenderable. One solution I've found is to delete the offending cylinders and spheres. So a file that contains this text:
      cylinder {
        < -0.17623, 0.24511, -0.27947>, < -0.15220, 0.22658, -0.26472>, 0.00716
        texture { colorO }
      }
      sphere {
        < -0.00950, 0.00357, 0.00227>, 0.00716
        texture { color@ }
      }
      cylinder {
        < -0.00950, 0.00357, 0.00227>, < 0.00327, 0.00169, 0.00108>, 0.00716
        texture { color@ }
      }
      sphere {
        < 0.15373, 0.00601, 0.18223>, 0.00716
        texture { colorO }
      }
    would turn into this text:
      cylinder {
        < -0.17623, 0.24511, -0.27947>, < -0.15220, 0.22658, -0.26472>, 0.00716
        texture { colorO }
      }
      sphere {
        < 0.15373, 0.00601, 0.18223>, 0.00716
        texture { colorO }
      }
    Is there some way to do this replacement with a shell script? Preferably in tcsh. Thanks!
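
    Not tcsh, but a small Python sketch of the block-filtering idea: read each cylinder/sphere block whole (tracking brace depth), drop it if it mentions "color@", and pass everything else through. It assumes each block opens on the same line as its cylinder or sphere keyword, as in the excerpt above; an awk or sed state machine could do the same job.

        import sys

        out, block, depth = [], [], 0
        for line in open(sys.argv[1]):
            if depth == 0 and not line.strip().startswith(("cylinder", "sphere")):
                out.append(line)                    # text outside object blocks passes through untouched
                continue
            block.append(line)
            depth += line.count("{") - line.count("}")
            if depth == 0:                          # block just closed: keep it or drop it as a whole
                if "color@" not in "".join(block):
                    out.extend(block)
                block = []
        sys.stdout.write("".join(out))

    Run it as, for example, python filter_pov.py scene.pov > scene_clean.pov (the script name is arbitrary).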

    Read the article
