Search Results

Search found 970 results on 39 pages for 'noise reduction'.

Page 13 of 39

  • Problems reviving an old PC, including a graphics card issue

    - by Mick
    I have a PC that seemed to have died years ago that I am trying to revive. It has a dual-core Athlon processor and a Gigabyte motherboard. It had two dual-output graphics cards, and I have long since forgotten which output would print out the diagnostic information as the PC starts up. Also, I suspect that the resolution set on all the monitors was probably higher than my current single monitor is capable of displaying. The motherboard also has a built-in graphics card, so I thought it may be simplest to remove both graphics cards and plug my monitor into the onboard graphics just while I get things going. Does that seem sensible? Now the other problem: the PC has two hard drives. I have no idea which one is the primary one it is attempting to boot from. When I power up, the fan comes on and I hear some chuga-chuga-pause, chuga-chuga-pause, repeating indefinitely. I'm not sure which device is making the noise. There are no beeps at any time. I see nothing on the screen at any time, not even for a second. Any suggestions? EDIT: If I start up the PC without the power connected to the CD-ROM drive, there is no chuga-chuga noise.

    Read the article

  • Microphone array support in Windows: info on performance and compatible hardware?

    - by exinocactus
    It is officially claimed by Microsoft (Audio Device Technologies for Windows) that Windows Vista has integrated system-level support for microphone arrays, for improved sound capture by isolating a sound source in the target direction and rejecting ambient noise and reverberation. In more technical terms, an implementation of an adaptive beamformer. Theoretically, microphone arrays with 2-4 mics can substantially improve SNR under some conditions, like a speaker in front of the laptop in a noisy environment (airport, cafe). Surprisingly, though, I find very little information about commercially available products supporting these new features. I mean products like portable USB microphone arrays, or laptops or flat screens with integrated mic arrays. I could only find info about two laptop models having a "noise cancelling digital array microphone": the Dell Latitude and the Eee PC 1008P-KR. Now my questions:
      - Do you have any experience with the Windows beamformer implementation, for instance in the above-mentioned laptops? How well does it work?
      - Are there any test results available on the net or in print (papers)?
      - Do you know about other microphone array hardware?
      - What could be the reason why mic array technology didn't gain success?
      - Is there mic array support in Windows 7?
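    For context on the underlying technique: the simplest member of this family is a fixed delay-and-sum beamformer (the adaptive beamformers Windows ships are more sophisticated). Below is a minimal NumPy sketch of delay-and-sum steering for a linear array; the geometry, sample rate and steering angle are illustrative assumptions, not details of the Windows implementation.

      # Minimal delay-and-sum beamformer sketch (illustrative, not the Windows DSP).
      import numpy as np

      def delay_and_sum(channels, mic_x, angle_deg, fs, c=343.0):
          """channels: (n_mics, n_samples) recordings; mic_x: mic positions in meters."""
          n_mics, n_samples = channels.shape
          # Per-mic plane-wave delay, in samples, for the target direction.
          delays = mic_x * np.sin(np.radians(angle_deg)) / c * fs
          freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
          out = np.zeros(n_samples)
          for ch, d in zip(channels, delays):
              # Fractional-sample delay applied as a phase shift in the frequency domain.
              spectrum = np.fft.rfft(ch) * np.exp(-2j * np.pi * freqs * d / fs)
              out += np.fft.irfft(spectrum, n=n_samples)
          # Speech from the target direction adds coherently; off-axis noise averages out.
          return out / n_mics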

    Read the article

  • Problem with Amiga 1200 accelerator board

    - by cc0
    I just recently walked past a dump, where in the corner of my eye I spotted something that looked like a huge keyboard. I went to take a closer look, and found out that it was an Amiga 1200 with a 030 accelerator board and a Scala dongle. Jackpot! So anyway: I dried it, cleaned it, and it works, but the floppy drive was not powering on, and the same with the hard drive. I am using an old Amiga 1200 PSU that was making a strange high-pitched noise when I tried to boot the Amiga with the hard drive installed. I removed the hard drive and it booted fine, with the PSU not emitting any detectable noise. However, when I have the 030 installed it sometimes reboots and shows a red "Software Error" screen. I tried removing the memory on the board, same effect. Sometimes it does not boot at all, just gives a black screen. Someone suggested the card had problems with 3.1 ROMs, but this Amiga has only 3.0 ROMs installed. Does anyone have any theories as to why it seems unstable? I don't have any other Amiga parts to cross-swap with to test a lot of things, so I'd really appreciate some sound input here so I know what to look for in order to try to fix it. And merry Christmas everyone :]

    Read the article

  • WebCenter Customer Spotlight: Texas Industries, Inc.

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Texas Industries, Inc. (TXI) is a leading supplier of cement, aggregate, and consumer product building materials for residential, commercial, and public works projects. TXI is based in Dallas and employs around 2,000 people. The company faced decentralized, manual processes for entering 180,000 vendor invoices annually; invoice entry was a time- and resource-intensive process that entailed significant personnel requirements. TXI implemented a centralized solution leveraging Oracle WebCenter Imaging, a smart routing solution that enables users to capture invoices electronically with Oracle WebCenter Capture and Oracle WebCenter Forms Recognition and send the invoices through to Oracle Financials for approvals and processing. TXI significantly lowered resource needs for payables processing, increased productivity by 80%, and reduced invoice processing cycle times by 84% (from 20 to 30 days to just 3 to 5 days, on average).

    Company Overview
    With operating subsidiaries in six states, TXI is the largest producer of cement in Texas and a major producer in California. TXI is a major supplier of stone, sand, gravel, and expanded shale and clay products, and one of the largest producers of bagged cement and concrete products in the Southwest.

    Business Challenges
    TXI had decentralized and manual processes for entering 180,000 vendor invoices annually. Invoice entry was a time- and resource-intensive process that entailed significant personnel requirements. Their business objectives were:
      - Increase the efficiency of core business processes, such as invoice processing, to support the organization's desire to maintain its role as the Southwest's leader in delivering high-quality, low-cost products to the construction industry
      - Meet the audit and regulatory requirements for achieving Sarbanes-Oxley (SOX) compliance
      - Streamline entry of 180,000 invoices annually to accelerate processing, reduce errors, cut invoice storage and routing costs, and increase visibility into payables liabilities

    Solution Deployed
    TXI replaced a resource-intensive, paper-based, decentralized process for invoice entry with a centralized solution leveraging Oracle WebCenter Imaging 11g. They worked with the Oracle partner Keste LLC to develop a smart routing solution that enables users to capture invoices electronically with Oracle WebCenter Capture, and then uses Oracle WebCenter Forms Recognition and the Oracle WebCenter Imaging workflow to send the invoices through to Oracle Financials for approvals and processing.

    Business Results
      - Significantly lowered resource needs for payables processing through centralization and improved efficiency
      - Enabled the company to process invoices faster and pay bills earlier, allowing it to take advantage of additional vendor discounts
      - Tracked to increase productivity by 80% and reduce invoice processing cycle times by 84% (from 20 to 30 days to just 3 to 5 days, on average)
      - Achieved a 25% reduction in paper invoice storage costs now that invoices are captured digitally, and enabled a 50% reduction in shipping costs, as the company no longer has to send paper invoices between headquarters and production facilities for approvals

    "Entering and manually processing more than 180,000 vendor invoices annually was time and labor intensive. With Oracle Imaging and Process Management, we have automated and centralized invoice entry and processing at our corporate office, improving productivity by 80% and reducing invoice processing cycle times by 84% - a very important efficiency gain." Terry Marshall, Vice President of Information Services, Texas Industries, Inc.

    Additional Information
      - TXI Customer Snapshot
      - Oracle WebCenter Content
      - Oracle WebCenter Capture
      - Oracle WebCenter Forms Recognition

    Read the article

  • Xsigo and Oracle's Storage

    - by Philippe Deverchère
    Xsigo, a virtual network infrastructure provider, has recently been acquired by Oracle. Following this acquisition, one might ask why it is important to Oracle and how Oracle's storage is going to benefit in the long term from this virtualized infrastructure layer. Well, the first thing to understand is that virtual networking addresses both network and storage connectivity. Oracle Virtual Networking, as the Xsigo technology is now called, connects any server to any network and storage, so this is not just about connecting servers to the Internet or intranet. It is also, in large part, about connecting servers to NAS and SAN storage. Connecting servers to storage has become increasingly complex in the past few years because of the strong emergence of virtualization at the operating system level. 50% of enterprise workloads are now virtualized, up from 18% in 2009, resulting in a strong consolidation of various applications in a high-density server footprint. At the same time, server I/O capability increased 8x in the last 8 years. All this has pushed IT administrators to multiply the number of I/O connections in the back-end of their physical servers, resulting in a messy and very hard to manage networking infrastructure. The back-end of a typical rack with no virtual networking illustrates the problem. We consider that today:
      - 75% of users have ten or more Ethernet ports per server
      - 85% of users have two or more SAN ports per server
      - 58% have had to add connectivity to a server specifically for VMs
      - 65% consider cable reduction a priority
    The average is 12 or more ports per server, resulting in an extremely complex infrastructure to manage. What Oracle wants to achieve with its Oracle Virtual Networking offering is pretty simple. The objective is to eliminate the complexity through a dramatic reduction of cabling between servers and storage/networks. It is also to provide a software-based management system so that any server can be connected to any network or any storage, on demand, and without physical intervention on the infrastructure. At the end of the day, what one wants for the back-end of a customer's racks is just a couple of connections on each physical server, providing a simple, agile and fast network infrastructure for both storage and networking access. This is exactly what the Oracle Virtual Networking solution does. It transforms a complex, error-prone, difficult to manage and expensive networking infrastructure into a simple, high performance and agile solution for the data center. Practically speaking, and for the sake of simplicity, imagine that each server just hosts a minimal number of physical InfiniBand HCAs (Host Channel Adapters) with two links (for redundancy) onto the Oracle Fabric Interconnect director. Using the Oracle Fabric Manager software, you'll then be able to create virtual NICs and HBAs (called vNICs and vHBAs) that will be seen by the servers as standard NICs and HBAs, and associate them with networks and storage systems which are physically connected to the back-end of the director through standard Fibre Channel and Ethernet GbE/10GbE ports. In addition to this incredibly simple "at-a-click" connectivity capability, the Oracle Virtual Networking solution offers powerful features such as network isolation, Quality of Service, advanced performance monitoring, and non-disruptive reconfiguration, migration and scalability of the networking infrastructure.
So let's go back now to our initial question: why is Oracle Virtual Networking especially important to Oracle's storage solutions? After all, one could connect any storage to the back-end of the Oracle Fabric Interconnect directors, right? The answer is pretty simple: since Oracle owns both the virtualized networking infrastructure and the storage (ZFS-SA, Pillar Axiom and tape), it is possible to imagine several ways in the future to add value when it comes to connecting storage to a virtualized storage network: enhanced storage capabilities, converged management between storage and network, improved diagnostic capabilities, and optimized integration resulting in higher performance and unique features/functions. Of course, all this is not going to be done overnight, and the future will tell which evolutions come first. But there is little doubt that the integration of Xsigo within Oracle is going to create opportunities for Oracle's storage!

    Read the article

  • How to reduce a data frame while keeping the values of other columns

    - by betabandido
    I am trying to reduce a data frame using the max function on a given column. I would like to preserve the other columns, but keep the values from the same rows where each maximum value was selected. An example will make this easier. Let us assume we have the following data frame:

      dframe <- data.frame(list(BENCH=sort(rep(letters[1:4], 4)),
                                CFG=rep(1:4, 4),
                                VALUE=runif(4 * 4)))

    This gives me:

         BENCH CFG      VALUE
      1      a   1 0.98828096
      2      a   2 0.19630597
      3      a   3 0.83539540
      4      a   4 0.90988296
      5      b   1 0.01191147
      6      b   2 0.35164194
      7      b   3 0.55094787
      8      b   4 0.20744004
      9      c   1 0.49864470
      10     c   2 0.77845408
      11     c   3 0.25278871
      12     c   4 0.23440847
      13     d   1 0.29795494
      14     d   2 0.91766057
      15     d   3 0.68044728
      16     d   4 0.18448748

    Now, I want to reduce the data in order to select the maximum VALUE for each different BENCH:

      aggregate(VALUE ~ BENCH, dframe, FUN=max)

    This gives me the expected result:

        BENCH     VALUE
      1     a 0.9882810
      2     b 0.5509479
      3     c 0.7784541
      4     d 0.9176606

    Next, I tried to preserve the other columns:

      aggregate(cbind(VALUE, CFG) ~ BENCH, dframe, FUN=max)

    This reduction returns:

        BENCH     VALUE CFG
      1     a 0.9882810   4
      2     b 0.5509479   4
      3     c 0.7784541   4
      4     d 0.9176606   4

    Both VALUE and CFG are reduced using the max function. But this is not what I want. For instance, in this example I would like to obtain:

        BENCH     VALUE CFG
      1     a 0.9882810   1
      2     b 0.5509479   3
      3     c 0.7784541   2
      4     d 0.9176606   2

    where CFG is not reduced, but just keeps the value associated with the maximum VALUE for each different BENCH. How could I change my reduction in order to obtain this last result?

    Read the article

  • How to make this in Python

    - by user2980882
    The number reduction game. Rules of the game:
      - The first player to write a 0 wins.
      - To start the game, Player 1 picks any whole number greater than 1, say 18.
      - The players take turns reducing the number by either:
          - subtracting 1 from the number his/her opponent just wrote, OR
          - halving the number his/her opponent just wrote, rounding down if necessary.
    Write a Python program that lets two players play the number reduction game. Your program should:
      1. Ask Player 1 to enter the starting number.
      2. Use a while-loop to allow the players to take turns reducing the number until someone wins.
      3. Each time a player enters a positive number (not 0), inform the other player what his/her choices are and ask him/her to enter the next number.
      4. Declare the winner when someone enters 0.
    Example session:
      Player 1, enter a number greater than 1: 16
      Player 2, your choices are 15 or 8: 15
      Player 1, your choices are 14 or 7: 7
      Player 2, your choices are 6 or 3: 3
      Player 1, your choices are 2 or 1: 2
      Player 2, your choices are 1 or 1: 1
      Player 1, your choices are 0 or 0: 0
      Player 1 wins
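    A minimal sketch of one way to structure the program (prompts copied from the example session; validation that the entered number is actually one of the two legal choices is omitted):

      # Number reduction game - minimal sketch.
      def number_reduction_game():
          number = int(input("Player 1, enter a number greater than 1: "))
          player = 2  # Player 2 responds to the starting number
          while number != 0:
              choices = (number - 1, number // 2)  # subtract 1, or halve rounding down
              number = int(input("Player %d, your choices are %d or %d: "
                                 % (player, choices[0], choices[1])))
              if number == 0:
                  print("Player %d wins" % player)
              else:
                  player = 1 if player == 2 else 2  # switch turns

      number_reduction_game()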

    Read the article

  • How do I "link narrations" in PowerPoint 2010 Beta?

    - by Zack Peterson
    I can record narration with PowerPoint 2010, but it seems that it will only embed it into the presentation file. Older versions of PowerPoint allowed the audio to be saved as external sound files. I'd like to perform noise-reduction and minor editing outside of PowerPoint. Has Microsoft removed the "link narrations" option from PowerPoint 2010?

    Read the article

  • Does SDHC have any write (ECC) error recovery?

    - by marc
    What happens if an SDHC card gets a write error (damaged cell / bad sector)? Is the whole card unusable (ready for the trash, with all data written to that sector now and in the future lost)? Or does it rewrite the sector to another one and mark the faulty one as unusable (flash memory gets corrupted when writing, so maybe there is some function to check whether a sector was written successfully), which would be seen as a reduction in capacity but no data loss? I have to do some research about SD cards on diskless machines. Regards

    Read the article

  • Do newer physical interfaces make a better Linux firewall?

    - by pfyon
    At work we use an old (10-year-old) Linux box with 4 interfaces acting as the router/firewall for the network. There's never really been a need to change it, since it's stable and handles all our needs. I'm wondering, though: would replacing the network interfaces with newer ones provide a benefit? Besides the obvious bandwidth increase (e.g. 100 Mbit to Gbit), would there be a latency reduction, or do newer cards do pretty much the same thing as old ones?

    Read the article

  • Social Media Talk: Facebook, Really?? How Has It Become This Popular??

    - by david.talamelli
    If you have read some of my previous posts over the past few years, either here or on my personal blog, David's Journal on Tap, you will know I am a social media enthusiast. I use various social media sites every day in both my work and personal life. I was surprised to read today on Mashable.com that Facebook now commands 41% of social media traffic. When I think of the social media sites I use most, the sites that jump into my mind first are LinkedIn, blogging and Twitter. I do use Facebook in both work and in my personal life, but on the list of sites I use it probably ranks closer to the bottom than the top. I know Facebook is ingrained in everything these days - but really, I am not a huge Facebook fan, and I am finding that over the past 3-6 months my interest in Facebook is going down rather than up. From a work perspective, SM sites let me connect with candidates and communities and help me talk about the things that I am doing here at Oracle. From a personal perspective, SM sites let me keep in touch with friends and family both here and overseas in a really simple and easy way. Sites like LinkedIn give me a great way to proactively talk to both active and passive candidates. Twitter is fantastic for keeping in touch with industry trends and the latest trending topics, as well as following conversations about whatever keyword you want to follow. Blogging lets me share my thoughts and ideas with others, and while FB does have some great benefits, I don't think the benefits outweigh the negatives of using it. I use TweetDeck to keep track of my Twitter feeds, the latest LinkedIn updates and Facebook updates. TweetDeck is a great tool as it consolidates these 3 SM sites for me and I can quickly scan the latest news on any of them. From what I have seen of Facebook, it looks like 70-80% of people are using FB to grow their farm on FarmVille, start a mafia war on Mafia Wars, read their horoscope, check their love percentage, etc. In between all these "updates", every now and again you do see a real update from someone who actually has something to say, but there is so much "white noise" on FB from all the games and apps that it is hard to see the real messages among all the 'games' information. I don't like having to scroll through what seems like pages of FarmVille updates only to get one real piece of information. For me this is where FB's value really drops off. While I use SM every day, I try to use SM effectively; sifting through so much noise is not effective, and really I am not all that interested in FarmVille, Mafia Wars or any similar game/app. But what about Groups and Facebook ads? Groups are OK, but I am not sure I would call them SM game changers - yes, there is a group for everything out there, but a group, whether it is on FB or not, is only as good as the community that supports and participates in it. Many of the groups on FB (and elsewhere) are set up and never used or promoted by the moderator. I have heard that FB ads do have an impact; I have not really looked at them, but questions of cost and return on investment come to mind. FB does have some benefits: it is a great way to keep in touch with people and a great way to talk to others. I think it would have been interesting to see a different statistic measuring how effective that 41% of social media traffic via FB really is - or is it just a case of more people jumping online to play games?
To me, FB does not equal SM effectiveness; at the moment it is a tool that I sometimes need to use, as opposed to want to use. This article was originally posted on David Talamelli's Blog - David's Journal on Tap

    Read the article

  • Observations From The Corner of a Starbucks

    - by Chris Williams
    I’ve spent the last 3 days sitting in a Starbucks for 4-8 hours at a time. As a result, I’ve observed a lot of interesting behavior and people (most of whom were uninteresting themselves). One of the things I’ve noticed is that most people don’t sit down. They come in, get their drink and go. The ones that do sit down stay much longer than it takes to consume their drink. The drink is just an incidental purchase, certainly not the reason they are here. Most of the people who sit also have laptops, probably around 75%. Only a few have kids (with them), but the ones that do have very small kids, toddlers or younger. Of all the “campers”, only a small percentage are wearing headphones, presumably because A) external noise doesn’t bother them or B) they aren’t working on anything important. My buddy George falls into category A, but he grew up in a house full of people. Silence freaks him out far more than noise. My brother and I, on the other hand, were both only children and don’t handle noisy distractions well. He needs it quiet (like a tomb) and I need music. Go figure… I can listen to Britney Spears mixed with Apoptygma Berzerk and Anthrax and crank out 30 pages, but if your toddler is banging his spoon on the table, you’re getting a dirty look… unless I have music, in which case all is right with the world. Anyway, enough about me. Most of the people who come in as a group are smiling when they enter; half as many are smiling when they leave. People who come in alone typically aren’t smiling at all. The average age, over the last three days, seems to be early 30s, with a couple of senior citizens and teenagers at either end of the curve. The teenagers almost never stay; they have better stuff to do on a nice day. The senior citizens are split nearly evenly between campers and in&outs. Most of the non-solo campers have 1 person with a laptop, while the other reads the paper or a book. Some campers bring multiple laptops… but only really look at one of them. This Starbucks has a drive-through. The line is almost never more than 2-3 cars long, but apparently a lot of the in&out people would rather come in and stand in line behind (up to) 5 people. The music in here sucks. My musical tastes can best be described as eclectic to bad, but I can still get work done (see above). I find the music in this particular Starbucks to be discordant and jarring. At this Starbucks, the coffee lingo is apparently something that is meant to occur between employees only. The nice lady at the counter can handle orders in plain English and translate them to Baristaspeak (Baristese?) quite efficiently. If you order in Baristaspeak, however, she will look confused and repeat your order back to you in plain English to confirm you actually meant what you said. Then she will say it in Baristaspeak to the lady making your drink. Nobody in this Starbucks (other than the Baristas) makes eye contact… at least not with me. Of course, that may be indicative of a separate issue. ;)

    Read the article

  • Big Data: Size isn’t everything

    - by Simon Elliston Ball
    Big Data has a big problem; it’s the word “Big”. These days, a quick Google search will uncover terabytes of negative opinion about the futility of relying on huge volumes of data to produce magical, meaningful insight. There are also many clichéd but correct assertions about the difficulties of correlation versus causation, in massive data sets. In reading some of these pieces, I begin to understand how climatologists must feel when people complain ironically about “global warming” during snowfall. Big Data has a name problem. There is a lot more to it than size. Shape, Speed, and…err…Veracity are also key elements (now I understand why Gartner and the gang went with V’s instead of S’s). The need to handle data of different shapes (Variety) is not new. Data developers have always had to mold strange-shaped data into our reporting systems, integrating with semi-structured sources, and even straying into full-text searching. However, what we lacked was an easy way to add semi-structured and unstructured data to our arsenal. New “Big Data” tools such as MongoDB, and other NoSQL (Not Only SQL) databases, or a graph database like Neo4J, fill this gap. Still, to many, they simply introduce noise to the clean signal that is their sensibly normalized data structures. What about speed (Velocity)? It’s not just high frequency trading that generates data faster than a single system can handle. Many other applications need to make trade-offs that traditional databases won’t, in order to cope with high data insert speeds, or to extract quickly the required information from data streams. Unfortunately, many people equate Big Data with the Hadoop platform, whose batch driven queries and job processing queues have little to do with “velocity”. StreamInsight, Esper and Tibco BusinessEvents are examples of Big Data tools designed to handle high-velocity data streams. Again, the name doesn’t do the discipline of Big Data any favors. Ultimately, though, does analyzing fast moving data produce insights as useful as the ones we get through a more considered approach, enabled by traditional BI? Finally, we have Veracity and Value. In many ways, these additions to the classic Volume, Velocity and Variety trio acknowledge the criticism that without high-quality data and genuinely valuable outputs then data, big or otherwise, is worthless. As a discipline, Big Data has recognized this, and data quality and cleaning tools are starting to appear to support it. Rather than simply decrying the irrelevance of Volume, we need as a profession to focus how to improve Veracity and Value. Perhaps we should just declare the ‘Big’ silent, embrace these new data tools and help develop better practices for their use, just as we did the good old RDBMS? What does Big Data mean to you? Which V gives your business the most pain, or the most value? Do you see these new tools as a useful addition to the BI toolbox, or are they just enabling a dangerous trend to find ghosts in the noise?

    Read the article

  • Adobe Photoshop CS5 vs Photoshop CS5 Extended

    - by Edward
    Adobe Photoshop has been an industry standard for most web designers and photographers worldwide. Photoshop CS5 has made photo editing much more refined, and the composition process has become much easier than ever before. To weigh the advantages of Photoshop CS5 Extended over Photoshop CS5, we have written this comparison article from both a designer's and a photographer's perspective. Hopefully it will help you with your buying/upgrade decision.

    Photoshop CS5
    Photoshop CS5 refines powerful photography tools. It makes the editing process easier, as fewer steps are involved to remove noise, add grain, create vignettes, correct lens distortions, sharpen, and create HDR images. It offers quick image correction and color and tone control for professional purposes. Intelligent image editing and enhancement and extraordinarily advanced compositing make it a better tool for photographers than earlier versions. It allows users to accelerate their workflow with fast performance on 64-bit Windows® and Mac hardware systems, and smoother interactions thanks to more GPU-accelerated features. It also boasts state-of-the-art processing with Adobe Photoshop Camera Raw 6, and helps to maximize creative impact. It provides tremendous precision and freedom: it allows the user to easily select intricate image elements, such as hair, create realistic painting effects, or remove any image element and see the space fill in almost magically. It offers easy access to core editing, a streamlined workflow, a flexible work environment, and creative tools and content.

    Photoshop CS5 Extended
    Photoshop CS5 Extended is quite innovative: it incorporates 3D elements into 2D artwork directly within the digital imaging application, giving users an easy on-ramp to 3D image creation, and it provides 3D editing. It has intelligent image editing and enhancement, advanced compositing, and an extraordinary painting and drawing toolset. It provides for video and animation design, and helps when working with specialized images for architecture, manufacturing, engineering, science, and medicine.

    Where CS5 Extended scores over CS5
    CS5 Extended includes several features not found in CS5:
      - Technology for creating 3D extrusions
      - 3D material library and picker
      - Depth of field for 3D
      - 3D merging and scene composition improvements
      - 3D workflow improvements
      - Customization of 3D features
      - Image-based light sources
      - Shadow catcher for shadow creation
      - Enhanced ray tracer
      - Context-sensitive widgets, which allow easy control of objects, lights and cameras
      - Overlays for materials and mesh boundaries
    Photoshop CS5 Extended incorporates all the features of CS5 plus these more advanced ones, allowing 3D creation and editing and other advanced workflows.

    Redefining the image-editing experience: a photographer's point of view
    Photoshop CS5 delivers amazing features and creative options so even new users can perform advanced image manipulations and compositions. The breathtaking image intelligence behind Content-Aware Fill magically removes any image detail or object, examines the surroundings and seamlessly fills in the space left behind, matching the lighting, tone and noise of the surrounding area. The new Refine Edge makes nearly-impossible image selections possible. Masking was never easier: the toughest types of edges, such as hair and foliage, seem easier to fix.
To sum up, the following are a few advantages of CS5 Extended over previous versions:
      - 64-bit processing
      - Content-Aware Fill
      - Refine Edge, which makes nearly-impossible image selections possible
      - HDR Pro, including ghost artifact removal and HDR toning, which gives the look of HDR with a single exposure
      - New brush options
      - Improved image management with enhanced Adobe Bridge
      - Lens corrections
      - Improved black-and-white conversions
      - Puppet Warp: precisely reposition or warp any image element
      - Adobe Camera Raw 6

    Pricing and Availability
    Adobe Photoshop CS5 and CS5 Extended are available through Adobe Authorized Resellers and the Adobe Store. The estimated street price is US$699 for Adobe Photoshop CS5 and US$999 for Photoshop CS5 Extended. Upgrade pricing and volume licensing are also available.

    Read the article

  • Error: couldn't read file - kernel panic

    - by Thanos
    I have just installed Ubuntu 12.04.1. To be honest, I had to run the installation several times until it finished fine. When I finally managed to install it properly, I powered on the laptop and the GRUB menu showed up. I selected Ubuntu generic. It takes some time to load, and when it does I get an error message stating:

      error: couldn't read file
      Press any key to continue

    If I press any key, nothing happens. If I leave it there, in a short while there is a black screen which gives some weird messages:

      [0.946710] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block (0,0)
      [0.946755] Pid: 1, comm: swapper/0 Not tainted 3.2.0-29-generic #46-Ubuntu
      [0.946792] Call Trace:
      [0.946831] [<ffffffff81640ec8>] panic+0x91/0x1a4
      [0.946869] [<ffffffff81cfc01e>] mount_block_root+0xdc/0x18e
      [0.946909] [<ffffffff81002930>] ? populate_rootfs_wait+0x300/0x9d0
      [0.946947] [<ffffffff81cfc257>] mount_root+0x54/0x59
      [0.946982] [<ffffffff81cfcec9>] prepare_namespace+0x16d/0x1a6
      [0.947019] [<ffffffff81cfbd63>] kernel_init+0x153/0x158
      [0.947094] [<ffffffff81cfbc10>] ? start_kernel+0x3bd/0x3bd
      [0.947129] [<ffffffff81664030>] ? gs_change+0x13/0x13

    The thing is that the laptop isn't mine. A friend tried to dual-boot Ubuntu alongside Windows 7, but he didn't succeed. The Ubuntu option was in GRUB, but when you tried to boot it, it rebooted from the start. So from a live CD I erased Ubuntu and started Windows to check if something had gone wrong; fortunately everything was OK. Windows started normally! So I tried to install Ubuntu. Before the installation completed, the installer crashed! I was afraid he had lost Windows, something that turned out to be true... At that point I tried to install Windows, but whichever version I tried (XP; 7 Home, Professional, Ultimate; 8) could never reach the end. So I tried to reinstall Ubuntu, but I keep facing those weird messages. What can I do to move on?

    EDIT 1: I tried to check and fix (if possible) with GParted. It took a lot of hours, although GParted displayed only 01:14. I restarted the system, and now I get almost the same messages; only the numbers in brackets [ ] are different:

      [0.818189] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block (0,0)
      [0.818235] Pid: 1, comm: swapper/0 Not tainted 3.2.0-29-generic #46-Ubuntu
      [0.818272] Call Trace:
      [0.818312] [<ffffffff81640ec8>] panic+0x91/0x1a4
      [0.818351] [<ffffffff81cfc01e>] mount_block_root+0xdc/0x18e
      [0.818391] [<ffffffff81002930>] ? populate_rootfs_wait+0x300/0x9d0
      [0.818428] [<ffffffff81cfc257>] mount_root+0x54/0x59
      [0.818464] [<ffffffff81cfcec9>] prepare_namespace+0x16d/0x1a6
      [0.818501] [<ffffffff81cfbd63>] kernel_init+0x153/0x158
      [0.818574] [<ffffffff81cfbc10>] ? start_kernel+0x3bd/0x3bd
      [0.818610] [<ffffffff81664030>] ? gs_change+0x13/0x13

    What on earth is going on?

    EDIT 2: I forgot to mention that my friend gave his laptop a punch during a game. After that, his cooler began making a weird noise, so I checked, and it is a bit bent but working. What I believe must be wrong is that his HDD makes a weird noise while trying to load Ubuntu, which means he might need a new HDD. Could that be true?

    Read the article

  • Looking for a good world map generation algorithm

    - by FalconNL
    I'm working on a Civilization-like game and I'm looking for a good algorithm for generating Earth-like world maps. I've experimented with a few alternatives, but haven't hit on a real winner yet. One option is to generate a heightmap using Perlin noise and add water at a level such that about 30% of the world is land. While Perlin noise (or similar fractal-based techniques) is frequently used for terrain and is reasonably realistic, it doesn't offer much in the way of control over the number, size and position of the resulting continents, which I'd like to have from a gameplay perspective. See http://farm3.static.flickr.com/2792/4462870263_ff26c40365_o.jpg for an example (sorry, can't post pictures yet). A second option is to start with a randomly positioned one-tile seed (I'm working on a grid of tiles), determine the desired size for the continent, and each turn add a tile that is horizontally or vertically adjacent to the existing continent until you've reached the desired size. Repeat for the other continents. This technique is part of the algorithm used in Civilization 4. The problem is that after placing the first few continents, it's possible to pick a starting location that's surrounded by other continents and thus won't fit the new one. Also, it has a tendency to spawn continents too close together, resulting in something that looks more like a river than continents. See http://farm5.static.flickr.com/4069/4462870383_46e86b155c_o.jpg for an example. Does anyone happen to know a good algorithm for generating realistic continents on a grid-based map while keeping control over their number and relative sizes?
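    To make the seed-growth idea concrete, here is a rough Python sketch of the second option with one tweak: candidate tiles are rejected if they fall within a minimum distance of previously placed continents, which addresses both the "surrounded seed" and the "river between continents" problems. The grid size, continent sizes and spacing rule are illustrative assumptions.

      # Seeded continent growth with a minimum gap between continents (sketch).
      import random

      W, H = 60, 40  # map size in tiles

      def grow_continents(sizes, min_gap=4, seed_tries=1000):
          land = set()  # tiles of all finished continents
          for size in sizes:
              # Pick a seed far enough (Manhattan distance) from existing land.
              seed = None
              for _ in range(seed_tries):
                  cand = (random.randrange(W), random.randrange(H))
                  if all(abs(cand[0]-x) + abs(cand[1]-y) > min_gap for x, y in land):
                      seed = cand
                      break
              if seed is None:
                  break  # no room left for another continent
              continent, frontier = {seed}, [seed]
              while len(continent) < size and frontier:
                  x, y = random.choice(frontier)
                  options = [(x+dx, y+dy) for dx, dy in ((1,0), (-1,0), (0,1), (0,-1))
                             if 0 <= x+dx < W and 0 <= y+dy < H
                             and (x+dx, y+dy) not in continent
                             and all(abs(x+dx-a) + abs(y+dy-b) > min_gap for a, b in land)]
                  if options:
                      tile = random.choice(options)
                      continent.add(tile)
                      frontier.append(tile)
                  else:
                      frontier.remove((x, y))  # boxed in; stop expanding from this tile
              land |= continent
          return land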

    Read the article

  • Reinforcement learning toy project

    - by Betamoo
    My toy project for learning and applying reinforcement learning is:
      - An agent tries to reach a goal state "safely" and "quickly"...
      - But there are projectiles and rockets launched at the agent along the way.
      - The agent can determine rocket positions - with some noise - only if they are "near".
      - The agent must then learn to avoid crashing into these rockets.
      - The agent has fuel, rechargeable with time, which is consumed by motion.
      - Continuous actions: accelerating forward, turning by an angle.
    I need some hints and names of RL algorithms that suit this case:
      - I think it is a POMDP, but can I model it as an MDP and just ignore the noise?
      - In the POMDP case, what is the recommended way of evaluating probabilities?
      - Which is better to use in this case: value functions or policy iteration?
      - Can I use a NN to model the environment dynamics instead of using explicit equations? If yes, is there a specific type/model of NN to be recommended?
      - I think actions must be discretized, right?
    I know it will take time and effort to learn such a topic, but I am eager to... You may answer some of the questions if you cannot answer all. Thanks
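    Since several of the questions above come down to "where do I start", here is a minimal tabular Q-learning sketch, which assumes the MDP simplification (ignoring the observation noise) and discretized states and actions; the learning rate, discount and exploration constants are illustrative choices, not recommendations for this exact task.

      # Tabular Q-learning update with epsilon-greedy exploration (sketch).
      import random
      from collections import defaultdict

      ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1
      Q = defaultdict(float)  # maps (state, action) -> estimated return

      def choose_action(state, actions):
          if random.random() < EPSILON:
              return random.choice(actions)                 # explore
          return max(actions, key=lambda a: Q[(state, a)])  # exploit

      def update(state, action, reward, next_state, actions):
          # Standard one-step Q-learning backup.
          best_next = max(Q[(next_state, a)] for a in actions)
          Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])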

    Read the article

  • How would the conversion of a custom CMS using a text-file-based database to Drupal be tackled?

    - by James Morris
    Just today I've started using Drupal for a site I'm designing/developing. For my own site, http://jwm-art.net, I wrote a user-unfriendly CMS in PHP. My brief experience with Drupal is making me want to convert from the CMS I wrote: a CMS whose sole method (other than comments) of publishing content is logging in via SSH and using nano to create a plain text file in a format like so*:

      head<<END_HEAD
      title = Audio
      keywords= open,source,audio,sequencing,sampling,synthesis
      descr = Music, noise, and audio, created by James W. Morris.
      parent = home
      END_HEAD
      main<<END_MAIN
      text<<END_TEXT
      Digital music, noise, and audio made exclusively with
      @=xlink=http://www.linux-sound.org@:Linux Audio Software@_=@.
      END_TEXT
      image=gfb@--@;Accompanying image for penonpaper-c@right
      ilink=audio_2008
      br=
      ilink=audio_2007
      br=
      ilink=audio_2006
      END_MAIN
      info=text<<END_TEXT
      I've been making PC based music since the early nineties - fortunately most of it only exists as tape recordings.
      END_TEXT

    (http://jwm-art.net/dark.php?p=audio - there are just over 400 pages on there.)

    *The journal-entry form, which takes some of the work out of it, has mysteriously broken. And it still required SSH access to copy the file to the main dat dir and to check I had actually remembered the format correctly and the code hadn't mis-formatted anything (which it always does). I don't want to drop all the old content (just some), but how much work would be involved in converting it, factoring in that I've been using Drupal for a day, have not written any PHP for a couple of years, and have zero knowledge of SQL? How might a team of developers tackle this? How doable is it for one guy in his spare time?
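    As a first step toward any migration, the heredoc-style blocks above are straightforward to parse mechanically. A minimal Python sketch follows; the regex and field handling are assumptions based only on the sample shown, and head fields like title, keywords and parent would then map onto the Drupal node title, taxonomy terms and menu/book hierarchy.

      # Parse the "name<<END_TOKEN ... END_TOKEN" page format (sketch).
      import re

      def parse_head(text):
          """Extract the key = value fields from the head<<END_HEAD block."""
          m = re.search(r'head<<(\w+)\n(.*?)\n\1', text, flags=re.S)
          fields = {}
          if m:
              for line in m.group(2).splitlines():
                  key, _, value = line.partition('=')
                  fields[key.strip()] = value.strip()
          return fields

      # The main/info bodies and the @=xlink=...@ markup would need their own
      # translation pass into Drupal's body text and link format.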

    Read the article

  • Ideas on C# DSL syntax

    - by Dmitriy Nagirnyak
    Hi, I am thinking about implementing a templating engine using only plain C#/.NET 4 syntax, with the benefit of static typing. On top of that templating language we could then create domain-specific languages (let's say HTML4, XHTML, HTML5, RSS, Atom, multipart emails and so on). One of the best DSLs in .NET 4 (if not the only one) is SharpDOM, which implements an HTML-specific DSL. Looking at SharpDOM, I am really impressed by what you can do using .NET 4, so I believe there are some not-so-well-known ways of implementing custom DSLs in .NET 4 - possibly not as well as in Ruby, but still. So my question would be: what are the C# 4 specific syntax features that can be used for implementing custom DSLs? Examples I can think of right now:

      // HTML - doesn't look too readable :)
      div(clas: "head",
          ul(clas: "menu", id: "main-menu", () => {
              foreach(var item in allItems) {
                  li(item.Name)
              }
          })
          // See how much noise it has with all the closing brackets?
      )

      // Plain text (email or something) - probably too simple
      Line("Dear {0}", user.Name);
      Line("You have been kicked off from this site");

    For me it is really hard to come up with syntax that has the least amount of noise. Please NOTE that I am not talking about another language (Boo, IronRuby, etc.), nor am I talking about different templating engines (NHaml, Spark, StringTemplate, etc.). Thanks, Dmitriy.

    Read the article

  • After interruption, delayed audio route change notifications when recording

    - by Frank Shearar
    My iPhone application requires that I know when a user has/has not plugged in her headphones. That's easy: AudioSessionAddPropertyListener with a callback listening to kAudioSessionProperty_AudioRouteChange. I write logs with NSLog as things happen. User plugs the headphones in? I get a notification, and a line in the gdb console. User unplugs the headphones? Ditto. At the same time, I'm sensing the noise level of the environment by starting a recording audio queue. This, too, works great: I can get the mic noise level and listen for audio route changes just fine. What I find is that after an interruption, once I've reactivated the audio session and restored the audio category to kAudioSessionCategory_RecordAudio, the audio route notifications go a bit haywire. When I plug in the headphones, I see no notification. When I unplug the headphones, I see BOTH the "plugged in" notification AND the "unplugged" notification, in rapid succession. It's like the "plugged in" notification was delayed and, when the "unplugged" notification arrives, the queue of pending notifications is flushed. What am I doing wrong? How do I correctly restore the audio session to get timely notifications? EDIT: iPhone OS 3.1.2, running on an iPhone 3G. I'm running a program compiled with the 3.0 SDK (from within Xcode 3.1.2).

    Read the article

  • Android compass orientation unreliable (low-pass filter)

    - by madsleejensen
    Hi all, I'm creating an application where I need to position an ImageView depending on the orientation of the device. I use the values from the magnetic field and accelerometer sensors to calculate the device orientation with:

      SensorManager.getRotationMatrix(rotationMatrix, null, accelerometerValues, magneticFieldValues);
      SensorManager.getOrientation(rotationMatrix, values);
      double degrees = Math.toDegrees(values[0]);

    My problem is that the positioning of the ImageView is very sensitive to changes in the orientation, making the ImageView constantly jump around the screen (because the degrees change). I read that this can be because my device is close to things that can affect the magnetic field readings, but this is not the only reason, it seems. I tried downloading some applications and found that the "3D compass" application remains extremely steady in its readings; I would like the same behavior in my application. I read that I can tweak the "noise" of my readings by adding a low-pass filter, but I have no idea how to implement this (because of my lack of math). I'm hoping someone can help me get a steadier reading on my device, where a little movement won't affect the current orientation. Right now I do a small

      if (Math.abs(lastReadingDegrees - newReadingDegrees) > 1) {
          updatePosition();
      }

    to filter a bit of the noise. But it's not working very well :)
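    The usual suggestion is an exponential low-pass filter: keep a running estimate and move it a small fraction toward each new reading. A minimal sketch (the smoothing factor and the wraparound handling are illustrative choices, not values from the Android documentation); the same two-line update drops straight into an onSensorChanged callback:

      # Exponential low-pass filter for compass headings, in degrees (sketch).
      ALPHA = 0.1  # 0 < ALPHA <= 1; smaller = steadier but laggier

      def low_pass(new_deg, filtered_deg):
          # Shortest signed angular difference, so 359 -> 1 doesn't swing through 180.
          delta = (new_deg - filtered_deg + 180.0) % 360.0 - 180.0
          return (filtered_deg + ALPHA * delta) % 360.0

      # Usage: keep one filtered heading and update it on every sensor event.
      heading = 0.0
      for raw in (10.0, 12.0, 350.0, 355.0, 8.0):  # stand-in sensor readings
          heading = low_pass(raw, heading)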

    Read the article

  • Matlab fft function

    - by CTZStef
    The code below is from the MATLAB 2011a help for the fft function. I think there is a problem here: why do they multiply t(1:50) by Fs, and then say it's time in milliseconds? Certainly it happens to be true in this very particular case, but change the value of Fs to, say, 2000, and it won't work anymore, obviously because of this factor of 2. Right? Quite misleading, isn't it? What do I miss?

      Fs = 1000;                    % Sampling frequency
      T = 1/Fs;                     % Sample time
      L = 1000;                     % Length of signal
      t = (0:L-1)*T;                % Time vector
      % Sum of a 50 Hz sinusoid and a 120 Hz sinusoid
      x = 0.7*sin(2*pi*50*t) + sin(2*pi*120*t);
      y = x + 2*randn(size(t));     % Sinusoids plus noise
      plot(Fs*t(1:50),y(1:50))
      title('Signal Corrupted with Zero-Mean Random Noise')
      xlabel('time (milliseconds)')

    Clearer with this:

      fs = 2000;                    % Sampling frequency
      T = 1 / fs;                   % Sample time
      L = 1000;                     % Length of signal
      t2 = (0:L-1)*T;               % Time vector
      f = 50;                       % Signal frequency
      s2 = sin(2*pi*f*t2);
      figure, plot(fs*t2(1:50),s2(1:50));   % NOT good
      figure, plot(t2(1:50),s2(1:50));      % good

    Read the article

  • Windows Azure: Announcing Support for Windows Server 2012 R2 + Some Nice Price Cuts

    - by ScottGu
    Today we released some great updates to Windows Azure:
      - Virtual Machines: Support for Windows Server 2012 R2
      - Cloud Services: Support for Windows Server 2012 R2 and .NET 4.5.1
      - Windows Azure Pack: Use Windows Azure features on-premises using Windows Server 2012 R2
      - Price Cuts: Up to 22% price reduction on memory-intensive instances
    Below are more details about each of the improvements.

    Virtual Machines: Support for Windows Server 2012 R2
    This morning we announced the release of Windows Server 2012 R2 - a fantastic update to Windows Server that includes a ton of great enhancements. This morning we are also excited to announce that the general availability image of Windows Server 2012 R2 is now supported on Windows Azure. Windows Azure is the first cloud provider to offer the final release of Windows Server 2012 R2, and it is incredibly easy to launch your own Windows Server 2012 R2 instance with it. To create a new Windows Server 2012 R2 instance, simply choose New->Compute->Virtual Machine within the Windows Azure Management Portal. You can select the “Windows Server 2012 R2” image and create a new Virtual Machine using the “Quick Create” option, or alternatively click the “From Gallery” option if you want to customize even more configuration options (endpoints, remote PowerShell, availability set, etc.). Creating and instantiating a new Virtual Machine on Windows Azure is very fast. In fact, the Windows Server 2012 R2 image now deploys and runs 30% faster than previous versions of Windows Server. Once the VM is deployed, you can drill into it to track its health and manage its settings. Clicking the “Connect” button allows you to remote desktop into the VM - at which point you can customize and manage it as a full administrator however you want. If you haven't tried Windows Server 2012 R2 yet, give it a try with Windows Azure. There is no easier way to get an instance of it up and running!

    Cloud Services: Support for using Windows Server 2012 R2 with Web and Worker Roles
    Today's Windows Azure release also allows you to use Windows Server 2012 R2 and .NET 4.5.1 within Web and Worker Roles within Cloud Service based applications. Enabling this is easy. You can configure an existing Cloud Service application to use Windows Server 2012 R2 by updating your Cloud Service Configuration File (.cscfg) to use the new “OS Family 4” setting. Alternatively, you can use the Windows Azure Management Portal to update cloud services that are already deployed on Windows Azure: simply choose the Configure tab on them and select Windows Server 2012 R2 in the Operating System Family dropdown. The approaches above enable you to immediately take advantage of Windows Server 2012 R2 and .NET 4.5.1 and all the great features they provide.
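    For reference, a minimal .cscfg sketch showing where the setting lives (the service name, role name and instance count here are placeholders; osFamily="4" selects Windows Server 2012 R2):

      <?xml version="1.0" encoding="utf-8"?>
      <!-- osFamily="4" selects the Windows Server 2012 R2 guest OS. -->
      <ServiceConfiguration serviceName="MyCloudService" osFamily="4" osVersion="*"
          xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
        <Role name="WebRole1">
          <Instances count="2" />
        </Role>
      </ServiceConfiguration>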
    Windows Azure Pack: Use Windows Azure features on Windows Server 2012 R2
    Today we also made generally available the Windows Azure Pack, a free download that enables you to run Windows Azure technology within your own datacenter, an on-premises private cloud environment, or with one of our service provider/hosting partners who run Windows Server. Windows Azure Pack gives you a management portal with the exact same UI as the Windows Azure Management Portal, within which you can create and manage Virtual Machines, Web Sites, and Service Bus - all of which can run on Windows Server and System Center. The services provided with the Windows Azure Pack are consistent with the services offered within our Windows Azure public cloud offering. This consistency enables organizations and developers to build applications and solutions that can run in any hosting environment, using the same development and management approach. The end result is an offering with incredible flexibility. You can learn more about Windows Azure Pack and download/deploy it today here.

    Price Cuts: Up to 22% Reduction on Memory-Intensive Instances
    Today we are also reducing prices by up to 22% on our memory-intensive VM instances (specifically our A5, A6, and A7 instances). These price reductions apply to both Windows and Linux VM instances, as well as to Cloud Service based applications. The price reductions will take effect in November, and will enable you to run applications that demand larger memory (such as SharePoint, databases, in-memory analytics, etc.) even more cost effectively.

    Summary
    Today's release enables you to start using Windows Server 2012 R2 within Windows Azure immediately, and to take advantage of our Cloud OS vision both within our datacenters and, using the Windows Azure Pack, within your existing datacenters and those of our partners. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article
