Search Results

Search found 6431 results on 258 pages for 'cache invalidation'.


  • Move Data into the grid for scalable, predictable response times

    - by JuergenKress
    CloudTran is pleased to introduce the availability of the CloudTran Transaction and Persistence Manager for creating scalable, reliable data services on the Oracle Coherence In-Memory Data Grid (IMDG). Use of IMDG architectures has been key to handling today’s web-scale loads because it eliminates database latency by storing important and frequently accessed data in memory instead of on disk. The CloudTran product lets developers easily use an IMDG for full ACID-compliant transactions without having to be concerned about the location or spread of data. The system has its own implementation of fast, scalable distributed transactions that does NOT depend on XA protocols but still guarantees all ACID properties. Plus, CloudTran asynchronously replicates data going into the IMDG to back-end datastores and back-up data centers, again ensuring ACID properties. CloudTran can be accessed through the Java Persistence API (JPA via TopLink Grid) and now through a new Low-Level API, or LLAPI. This is ideal for use in SOA applications that need data reliability, high availability, performance, and scalability. While still in its limited beta release, the LLAPI gives developers the ability to use the standard put/remove logic available in Coherence and then wrap that logic with simple Spring annotations or XML+AspectJ to start transactions. An important feature of LLAPI is the ability to join transactions. This is a common requirement for SOA applications that need to reduce network traffic by aggregating data into single cache entries and then doing SOA service processing in the node holding the data, which results in the need to orchestrate transaction processing across multiple service calls. CloudTran has the capability to handle these “multi-client” transactions at speed with no loss in ACID properties. Developing software around an IMDG like Oracle Coherence is an important choice for today’s web-scale applications and services, but it introduces new architectural considerations to maintain scalability in light of increased network loads and data movement. Without CloudTran, developers face an incredibly difficult task in ensuring data reliability, availability, performance, and scalability when working with an IMDG. Working with highly distributed data that is entirely volatile while stored in memory presents numerous edge cases where failures can result in data loss. The CloudTran product takes care of all of this, leaving developers with the confidence and peace of mind that all data is processed correctly. For those interested in evaluating the CloudTran product and IMDGs, take a look at this link for more information: http://www.CloudTran.com/downloadAPI.ph , or send your questions to [email protected]. For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
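    To make the put/remove-plus-annotation idea concrete, here is a minimal sketch of what such a service method could look like. The Coherence calls (CacheFactory, NamedCache) are standard Coherence API; the @CloudTranTransaction annotation and the service class are hypothetical stand-ins for whatever annotation the CloudTran LLAPI actually ships, so treat this as an illustration rather than the product's real API.

    ```java
    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    // Placeholder for whatever transaction-demarcation annotation the
    // CloudTran LLAPI actually provides; the name here is made up.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface CloudTranTransaction {}

    public class OrderService {
        // With the real product, a Spring/AspectJ aspect would open a
        // distributed transaction around this method.
        @CloudTranTransaction
        public void moveOrder(String orderId, String fromCache, String toCache) {
            NamedCache source = CacheFactory.getCache(fromCache); // standard Coherence API
            NamedCache target = CacheFactory.getCache(toCache);

            Object order = source.get(orderId);
            target.put(orderId, order);   // intended to commit or roll back together
            source.remove(orderId);
        }
    }
    ```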

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-05

    - by Bob Rhubart
    OTN Architect Day - Boston - Sept 12: What to Expect If you've never attended an OTN Architect Day, here's a little preview. You start with a continental breakfast. Then you have keynotes by an Oracle expert, and a member of the Oracle ACE community. After that come the break-out sessions, so you have your choice of two sessions in each time slot. So you'll get in two breakouts before lunch. Then you eat. After that there's a panel Q&A during which the audience tosses questions at the assembled session speakers. Then it's on to another set of break-out sessions, followed by a short break. Then the audience breaks into small groups for round table discussions. After that there's a drawing for some cool prizes, followed by the cocktail reception. All that costs you absolutely zero. Register now. Starting and Stopping Fusion Applications the Right Way | Ronaldo Viscuso While the fastartstop tool that ships with Oracle Fusion Applications does most of the work to start/stop/bounce the Fusion Apps environment, it does not do it all. Oracle Fusion Applications A-Team blogger Ronaldo Viscuso's post "aims to explain all tasks involved in starting and stopping a Fusion Apps environment completely." Dodeca Customer Feedback - The Rosewood Company | Tim Tow Oracle ACE Director Tim Tow shares anecdotal comments from one of his clients, a company that is deploying Dodeca to replace an aging VBA/Essbase application. Configuring UCM cache to check for external Content Server changes | Martin Deh Oracle WebCenter and ADF A-Team blogger Martin Deh shares the background information and the solution to a recently encountered customer scenario. Proxy As Upgrade to 11g Does Not Like NQSession.User | Art of Business Intelligence "In Oracle BI 10g the application was a lot more tolerant of bad design and cavalier usage of variables," observes Oracle ACE Christian Screen. "We noticed an issue recently during an upgrade where the Proxy As configuration in Oracle BI 10g used the NQSession.User variable to identify the user logged into Presentation Servers acting as Proxy." Oracle WebLogic Server 11g: Interactive Quick Reference | Dirk Nachbar Oracle ACE Dirk Nachbar shares a quick post with information on a new interactive reference guide to Oracle WebLogic Server. "The Quick Reference shows you an architecural overview of the Oracle WebLogic Server processes, tools, configuration files, log files and so on including a short description of each section and the corresponding link to the Oracle WebLogic Server Documentation," says Nachbar. Thought for the Day "In fast moving markets, adaptation is significantly more important than optimization." — Larry Constantine Source: Quotes for Software Engineers

    Read the article

  • ArchBeat Link-o-Rama for 2012-08-29

    - by Bob Rhubart
    ORCLville: OOW 2012 - Crystal Ball Oracle ACE Director Floyd Teter cooks up some tongue-in-cheek predictions for news and announcements that might come out of Oracle OpenWorld 2012. What's your prediction? Oracle Optimized Solutions at Oracle OpenWorld 2012 | Oracle Hardware Hardware matters, too! The people behind the Oracle Hardware blog have put together a list of Oracle OpenWorld 2012 sessions focused on Oracle Optimized Solutions, "designed, pre-tested, tuned and fully documented architectures for optimal performance and availability." Just plug the session ID numbers into Schedule Builder and you're good to go. AIX Checklist for stable OBIEE deployment | Dick Dunbar "OBIEE is a complicated system with many moving parts and connection points," according to Oracle Business Intelligence escalation engineer Dick Dunbar. "The purpose of this article is to provide a checklist to discuss OBIEE deployment with your systems administrators." Demo for OPN: Coherence Management with EM Cloud Control 12c Oracle Partner Network members can check out a new Coherence Management demo that showcases some of the key capabilities of Management Pack for Oracle Coherence and JVM Diagnostics. "The demo flow showcases the key enhancements made in the Enterprise Manager 12c release, which includes a new customizable performance summary, cache data management and configuration management," according to the WebLogic Partner Community EMEA blog. The Pragmatic Architect: To Boldly Go Where No One Has Gone Before | Frank Buschmann "Many architects have technical knowledge that's both impressive and sound, which is indeed an inevitable basis for design success," says Frank Buschmann. "Yet, a lot of software projects fail or suffer due to severe challenges in their architecture. The key to mastery is how architects approach design, what they value, and where they focus their attention and work." As retail dies, whom will be the winners? | Peter Evans-Greenwood "The problem for many retailers is that how consumers shop has changed but the retailers haven't adapted," says Peter Evans-Greenwood. "Their sole virtue was to be the last step in a supply chain delivering somebody else's products to the consumer. However, being the last step in the supply chain is no longer a virtue when consumers skip across channels and can reach around the globe, no longer dependent on or limited to what they can find locally." Thought for the Day "Brains require stimulation. If you're locked into a pattern of work, work, and more work, your brain soon habituates - the same way that it lets you stop hearing a clock ticking. So, if you want to be more effective at work, you must, paradoxically, be less single-minded in your devotion to work. Anything you do—anything—that stimulates new segments of your brain will make you a more effective programmer or analyst. I promise, with a money-back guarantee." — Gerald M. Weinberg Source: SoftwareQuotes.com

    Read the article

  • Live CD has black screen HP DV6

    - by Shaun Killingbeck
    Attempting to install/try Ubuntu (11.10, 12.04) on my new laptop, using a live CD (and I tried USB). I get the purple screen (with the man/keyboard at the bottom) and after that the screen flashes bright white before going black. Ubuntu continues to load in the background, with the login sound etc., but the screen is off. I have tried as many different solutions as I could find, including using nomodeset, xforcevesa, i915.modeset=0, and also now i915.modeset=1 in the boot options (separately), with varying consequences: either I end up at a blinking cursor with no prompt, a command line (startx fails: no screen found), or the original blank screen again. Tried booting from VirtualBox - it crashes at the same place the screen would go blank when using a CD/USB. Tried 11.04: I don't have this problem, BUT when trying to install, I get a ubi-partman error 141 (possibly down to the three partitions that came on my laptop... not sure why HP needed their own separate partition for HP Tools...). Model: HP Pavilion DV6 6B08SA. Processor: AMD Quad-Core A6-3410MX APU with Radeon HD 6545G2 Dual Graphics (1.6 GHz, 4 MB L2 cache). Chipset: AMD RS880M. Any help would be greatly appreciated. I just want to be able to partition the drive and install Ubuntu. I'm assuming the issue is graphics card related, although I have no confirmation of that. Update: Tried the workarounds on https://wiki.ubuntu.com/X/Troubleshooting/BlankScreen - set gfxpayload=text changed nothing, removing splash did nothing and setting vesafb.nonsense=1 did nothing either. I'd like to be able to collect some log information somehow, but I can't get to a command line from the live CD. Tried using the latest 12.04 beta, same issue. Tried nomodeset without splash or quiet; I get the following (tail of) output before it freezes on that screen:
    * Starting configure network device security [OK]
    * Starting configure network device [OK]
    [ 25.720899] ieee80211 phy0: w1_ops_config: change monitor mode: false (implement)
    [ 25.720923] ieee80211 phy0: w1_ops_config: change power-save mode: false (implement)
    * Starting restore sound card(s') mixer state(s) [fail]
    [ 25.721849] ieee80211 phy0: w1_ops_bss_info_changed: qos enabled: false (implement)
    * Stopping save kernel messages [OK]
    * Starting bluetooth [OK]
    * PulseAudio configured for per-user sessions
    saned disabled; edit /etc/default/saned
    [ 25.988016] hci_cmd_timer: hci0 command tx timeout
    [ 26.207225] bad LUN (0:1)
    [ 26.223735] bad target number (1:0)
    [ 26.252111] bad target number (2:0)
    [ 26.272170] bad target number (3:0)
    [ 26.300154] bad target number (4:0)
    [ 26.328162] bad target number (5:0)
    [ 26.344180] bad target number (6:0)
    [ 26.368142] bad target number (7:0)
    * Checking battery state... [OK]
    * Stopping System V runlevel capability [OK]
    Does this give any indication of the problem? The false (implement) messages also reappear when I press the power button to ask it to shut down, followed by a [fail] status for killing remaining processes.

    Read the article

  • How exactly is Google Webmaster Tools measuring "Site Performance"?

    - by Rémi
    I've been working for two months now on improving our response time (mainly server side) on a new forum (a brand new product from a technical point of view) we launched in Germany a few months ago, and I'm quite surprised by the results I get. I monitor our response time using Apache logs and our own implementation of a Boomerang beacon. Using my stats, I can see that our new product responds in about 680 ms where our old product was responding in about 1050 ms. On the other side, Google Webmaster Tools tells us that our pages have an average response time of about 1500 ms today, where it was 700 ms three months ago with our old product. I figured that GWT was taking client-side metrics into account, so I added some measures to our Boomerang beacon and everything looks just fine. I've also run some random pages through YSlow and Google's Page Speed and everything looks better than it was before. We even have an 82% score on Google's Page Speed tool, which is quite cool for a site with some ads in it :) Lately, we have signed a deal with Akamai to use two of their products: CDN for our static files (we were using another CDN before but it wasn't very effective) and RMA to improve network routes. We have also introduced a new aggressive cache mechanism to ensure that most of the pages served to crawlers are cached by our memcache grid. After checking my metrics, it seems that these changes have improved response time from about 650 ms to about 500 ms, which is good (still not great, but it is definitely an improvement). But Webmaster Tools continues to report an increasing average response time while we see it decreasing over the same period. Have you ever had the same kind of weird behavior on your sites while doing performance improvements? Do you have any idea how to monitor the same thing Google does with Site Performance in Google Webmaster Tools, so that we could improve our site and constantly check if it is what Google wants? Edit 2011/07/26: Thanks for your answers guys! Nevertheless, I was not precise enough. The main issue we have is not with the Site Performance page but with the Crawl Stats one for now. We probably found an issue on our side with some very slow pages (around 3000 ms!) and we are trying to fix them. I'll keep you posted as soon as I have more info. Thanks again!
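    When lining numbers like these up, it helps to be explicit about what "server side" means. As a rough illustration only (the poster's stack isn't described beyond Apache logs and a Boomerang beacon, so this is not their code), a servlet filter such as the one below measures the time the application spends producing each response, which is the figure you would compare against an access-log duration field or a beacon's back-end timing; all names are illustrative.

    ```java
    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;

    // Illustrative server-side timing filter: logs how long the application
    // took to produce each response, independent of network and client time.
    public class ResponseTimeFilter implements Filter {

        @Override
        public void init(FilterConfig config) { }

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            long start = System.nanoTime();
            try {
                chain.doFilter(req, res);
            } finally {
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                String uri = (req instanceof HttpServletRequest)
                        ? ((HttpServletRequest) req).getRequestURI() : "?";
                // In practice this would go to the access log / beacon pipeline.
                System.out.println("server-time " + uri + " " + elapsedMs + "ms");
            }
        }

        @Override
        public void destroy() { }
    }
    ```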

    Read the article

  • Upgrading 10.04LTS -> 10.10 using custom sources

    - by Boatzart
    I'm trying to upgrade to 10.10 from 10.04 LTS using a custom sources.list file that points to an unofficial mirror*. The mirror does have maverick, but I get the following output when upgrading: boatzart@somecomputer: > sudo do-release-upgrade Checking for a new ubuntu release Done Upgrade tool signature Done Upgrade tool Done downloading extracting 'maverick.tar.gz' authenticate 'maverick.tar.gz' against 'maverick.tar.gz.gpg' tar: Removing leading `/' from member names Reading cache Checking package manager Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done Updating repository information WARNING: Failed to read mirror file No valid mirror found While scanning your repository information no mirror entry for the upgrade was found. This can happen if you run a internal mirror or if the mirror information is out of date. Do you want to rewrite your 'sources.list' file anyway? If you choose 'Yes' here it will update all 'lucid' to 'maverick' entries. If you select 'No' the upgrade will cancel. Continue [yN] y WARNING: Failed to read mirror file 96% [Working] Checking package manager Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done Calculating the changes Calculating the changes Could not calculate the upgrade An unresolvable problem occurred while calculating the upgrade: The package 'update-manager-kde' is marked for removal but it is in the removal blacklist. This can be caused by: * Upgrading to a pre-release version of Ubuntu * Running the current pre-release version of Ubuntu * Unofficial software packages not provided by Ubuntu If none of this applies, then please report this bug against the 'update-manager' package and include the files in /var/log/dist-upgrade/ in the bug report. Restoring original system state Aborting Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done Here is the relevant section from /var/log/dist-upgrade/main.log: 2010-11-18 14:05:52,117 DEBUG The package 'update-manager-kde' is marked for removal but it's in the removal blacklist 2010-11-18 14:05:52,136 ERROR Dist-upgrade failed: 'The package 'update-manager-kde' is marked for removal but it is in the removal blacklist.' 2010-11-18 14:05:52,136 DEBUG abort called *I'm located inside of USC, and for some crazy reason any sustained downloads to anywhere outside of the University are throttled down to 5kbps inside of my lab. Because of this I need to use the following sources.list: deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid main restricted universe multiverse deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-updates main restricted universe multiverse deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-backports main restricted universe multiverse deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-security main restricted universe multiverse I've tried adding four more entries to the sources.list with s/lucid/maverick/ but that didn't help. Does anyone know how to fix this? Thanks!

    Read the article

  • Memory read/write access efficiency

    - by wolfPack88
    I've heard conflicting information from different sources, and I'm not really sure which one to believe. As such, I'll post what I understand and ask for corrections. Let's say I want to use a 2D matrix. There are three ways that I can do this (at least that I know of).
    1:
        int i;
        char **matrix;
        matrix = malloc(50 * sizeof(char *));
        for(i = 0; i < 50; i++)
            matrix[i] = malloc(50);
    2:
        int i;
        int rowSize = 50;
        int pointerSize = 50 * sizeof(char *);
        int dataSize = 50 * 50;
        char **matrix;
        matrix = malloc(dataSize + pointerSize);
        char *pData = (char *)matrix + pointerSize - rowSize;
        for(i = 0; i < 50; i++) {
            pData += rowSize;
            matrix[i] = pData;
        }
    3:
        //instead of accessing matrix[i][j] here, we would access matrix[i * 50 + j]
        char *matrix = malloc(50 * 50);
    In terms of memory usage, my understanding is that 3 is the most efficient, 2 is next, and 1 is least efficient, for the reasons below:
    3: There is only one pointer and one allocation, and therefore, minimal overhead.
    2: Once again, there is only one allocation, but there are now 51 pointers. This means there is 50 * sizeof(char *) more overhead.
    1: There are 51 allocations and 51 pointers, causing the most overhead of all options.
    In terms of performance, once again my understanding is that 3 is the most efficient, 2 is next, and 1 is least efficient. Reasons being:
    3: Only one memory access is needed. We will have to do a multiplication and an addition as opposed to two additions (as in the case of a pointer to a pointer), but memory access is slow enough that this doesn't matter.
    2: We need two memory accesses; one to get a char *, and then one to get to the appropriate char. Only two additions are performed here (once to get to the correct char * pointer from the original memory location, and once to get to the correct char variable from wherever the char * points to), so multiplication (which is slower than addition) is not required. However, on modern CPUs, multiplication is faster than memory access, so this point is moot.
    1: Same issues as 2, but now the memory isn't contiguous. This causes cache misses and extra page table lookups, making it the least efficient of the lot.
    First and foremost: Is this correct? Second: Is there an option 4 that I am missing that would be even more efficient?

    Read the article

  • Google search results are invalid

    - by Rufus
    I'm writing a program that lets a user perform a Google search. When the result comes back, all of the links in the search results are links not to other sites but to Google, and if the user clicks on one, the page is fetched not from the other site but from Google. Can anyone explain how to fix this problem? My Google URL consists of this: http://google.com/search?q=gargle But this is what I get back when the user clicks on the Wikipedia search result, which was http://www.google.com/url?q=http://en.wikipedia.org/wiki/Gargling&sa=U&ei=_4vkT5y555Wh6gGBeOzECg&ved=0CBMQejAe&usg=AFQjeNHd1eRV8Xef3LGeH6AvGxt-AF-Yjw <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html lang="en" dir="ltr" class="client-nojs" xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Gargling - Wikipedia, the free encyclopedia</title> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <meta http-equiv="Content-Style-Type" content="text/css" /> <meta name="generator" content="MediaWiki 1.20wmf5" /> <meta http-equiv="last-modified" content="Fri, 09 Mar 2012 12:34:19 +0000" /> <meta name="last-modified-timestamp" content="1331296459" /> <meta name="last-modified-range" content="0" /> <link rel="alternate" type="application/x-wiki" title="Edit this page" > <link rel="edit" title="Edit this page" > <link rel="apple-touch-icon" > <link rel="shortcut icon" > <link rel="search" type="application/opensearchdescription+xml" > <link rel="EditURI" type="application/rsd+xml" > <link rel="copyright" > <link rel="alternate" type="application/atom+xml" title="Wikipedia Atom feed" > <link rel="stylesheet" href="//bits.wikimedia.org/en.wikipedia.org/load.php?debug=false&amp;lang=en&amp;modules=ext.gadget.teahouse%7Cext.wikihiero%7Cmediawiki.legacy.commonPrint%2Cshared%7Cskins.vector&amp;only=styles&amp;skin=vector&amp;*" type="text/css" media="all" /> <style type="text/css" media="all">#mwe-lastmodified { display: none; }</style><meta name="ResourceLoaderDynamicStyles" content="" /> <link rel="stylesheet" href="//bits.wikimedia.org/en.wikipedia.org/load.php?debug=false&amp;lang=en&amp;modules=site&amp;only=styles&amp;skin=vector&amp;*" type="text/css" media="all" /> <style type="text/css" media="all">a:lang(ar),a:lang(ckb),a:lang(fa),a:lang(kk-arab),a:lang(mzn),a:lang(ps),a:lang(ur){text-decoration:none} /* cache key: enwiki:resourceloader:filter:minify-css:7:d5a1bf6cbd05fc6cc2705e47f52062dc */</style>
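    The pasted link shows the shape of the problem: the result href is a google.com/url redirect whose real destination is carried in the q query parameter. Purely as a sketch (not tied to whatever language the poster's program uses), unwrapping such a link can be as simple as pulling out and decoding that parameter:

    ```java
    import java.io.UnsupportedEncodingException;
    import java.net.URLDecoder;

    public class UnwrapGoogleRedirect {
        // Returns the value of the q parameter from a google.com/url?... link,
        // or the original link if no q parameter is present.
        static String unwrap(String href) {
            int query = href.indexOf('?');
            if (query < 0) return href;
            for (String pair : href.substring(query + 1).split("&")) {
                if (pair.startsWith("q=")) {
                    try {
                        return URLDecoder.decode(pair.substring(2), "UTF-8");
                    } catch (UnsupportedEncodingException e) {
                        return pair.substring(2);   // UTF-8 is always available in practice
                    }
                }
            }
            return href;
        }

        public static void main(String[] args) {
            String wrapped = "http://www.google.com/url?q=http://en.wikipedia.org/wiki/Gargling&sa=U";
            System.out.println(unwrap(wrapped));  // http://en.wikipedia.org/wiki/Gargling
        }
    }
    ```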

    Read the article

  • Java EE 8 update

    - by delabassee
    Planning for Java EE 8 is now well underway. As you know, a few weeks ago we conducted a three-part Java EE 8 Community Survey (you can find the final summary here). The data gathered have been very influential for the next steps. You can now expect, over the coming weeks and months, to see updates on the various specifications that compose the Java EE platform. Some Specification Leads are busy gathering additional feedback regarding what they should focus their efforts on (e.g. the CDI 2 survey). Other Specification Leads have already publicly exposed what they think should be one of the focuses for the evolution of the specification they lead. For example, adding Server-Sent Events (SSE) support in JAX-RS is being discussed here and adding MVC support is being discussed here. Please remember that the fact we are now discussing any feature does not ensure that it will be included in the proposal, nor in any particular update to Java EE. We can expect additional enhancements, changes and evolutions as we get closer to the finalisation of the different specifications... and there is still a long way to go with these specification proposals! Linda DeMichiel, Java EE Co-Specification Lead, has recently posted a draft proposal for the Java EE 8 Platform specification. Linda's goal is to recruit people and companies supporting this proposal before submitting it to the JCP. This draft proposal is very interesting reading as it contains relevant information on the plans for Java EE 8, such as the themes (support for the latest web standards, e.g. HTTP 2.0; continued work on ease of development; improved infrastructure for cloud support; alignment with Java SE 8), new JSRs to be added to the platform (J-Cache, Java API for JSON Binding, Java Configuration), plans for the Web Profile, plans on technologies to prune in Java EE 8, and more. So if you haven't done it yet, I really encourage you to read the Java EE 8 draft proposal! Our goal for the Java EE 8 specification is for it to be finalized in the second half of 2016. It is important to note that we are in the early days of Java EE 8 and at this stage everything (themes, content, timing, etc.) is preliminary. Everything still needs to be discussed, challenged and agreed within the different Java Community Process (JCP) Expert Groups (EGs), some of which still need to be formed! It could also mean that the roadmap will have to be adjusted to follow the progress being made in the different EGs. This is also a good occasion to remind you that participation within those upcoming JCP Expert Groups is encouraged. Contributing in an EG is an effective lever to influence what Java EE 8 will become! Finally, as things get more concrete, we will share details on how to engage in the different Java EE 8 related Adopt-a-JSR initiatives, another way to contribute. You can also read other posts related to Java EE 8 here at The Aquarium blog; just look for articles with the 'javaee8' tag.
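    Of the candidate JSRs mentioned above, J-Cache (JSR-107) is the one with a long-established API. Purely as an illustration of what platform inclusion would mean for application code, here is a minimal sketch using the standard javax.cache API; the cache name and value types are arbitrary, and a JSR-107 implementation must be on the classpath for it to run.

    ```java
    import javax.cache.Cache;
    import javax.cache.CacheManager;
    import javax.cache.Caching;
    import javax.cache.configuration.MutableConfiguration;
    import javax.cache.spi.CachingProvider;

    public class JCacheExample {
        public static void main(String[] args) {
            // Obtain the default provider and cache manager (JSR-107 API);
            // requires a JCache implementation on the classpath.
            CachingProvider provider = Caching.getCachingProvider();
            CacheManager manager = provider.getCacheManager();

            // Create a simple String -> String cache; name and types are arbitrary.
            MutableConfiguration<String, String> config =
                    new MutableConfiguration<String, String>()
                            .setTypes(String.class, String.class);
            Cache<String, String> greetings = manager.createCache("greetings", config);

            greetings.put("en", "Hello");
            System.out.println(greetings.get("en"));  // Hello

            manager.close();
        }
    }
    ```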

    Read the article

  • Access Offline or Overloaded Webpages in Firefox

    - by Asian Angel
    What do you do when you really want to access a webpage only to find that it is either offline or overloaded from too much traffic? You can get access to the most recent cached version using the Resurrect Pages extension for Firefox. The Problem: If you have ever encountered a website that has become overloaded and unavailable due to sudden popularity (e.g. from Slashdot, Digg, etc.) then this is the result. No satisfaction to be had here… Resurrect Pages in Action: Once you have installed the extension you can add the Toolbar Button if desired… it will give you the easiest access to Resurrect Pages. Or you can wait for a problem to occur when trying to access a particular website and have it appear as shown here. As you can see there is a very nice selection of cache services to choose from, therefore increasing your odds of accessing a copy of that webpage. If you would prefer to have the access attempt open in a new tab or window then you should definitely use the Toolbar Button. Clicking on the Toolbar Button will give you access to the popup window shown here… otherwise the access attempt will happen in the current tab. Here is the result for the website that we wanted to view using the Google Listing, followed by the Google (text only) Listing. The results with the different services will depend on how recently the webpage was published/set up. View Older Versions of Currently Accessible Websites: Just for fun we decided to try the extension out on the How-To Geek website to view an older version of the homepage. Using the Toolbar Button and clicking on The Internet Archive brought up the following page… we decided to try the Nov. 28, 2006 listing. As you can see things have really changed between 2006 and now… Resurrect Pages can be very useful for anyone who is interested in how websites across the web have grown and changed over the years. Conclusion: If you encounter a webpage that is offline or overloaded by sudden popularity then the Resurrect Pages extension can help you get access to the information that you need using a cached version. Links: Download the Resurrect Pages extension (Mozilla Add-ons)

    Read the article

  • eSTEP Newsletter for October 2013 now available

    - by uwes
    Dear Partners,We would like to let you know that the October'13 issue of our Newsletter is now available.The issue contains information on the following topics: Oracle Open World Summary Oracle Cloud: Oracle Engineered Systems Oracle Database and Middleware Oracle Applications and Software as a Service Oracle Industries Oracle Partners and the "Internet of Things" JavaOne News MySQL News Corporate News Create Your HR Strategic Vision at Oracle HCM World Oracle Database Protection Redefined A Preview: Oracle Database Backup Logging Recovery Appliance Oracle closed Tekelec acquisition Congratulations to ORACLE TEAM USA! Tech sectionARC M6 Oracle's SPARC M6 Oracle SuperCluster M6-32 - Oracle’s Most Scalable Engineered System Oracle Multitenant on SPARC Servers and Oracle Solaris Oracle Database 12c Enterprise Edition: Plug into the Cloud Oracle In-Memory Database Cache Oracle Virtual Compute Appliance New Benchmark-Results published (Sept. 2013) Video Interview: Elasticity, the Biggest Challenge Facing Data Centers Today Tech blog Announcing New Sun Storage 2500-M2 Drives SPARC Product Line Update ZFS RAID Calculator v6 What ships with ODA X3-2? Tech Article: Oracle Multitenant on SPARC Servers and Oracle Solaris New release of Sun Rack II capacity calculator available Announcing: Oracle Solaris Cluster Product Bulletin, September 2013 Learning & events Planned TechCasts Quarterly Partner Update Live Webcast: Simplify and Accelerate Oracle Database deployment with Oracle VM Templates Join us for OTN's Virtual Developer Day - Harnessing the Power of Oracle WebLogic and Oracle Coherence.Learn from OOW 2013 what is going on in Virtualization How to Implementing Early Arriving Facts in ODI, Part I and Part II: Proof of Concept Overview Multi-Factor Authentication in Oracle WebLogic Using multi-factor authentication to protect web applications deployed on Oracle WebLogic. If Virtualization Is Free, It Can't Be Any Good—Right? Looking beyond System/HW SOA and User Interfaces Overcoming the challenges to developing user interfaces in a service oriented References Vodafone Romania Improves Business Agility and Customer Satisfaction, with 10x Faster Business Intelligence Delivery and 12x Faster Processing Emirates Integrated Telecommunications Captures 47% Market Share in a Competitive Market, Thanks to 24/7 Availability Home Credit and Finance Bank Accelerates Getting New Banking Products to Market Extra A Conversation with Java Champion Johan VosYou can find the Newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the pin below to get access. Link to the portal is shown below.URL: http://launch.oracle.com/PIN: eSTEP_2011Previous published Newsletters can be found under the Archived Newsletters section and more useful information under the Events, Download and Links tab. Feel free to explore and any feedback is appreciated to help us improve the service and information we deliver.Thanks and best regards,Partner HW Enablement EMEA

    Read the article

  • Why does the login screen fail to appear?

    - by a different ben
    My system: Dell Precision T3500 nVidia Quadro NVS 295 Ubuntu 12.04 x86_64 (3.2.0-32) Essential problem: On boot my system won't get past the splash screen. I can switch to another virtual terminal and log in, I can also ssh from another system -- so it appears that the problem might be with the display manager. How can I diagnose and fix this problem? More info: From a VT I can issue sudo lightdm restart, and this will bring up the login screen and and I can continue from there. So I do have access to my system. Update-manager recently updated a number of packages, including a bunch of x11 and xorg packages, some nVidia drivers, rpcbind, etc etc. My boot log (if that is any guidance) says the following: fsck from util-linux 2.20.1 fsck from util-linux 2.20.1 fsck from util-linux 2.20.1 fsck from util-linux 2.20.1 rpcbind: Cannot open '/run/rpcbind/rpcbind.xdr' file for reading, errno 2 (No such file or directory) rpcbind: Cannot open '/run/rpcbind/portmap.xdr' file for reading, errno 2 (No such file or directory) /dev/sda1: clean, 597650/1525920 files, 3963433/6103296 blocks /dev/sda7: clean, 11/6406144 files, 450097/25608703 blocks /dev/sda5: clean, 158323/1525920 files, 1886918/6103296 blocks /dev/sda8: clean, 250089/107929600 files, 111088810/431689728 blocks Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd * Starting AppArmor profiles [80G [74G[ OK ] Loading the saved-state of the serial devices... /dev/ttyS0 at 0x03f8 (irq = 4) is a 16550A * Starting ClamAV virus database updater freshclam [80G [74G[ OK ] * Starting Name Service Cache Daemon nscd [80G [74G[ OK ] * Starting modem connection manager[74G[ OK ] * Starting K Display Manager[74G[ OK ] * Starting mDNS/DNS-SD daemon[74G[ OK ] * Stopping GNOME Display Manager[74G[ OK ] * Stopping K Display Manager[74G[ OK ] * Starting bluetooth daemon[74G[ OK ] * Starting network connection manager[74G[ OK ] * Starting Postfix Mail Transport Agent postfix [80G [74G[ OK ] speech-dispatcher disabled; edit /etc/default/speech-dispatcher * Starting VirtualBox kernel modules [80G [74G[ OK ] * Starting the Winbind daemon winbind [80G [74G[ OK ] saned disabled; edit /etc/default/saned * Starting anac(h)ronistic cron[74G[ OK ] * Stopping anac(h)ronistic cron[74G[ OK ] * Checking battery state... [80G [74G[ OK ] nxsensor is disabled in '/usr/NX/etc/node.cfg' Trying to start NX server: NX 122 Service started. NX 999 Bye. Trying to start NX statistics: NX 723 Cannot start NX statistics: NX 709 NX statistics are disabled for this server. NX 999 Bye. * Stopping System V runlevel compatibility[74G[ OK ] * Starting Mount network filesystems[74G[ OK ] * Stopping Mount network filesystems[74G[ OK ] * Stopping regular background program processing daemon[74G[ OK ] * Starting regular background program processing daemon[74G[ OK ] * Starting anac(h)ronistic cron[74G[ OK ] * Stopping anac(h)ronistic cron[74G[ OK ]

    Read the article

  • What causes Multi-Page allocations?

    - by SQLOS Team
    Writing about changes in the Denali Memory Manager In his last post Rusi mentioned: " In previous SQL versions only the 8k allocations were limited by the ‘max server memory’ configuration option.  Allocations larger than 8k weren’t constrained." In SQL Server versions before Denali single page allocations and multi-Page allocations are handled by different components, the Single Page Allocator (which is responsible for Buffer Pool allocations and governed by 'max server memory') and the Multi-Page allocator (MPA) which handles allocations of greater than an 8K page. If there are many multi-page allocations this can affect how much memory needs to be reserved outside 'max server memory' which may in turn involve setting the -g memory_to_reserve startup parameter. We'll follow up with more generic articles on the new Memory Manager structure, but in this post I want to clarify what might cause these larger allocations. So what kinds of query result in MPA activity? I was asked this question the other day after delivering an MCM webcast on Memory Manager changes in Denali. After asking around our Dev team I was connected to one of our test leads Sangeetha who had tested the plan cache, and kindly provided this example of an MPA intensive query: A workload that has stored procedures with a large # of parameters (say > 100, > 500), and then invoked via large ad hoc batches, where each SP has different parameters will result in a plan being cached for this “exec proc” batch. This plan will result in MPA.   Exec proc_name @p1, ….@p500 Exec proc_name @p1, ….@p500 . . . Exec proc_name @p1, ….@p500 Go   Another workload would be large adhoc batches of the form: Select * from t where col1 in (1, 2, 3, ….500) Select * from t where col1 in (1, 2, 3, ….500) Select * from t where col1 in (1, 2, 3, ….500) … Go  In Denali all page allocations are handled by an "Any size page allocator" and included in 'max server memory'. The buffer pool effectively becomes a client of the any size page allocator, which in turn relies on the memory manager. - Guy Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • Criteria for selecting timeout value?

    - by stijn
    Situation: a piece of software reads frames of data from a file in a separate thread and puts them on a queue, which is emptied by another thread. That second thread periodically checks the queue and fails rather gracefully, by showing an error message stating the read timed out, if no data is available within a certain amount of time. Initially this timeout was set to 200 mSec. There was no real reasoning behind that constant, but it worked fine. We measured on a couple of machines and, for large data frames, larger than what would be used by customers, a read took about 20 mSec with no other load on the machine. However one customer now gets timeout errors now and then (on the second try all is fine, probably because the file is in cache or the virus scanner leaves it alone). The programmers are like 'well, yeah, but that customer's machine is full of cruft, virus scanners, tons of unneeded background processes etc'. Of course the customer is like 'hey this should just work, shouldn't it'? While the programmers have a point, since the software is heavy enough to validate the need for a dedicated machine, that does not make the customer happy. Increasing the timeout to 2 seconds, for example, solves the problem. But I'd like to make a proper decision now instead of just randomly picking some magic constant that is probably ok in 99% of cases. What criteria should be used for that? We could just pick a large number, but that feels wrong (and then we end up with a program that has the horrible behaviour of hanging when trying to read from a disconnected drive, for instance, whereas we'd rather make it show an error right away). Or we could make the timeout value a user setting, but then we need to document it clearly and even then not all customers are tech-savvy enough to really understand what it does. Or we could try and wait until another customer reports timeouts and increase the value again. And again. Until we find something ok for 99.99% of the cases... Any good practice for this type of situation?
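    For context, the pattern being described maps naturally onto a bounded wait on a queue. A minimal Java sketch follows; the 200 ms constant and the class names merely mirror the situation in the question and are not the actual code under discussion.

    ```java
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;

    public class FrameConsumer {
        // Frames are put on this queue by the reader thread.
        private final BlockingQueue<byte[]> frames = new LinkedBlockingQueue<>();

        // Illustrative timeout: the magic constant under discussion.
        private static final long READ_TIMEOUT_MS = 200;

        // Called periodically by the consuming thread.
        public byte[] nextFrame() throws InterruptedException {
            byte[] frame = frames.poll(READ_TIMEOUT_MS, TimeUnit.MILLISECONDS);
            if (frame == null) {
                // This is the branch whose frequency depends on the chosen timeout.
                throw new IllegalStateException("read timed out after " + READ_TIMEOUT_MS + " ms");
            }
            return frame;
        }
    }
    ```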

    Read the article

  • Combobox binding with different types

    - by George Evjen
    Binding to comboboxes in Silverlight has been an adventure the past couple of days. In our framework at ArchitectNow we use LookupGroups and LookupValues. In our database we would have a LookupGroup of NBA teams, for example. The group would be called NBATeams; we get the LookupGroupID and then get the values from the LookupValues table. So we would end up with a list of all 30+ teams. Our lookup values entity has a display text (string), value (string), IsActive and some other fields. In our applications we load all this information into the system when the user is logging in or right after they log in. So in cache we have a list of groups and values that we can get at whenever we want to. We get this information in our framework simply by creating an observable collection of type LookupValue. To get a list of these values into our property all we have to do is:
        var NBATeams = AppContext.Current.LookupService.GetLookupValues("NBATeams");
    Our combobox then is bound like this (we use Telerik components in most if not all our projects):
        <telerik:RadComboBox ItemsSource="{Binding NBATeams}"></telerik:RadComboBox>
    This should give you a list in your combobox. We also set up another property in our ViewModel that is just a single object of NBATeams - "SelectedNBATeam". The SelectedItem in our combobox would then look like the following; we set this to a two-way binding since we are sending data back:
        SelectedItem="{Binding SelectedNBATeam, Mode=TwoWay}"
    This is all pretty straightforward and we use this pattern throughout all our applications. What do you do though when you have a combobox in an ItemsControl or ListBox? Here we have a list of NBA teams that are strings being brought back from the database. We can't have the selected item be our LookupValue because the data is a string and it's being bound in an ItemsControl. In the example above we would just have the combobox in a form. Here though we have it in an ItemsControl, where there is no selected item from the initial ItemsSource. In order to get the selected item to be displayed in the combobox you have to convert the LookupValue to a string. Then instead of using SelectedItem in the combobox use SelectedValue. To convert the LookupValue we do this. Create an observable collection of strings:
        public ObservableCollection<string> NBATeams { get; set; }
    Then convert your lookups to strings:
        var NBATeams = new ObservableCollection<string>(AppContext.Current.LookupService.GetLookupValues("NBATeams").Select(x => x.DisplayText));
    This will give us a list of strings, and our selected value should be bound to the NBATeam property in our ItemsSource in our ItemsControl:
        SelectedValue="{Binding NBATeam, Mode=TwoWay}"

    Read the article

  • Broken package manager due to incorrect Banshee package

    - by user54974
    Sup, so, I'm not familiar with Linux at all, so help is much appreciated. I've been trying to boot my PC from a live CD, unsuccessfully. I get to the stage at which there are the options to test without installing or install and so on, where I select 'Install Ubuntu.' Here it scrolls through some fast console messages until it reaches 'end trace' and then, eventually, 'Killed.' I already have a functional 11.10 version installed; could this be a problem? The reason I am attempting a reinstall is because the package system is damaged inside 11.10, a problem I can't seem to solve. If I try to install any new software from within the software centre it tells me that two Banshee extensions must be removed. I try to remove these from inside the terminal, using apt-get remove, which results in:
    You might want to run apt-get -f install to correct these:
    The following packages have unmet dependencies.
     banshee-extension-ubuntuonemusicstore : Depends: banshee (>= 2.2.1) but 2.2.0-1ubuntu2 is to be installed
    E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
    The software centre suggests that I disable all third party repositories and run apt-get install -f. I have done so, but the package system remains damaged and apt-get install -f attempts to install banshee 2.2.1 but returns:
    Errors were encountered while processing:
     /var/cache/apt/archives/banshee_2.2.1-1ubuntu3_i386.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    I have also tried apt-get update (runs fine) and apt-get upgrade. The upgrade command apt-get upgrade results in:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    You might want to run 'apt-get -f install' to correct these.
    The following packages have unmet dependencies.
     banshee-extension-soundmenu : Depends: banshee (>= 2.2.1) but 2.2.0-1ubuntu2 is installed
     banshee-extension-ubuntuonemusicstore : Depends: banshee (>= 2.2.1) but 2.2.0-1ubuntu2 is installed
    E: Unmet dependencies. Try using -f.
    I seem to be going round and round in circles here! If only I could reinstall successfully. Only proposed updates (oneiric proposed) is not enabled.

    Read the article

  • No description for any page on the website is available in Google despite robots.txt allowing crawling

    - by Abhijit
    I seem to have the weirdest issue with Search Engine Optimization, and I asked the IT folks at my university, I asked people on Joomla forums and I am trying to sort this issue out using Google Webmaster Tools for more than 2 months to little avail. I want to know if I have some blatantly wrong configuration somewhere that is causing search engines to be unable to index this site. I noticed a similar issue with another website I searched for online (ECEGSA - The University of British Columbia at gsa.ece.ubc.ca), making me believe this might be a concern that people might be looking an answer for. Here are the details: The website in question is: http://gsa.ece.umd.edu/. It runs using Joomla 2.5.x (latest). The site was up since around mid December of 2013, and I noticed right from the get go that the site was not being indexed correctly on Google. Specifically I see the following message when I search for the website on Google: A description for this result is not available because of this site's robots.txt – learn more. The thing is in December till around March I used the default Joomla robots.txt file which is: User-agent: * Disallow: /administrator/ Disallow: /cache/ Disallow: /cli/ Disallow: /components/ Disallow: /images/ Disallow: /includes/ Disallow: /installation/ Disallow: /language/ Disallow: /libraries/ Disallow: /logs/ Disallow: /media/ Disallow: /modules/ Disallow: /plugins/ Disallow: /templates/ Disallow: /tmp/ Nothing there should stop Google from searching my website. And even more confusingly, when I go to Google Webmaster tools, under "Blocked URLs" tab, when I try many of the links on the site, they are all shown up as "Allowed". I then tried adding a sitemap, putting it in the robots.txt file. That did not help. Same exact search result, same behavior in the "Blocked URLs" tab on the webmaster tools. Now additionally, the "sitemaps" tab says for several links an error saying "URL is robotted out". I tried those exact links in the "Blocked URLs" and they are allowed! I then tried deleting the robots.txt file. No use. Same exact problem. Here is an example screenshot from Google's Webmaster Tools: At this point I cannot give a rational explanation to why this is happening and neither can anyone in the IT department here. No one on Joomla forums can seem to understand what is going on. Based on what I explained, does it seem that I have somehow set a setting in the robots.txt or in .htaccess or somewhere else, incorrectly?

    Read the article

  • JavaOne 2012: Camel, Twitter, Coherence, Wicket and GlassFish

    - by Bruno.Borges
    Before joining Oracle as Product Manager for WebLogic and GlassFish for Latin America, at the beginning of this year I proposed two talks to JavaOne USA that I had been presenting in Brazil for quite a while. One of them I presented last year at ApacheCon in Vancouver, Canada, as well as at JavaOne Brazil. In June I got the news that they were accepted as Alternate Sessions. Surprisingly enough, a few weeks later, and at the same time I joined Oracle, I received the news that they were officially accepted and put on the schedule. Tomorrow I'll be flying to San Francisco, to my first JavaOne in the United States, and I wanted to share with you what I'm going to present there. My two sessions are these: Wed, 10/03, 4:30pm - CON2989 Leverage Enterprise Integration Patterns with Apache Camel and Twitter. In this one, you will be introduced to the Apache Camel framework that I had been talking about at conferences in Brazil before joining Oracle, and to a component I contributed to integrate with Twitter. Also, you will have a preview of a new component I've been working on to integrate Camel with the Oracle Coherence distributed cache. Thu, 10/04, 3:30pm - CON3395 How Scala, Wicket, and Java EE Can Improve Web Development. This one I've been working on for quite a while. It was based on an idea to have an architecture that could be as agile as frameworks and technologies such as Ruby on Rails, PHP or Python for rapid web development. You will be introduced to the Apache Wicket framework, another Apache project I enjoy working with and gave lots of talks about at Brazilian conferences, including JavaOne Brazil, JustJava, QCon SP, and The Developers Conference. You will also be introduced to the Scala language and how to create nice DSLs to boost productivity. And last but not least, the Java EE 6 platform, which offers an awesome improvement over previous versions with its CDI, JPA, EJB3 and JAX-RS features for web development. Other events I will be participating in during my stay in SF: Geeks Bike Ride, GlassFish Community Event, GlassFish and Friends Party. If you have any other event to suggest, please do! It's my first JavaOne and I'm really looking forward to enjoying everything. See you guys in a few days!!
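    For anyone curious what the Camel side of the first talk looks like in practice, here is a minimal Java DSL route sketch. The RouteBuilder and CamelContext API is standard Camel; the endpoint URIs (the Twitter search options and the log category) are illustrative rather than copied from the talk, and a real route would also need camel-twitter on the classpath plus Twitter credentials configured on the component.

    ```java
    import org.apache.camel.CamelContext;
    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    public class TwitterRouteExample {
        public static void main(String[] args) throws Exception {
            CamelContext context = new DefaultCamelContext();
            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    // Illustrative endpoints: poll a Twitter search and log each result.
                    // A production route might push results into a cache or queue instead.
                    from("twitter://search?type=polling&delay=60&keywords=camel")
                        .to("log:tweets");
                }
            });
            context.start();
            Thread.sleep(60_000);   // let the route poll for a minute, then shut down
            context.stop();
        }
    }
    ```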

    Read the article

  • Is there a usage count for packages or programs?

    - by math
    Motivation: I want to remove applications I do not use, to speed up my package processing tasks like dist upgrades and regular updates, but also to save disk space and for other reasons. I know this is a complex topic so first I will ask my question and second I will give some answers I already found out. Question: How do I find out which packages I have not used at all? For example I always use VLC, so I could remove the totem package. (Which I could have used some day, yes.) Of course package dependencies could force me to have programs installed which I will never use. Notes: Find the packages which consume much space via Synaptic: select "Status" in the lower left, select "Installed" in the upper left, sort the column on "size" in the upper right. Then you can decide which big packages you really need. Use aptitude autoremove. Use ubuntu-tweak's Janitor for removing old kernel packages, old configs, apt-cache entries, etc. Manually search for applications for a given task that you usually solve with your standard app, e.g. movie player, music player, office program, browser etc. (BTW: this is what I want help with in my question.) When removing packages I always favour "apt-get purge" over "aptitude remove --purge" as aptitude will often also remove essential packages due to package dependencies. E.g. when removing "evolution" (as I use Thunderbird) aptitude wants to remove "ubuntu-desktop" and 756 other packages as well, while apt-get just removes evolution and its helper packages like evolution-common. The Ubuntu lens gives me the most recently used applications, which are candidates for keeping :) Employ deborphan, as I read in this related answer: How do I clean up my harddrive? I should certainly keep essential packages: Keep only essential packages. This question is pretty much a duplicate of How to see what installed packages I have never used for cleaning purposes, but covering only a few aspects. However one answer suggests to use a program called unusedpkg, but the link seems to be down. There is also a program called Kleen http://code.google.com/p/kleen/ but it won't compile in 11.10. However I hacked it to compile, but the results are unusable, as for example the g++ package was marked as not used for 203, but actually I used it seconds ago for compiling Kleen itself ;) So don't use this tool. On http://wiki.debian.org/DebianPackageInformation I read that the package popularity-contest will produce log files with usage statistics. Unfortunately I didn't enable the popularity contest so I can't find this log file.

    Read the article

  • eSTEP Newsletter for October 2013 Now Available

    - by Cinzia Mascanzoni
    The October'13 issue of our Newsletter is now available. The issue contains information on the following topics: Oracle Open World Summary Oracle Cloud: Oracle Engineered Systems Oracle Database and Middleware Oracle Applications and Software as a Service Oracle Industries Oracle Partners and the "Internet of Things" JavaOne News MySQL News Corporate News Create Your HR Strategic Vision at Oracle HCM World Oracle Database Protection Redefined A Preview: Oracle Database Backup Logging Recovery Appliance Oracle closed Tekelec acquisition Congratulations to ORACLE TEAM USA! Tech sectionARC M6 Oracle's SPARC M6 Oracle SuperCluster M6-32 - Oracle’s Most Scalable Engineered System Oracle Multitenant on SPARC Servers and Oracle Solaris Oracle Database 12c Enterprise Edition: Plug into the Cloud Oracle In-Memory Database Cache Oracle Virtual Compute Appliance New Benchmark-Results published (Sept. 2013) Video Interview: Elasticity, the Biggest Challenge Facing Data Centers Today Tech blog Announcing New Sun Storage 2500-M2 Drives SPARC Product Line Update ZFS RAID Calculator v6 What ships with ODA X3-2? Tech Article: Oracle Multitenant on SPARC Servers and Oracle Solaris New release of Sun Rack II capacity calculator available Announcing: Oracle Solaris Cluster Product Bulletin, September 2013 Learning & events Planned TechCasts Quarterly Partner Update Live Webcast: Simplify and Accelerate Oracle Database deployment with Oracle VM Templates Join us for OTN's Virtual Developer Day - Harnessing the Power of Oracle WebLogic and Oracle Coherence. Learn from OOW 2013 what is going on in Virtualization How to Implementing Early Arriving Facts in ODI, Part I and Part II: Proof of Concept Overview Multi-Factor Authentication in Oracle WebLogic Using multi-factor authentication to protect web applications deployed on Oracle WebLogic. If Virtualization Is Free, It Can't Be Any Good—Right? Looking beyond System/HW SOA and User Interfaces Overcoming the challenges to developing user interfaces in a service oriented References Vodafone Romania Improves Business Agility and Customer Satisfaction, with 10x Faster Business Intelligence Delivery and 12x Faster Processing Emirates Integrated Telecommunications Captures 47% Market Share in a Competitive Market, Thanks to 24/7 Availability Home Credit and Finance Bank Accelerates Getting New Banking Products to Market Extra A Conversation with Java Champion Johan Vos You can find the Newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the pin below to get access. Link to the portal is shown below. URL: http://launch.oracle.com/ PIN: eSTEP_2011 Previous published Newsletters can be found under the Archived Newsletters section and more useful information under the Events, Download and Links tab.

    Read the article

  • Wifi problems after upgrading to 13.10

    - by Simon
    I just upgraded to Ubuntu 13.10, but since the upgrade I don't have internet access via wifi anymore. I can: See networks Connect to a network Ping myself (localhost, 192.168.0.103) I can't: Ping others (including other devices on the same wireless network, including the gateway/router) Resolve hosts Access any other external resource, whether on my own network or on the internet Using Wireshark, I noticed my computer is continuously sending ARP-requests like "Who has 192.168.0.1 [which is the gateway]? Tell 192.168.0.103". It doesn't get any replies though. When I ping another IP-address for which it knows the mac-address (from cache), it turns out a packet loss of 90% occurs, and even if a packet manages to arrive it takes around 3000ms. The output of route -n is: Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 eth1 192.168.0.0 0.0.0.0 255.255.255.0 U 9 0 0 eth1 192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0 Before upgrading, wifi worked fine. Using other devices, wifi still works fine.Resetting the router didn't help. Ethernet still works after upgrading. Any suggestions? Update: I'm using the wl driver. Here's the relevant output of some commands: lspci | grep Wireless 03:00.0 Network controller: Broadcom Corporation BCM4313 802.11bgn Wireless Network Adapter (rev 01) cat /etc/modprobe.d/blacklist.conf [...] blacklist mac80211 blacklist brcm80211 blacklist cfg80211 blacklist lib80211_crypt_tkip blacklist lib80211 blacklist b43 cat /etc/rc.local sudo modprobe -r lib80211 sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211.ko sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211_crypt_wep.ko sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211_crypt_tkip.ko sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211_crypt_ccmp.ko sudo modprobe wl exit 0 The last lines are probably how I got wireless working after the previous upgrade (wireless has been a problem after each upgrade). Update 2: added information about the exact hardware below. The hardware is an integrated device, so I ran lspci -nn | grep -i network. The output is: 03:00.0 Network controller [0280]: Broadcom Corporation BCM4313 802.11bgn Wireless Network Adapter [14e4:4727] (rev 01)

    Read the article

  • Best way to determine surface normal for a group of pixels?

    - by Paul Renton
    One of my current endeavors is creating a 2D destructible terrain engine for iOS Cocos2D (see https://github.com/crebstar/PWNDestructibleTerrain ). It is in its infant stages, no doubt, but I have made significant progress since starting a couple of weeks ago. However, I have run into a bit of a performance roadblock with calculating surface normals. Note: for my destructible terrain engine, an alpha of 0 is considered not to be solid ground. The method posted below works just great for small rectangles, such as n < 30. Anything above 30 causes a dip in the frame rate, and if you approach 100x100 then you might as well read a book while the sprite attempts to traverse the terrain. At the moment this is the best I can come up with for altering the angle of a sprite as it roams across terrain (to get the angle for a sprite's orientation, just take the dot product of 100 * normal with the (1,0) vector):

        -(CGPoint)getAverageSurfaceNormalAt:(CGPoint)pt withRect:(CGRect)area {
            float avgX = 0;
            float avgY = 0;
            ccColor4B color = ccc4(0, 0, 0, 0);
            CGPoint normal;
            float len;
            for (int w = area.size.width; w >= -area.size.width; w--) {
                for (int h = area.size.height; h >= -area.size.height; h--) {
                    CGPoint pixPt = ccp(w + pt.x, h + pt.y);
                    if ([self pixelAt:pixPt colorCache:&color]) {
                        if (color.a != 0) {
                            avgX -= w;
                            avgY -= h;
                        } // end inner if
                    } // end outer if
                } // end inner for
            } // end outer for
            len = sqrtf(avgX * avgX + avgY * avgY);
            if (len == 0) {
                normal = ccp(avgX, avgY);
            } else {
                normal = ccp(avgX/len, avgY/len);
            } // end if
            return normal;
        } // end get

    My problem is that I have sprites that require larger rectangles for their movement to look realistic. I considered caching all surface normals, but that raises the question of when to recalculate them, those calculations are also quite expensive, and it is not clear how large the blocks should be. Another smaller issue is that I don't know how to properly treat the case when the length is 0. So I am stuck... Any advice from the community would be greatly appreciated! Is my method the best possible one, or should I rethink the algorithm? I am new to game development and always looking to learn new tips and tricks.
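
    Not part of the original question, but one standard way to attack this kind of windowed average is with summed-area tables (integral images): precompute running sums of the solidity mask, of x*mask, and of y*mask over the whole terrain bitmap, and each normal query then needs only a handful of table lookups instead of a full (2w+1) x (2h+1) pixel scan. The C sketch below is a hypothetical illustration under that assumption; the function and table names are made up and it is not code from the PWNDestructibleTerrain project. The trade-off is that the tables must be rebuilt (at least from the damaged region onward) whenever terrain is destroyed, so this pays off when normal queries are much more frequent than terrain changes.

        /* Hypothetical sketch: constant-time surface-normal queries via summed-area tables.
         * solid(x, y) is assumed to return nonzero where the terrain alpha != 0. */
        #include <math.h>
        #include <stdlib.h>

        typedef struct { float x, y; } Vec2;

        static int width, height;
        static double *satCount; /* running sums of the solid mask */
        static double *satX;     /* running sums of x * solid mask */
        static double *satY;     /* running sums of y * solid mask */

        #define SAT(t, x, y) ((t)[(size_t)(y) * (width + 1) + (x)])

        /* Rebuild all three tables; a real engine would only rebuild the dirty region. */
        void buildTables(int (*solid)(int, int), int w, int h) {
            width = w; height = h;
            size_t n = (size_t)(w + 1) * (h + 1);
            satCount = realloc(satCount, n * sizeof *satCount);
            satX     = realloc(satX,     n * sizeof *satX);
            satY     = realloc(satY,     n * sizeof *satY);
            for (int y = 0; y <= h; y++)
                for (int x = 0; x <= w; x++) {
                    if (x == 0 || y == 0) {   /* zero border row/column */
                        SAT(satCount, x, y) = SAT(satX, x, y) = SAT(satY, x, y) = 0;
                        continue;
                    }
                    double s = solid(x - 1, y - 1) ? 1.0 : 0.0;
                    SAT(satCount, x, y) = s           + SAT(satCount, x-1, y) + SAT(satCount, x, y-1) - SAT(satCount, x-1, y-1);
                    SAT(satX, x, y)     = s * (x - 1) + SAT(satX, x-1, y)     + SAT(satX, x, y-1)     - SAT(satX, x-1, y-1);
                    SAT(satY, x, y)     = s * (y - 1) + SAT(satY, x-1, y)     + SAT(satY, x, y-1)     - SAT(satY, x-1, y-1);
                }
        }

        /* Sum of table t over pixels [x0..x1] x [y0..y1], clamped to the bitmap. */
        static double windowSum(const double *t, int x0, int y0, int x1, int y1) {
            if (x0 < 0) x0 = 0;
            if (y0 < 0) y0 = 0;
            if (x1 >= width)  x1 = width  - 1;
            if (y1 >= height) y1 = height - 1;
            if (x0 > x1 || y0 > y1) return 0.0;
            return SAT(t, x1 + 1, y1 + 1) - SAT(t, x0, y1 + 1) - SAT(t, x1 + 1, y0) + SAT(t, x0, y0);
        }

        /* Same average normal as the nested loops above, but constant time per query. */
        Vec2 surfaceNormal(int px, int py, int halfW, int halfH) {
            double count = windowSum(satCount, px - halfW, py - halfH, px + halfW, py + halfH);
            double sumX  = windowSum(satX,     px - halfW, py - halfH, px + halfW, py + halfH);
            double sumY  = windowSum(satY,     px - halfW, py - halfH, px + halfW, py + halfH);
            /* avgX = -sum of (x - px) over solid pixels = px*count - sumX; same idea for y. */
            double nx  = px * count - sumX;
            double ny  = py * count - sumY;
            double len = sqrt(nx * nx + ny * ny);
            Vec2 v = { 0.0f, 0.0f };   /* empty or symmetric window: no preferred direction */
            if (len > 0.0) { v.x = (float)(nx / len); v.y = (float)(ny / len); }
            return v;
        }

    If rebuilding full tables after every explosion proves too costly, per-row prefix sums are a middle ground: each query then costs time proportional to the window height, but only the damaged rows need to be recomputed.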

    Read the article

  • Understanding the levels of computing

    - by RParadox
    Sorry for my confused question; I'm looking for some pointers. Up to now I have been working mostly with Java and Python on the application layer, and I have only a vague understanding of operating systems and hardware. I want to understand much more about the lower levels of computing, but it gets really overwhelming somehow. At university I took a class about microprogramming, i.e. how processors are hard-wired to implement the ASM codes. Up to now I always thought I wouldn't get more done if I learned more about the "low level".

    One question I have is: how is it even possible that hardware gets hidden almost completely from the developer? Is it accurate to say that the operating system is a software layer over the hardware? One small example: in programming I have never come across the need to understand what the L2 or L3 cache is. For the typical business application environment one almost never needs to understand assembler and the lower levels of computing, because nowadays there is a technology stack for almost anything. I guess the whole point of these lower levels is to provide an interface to the higher levels. On the other hand, I wonder how much influence the lower levels can have, for example this whole graphics computing thing.

    Then, on the other hand, there is the theoretical computer science branch, which works on abstract computing models. However, I also rarely encounter situations where I find it helpful to think in the categories of complexity models, proof verification, etc. I sort of know that there is a complexity class called NP, and that such problems are more or less impossible to solve for a large N. What I'm missing is a reference framework for thinking about these things. It seems to me that there are all kinds of different camps that rarely interact.

    The last few weeks I have been reading about security issues. Here, somehow, many of the different layers come together. Attacks and exploits almost always occur on the lower levels, so in this case it is necessary to learn about the details of the OSI layers, the inner workings of an OS, and so on.
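
    To make the cache point concrete (this example is mine, not the poster's): the C snippet below sums the same matrix twice, once row by row and once column by column. Both loops are identical at the level of the language semantics, yet the column-wise version typically runs several times slower because of cache lines and prefetching, which is exactly the kind of place where the "hidden" lower layers leak through into everyday application code.

        /* Illustration of cache effects visible from plain C: row-major vs. column-major
         * traversal of the same N x N matrix of doubles. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define N 4096

        int main(void) {
            double *grid = malloc((size_t)N * N * sizeof *grid);
            if (!grid) return 1;
            for (size_t i = 0; i < (size_t)N * N; i++) grid[i] = 1.0;

            double sum = 0.0;
            clock_t t0 = clock();
            for (int i = 0; i < N; i++)          /* row-major: consecutive addresses */
                for (int j = 0; j < N; j++)
                    sum += grid[(size_t)i * N + j];
            printf("row-major:    %.3f s (sum=%.0f)\n",
                   (double)(clock() - t0) / CLOCKS_PER_SEC, sum);

            sum = 0.0;
            t0 = clock();
            for (int j = 0; j < N; j++)          /* column-major: large stride, poor locality */
                for (int i = 0; i < N; i++)
                    sum += grid[(size_t)i * N + j];
            printf("column-major: %.3f s (sum=%.0f)\n",
                   (double)(clock() - t0) / CLOCKS_PER_SEC, sum);

            free(grid);
            return 0;
        }

    The exact ratio depends on the machine, but the difference comes entirely from the memory hierarchy, not from anything expressible in the source language.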

    Read the article

  • Things to do After installing Visual Studio Express 12.

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/06/25/things-to-do-after-install-visual-studio-express-12.aspx

    1. Environment > Documents > check the auto-load changes option. With this checked, you can modify a file outside VWD and VWD won't ask you for confirmation.
    2. Environment > Tabs and Windows > Preview Tab > uncheck the Solution Explorer option. This stops files from being shown in the preview tab when you merely click them in Solution Explorer.
    3. Projects and Solutions > check "Track active item in Solution Explorer". This helps you easily see which file you are working on and where it is in Solution Explorer.
    4. Text Editor > All Settings > Word wrap > check this to enable word wrapping.
    5. Text Editor > CSS > Formatting > check the compact rule. This makes your file smaller and easier to read.
    6. Text Editor > HTML > Miscellaneous > uncheck the auto-ID option. When you copy and paste HTML, Visual Studio changes an element's ID if that ID already exists; unchecking this disables that feature, which is useful because the automatic renaming gets in the way when, for example, you are writing if{} else{} blocks around markup.
    7. Package Manager > General > Browse > copy the location of the cache folder and add it as a source under Package Sources. This way you can reuse packages you have used before, even when you are offline.

    Thanks for reading my post.

    Read the article

  • Organization & Architecture UNISA Studies – Chap 13

    - by MarkPearl
    Learning Outcomes
    * Explain the advantages of using a large number of registers
    * Discuss the way in which compilers optimize register usage
    * Discuss the evolution of CISC machines
    * Describe the characteristics of RISC architecture
    * Discuss the RISC vs. CISC controversy
    * Describe the way in which RISC and CISC design principles can be combined

    Instruction Execution Characteristics
    To understand the line of reasoning of RISC advocates, we need a brief overview of instruction execution characteristics. These include operations, operands, and procedure calls; all three are covered in depth in the textbook at pages 503-505. A number of groups have come to the conclusion that attempting to make the instruction set architecture closer to HLLs (high-level languages) is not the most effective design strategy. Rather, HLLs can best be supported by optimizing the performance of the most time-consuming features of typical HLL programs. Generally, three main characteristics emerged for improving performance:
    * Use a large number of registers, or use a compiler to optimize register usage
    * Pay careful attention to the design of instruction pipelines
    * Use a simplified (reduced) instruction set

    The use of a large register file
    One of the most important design principles of RISC machines is the use of a large number of registers. The concept of register windows and the use of a large register file versus the use of cache memory are discussed. On the face of it, the use of a large set of registers should decrease the need to access memory. The design task is to organize the registers in such a fashion that this goal is realized. Read pages 507-510 for a detailed explanation.

    Compiler-based register optimization

    Reduced Instruction Set Architecture
    There are two advantages to smaller programs:
    * Because the program takes up less memory, there is a saving in that resource (this was more compelling when memory was more expensive).
    * Smaller programs should improve performance, and this happens in two ways: fewer instructions means fewer instruction bytes to be fetched, and in a paging environment smaller programs occupy fewer pages, reducing page faults.

    Certain characteristics are common to RISC processors:
    * One instruction per cycle
    * Register-to-register operations
    * Simple addressing modes
    * Simple instruction formats

    RISC vs. CISC
    After the initial enthusiasm for RISC machines, there has been a growing realization that RISC designs may benefit from the inclusion of some CISC features, and that CISC designs may benefit from the inclusion of some RISC features.
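
    As a small illustration of the register-optimization point (my own example, not from the textbook or these study notes): the two C functions below compute the same sum, but in the first the accumulator lives behind a pointer, so a straightforward compilation has to load and store it through memory on every iteration, while in the second it is a plain local variable the compiler can keep in a register for the whole loop, so the loop body becomes register-to-register arithmetic in the spirit of RISC designs.

        /* Illustrative only: how register allocation changes the generated code. */
        long sum_slow(const long *a, int n, long *acc) {
            *acc = 0;
            for (int i = 0; i < n; i++)
                *acc += a[i];   /* read-modify-write through memory each iteration,
                                   because *acc might alias a[] */
            return *acc;
        }

        long sum_fast(const long *a, int n) {
            long acc = 0;       /* a natural candidate for a register */
            for (int i = 0; i < n; i++)
                acc += a[i];    /* accumulator stays in a register; written back once at the end */
            return acc;
        }

    Compilers perform this kind of allocation automatically (typically by graph coloring), which is why the RISC literature treats a large register file and good compiler register optimization as two routes to the same goal.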

    Read the article

< Previous Page | 192 193 194 195 196 197 198 199 200 201 202 203  | Next Page >