Search Results

Search found 18092 results on 724 pages for 'matt long'.


  • My computer keeps crashing and I just want it FIXED

    - by xabaddon
    So long story short, I had my brother build me a computer for my birthday last year. I saw how good his was (also built by him), so I decided I would have him build me one. About a month after it was all finished, my computer started crashing whenever I run more than about 3 programs at once. For example: an online game, a music player (Spotify), Skype, and a browser with a few tabs open. The way it crashes is that the screen freezes and a buzzing noise comes from whatever audio device I am using (headset, speakers). I took it to a computer tech guy and he told me my cooling system was inadequate. SOOO he led me to buy a CORSAIR H75 liquid cooling system. Everything was good for about a week, and now it just did the same thing again. I am so freaking over this, guys, and I really need some help figuring out what the heck is wrong. I don't care how long it takes, just please help me.

    Read the article

  • Why do some bad websites rank well?

    - by BradB
    Consider the following scenario: you are pitching SEO/website optimisation to a prospective client. You explain to them the importance of great copy and content, how acquiring links (ethically) can increase page rank, why the quality of the HTML build matters (H1, H2 tags, W3C validation etc.), and why keyword research is beneficial. You may drop in a few Google Webmaster Guidelines or Matt Cutts references to back up your claims, and rubbish the "black hat" approach as no longer effective for good measure. Your advice is ethical and, in the eyes of best practices, spot on. Then the client points out some of their long-established competitors on Google, and you see these competitor websites ranking in the top spots (1 to 3) for medium to highly competitive search phrases that your client wants to compete for. These websites totally contradict your ethical approach and pretty much violate every best practice previously noted. They even outperform other "white hat" competitors who are in accordance with the above guidelines. I experienced this today. One of these well-ranking websites had:
    - About six microsites with more or less the same copy and a slightly varied layout
    - Little or no textual content; I would almost say duplicate content across the sites, but there was so little of it that it could barely qualify as duplicate
    - All the content in Flash (with a music track that kicked in on each page load; not so much an SEO issue, but it helps paint the picture)
    - Keyword stuffing behind the Flash file, with a bunch of black text on a black background in the style of "keyword 1 keyword 2,keyword1,keyword 2,keyword 2 keyword 3" and so on, with the exact keyword-stuffed combination present on every page of the website
    - A bunch of clearly self-made links from poor-quality forums and directories with little or no PageRank
    - Links exchanged across the microsites
    How do you explain your way out of this when this hard evidence is sitting in front of you, undermining your great pitch?

    Read the article

  • Taskbar Network "Repair" menu option gone in Windows 7

    - by JohnB
    I used to LOVE that feature in Windows XP! Just right-click the WiFi icon on the Taskbar (lower right-hand corner), then Repair. It doesn't take that long, and it usually helped me, since I constantly go back and forth between LAN at work, VPN over WiFi from work and home, and also VPN to my own personal server. Even if you don't need the refresh, doing one never hurts! Unfortunately, the feature seems to be removed from Win7 :( Now you can check for problems, but it only does the refresh if it thinks it needs to, and the whole process takes so darn long! Is there a way to repair my WiFi connection quickly and easily on Win7?
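
    For what it's worth, XP's Repair was essentially a fixed sequence of network housekeeping commands, so one workaround is to script that sequence yourself. Below is a rough sketch in Python; the command list follows how the XP Repair feature was commonly documented, but treat it as an assumption to verify, and run it from an elevated prompt:

        # Approximation of what XP's "Repair" option did, scripted for Win7.
        # The exact command list is an assumption based on how the XP feature
        # was documented -- double-check it for your environment.
        import subprocess

        REPAIR_STEPS = [
            "ipconfig /renew",        # renew the DHCP lease
            "arp -d *",               # flush the ARP cache
            "nbtstat -R",             # purge and reload the NetBIOS name cache
            "nbtstat -RR",            # re-register NetBIOS names
            "ipconfig /flushdns",     # flush the DNS resolver cache
            "ipconfig /registerdns",  # re-register DNS names
        ]

        for step in REPAIR_STEPS:
            subprocess.call(step, shell=True)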

    Read the article

  • Keyboard freezes / keys stick if a key is pressed repeatedly

    - by Aziz Rahmad
    I use an Acer 4530. This problem has been happening ever since Ubuntu 10.10, and it continues now that I use 11.04 dual-booted with Linux Mint 10. Every time I press one key repeatedly, like when I read a long article on a website or in an ebook, or when I play games that require me to press arrow keys repeatedly, it randomly freezes. That is, whatever I press on the keyboard has no effect, and the same happens with the touchpad. However, a USB mouse works just fine. I later found out that it doesn't actually freeze; it's more like the key gets stuck. For example, when I play Tetris, where I usually press the w (down) key repeatedly, after some time it freezes. And if I then put the cursor in, say, the browser's address bar, it types "wwww..." indefinitely. The only way I can fix it is by suspending the laptop, either by using the mouse or by closing the lid. Instead of staying suspended, the laptop then automatically wakes up and everything is fine. (Usually my laptop wakes from suspend when I press any key.) It has happened since the first time I used Ubuntu 10.10, it also happens in Linux Mint 10, and it still happens now in Ubuntu 11.04. It never happened when I used Windows, though. Has anyone encountered a similar problem? Does anyone know how to fix it permanently?

    Read the article

  • How do I speed up XML parsing operation?

    - by absentx
    I currently have a PHP script set up to do some XML parsing. Sometimes the script is included on a page, and other times it is accessed via an AJAX call. The problem is that the load time for this particular page is very long. I started to think that the PHP I had written to find what I need in the XML was written poorly and that my script was very resource intensive. After much research and testing, the problem is indeed not my scripting (well, perhaps you could consider it a problem with my scripting), but it looks like it takes a long time to load the particular XML sources. My code is like such:

        $source_recent = 'my xml feed';
        $source_additional = 'the other feed I need';

        $xmlstr_recent = file_get_contents($source_recent);
        $feed_recent = new SimpleXMLElement($xmlstr_recent);

        $xmlstr_additional = file_get_contents($source_additional);
        $feed_additional = new SimpleXMLElement($xmlstr_additional);

    In all my testing, the above code is what takes the time, not the additional processing I do below. Is there any way around this, or am I at the mercy of the load time of the XML URLs? One crazy thought I had to get around it is to load the XML contents into a DB every so often, then just query the DB for what I need. Thoughts? Ideas?
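
    The poster's closing idea, refreshing the feeds on a schedule and reading from a local copy, is a workable pattern. Here is a minimal sketch of it, written in Python for brevity rather than the poster's PHP; the feed URL, cache path and refresh interval are placeholders:

        import os
        import time
        import urllib.request
        import xml.etree.ElementTree as ET

        FEED_URL = "https://example.com/feed.xml"   # placeholder feed URL
        CACHE_PATH = "/tmp/feed_recent.xml"         # local cached copy
        TTL = 15 * 60                               # refresh at most every 15 min

        def load_feed():
            # Re-download only when the cache is missing or stale, so a normal
            # page load pays for a local file read instead of a remote fetch.
            stale = (not os.path.exists(CACHE_PATH)
                     or time.time() - os.path.getmtime(CACHE_PATH) > TTL)
            if stale:
                urllib.request.urlretrieve(FEED_URL, CACHE_PATH)
            return ET.parse(CACHE_PATH).getroot()

        root = load_feed()  # parse from the cached copy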

    Read the article

  • IT Optimization Plan Pays Off For UK Retailer

    - by [email protected]
    I caught this article in ComputerworldUK yesterday. The headline reports that UK-based supermarket chain Morrisons is increasing its IT spend... OK, sounds good. Even nicer that Oracle is a big part of that. But what caught my eye were three things: 1) Morrisons truly has a long-term strategy for IT; in this case, modernizing and optimizing how they use IT for business advantage. 2) Even in a tough economic climate, Morrisons views IT investments as contributing to and improving the bottom line. Specifically, "The investment in IT contributed to a 21 percent increase in Morrison's underlying profit." 3) The phased, 3-year "Optimization Plan" took a holistic approach to their business, from CRM and supply chain systems to the underlying application infrastructure. On the infrastructure front, adopting a more flexible service-oriented architecture enabled them to be more agile and adapt their business, and identity management helped with sometimes mundane (but costly) issues like lost passwords and being able to document who has access to what. Things don't always turn out so rosy. And I know it was a long and difficult process... but it's nice to see a happy ending every once in a while.

    Read the article

  • Is DQS-in-the-cloud on its way?

    - by jamiet
    LinkedIn profiles are always a useful place to find out what's really going on in Microsoft. Today I stumbled upon this little nugget from former SSIS product team member Matt Carroll:

        March 2012 – December 2012 (10 months), Redmond, WA
        Took ownership of the SQL 2012 Data Quality Services box product and re-architected and extended it to become a cloud service. Led team and managed product to add dynamic scale, security, multi-tenancy, deployment, logging, monitoring, and telemetry as well as creating new Excel add-in and new ecosystem experience around easily sharing and finding cleansing agents. Personally designed, coded, and unit tested in-memory trigram matching algorithm core to better performance, scale and maintainability. Delivered and supported successful private preview of the new service prior to SQL wide reorganization.

    http://www.linkedin.com/profile/view?id=9657184

    Sounds as though a Data-Quality-Services-in-the-cloud (which I spoke of as being a useful addition to Microsoft's BI portfolio in my previous blog post Thoughts on Power BI for Office 365) might be on its way some time in the future. And what's this SQL-wide reorganization? Interesting stuff. @Jamiet

    Read the article

  • What Poor Project Management Might Be Costing You

    - by Sylvie MacKenzie, PMP
    For project-intensive organizations, capital investment decisions define both success and failure. Getting them wrong (the risk of delays and schedule and cost overruns is ever present) introduces the potential for huge financial losses. The resulting consequences can be significant and directly impact both a company's profit outlook and its share price performance, which in turn is the fundamental measure of executive performance. This intrinsic link between long-term investment planning and short-term market performance is investigated in the independent report Stock Shock, written by a consultant from Clarity Economics and commissioned by the EPPM Board. A new international steering group organized by Oracle, the EPPM Board brings together senior executives from leading public and private sector organizations to explore the critical role played by enterprise project and portfolio management (EPPM). Stock Shock reviews several recent high-profile project failures and, combined with other research, draws out the lessons to be learned. It analyzes how portfolio management is an exercise in balancing risk and reward, a process that places the emphasis firmly on executives to correctly determine which potential investments will deliver the greatest value and contribute most to the bottom line. Conversely, it also details how poor evaluation decisions can quickly impact the overall value of an organization's project portfolio and compromise long-range capital planning goals.

    Failure to Deliver: In Search of ROI

    The report also cites figures from an Economist Intelligence Unit survey which found that more organizations (12 percent) expected to deliver planned ROI less than half the time than claimed to deliver it 90 percent or more of the time (11 percent). This is linked to a recent report from Booz & Co. showing that the average tenure of a global chief executive has fallen from 8.1 years to 6.3 years. "Senior executives need to begin looking at effective project delivery not as a bonus, but as an essential facet of business success," according to Stock Shock author Phil Thornton. "Consolidated and integrated visibility into individual projects is the most practical solution to overcoming these challenges, which explains the increasing popularity of PPM technologies as an effective oversight and delivery platform." Stock Shock is available for download on the EPPM microsite at http://www.oracle.com/oms/eppm/us/stock-shock-report-1691569.html

    Read the article

  • How to change system time or force a time sync

    - by cpury
    My laptop is probably running out of CMOS battery. I know I have to fix it soon, but until then this very annoying issue keeps me from using it. Scenario: my system clock is reset to 15/12/08 11:00 AM every time I turn on my computer. This has all sorts of side effects, one of the more annoying being that I can't log into my Gmail. At first I just waited for a time sync to happen, as I have that activated and all. It never happened. I googled and didn't find any way of enforcing a time sync, which I found very strange. Is there really none? Setting the time and date by hand is also a problem. In my 12.10 installation, the time & date settings are bugged. I remember them being bugged in my last, older installation as well. Of course the easiest way should be to just manually edit the date and time fields by entering a new date. This is possible in theory, but the changes are reverted as soon as the text boxes lose focus. The other way to do it is to click the +/- buttons for a long, long time. The first time I did that, the changes weren't stored either. I found out that afterwards I have to switch from manual to internet-sync mode and wait ~5 seconds until the new time is shown in the top left corner of my screen, or otherwise it won't take effect. So a nice solution would be one of the following:
    - Setting the time/date by hand, maybe via the terminal, so I can just enter the right values.
    - A command that would enforce an immediate time sync, which I can run after booting.
    I know I have to change the batteries soon, but this is seriously keeping me from working...

    Read the article

  • What's New in Database Lifecycle Management in Enterprise Manager 12c Release 3

    - by HariSrinivasan
    Enterprise Manager 12c Release 3 includes improvements and enhancements across every area of the product. This blog provides an overview of the new and enhanced features in the Database Lifecycle Management area. I will dive more deeply into specific features in subsequent posts.

    "What's New?" In this release, we focused on four things:

    1. Lifecycle management support for the new Database 12c - Pluggable Databases
    2. Management of long-running processes, such as a security patch cycle (Change Activity Planner)
    3. Management of large numbers of systems by:
       - Leveraging new framework capabilities for lifecycle operations, such as the new advanced 'emcli' script option
       - Refining features such as configuration search and compliance
    4. Minor improvements and quality fixes to existing features:
       - Rollback support for single-instance databases
       - Improved "OFFLINE" patching experience
       - Faster collection of ORACLE_HOME configurations

    Lifecycle Management Support for the new Database 12c - Pluggable Databases

    Database 12c introduces Pluggable Databases (PDBs), the brand-new addition to help you achieve your consolidation goals. Pluggable databases offer unprecedented consolidation at the database level and native lifecycle verbs for creating, plugging and unplugging the databases on a container database (CDB). Enterprise Manager can supplement the capabilities of pluggable databases by offering workflows for migrating, provisioning and cloning them using the software library and the deployment procedures. For example, Enterprise Manager can migrate an existing database to a PDB or clone a PDB by storing a versioned copy in the software library. One can also manage the planned downtime related to patching by migrating the PDBs to a new CDB. While pluggable databases offer these exciting features, they can also pose configuration management and compliance challenges if not managed properly. Enterprise Manager features like inventory management, topology associations and configuration search can mitigate the sprawl of PDBs and also lock them to predefined golden standards using configuration comparison and compliance rules. Learn More ...

    Management of Long-Running Datacenter Processes - Change Activity Planner (CAP)

    Currently, customers resort to cumbersome methods to create, execute, track and monitor change activities within their data center. Some customers use traditional tools such as spreadsheets, project planners and in-house custom-built solutions. Customers often have weekly sync-up meetings across stakeholders to collect status and updates. Some of the change activities, for example the quarterly patch set update (PSU) patch rollouts, are not single tasks but processes with multiple tasks. Some of those tasks are performed within Enterprise Manager Cloud Control (for example, Patch) and some are performed outside of Enterprise Manager Cloud Control. These tasks often run for a long period of time and involve multiple people or teams. Enterprise Manager Cloud Control supports core data center operations such as configuration management, compliance management, and automation. Enterprise Manager Cloud Control release 12.1.0.3 leverages these capabilities and introduces the Change Activity Planner (CAP). CAP provides the ability to plan, execute, and track change activities in real time. It covers the typical datacenter activities that are spread over a long period of time, across multiple people and multiple targets (even target types).
    Here are some examples of change activity processes in a datacenter:

    - Patching large environments (PSU/CPU patching cycles)
    - Upgrading large numbers of database environments
    - Rolling out compliance rules
    - Database consolidation to Exadata environments

    CAP provides user flows for compliance officers/managers (incl. lead administrators) and operators (DBAs and admins). Managers can create change activity plans for various projects and allocate resources, targets, and the groups affected. Upon activation of the plan, tasks are created and automatically assigned to individual administrators based on target ownership. Administrators (DBAs) can identify their tasks and understand the context, schedules, and priorities. They can complete tasks using Enterprise Manager Cloud Control automation features such as patch plans (or, in some cases, outside Enterprise Manager). Upon completion, compliance is evaluated for validations, and the status of the tasks and the plans is updated. Learn More about CAP ...

    Improved Configuration & Compliance Management of a Large Number of Systems

    Improved configuration comparison: Get to the configuration comparison results faster for simple ad-hoc comparisons. When performing a 1-to-1 comparison, Enterprise Manager will perform the comparison immediately and take the user directly to the results, without having to wait for a job to be submitted and executed. Flattened system comparisons reduce comparison setup time and complexity. In addition to the previously existing topological comparison, users now have an option to compare using a "flattened" methodology. Flattening means removing duplicate target instances within the systems and removing the hierarchy of member targets. The result is that it is much easier to spot differences, particularly for specific use cases like comparing patch levels between complex systems like RAC and Fusion Apps.

    Improved Configuration Search & Advanced EMCLI Script Option for Mass Automation

    Enterprise Manager 12c introduces a new framework-level capability to script and stitch together multiple tasks using EMCLI. This powerful capability can be leveraged for lifecycle operations, especially when executing a task over a large number of targets. Specific usages include retrieving a qualified list of targets using configuration search and then using the result set for automation. Another example would be executing a patching operation and then re-executing it on targets where it may have failed. This is complemented by other enhancements, such as better usability for designing reusable configuration searches. In EM 12c Release 3, a simplified UI makes building ad-hoc searches even easier. Searching for missing patches is a common use of configuration search; this required the use of the advanced options, which are now clearly defined and easy to use. You can also perform a configuration search using the EMCLI. Users can find and execute configuration searches from the EMCLI, which can be extremely useful for building sophisticated automation scripts. For example, run the search named "Oracle Databases on Exadata", which finds all database targets running on top of Exadata, and further filter the results by options like name, host, etc.:

        emcli get_targets -config_search="Databases on Exadata" -target_name="exa%"

    Use this in powerful mass automation operations using the new EMCLI script option. For example, to solve the use case of finding all databases that run on Exadata and house E-Biz, and patching them:
    Create a Python script with emcli functions and invoke it in the new EMCLI script option shell. Invoke the script in the new EMCLI with the script option directly:

        $<path to emcli>/emcli @myPSU_Patch.py

    Richer compliance content: There are now over 50 Oracle-provided compliance standards, including new standards for Pluggable Database, Fusion Applications, Oracle Identity Manager, Oracle VM and Internet Directory, plus 9 Oracle-provided real-time monitoring standards containing over 900 compliance rules across 500 facets. These new real-time compliance standards cover both Exadata compute nodes and Linux servers. The result is increased Oracle software coverage and faster time to compliance monitoring on Exadata.

    Enhancements to Patch Management

    Overhauled "OFFLINE" patching experience: The patch upload UI has been simplified to improve the offline patching experience. There is now a single-step process to get patches into the software library. Customers often maintain local repositories of patches, sometimes called software depots, where they host the patches downloaded from My Oracle Support. In the past, you had to move these patches to your desktop and then upload them to Enterprise Manager's software library through the Enterprise Manager Cloud Control user interface. You can now use the following EMCLI command to upload multiple patches directly from a remote location within the data center:

        $emcli upload_patches -location <Path to Patch directory> -from_host <HOSTNAME>

    The upload process filters all of the new patches, automatically selects the relevant metadata files from the location, and uploads the patches to the software library.

    Other improvements: Patch rollback for single-instance databases, a new option in the patch plan to roll back the patches added to the patch plan; upon execution, the procedure rolls back the patch and the SQL applied to the single-instance databases. Improved and faster configuration collection of Oracle Home targets enables more reliable automation of higher-level functions like provisioning, patching or Database as a Service.

    Just to recap, here is a list of database lifecycle management features:

    - Discovery, inventory tracking and reporting
    - Database provisioning, including:
      - Migration to pluggable databases
      - Plugging and unplugging of pluggable databases
      - Gold-image-based cloning
      - Scaling of RAC nodes
    - Schema and data change management
    - End-to-end patch management in online and offline modes, including:
      - Patch advisories in online (connected with My Oracle Support) and offline mode
      - Patch pre-deployment analysis, deployment and rollback (currently only for single-instance databases)
      - Reporting
    - Upgrade planning and execution of the upgrade process
    - Configuration management
    - Compliance management with out-of-box content
    - Change Activity Planner for planning, designing and tracking long-running processes

    For more information on Enterprise Manager's database lifecycle management capabilities, visit http://www.oracle.com/technetwork/oem/lifecycle-mgmt/index.html
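
    To make the scripting step concrete, here is a hedged sketch of what a myPSU_Patch.py-style script might look like in EMCLI's Jython-based script mode. The verb-to-function mapping follows the pattern described above, but the exact function names, arguments and response structure are assumptions; check the EMCLI documentation for your release before relying on any of this:

        # Hypothetical EMCLI script-mode (Jython) sketch -- function names and
        # response handling are assumptions to verify against your EMCLI docs.
        from emcli import *

        login(username='sysman')  # authenticate to the OMS first

        # Reuse the saved configuration search to get a qualified target list.
        resp = get_targets(config_search='Databases on Exadata',
                           target_name='exa%')

        for row in resp.out()['data']:
            target = row['Target Name']
            # ... create/submit a patch plan for each target here ...
            print('would patch: %s' % target)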

    Read the article

  • Malaysian Airlines bans kids under 12, creates separate cabins in the basement of its flights

    - by Gopinath
    Kids are a lot of fun to watch and play with, as long as they don't start crying. Once they start crying, it's a tough job for parents to calm them down, and for the people around it's painful to be part of it. If it happens on a flight, it's one of the biggest annoyances you can experience. Especially on long overnight journeys, it's a nightmare for passengers if a couple of kids are uncontrollable. After receiving many complaints from passengers disturbed by kids in flight, Malaysian Airlines decided to ban kids under 12 from the regular economy class cabins of its new Airbus A380s. Parents with kids under 12 years old are allowed only into a special kids' zone created in the basement of the multi-storied jumbo Airbus A380. Parents with kids under 12 may not appreciate this move, but the rest of the travellers will be happy. Back in June 2011, Malaysian Airlines banned infants in the first class of its Boeing 747-400 jets. The CEO of Malaysian Airlines defended the decision on Twitter, arguing that first classers pay a pricey amount for a comfortable journey. So if you are a parent of kids under 12, think twice before you book tickets on Malaysian Airlines. Creative commons image courtesy: flickr/transworld

    Read the article

  • Work Item Keyboard Shortcuts, Resolving Mercurial Work Items, WikiPlex 2.0

    [Do you tweet? Follow us on Twitter @matthawley and @adacole_msft] We deployed the latest version of the CodePlex software yesterday.

    Keyboard Shortcuts

    With this release, we have added a set of keyboard shortcuts for common tasks in the Issue Tracker. This feature was a popular request in the CodePlex Issue Tracker. The CodePlex team visits the issue tracker frequently when researching and considering new features. If you haven't visited it recently, please take a few moments to log an idea or vote for the features you would most like to see implemented on CodePlex. To view the available shortcuts, type ? from any page within the issue tracker to see a help dialog listing what each shortcut invokes. Please give us feedback on this feature and let us know what additional shortcuts would be useful.

    Resolve Work Items When Pushing Mercurial Changes

    Another feature we added is the ability to resolve work items when pushing changes to your Mercurial repository, which has been available to our TFS/SVN users for quite some time. The required format is identical to the SVN format listed here. When committing your changes locally, add "Work Items: Id, AnotherId" to your commit message; for example, a commit message ending in "Work Items: 1234, 5678" will resolve work items 1234 and 5678. When you push, CodePlex will detect this comment, add a commit message, and resolve the work items.

    WikiPlex Goes 2.0!

    CodePlex continues to improve WikiPlex, our open source wiki engine. WikiPlex hit another major milestone today with the release of version 2.0! We have added several new features, including interleaving ordered and unordered lists, specifying the height and width for images, a multi-line indentation macro, and a restructuring of some of the API. Visit Matt's announcement for more information on the release, or grab the binaries via NuGet or CodePlex.

    Read the article

  • Hallmarks of a Professional PHP Programmer

    - by Scotty C.
    I'm a 19 year old student who really REALLY enjoys programming, and I'm hoping to glean from your years of experience here. At present, I'm studying PHP every chance I get, and have been for about 3 years, although I've never taken any formal classes. I'd love to some day be a programmer full time and make a good career of it. My question to you is this: what do you consider to be the hallmarks or traits of a professional programmer? Mainly in the field of PHP, but other, more generalized qualifications are also more than welcome, as I think PHP is more of a hobbyist language and may not be the language of choice in the eyes of potential employers. Please correct me if I'm wrong. Above all, I don't want to waste time on something that isn't worthwhile. I'm currently feeling pretty confident in my knowledge of PHP as a language, and I know that I could build just about anything I need and have it "work", but I feel sorely lacking in design concepts and code structure. I can even write object-oriented code, but in my personal opinion that isn't worth a hill of beans if it isn't organized well. For this reason, I bought Matt Zandstra's book "PHP Objects, Patterns, and Practice" and have been reading a little of it every day. Anyway, I'm starting to digress a little here, so back to the original question. What advice would you give to an aspiring programmer who wants to make an impact in this field? Also, on a side note, I've been working on a project with a friend of mine that would give a fairly good idea of where I'm at coding-wise. I'm going to give a link; I don't want anyone to feel as though I'm pushing or spamming here, so don't click it if you don't want to. But if you are interested in giving some feedback there as well, you can see the code on GitHub. I'm known as The Craw there. https://github.com/PureChat/PureChat--Beta-/tree/

    Read the article

  • What type of career path / jobs give a developer the best work-life balance?

    - by programmx10
    I know some people may look down on a question like this, but I've been thinking a lot lately about what direction I can take my career in to have a good work-life balance. I have been working for a startup where the hours tend to drag on, and I find it often drains the life out of me. I have been going to interviews, and some other companies are also startups / new companies and seem to have a similar attitude about working long hours. Maybe it's the technologies I use, or the type of development; I don't know. But I'm curious whether anyone can offer advice on what path leads to being a programmer / developer at a company that respects a regular work week and only rarely needs to go beyond it. I realize this won't lead to being the highest paid in my field, but I'm OK with that and feel the tradeoff would be worth it, as it would also give me time for my own projects. I know some people may say this is too general, but I believe it is a programmer-specific question: there tends to be a higher-than-average rate of overtime and of "startup" venture situations in this field than in many others, and there is definitely a mindset among a lot of people in this field of working long hours that doesn't exist in every industry.

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-01

    - by Bob Rhubart
    Complexity of Social Computing - Is it a Consideration for EA's? | Pat Shepherd (blogs.oracle.com)
    Pat Shepherd asks, "Does Enterprise Architecture need to consider Social Computing in its scope?"

    Who should own the Enterprise Architecture? | Michael Glas (blogs.oracle.com)
    "Instead of looking at just who owns the architecture," suggests Michael Glas, "think about what the person/role/organization should do."

    The Application Architecture Domain | Michael Glas (blogs.oracle.com)
    Michael Glas asks, and answers: "As an Enterprise Architect, what do I need to consider when looking at/defining/designing the Application Architecture Domain?"

    CAP Twelve Years Later: How the "Rules" Have Changed | Eric Brewer (www.infoq.com)
    The CAP theorem asserts that any networked shared-data system can have only two of three desirable properties. However, by explicitly handling partitions, designers can optimize consistency and availability, thereby achieving some trade-off of all three.

    Oracle DB with OEM in Amazon Cloud | Dr. Frank Munz (www.munzandmore.com)
    Dr. Frank Munz shares a screencast that explains "how to create an Oracle DB instance in AWS, how to enable OEM... and how to connect to your cloud instance with a local installation of NetBeans."

    Sample External Login.jsp page for Oracle Access Manager 11g | Brian Eidelman (fusionsecurity.blogspot.com)
    A-Team blogger Brian Eidelman expands on a previous post dealing with configuring OAM 11g to use an externally hosted custom login page.

    Bay Area Coherence Special Interest Group (BACSIG) Meeting June 7 (coherence.oracle.com)
    Date: Thursday, June 7, 2012. Time: 5:30pm – 9:00pm PT. Where: Oracle Conference Center, Room 103, 350 Oracle Parkway, Redwood Shores, CA. Presentations:
    - 6:00 p.m. - Coherence 101, The Evolution of Distributed Caching - Noah Arliss (Oracle)
    - 7:00 p.m. - Optimizing Performance for Oracle Coherence and TopLink Grid at OOCL - Matt Rosen, Leo Limqueco (OOCL)
    - 8:00 p.m. - Oracle Coherence Message Bus - Extreme Performance on Oracle Exalogic - Ballav Bihani (Oracle)

    Thought for the Day: "I can't be left unsupervised." - Ron Wood (born 06/01/1947). Source: Brainy Quote

    Read the article

  • Monitoring remote laptops

    - by kaerast
    We're looking for something to monitor around 30 remote laptops that are constantly out on the road, never returning to base except when there are serious hardware faults that need repairing. These laptops won't always be connected to the internet; they'll have mobile broadband and may work offline most of the time. They will be running a mixture of Windows XP, Vista and 7, and there is currently no server setup. We're primarily interested in making sure that Windows updates and antivirus updates are happening, and I guess we should also be monitoring remaining disk space, what software is installed, and ideally hardware health. It might also be nice if we could gain remote access to perform work on them. My main reason for wanting to monitor them is that it's going to be a real pain to get them back to base if anything goes wrong, so I want to be proactive in ensuring they last as long as possible. Can you recommend what I should be monitoring to ensure a long life? What tools would you use to monitor and maintain these computers?

    Read the article

  • Big GRC: Turning Data into Actionable GRC Intelligence

    - by Jenna Danko
    While it's no longer headline news that governments have carried out large-scale data-mining programmes aimed at terrorism detection and at identifying other patterns of interest across a wide range of digital data sources, the debate over the ethics of and justification for this action will clearly continue for some time to come. What is becoming clear is that these programmes are a framework for the collation and aggregation of massive amounts of unstructured data and, from this, the creation of actionable intelligence from analyses that allowed the analysts to explore and extract a variety of patterns and then direct resources. This data included audio and video chats, phone calls, photographs, e-mails, documents, internet searches, social media posts and mobile phone logs and connections. Although Governance, Risk and Compliance (GRC) professionals are not looking at the implementation of such programmes, there are many similar GRC "big data" challenges to be faced, and potential lessons to be learned from these high-profile government programmes, that can be applied a lot closer to home. For example, how can GRC professionals collect, manage and analyze an enormous and disparate volume of data to create and manage their own actionable intelligence covering hidden signs and patterns of criminal activity; the early or retrospective violation of regulations, laws or corporate policies and procedures; emerging risks and weakening controls? Not exactly the stuff of James Bond, to be sure, but it is certainly more applicable to most GRC professionals' day-to-day challenges. So what is big data, and how can it benefit the GRC process? Although definitions vary, big data largely refers to the following types of data:

    - Traditional enterprise data: customer information from CRM systems, transactional ERP data, web store transactions, and general ledger data.
    - Machine-generated / sensor data: Call Detail Records ("CDRs"), weblogs and trading systems data.
    - Social data: customer feedback streams, micro-blogging sites like Twitter, and social media platforms like Facebook.

    The McKinsey Global Institute estimates that data volume is growing 40% per year and will grow 44x between 2009 and 2020. But while it's often the most visible parameter, volume of data is not the only characteristic that matters. In fact, according to sources such as Forrester, there are four key characteristics that define big data:

    - Volume: Machine-generated data is produced in much larger quantities than non-traditional data. This is all the data generated by the IT systems that power the enterprise, including live data from packaged and custom applications; for example, app servers, web servers, databases, networks, virtual machines, telecom equipment, and much more.
    - Velocity: Social media data streams, while not as massive as machine-generated data, produce a large influx of opinions and relationships valuable to customer relationship management, as well as offering early insight into potential reputational risk issues. Even at 140 characters per tweet, the high velocity (or frequency) of Twitter data ensures large volumes (over 8 TB per day) need to be managed.
    - Variety: Traditional data formats tend to be relatively well defined by a data schema and change slowly. In contrast, non-traditional data formats exhibit a dizzying rate of change.
      Without question, all GRC professionals work in a dynamic environment, and as new services, new products or new business lines are added, or new marketing campaigns executed, new data types are needed to capture the resulting information.
    - Value: The economic value of data varies significantly. Typically, there is good information hidden amongst a larger body of non-traditional data that GRC professionals can use to add real value to the organisation; the greater challenge is identifying what is valuable and then transforming and extracting that data for analysis and action. For example, customer service calls and emails contain millions of useful data points and have long been a source of information for GRC professionals. Those calls and emails are critical in helping GRC professionals better identify hidden patterns and implement new policies that can reduce the number of customer complaints.

    Now, on a scale and depth far beyond those in place today, all that unstructured call and email data can be captured, stored and analyzed to reveal the reasons for the contact, perhaps with the aggregated customer results cross-referenced against what is being said about the organization, or a similar peer organization, on social media. The organization can then take positive action: communicating to the market in advance of issues reaching the press, strengthening controls, adjusting risk profiles, changing policies and procedures, and completely minimizing, if not eliminating, complaints and compensation for that specific reason in the future. In this one example of many similar ones, the GRC team has demonstrated real and tangible business value.

    Big Challenges - Big Opportunities

    As pointed out by recent Forrester research, high-performing companies (those that are growing 15% or more year-on-year compared to their peers) are taking a selective approach to investing in big data: "Tomorrow's winners understand this, and they are making selective investments aimed at specific opportunities with tangible benefits where big data offers a more economical solution to meet a need." (Forrsights Strategy Spotlight: Business Intelligence and Big Data, Q4 2012) As pointed out earlier, with the ever-increasing volume of regulatory demands and fines for getting it wrong, limited resource availability, and out-of-date or inadequate GRC systems all contributing to a higher cost of compliance and/or a higher risk profile than desired, a big data investment in GRC clearly falls into this category. However, to make the most of big data, organizations must evolve both their business and IT procedures, processes, people and infrastructures to handle these new high-volume, high-velocity, high-variety sources of data and be able to integrate them with the pre-existing company data to be analyzed. GRC big data clearly gives the organization access to, and management over, a huge amount of often very sensitive information that, although it can help create a more risk-intelligent organization, also presents numerous data governance challenges, including regulatory compliance and information security. In addition to client and regulatory demands for better information security and data protection, given the sheer amount of information organizations deal with, the need to quickly access, classify, protect and manage that information can quickly become a key issue from a legal as well as a technical or operational standpoint.
    However, by making information governance processes a bigger part of everyday operations, organizations can make sure data remains readily available and protected.

    The Right GRC & Big Data Partnership Becomes Key

    The "getting it right first time" mantra used in so many companies remains essential for any GRC team that is sponsoring, helping kick-start, or even overseeing a big data project. To make a big data GRC initiative work and get the desired value, partnerships with companies who have a long history of success in delivering GRC solutions, as well as being at the very forefront of technology innovation, become key. Clearly, solutions can be built in-house more cheaply than through a vendor, but as has been proven time and time again, when it comes to self-built solutions covering AML and fraud, for example, few have been able to scale or adapt appropriately to meet the changing regulations or challenges that GRC teams face on a daily basis. This has led to the creation of the GRC silos that are causing so many headaches today. The solutions that stand out, and that should be explored, are the ones that can seamlessly merge the traditional world of well-known data, analytics and visualization with the new world of seemingly innumerable data sources, utilizing big data technologies to generate new GRC insights right across the enterprise. Ultimately, big data is here to stay, and organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be the ones that are well positioned to make the most of it.

    A Blueprint and Roadmap Service for Big Data

    Big data adoption is first and foremost a business decision. As such, it is essential that your partner can align your strategies, goals, and objectives with an architecture vision and roadmap to accelerate adoption of big data for your environment, as well as establish practical, effective governance that will maintain a well-managed environment going forward.

    Key Activities: While your initiatives will clearly vary, there are some generic starting points the team and organization will need to complete:

    - Clearly define your drivers, strategies, goals, objectives and requirements as they relate to big data
    - Conduct a big data readiness and information architecture maturity assessment
    - Develop a future-state big data architecture, including views across all relevant architecture domains: business, applications, information, and technology
    - Provide initial guidance on big data candidate selection for migrations or implementation
    - Develop a strategic roadmap and implementation plan that reflects a prioritization of initiatives based on business impact and technology dependency, and an incremental integration approach for evolving your current state to the target future state in a manner that represents the least amount of risk and impact of change on the business
    - Provide recommendations for practical, effective data governance, data quality management, and information lifecycle management to maintain a well-managed environment
    - Conduct an executive workshop with recommendations and next steps

    There is little debate that managing risk and managing data are the two biggest obstacles encountered by financial institutions.
    Big data is here to stay, and risk management certainly is not going anywhere. Financial services organizations that embrace big data's potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be best positioned to make the most of it. Matthew Long is a Financial Crime Specialist for Oracle Financial Services. He can be reached at matthew.long AT oracle.com.

    Read the article

  • Design pattern for client/server sessions?

    - by nonot1
    Are there any common patterns or general guidance I can learn from for how to design a client/server system where both the client and server must maintain some kind of per-client session state? I've found any number of libraries that can help with some of the plumbing, but it's the overall design I'm wondering about. Open issues in my mind:
    - How to structure the client/server communication so that both bidirectional synchronous and asynchronous requests are possible?
    - The server side needs to spawn a couple of session-long helper processes per connected client. How to manage that?
    - How to manage the mapping from a given client (and any of its requests) to server state and helper process instances, in the face of multiple clients and intermittent network connectivity?
    Most communication can be simple blocking request/reply, but some requests will be long-running processing tasks that the client will want to keep tabs on. To the extent that it matters, the platform is Linux/C/C++. Not web based. Just an existing thick-client software app being modified to talk to backend servers for some tasks.
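
    As one illustration of the third issue above, the client-to-session mapping, here is a minimal sketch of a token-based session registry. It is written in Python purely for brevity (the poster's platform is C/C++), and every name and timeout in it is made up:

        import time
        import uuid

        SESSION_TIMEOUT = 300  # seconds a client may stay silent before cleanup

        class Session:
            def __init__(self):
                # Opaque token the client stores and presents on reconnect.
                self.token = uuid.uuid4().hex
                self.last_seen = time.time()
                self.state = {}  # per-client server-side state
                # Handles to the per-client helper processes would live here too.

        class SessionRegistry:
            def __init__(self):
                self._sessions = {}

            def open_session(self):
                s = Session()
                self._sessions[s.token] = s
                return s.token

            def resume(self, token):
                # Reattach an existing session after a network drop,
                # instead of treating the reconnect as a brand-new client.
                s = self._sessions.get(token)
                if s is not None:
                    s.last_seen = time.time()
                return s

            def reap_stale(self):
                # Tear down sessions (and their helper processes) whose
                # clients have been gone longer than the timeout.
                now = time.time()
                for token, s in list(self._sessions.items()):
                    if now - s.last_seen > SESSION_TIMEOUT:
                        del self._sessions[token]

    The design choice this illustrates: identity lives in the token rather than in the transport connection, which is what makes reconnects cheap and intermittent connectivity tolerable.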

    Read the article

  • Updating physics for animated models

    - by Mathias Hölzl
    For a new game we have to set up a scene with a minimum of 30 bone-animated models (it's a shooter). The problem is that the update process for the animated models takes too long. This is what I do: each character has ~30 bones, and on every update tick the animation gets calculated and every bone fires an event with the new matrix. The physics receives the event with the new matrix and updates the collision shape for that bone. The time it takes to build the animation isn't that bad (0.2 ms for 30 bones, 6 ms for 30 models). But the main problem is that the physics engine (Bullet) uses a different matrix for transformation, so it's necessary to convert it. Code for the matrix conversion (~0.005 ms):

        btTransform CLEAR_PHYSICS_API Mat_to_btTransform( Mat mat )
        {
            btMatrix3x3 bulletRotation;
            btVector3 bulletPosition;
            XMFLOAT4X4 matData = mat.GetStorage();

            // copy rotation matrix
            for ( int row = 0; row < 3; ++row )
                for ( int column = 0; column < 3; ++column )
                    bulletRotation[row][column] = matData.m[column][row];

            for ( int column = 0; column < 3; ++column )
                bulletPosition[column] = matData.m[3][column];

            return btTransform( bulletRotation, bulletPosition );
        }

    The function for updating the transform (physics):

        void CLEAR_PHYSICS_API BulletPhysics::VKinematicMove( Mat mat, ActorId aid )
        {
            if ( btRigidBody * const body = FindActorBody( aid ) )
            {
                btTransform tmp = Mat_to_btTransform( mat );
                body->setWorldTransform( tmp );
            }
        }

    The real problem is the function FindActorBody(id):

        ActorIDToBulletActorMap::const_iterator found = m_actorBodies.find( id );
        if ( found != m_actorBodies.end() )
            return found->second;

    All physics actors are stored in m_actorBodies, and that's why the updating process takes so long. But I have no idea how I could avoid this. Friendly greetings, Mathias

    Read the article

  • What expectation should I have of South African web development rates / duration? [closed]

    - by Warren van Rooyen
    I am a developer, but I only intend to do the front-end work for getting Reddit-like upvote / downvote functionality going on an upcoming site I'm building. I have never had to contract a developer for back-end work to implement code for me, so I am quite in the dark about how much I should expect to pay and how long it could take to get the site going. I could be taken for a ride, as the developer could inflate the time it would take at a seemingly regular rate (hourly/daily) or could otherwise distort their rate. Please could you give me some guidance on this. I know you need some background on the nature of the site, so here it is: I have a Reddit-type template with CSS and PHP included. I then downloaded the Pligg code that's intended to do the job of the Reddit upvote / downvote functionality. How long would a developer roughly need to unite the theme and front end with the back-end functionality? I do understand it's not a lot of info, but I'm sure you're experienced enough to have an instinct for the size of the project. Also, should I work on an hourly rate, a day rate, or a per-project payment agreement?

    Read the article

  • A Look Inside JSR 360 - CLDC 8

    - by Roger Brinkley
    If you didn't notice during JavaOne, Java Micro Edition took a major step forward in its consolidation with Java Standard Edition when JSR 360 was proposed to the JCP community. Over the last couple of years there has been a focus on bringing Java ME back in line with its big brother, Java SE. We see evidence of this in the JCP itself, which just recently merged the ME and SE/EE Executive Committees into a single Java Executive Committee. Just before that occurred, JSR 360 was proposed, and it was approved for development on October 29. So let's take a look at what changes are now being proposed. In a way, JSR 360 is returning to the original roots of Java ME, which was first introduced as a subset of the JDK 4 language; as Java progressed, many of the language changes were not implemented in Java ME. Back then the tradeoff was still a functionality/footprint trade-off, and the major market was feature phones. Today the market has changed: CLDC, while it will still target feature phones, will have its primary emphasis on embedded devices like wireless modules, smart meters, health care monitoring and other M2M devices. The major changes will come in three areas: language feature changes, library changes, and consolidation of the Generic Connection Framework. Three Java SE versions have shipped since Java ME was first developed, so the language feature changes can be divided into those that came in JDK 5 and those in JDK 7, which mostly consist of the Project Coin changes (there were no language changes in JDK 6). The changes from JDK 5 are:

    Assertions - Assertions enable you to test your assumptions about your program. For example, if you write a method that calculates the speed of a particle, you might assert that the calculated speed is less than the speed of light. In the example below, if the interval isn't between 0 and 1,000, an "Invalid value?" error is thrown:

        private void setInterval(int interval) {
            assert interval > 0 && interval <= 1000 : "Invalid value?";
        }

    Generics - Generics add stability to your code by making more of your bugs detectable at compile time. Code that uses generics has many benefits over non-generic code: stronger type checks at compile time, elimination of casts, and the ability to implement generic algorithms.

    Enhanced for loop - The enhanced for loop allows you to iterate through a collection without having to create an Iterator or calculate beginning and end conditions for a counter variable. It replaces more traditional ways of sequentially accessing elements in a collection:

        void processList(Vector<String> list) {
            for (String item : list) { ...

    Autoboxing/unboxing - This facility eliminates the drudgery of manual conversion between primitive types, such as int, and wrapper types, such as Integer:

        Hashtable<Integer, String> data = new Hashtable<>();
        void add(int id, String value) {
            data.put(id, value);
        }

    Enumerations - Prior to JDK 5, enumerations were not typesafe, had no namespace, were brittle because they were compile-time constants, and provided no informative print values. JDK 5 added support for enumerated types as a full-fledged class (dubbed an enum type).
    In addition to solving all the problems mentioned above, it allows you to add arbitrary methods and fields to an enum type, to implement arbitrary interfaces, and more. Enum types provide high-quality implementations of all the Object methods. They are Comparable and Serializable, and the serial form is designed to withstand arbitrary changes in the enum type:

        enum Season { WINTER, SPRING, SUMMER, FALL };
        private Season season;
        void setSeason(Season newSeason) { season = newSeason; }

    Varargs - Varargs eliminates the need for manually boxing up argument lists into an array when invoking methods that accept variable-length argument lists. The three periods after the final parameter's type indicate that the final argument may be passed as an array or as a sequence of arguments. Varargs can be used only in the final argument position:

        void warning(String format, String... parameters) {
            ...
            for (String p : parameters) {
                ...process(p);...
            }
            ...
        }

    Static imports - The static import construct allows unqualified access to static members without inheriting from the type containing the static members. Instead, the program imports the members, either individually or en masse. Once the static members have been imported, they may be used without qualification. The static import declaration is analogous to the normal import declaration: where the normal import declaration imports classes from packages, allowing them to be used without package qualification, the static import declaration imports static members from classes, allowing them to be used without class qualification:

        import static data.Constants.RATIO;
        ...
        double r = Math.cos(RATIO * theta);

    Annotations - Annotations provide data about a program that is not part of the program itself. They have no direct effect on the operation of the code they annotate. There are a number of uses for annotations, including information for the compiler, compile-time and deployment-time processing, and run-time processing. They can be applied to a program's declarations of classes, fields, methods, and other program elements:

        @Deprecated
        public void clear();

    The language changes from JDK 7 are a little more familiar, as they are mostly the changes from Project Coin:

    String in switch - Hey, it only took us 18 years, but the String class can now be used in the expression of a switch statement. Fortunately for us it won't take that long for Java ME to adopt it:

        switch (arg) {
            case "-data": ...
            case "-out": ...

    Binary integral literals and underscores in numeric literals - Largely for readability, the integral types (byte, short, int, and long) can also be expressed using the binary number system, and any number of underscore characters (_) can appear between digits in a numeric literal:

        byte flags = 0b01001111;
        long mask = 0xfff0_ff08_4fff_0fffl;

    Multi-catch and more precise rethrow - A single catch block can handle more than one type of exception. In addition, the compiler performs more precise analysis of rethrown exceptions than earlier releases of Java SE. This enables you to specify more specific exception types in the throws clause of a method declaration:

        catch (IOException | InterruptedException ex) {
            logger.log(ex);
            throw ex;
        }

    Type inference for generic instance creation - Otherwise known as the diamond operator: the type arguments required to invoke the constructor of a generic class can be replaced with an empty set of type parameters (<>) as long as the compiler can infer the type arguments from the context:
        map = new Hashtable<>();

    Try-with-resources statement - The try-with-resources statement is a try statement that declares one or more resources. A resource is an object that must be closed after the program is finished with it. The try-with-resources statement ensures that each resource is closed at the end of the statement:

        try (DataInputStream is = new DataInputStream(...)) {
            return is.readDouble();
        }

    Simplified varargs method invocation - The Java compiler generates a warning at the declaration site of a varargs method or constructor with a non-reifiable varargs formal parameter. Java SE 7 introduced the compiler option -Xlint:varargs and the annotations @SafeVarargs and @SuppressWarnings({"unchecked", "varargs"}) to suppress these warnings.

    On the library side, there are new features to satisfy the language requirements above and some to improve the currently available set of APIs. The library changes include:

    - Collections update - New Collection, List, Set, Map, Iterable and Iterator interfaces, as well as implementations including Hashtable and Vector. Most of this work is to support generics.
    - String - New StringBuilder and CharSequence, as well as a String formatter. The javac compiler now uses StringBuilder instead of StringBuffer. Since StringBuilder is not synchronized, there is a performance increase, which has necessitated changes to the way the String constructor works.
    - Comparable interface - The Comparable interface works with Collections, making it easier to reuse.
    - Try with resources - Closeable and AutoCloseable.
    - Annotations - While support for annotations is provided, it is compile-time support only: SuppressWarnings, Deprecated, Override.
    - NIO - There is a subset of NIO Buffer that has been in use by some of the graphics packages and needs to be pulled in, as well as support for a subset of NIO File I/O.
    - Platform extensibility via service providers (ServiceLoader) - The ServiceLoader interface does late binding of an interface to existing implementations. It helps to package an interface and the behavior of the implementation at a later point in time. Provider classes must have a zero-argument constructor so that they can be instantiated during loading. They are located and instantiated on demand and are identified via a provider-configuration file in the META-INF/services resource directory. This is a mechanism from Java SE:

        import com.XYZ.ServiceA;
        ServiceLoader<ServiceA> sl1 = ServiceLoader.load(ServiceA.class);

    Resources:

        META-INF/services/com.XYZ.ServiceA:
            ServiceAProvider1
            ServiceAProvider2
            ServiceAProvider3

        META-INF/services/ServiceB:
            ServiceBProvider1
            ServiceBProvider2

    The Generic Connection Framework (GCF) was previously specified in a number of different JSRs, including CLDC, MIDP, CDC 1.2, and JSR 197. JSR 360 represents a rare opportunity to consolidate and reintegrate parts that were duplicated in other specifications into a single specification, upgrade the APIs, and provide new functionality. The proposal is to specify a combined GCF specification that can be used with Java ME or Java SE and be backwards compatible with previous implementations. Because of size limitations, as well as complexity, some features like InvokeDynamic and Unicode 6 will not be included. Additionally, any language or library changes in JDK 8 will not be included. On the upside, with all the changes being made, backwards compatibility will still be maintained.
The Generic Connection Framework (GCF) was previously specified in a number of different JSRs, including CLDC, MIDP, CDC 1.2, and JSR 197. JSR 360 represents a rare opportunity to consolidate and reintegrate parts that were duplicated in other specifications into a single specification, to upgrade the APIs, and to provide new functionality (a small usage sketch follows at the end of this item). The proposal is to specify a combined GCF specification that can be used with Java ME or Java SE and be backwards compatible with previous implementations. Because of size limitations as well as complexity, some features like InvokeDynamic and Unicode 6 will not be included. Additionally, any language or library changes in JDK 8 will not be included. On the upside, with all the changes being made, backwards compatibility will still be maintained.

JSR 360 is a major step forward for Java ME in terms of platform modernization, language alignment, and embedded support. If you're interested in following the progress of this JSR, see the JSR's java.net project for details of the email lists and discussion groups.
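For readers unfamiliar with the GCF, here is a minimal sketch of its classic usage, assuming a CLDC/MIDP-style class library where javax.microedition.io is available; the host and port are placeholders:

import java.io.IOException;
import java.io.InputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.SocketConnection;

public class GcfSketch {
    public static void main(String[] args) throws IOException {
        // Connector is the GCF's single factory entry point; the URI scheme
        // selects the protocol implementation at run time.
        SocketConnection conn =
            (SocketConnection) Connector.open("socket://example.com:80");
        try {
            InputStream in = conn.openInputStream();
            // ... read from the stream ...
            in.close();
        } finally {
            conn.close();
        }
    }
}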

    Read the article

  • Slow to sync music files?

    - by pst007x
    I have created a folder in the Ubuntu One sync folder and called it music. I have added various albums and the folder has started to sync. However, the files were added back in October, and still only the folders have synced, no music files. I tested this service before and added a single music file directly into the Ubuntu One folder (no sub folders), and within a few days it synced; however, anything in sub folders seems to stall or take a very long time. My Ubuntu One program always says syncing and the progress bar creeps, but still no files have synced. I know there are issues with speed, but a month and counting to sync? I recently tested the same files with Dropbox and it took 9 hours. I have forwarded the HTTPS port (443) both in the software firewall and in my router; I tried disabling both firewalls too, and either way it makes no difference. I have also tried both from home and from the office on different Ubuntu systems. Is there anything anyone has done to improve this service? I am trying to integrate the Ubuntu One service into the office to share project files, but the syncing is taking too long. I am using the latest Ubuntu 10.10 (fully updated, fresh install). I love Ubuntu and wish to continue to support it any way I can, so a solution would be good :-) Any help would be appreciated. Thanks, Paul

    Read the article

  • Collision Resolution

    - by ultifinitus
    Hey all, I'm making a simple side-scrolling game, and I would appreciate some input! My collision detection system is simple bounding-box detection, so it's really easy to implement. However, my collision resolution is ridiculous! Currently I have a little formula like this: if (colliding(firstObject, secondObject)) firstObject.resolve_collision(yAxisOffset); if (colliding(firstObject, secondObject)) firstObject.resolve_collision(xAxisOffset); where yAxisOffset is only set if the first object's previous y position was outside the second object's collision frame, and likewise for xAxisOffset. Now this is working great in general. However, there is a single problem: when I have a stack of objects and I push the first object against that stack, the first object gets "stuck" on the stack. What I think is happening is that the collision system checks and resolves collisions in object-creation order, so if I check one axis, then the other, the object will "sink" directly along the axis being checked. This sinking action causes the collision detection routine to think there's a gap between my position and the other object's position, and when I finally check the object that I've already sunk into, my object's position is resolved back to its original position... All this is great, and I'm sure if I bang my head against a wall long enough I'll come up with a working algorithm, but I'd rather not =). So what in the heck do you think I should do? How could I change my collision resolution system to fix this? Here's the program (temporary link, not sure how long it'll last) (notes: arrow keys to navigate, click to drop block, x to jump). I'd appreciate any help you can offer!
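    For concreteness, here is a rough Java sketch of the per-axis scheme the question describes; the Box class and its fields are hypothetical, and the sketch reproduces the asker's approach (including the stacking problem) rather than fixing it:

    class Box {
        double x, y, w, h;   // current position and size
        double prevX, prevY; // position on the previous frame

        static boolean colliding(Box a, Box b) {
            return a.x < b.x + b.w && a.x + a.w > b.x
                && a.y < b.y + b.h && a.y + a.h > b.y;
        }

        // Resolve Y first, then X: only push along an axis if the
        // object was clear on that axis on the previous frame.
        static void resolve(Box a, Box b) {
            boolean wasClearY = a.prevY + a.h <= b.y || a.prevY >= b.y + b.h;
            if (colliding(a, b) && wasClearY) {
                a.y = (a.prevY < b.y) ? b.y - a.h : b.y + b.h; // snap out on Y
            }
            boolean wasClearX = a.prevX + a.w <= b.x || a.prevX >= b.x + b.w;
            if (colliding(a, b) && wasClearX) {
                a.x = (a.prevX < b.x) ? b.x - a.w : b.x + b.w; // snap out on X
            }
        }
    }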

    Read the article

  • 50 Years After The Jetsons

    - by Jason Fitzpatrick
    The Jetsons, the future-oriented animated cartoon series from the 1960s, turned 50 this week. The Smithsonian takes a look at what the show meant, then and now. At the Smithsonian blog Paleofuture, Matt Novak looks back at the last 50 years and the impact that The Jetsons had. He writes: It’s important to remember that today’s political, social and business leaders were pretty much watching “The Jetsons” on repeat during their most impressionable years. People are often shocked to learn that “The Jetsons” lasted just one season during its original run in 1962-63 and wasn’t revived until 1985. Essentially every kid in America (and many internationally) saw the series on constant repeat during Saturday morning cartoons throughout the 1960s, ’70s and ’80s. Everyone (including my own mom) seems to ask me, “How could it have been around for only 24 episodes? Did I really just watch those same episodes over and over again?” Yes, yes you did. But it’s just a cartoon, right? So what if today’s political and social elite saw “The Jetsons” a lot? Thanks in large part to the Jetsons, there’s a sense of betrayal that is pervasive in American culture today about the future that never arrived. We’re all familiar with the rallying cries of the angry retrofuturist: Where’s my jetpack!?! Where’s my flying car!?! Where’s my robot maid?!? “The Jetsons” and everything they represented were seen by so many not as a possible future, but a promise of one. Hit up the link below for the full article, and prepare to be surprised at just how few episodes of the show were ever animated and aired.

    Read the article
