Search Results

Search found 7660 results on 307 pages for 'high'.


  • Google Loon – A network of balloons to provide internet to everyone

    - by Gopinath
    Google was once just a super-powerful search engine provider; now it is venturing into a lot of interesting non-software projects, like self-driving cars, glasses that beam information right onto your eyeballs, and high-speed internet service at 1 gigabit per second. A recent addition to this innovative list is Google Loon – a network of flying balloons that provide internet access to remote parts of the world where it is not feasible for many governments or corporations to provide internet service. Google says there are several billion people around the world who don't have access to the internet, and Google Loon's aim is to provide internet facilities to all of them. A pilot project started a couple of days ago with the launch of 30 balloons into the stratosphere from New Zealand. These balloons fly 20 kilometers above the earth (much higher than where aeroplanes fly) and wirelessly beam internet to homes that have a Loon receiver. Check out the embedded introductory video on Google Loon.

    What is in it for Google? Why is Google getting into these types of projects, and what is in it for them? Google is the gateway to the web, and the majority of people find information on the web using Google services and software. So providing internet access to more people means more people using Google services, which in turn contributes to Google's revenue growth. Google is not a charity; it does all these projects to earn money, just like every other corporation. The best part is that while earning money, it touches the lives of billions of people in a positive way. Just imagine everyone in the world connected and able to make informed decisions, whether they live in developed countries or in underprivileged parts of the world! That will be a beautiful day.

    Further reading:
      - Google Loon website
      - Google unveils its Project Loon Wi-Fi balloons – in pictures
      - Google flies Internet balloons in stratosphere for a "network in the sky"
      - How Google Will Use High-Flying Balloons to Deliver Internet to the Hinterlands
      - Good discussion on Google Loon at the Hacker News community

    Read the article

  • Big Data Appliance X4-2 Release Announcement

    - by Jean-Pierre Dijcks
    Today we are announcing the release of the 3rd generation Big Data Appliance. Read the Press Release here.

    Software Focus
    The focus for this 3rd generation of Big Data Appliance is:
      - Comprehensive and open – Big Data Appliance now includes all Cloudera software, including Back-up and Disaster Recovery (BDR), Search, Impala and Navigator, as well as the previously included components (like CDH, HBase and Cloudera Manager) and Oracle NoSQL Database (CE or EE).
      - Lower TCO than DIY Hadoop systems.
      - Simplified operations while providing an open platform for the organization.
      - Comprehensive security, including the new Audit Vault and Database Firewall software, Apache Sentry, and Kerberos configured out of the box.

    Hardware Update
    A good place to start is to quickly review the hardware differences (no price changes!). On a per-node basis, this is a comparison between the old (X3-2) and new (X4-2) hardware:

                  Big Data Appliance X3-2                      Big Data Appliance X4-2
      CPU         2 x 8-Core Intel® Xeon® E5-2660 (2.2 GHz)    2 x 8-Core Intel® Xeon® E5-2650 V2 (2.6 GHz)
      Memory      64GB                                         64GB
      Disk        12 x 3TB High Capacity SAS                   12 x 4TB High Capacity SAS
      InfiniBand  40Gb/sec                                     40Gb/sec
      Ethernet    10Gb/sec                                     10Gb/sec

    For all the details on the environmentals and other useful information, review the data sheet for Big Data Appliance X4-2. The larger disks give BDA X4-2 33% more capacity than the previous generation while adding faster CPUs. Memory for BDA is expandable to 512 GB per node and can be expanded on a per-node basis, for example for NameNodes, HBase region servers, or NoSQL Database nodes.

    Software Details
    More details on the software and the current versions (note that BDA follows a three-monthly update cycle for Cloudera and other software):

                        Big Data Appliance 2.2 Software Stack    Big Data Appliance 2.3 Software Stack
      Linux             Oracle Linux 5.8 with UEK 1              Oracle Linux 6.4 with UEK 2
      JDK               JDK 6                                    JDK 7
      Cloudera CDH      CDH 4.3                                  CDH 4.4
      Cloudera Manager  CM 4.6                                   CM 4.7

    And as we said at the beginning, it is important to understand that all other Cloudera components are now included in the price of Oracle Big Data Appliance. They are fully supported by Oracle and available to all BDA customers.

    For more information:
      - Big Data Appliance Data Sheet
      - Big Data Connectors Data Sheet
      - Oracle NoSQL Database Data Sheet (CE | EE)
      - Oracle Advanced Analytics Data Sheet

    Read the article

  • ATI Catalyst driver 12.8 is not using hardware acceleration on Precise

    - by Jack Wright
    I've been using Ubuntu and ATI Catalyst for years. On my clean install of Ubuntu 12.04 I've noticed that Catalyst 12.6, and then 12.8, are not actually using my HD5750 GPU for hardware acceleration - high CPU usage, zero GPU load. Everything installed correctly with no hassles; fglrxinfo and vainfo are correct as per this HowTo for Precise. I have an Ubuntu 10.04 installation with Catalyst 12.6 on the same hardware which does use the GPU - low CPU usage, high GPU load when transcoding video files or playing video content. The VA-API drivers are not installed on the 10.04 build; they are not mentioned in this HowTo for Lucid. fgl_glxgears frame rates on Precise are a fifth of the rates on Lucid.

    LUCID
      jw@Kworld:~$ fgl_glxgears
      Using GLX_SGIX_pbuffer
      16867 frames in 5.0 seconds = 3373.400 FPS
      12523 frames in 5.0 seconds = 2504.600 FPS
      13763 frames in 5.0 seconds = 2752.600 FPS

    PRECISE
      jw@NewWorld12:~$ fgl_glxgears
      Using GLX_SGIX_pbuffer
      12905 frames in 5.0 seconds = 2581.000 FPS
      3230 frames in 5.0 seconds = 646.000 FPS
      517 frames in 5.0 seconds = 103.400 FPS
      518 frames in 5.0 seconds = 103.600 FPS
      6489 frames in 5.0 seconds = 1297.800 FPS

    This is glxgears running in fullscreen. In Lucid (10.04) I can't see the gears, they are spinning so fast, but in Precise (12.04) they are really sluggish. Has anyone else noticed a problem like this? Cheers, Jack.

    Read the article

  • links for 2011-02-14

    - by Bob Rhubart
    Glenn Fawcett: Solaris Eye for the Linux Guy, or how I learned to stop worrying about Linux and Love Solaris (Part 1)
    Glenn says: "This entry goes out to my Oracle techie friends that have been in the Linux camp for sometime now and are suddenly finding themselves needing to know more about Solaris… hmmmm… I wonder if this has anything to do with Solaris now being an available option with Exadata?"
    (tags: linux solaris oracle)

    Enterprise Software Development with Java: High Performance JPA with GlassFish and Coherence - Part 2
    Oracle ACE Director Markus Eisele describes "the steps you have to take to configure a JPA backed Cache with Coherence and how you could use it from within GlassFish as a high performance data store."
    (tags: oracle otn oracleace java glassfish coherence)

    TOGAF a Registered Trademark and Surpasses 15k Certifications
    EA Blogs' Mike Walker relays news on the TOGAF standard.
    (tags: entarch togaf)

    Weblogic or wait? | Capping IT Off | Capgemini
    "So when would you move over to the new Oracle Technology?" asks Arjan Kramer. "Well, as always there can be several reasons..."
    (tags: oracle capgemini weblogic)

    Random Monday Thoughts (Art of SOA Governance)
    "Governance is what insurance is to new cars, be it to SOA, IT transformations and software development. Governance is a insurance policy against risk of failure." - Terry Goldman
    (tags: oracle otn soa soagovernance)

    Read the article

  • Architectural Composition Languages

    - by C. Lawrence Wenham
    I recently stumbled upon this paper (PDF) talking about ACLs, or Architectural Composition Languages. They're a fusion of two earlier lines of research: architectural definition languages (such as UML) and object composition languages (such as XAML, WWF, or scripting languages). The goal of an ACL is to have a high-level description of a program's architecture which can also be compiled into a runnable program. The high-level description assists automated analysis, while the 'executability' means changes can be tested immediately. You would still author the components of the program in a conventional programming language (C, Java, Python, etc.), but they would be composed into a complete program by the ACL. One of the expected benefits is that a program can be ported to a different platform by swapping in "similar but different" components.

    I've been hankering for something like this for a long time (see this answer I gave on a StackOverflow question a few years ago). The paper mentions that the researchers were working on a language called ACL/1 that initially targeted Java but would be ported to support .NET as well. However, I can't find any further mention of ACL/1 anywhere. Has there been any more work done on this? Are there any other implementations of the ACL concept that are available for use or experimentation?

    Read the article

  • Oracle JDK 7u10 released with new security features

    - by Henrik Stahl
    A few days ago, we released JRE and JDK 7 update 10. This release adds support for the following new platforms:
      - Windows 8 on x86-64. Note that Modern UI (aka Metro) mode is not supported.
      - Internet Explorer 10 on Windows 8.
      - Mac OS X 10.8 (Mountain Lion).

    This release also introduces new features that provide enhanced security for Java applet and Web Start applications, specifically:
      - The Java runtime tracks whether it is updated to the latest security baseline. If you try to execute an unsigned applet with an outdated version of Java, a warning dialog will prompt you to update before running the applet.
      - The Java runtime includes a hardcoded "best before" date. It is assumed that a new version will be released before this date. If the client has not been able to check for an update prior to this date, the Java runtime will assume that it is insecure and start warning the user prior to executing any applets.
      - The Java control panel now includes an option to set the desired security level on a low-medium-high-very high scale, as well as an option to disable Java applets and Web Start entirely. This level controls things such as whether the Java runtime is allowed to execute unsigned code, and if so, what type of warning will be displayed to the user. More details on the security settings can be found in the documentation. See below for a sample screenshot.

    The new update of the JRE and the JDK is available via OTN. To learn more about the release, please visit the release notes.

    Read the article

  • How do I prevent other dynamic bodies from affecting the player's velocity with Box2D?

    - by Milo
    I'm working on my player object for my game.

      PhysicsBodyDef def;
      def.fixedRotation = true;
      def.density = 1.0f;
      def.position = Vec2(200.0f, 200.0f);
      def.isDynamic = true;
      def.size = Vec2(50.0f, 200.0f);
      m_player.init(def, &m_physicsEngine.getWorld());

    This is how he moves:

      b2Vec2 vel = getBody()->GetLinearVelocity();
      float desiredVel = 0;
      if (m_keys[ALLEGRO_KEY_A] || m_keys[ALLEGRO_KEY_LEFT])
      {
          desiredVel = -5;
      }
      else if (m_keys[ALLEGRO_KEY_D] || m_keys[ALLEGRO_KEY_RIGHT])
      {
          desiredVel = 5;
      }
      else
      {
          desiredVel = 0;
      }
      float velChange = desiredVel - vel.x;
      float impulse = getBody()->GetMass() * velChange; // disregard time factor
      getBody()->ApplyLinearImpulse(b2Vec2(impulse, 0), getBody()->GetWorldCenter(), true);

    This creates a few problems. First, to move the player at a constant speed he must be given a high velocity. The problem with this is that if he just comes in contact with a small box, he makes it move a lot. Now, I can fix this by lowering his density, but then comes my main issue: I need other objects to be able to run into him, but when they do, he should be like a static wall and not move. I'm not sure how to do that without high density. I cannot use collision groups, since I still need him to be solid toward other dynamic things. How can this be done? Essentially, how do I prevent other dynamic bodies from affecting the player's velocity?
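    One hedged sketch of a possible direction, not a definitive answer: keep the per-step velocity correction above, but clamp the corrective impulse. Because the correction is re-applied every step, a push from another dynamic body along x gets cancelled within a few frames, while the clamp bounds how hard the player can shove a light box, so the density no longer has to be extreme. The names steerPlayer and maxImpulse are mine, not from the question.

      #include <Box2D/Box2D.h>

      // Sketch: bounded per-step velocity steering for the player body.
      // Assumes it is called once per simulation step.
      void steerPlayer(b2Body* body, float desiredVel, float maxImpulse)
      {
          b2Vec2 vel = body->GetLinearVelocity();
          float velChange = desiredVel - vel.x;
          float impulse = body->GetMass() * velChange;

          // Bound the correction so contacts resolve over a few frames
          // instead of transferring one violent impulse into a light box.
          impulse = b2Clamp(impulse, -maxImpulse, maxImpulse);

          body->ApplyLinearImpulse(b2Vec2(impulse, 0.0f),
                                   body->GetWorldCenter(), true);
      }

    Another option worth naming plainly: special-case the player inside a b2ContactListener, which gives finer control but trades the simplicity of the scheme above for per-contact bookkeeping.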

    Read the article

  • Distributing an Android game with plugins via the market

    - by Peter Serwylo
    I'm new to Android development, and was wondering how the following could be achieved within the confines of the Android Market as a distribution channel:
      - One main application, which handles the main menu, networking, high scores, etc.
      - Several games which can be launched from the main menu, and which all work within the same ecosystem.

    The main application is not just a pseudo-launcher for other games; these different games will share high scores and other achievements/preferences. In a traditional package management system such as apt, pacman or yum, this could be handled quite happily through dependencies. This does not appear to be possible via the Android Market. The closest I've seen is when apps scan to check if a required app is installed and, if not, launch the Market and ask the user to download it. This sounds like a very messy solution. It also begs the question: would users download the game (plugin) first, which then downloads the main shell application? Or would they download the main shell application, and when they navigate to a menu item which says "Play game", it scans for any installed games and, if none exist, redirects to the Market? Also, I'm not even sure if it is possible to dig up the package of another application on the device and start invoking classes from within it (e.g. when you want to launch the game (plugin)).

    A final option is just to have a third component, a .jar that each game includes, which effectively contains the entire shell application. Then each game would appear to have the same menu, but it would become a nightmare as soon as you want to update the menu component and have to re-release each game. It would be especially bad if other people released games (plugins) based on the same framework and didn't update them.

    Is there any other option which I haven't thought of? Has anyone else solved this or seen a solution in any apps they've installed (doesn't have to be games)? Cheers.

    Read the article

  • Good 2D Platformer Physics

    - by Joe Wreschnig
    I have a basic character controller set up for a 2D platformer with Box2D, and I'm starting to tweak it to try to make it feel good. Physics engines have a lot of knobs to tweak, and it's not clear to me, writing with a physics engine for the first time, which ones I should use. Should jumping apply a force for several ticks? An impulse? Directly set velocity? How do I stop the avatar from sticking to walls without taking away all its friction (or do I take away all the friction, but only in the air)? Should I model the character as a capsule? A box with rounded corners? A box with two wheels? Just one big wheel? I feel like someone must have done this before!

    There seem to be very few resources available on the web that are not "baby's first physics", which all cut off where I'm hoping someone has already solved the issues. Most examples of physics engines for platformers have floaty-feeling controls, or in-air jumps, or easily exploitable behavior when temporary penetration is too high, etc. Some examples of what I mean:
      - A short tap of jump jumps a short distance; a long tap jumps higher.
      - Short skidding when stopping or reversing directions at high velocity.
      - Standing stably on inclines (but maybe sliding down them when ducking).
      - Analog speed when using an analog controller.
      - All the other things that separate good platformers from bad platformers.
      - Dare I suggest, stable moving platforms?

    I'm not really looking for "hey, do this." Obviously, the right thing to do depends on what I want in the game. But I'm hoping someone somewhere has gone through the possibilities and said "well, technique A does feature X well, technique B does Y well, but that doesn't work with C", or has some worked examples beyond "if (key == space) character.impulse(0, 1)".
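    For the first example on that list, here is one commonly used technique, sketched with illustrative numbers rather than tuned values (isOnGround is a hypothetical helper, and +y is assumed to point up): apply the full jump impulse on press, then damp the upward velocity if the button is released while still rising.

      #include <Box2D/Box2D.h>

      bool isOnGround(b2Body* body); // hypothetical ground-contact query

      // Full impulse on press; only fires when standing on something.
      void onJumpPressed(b2Body* body)
      {
          if (isOnGround(body))
              body->ApplyLinearImpulse(b2Vec2(0.0f, body->GetMass() * 6.0f), // ~6 m/s upward, illustrative
                                       body->GetWorldCenter(), true);
      }

      // Early release cuts the ascent short, giving a shorter hop.
      void onJumpReleased(b2Body* body)
      {
          b2Vec2 v = body->GetLinearVelocity();
          if (v.y > 0.0f)
              body->SetLinearVelocity(b2Vec2(v.x, v.y * 0.5f));
      }

    The same shape extends to skidding: detect a sign change between desired and current x-velocity and temporarily shrink the corrective impulse so the reversal takes a few frames.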

    Read the article

  • Higher resolutions unavailable with 2.6.38-8 kernel

    - by time-wastrel
    After upgrading to Natty and the 2.6.38-8 kernel, I could no longer obtain the 1920x1080 resolution that was available in Maverick with 2.6.35-22. In fact, the boot occasionally hung. However, after selecting the remaining 2.6.35-22 kernel, the high resolution was available. I then made the mistake of completely reinstalling, but could never get the higher resolutions with 2.6.38-8, no matter what I did (e.g. trying the nvidia proprietary driver, creating an xorg.conf). Even from the command line, using

      xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
      xrandr --addmode DVI-I-1 1920x1080_60.00
      xrandr --output DVI-I-1 --mode 1920x1080_60.00

    all I would get was a blank screen and the monitor reporting no input. (Low resolutions displayed fine with xrandr.) I could actually add the 1920x1080 mode to the already existing probe-reported resolutions in the pool of available resolutions, but choosing it would give the blank screen. In the end I installed the 2.6.35-22 kernel again, and the high resolution is back. For a while there, staying up all night and trying many things, I thought that a new video board or monitor might be needed, but deep down I knew that they were both OK.

    My question is: is this some bug involving the latest kernel that will go away, or if it persists in future kernels, is there a way to make sure that I can keep my native resolution?

    Read the article

  • Which is the best image hosting site for hosting images for the website? [closed]

    - by rahul dagli
    Possible Duplicate: How to find web hosting that meets my requirements?

    I currently have a website and blog, and use a limited web hosting plan. When I upload images to my hosting server, they consume a lot of bandwidth and space. So I was thinking of hosting the images on some other image hosting site and direct-linking them to my site. I found a few sites like ImageShack, Photobucket, TinyPic and Imgur; however, all have certain restrictions. The features I am looking for are as follows:
       1. At least 10 GB of space
       2. At least 500 GB of bandwidth (because I have very high traffic)
       3. Very high speed, even under heavy load like 1000 visitors accessing every hour
       4. Ultra-reliable servers (99.9% uptime)
       5. Privacy control
       6. Must never delete an image because it is inactive
       7. Ability to create and manage albums
       8. A company that will stay in business for at least the next 10 years
       9. Free of cost
      10. Hotlinking/direct-linking of images

    Read the article

  • Introducing Oracle Multitenant

    - by OracleMultitenant
    The First Database Designed for the Cloud

    Today Oracle announced the general availability (GA) of Oracle Database 12c, the first database designed for the Cloud. Oracle Multitenant, new with Oracle Database 12c, is a key component of this – a new architecture for consolidating databases and simplifying operations in the Cloud. With this, the inaugural post in the Multitenant blog, my goal is to start the conversation about Oracle Multitenant. We are very proud of this new architecture, which we view as a major advance for Oracle. Customers, partners and analysts who have had previews are very excited about its capabilities and its flexibility. This high-level review of Oracle Multitenant will touch on our design considerations and how we re-architected our database for the cloud. I'll briefly describe our new multitenant architecture and explain its key benefits. Finally I'll mention some of the major use cases we see for Oracle Multitenant.

    Industry Trends

    We always start by talking to our customers about the pressures and challenges they're facing and what trends they're seeing in the industry. Some things don't change. They face the same pressures and the same requirements as ever: pressure to do more with less; be faster, leaner, cheaper, and deliver services 24/7. Big companies have achieved scale. Now they want to realize economies of scale. As ever, DBAs are faced with the challenges of patching and upgrading large numbers of databases, and provisioning new ones. Requirements are familiar: performance, scalability, reliability and high availability are non-negotiable. They need ever more security in this threatening climate. There's no time to stop and retool with new applications. What's new are the trends: the techniques to use to respond to these pressures within the constraints of the requirements. With the advent of cloud computing and the availability of massively powerful servers – even engineered systems such as Exadata – our customers want to consolidate many applications into fewer, larger servers. There's a move to standardized services – even self-service.

    Consolidation

    Consolidation is not new; companies have tried various approaches to consolidation of databases in the cloud. One approach is to partition a powerful server between several virtual machines, one per application. A downside of this is that you have the resource and management overheads of OS and RDBMS per VM – that is, per application. Another is that you have replaced physical sprawl with virtual sprawl, and virtual sprawl is still expensive to manage. In the dedicated database model, we have a single physical server supporting multiple databases, one per application. So there's a shared OS overhead, but RDBMS process and memory overheads are replicated per application. Let's think about our traditional Oracle Database architecture. Every time we create a database, be it a production database, a development or a test database, what do we do? We create a set of files, we allocate a bunch of memory for managing the data, and we kick off a series of background processes. This is replicated for every one of the databases that we create. As more and more databases are fired up, these replicated overheads quickly consume the available server resources, and this limits the number of applications we can run on any given server. In Oracle Database 11g and earlier, the highest degree of consolidation could be achieved by what we call schema consolidation. In this model we have one big server with one big database. Individual applications are installed in separate schemas or table-owners. Database overheads are shared between all applications, which affords maximum consolidation. The shortcomings are that application changes are often required, and there is no tenant isolation: one bad apple can spoil the whole batch.

    New Architecture & Benefits

    In Oracle Database 12c, we have a new multitenant architecture, featuring pluggable databases. This delivers all the resource utilization advantages of schema consolidation with none of the downsides. There are two parts to the term "pluggable database": "pluggable", which is new, and "database", which is familiar. Before we get to the exciting new stuff, let's discuss what hasn't changed. A pluggable database is a fully functional Oracle database. It's not watered down in any way. From the perspective of an application or an end user it hasn't changed at all. This is very important because it means that no application changes are required to adopt this new architecture. There are many thousands of applications built on Oracle databases and they are all ready to run on Oracle Multitenant. So we have these self-contained pluggable databases (PDBs), and as their name suggests, they are plugged into a multitenant container database (CDB). The CDB behaves as a single database from the operations point of view. Very much as we had with the schema consolidation model, we only have a single set of Oracle background processes and a single, shared database memory requirement. This gives us very high consolidation density, which affords maximum reduction in capital expenses (CapEx). By performing management operations at the CDB level – "managing many as one" – we can achieve great reductions in operating expenses (OpEx) as well, but we retain granular control where appropriate. Furthermore, the "pluggability" capability gives us portability, and this adds a tremendous amount of agility. We can simply unplug a PDB from one CDB and plug it into another CDB, for example to move it from one SLA tier to another. I'll explore all these new capabilities in much more detail in a future posting.

    Use Cases

    We can identify a number of use cases for Oracle Multitenant. Here are a few of the major ones:
      - Development / testing, where individual engineers need rapid provisioning and recycling of private copies of a few "master test databases"
      - Consolidation of disparate applications using fewer, more powerful servers
      - Software as a Service: deploying separate copies of identical applications to individual tenants
      - Database as a Service: typically self-service provisioning of databases on the private cloud
      - Application distribution from ISV / installation by customer: eliminating many typical installation steps (create schema, import seed data, import application code PL/SQL…) - just plug in a PDB!
      - High-volume data distribution: literally via disk drives in envelopes distributed by truck! - distribution of things like GIS or MDM master databases
      - …various others!

    Benefits

    Previous approaches to consolidation have involved a trade-off between reductions in capital expenses (CapEx) and operating expenses (OpEx), and they've usually come at the expense of agility. With Oracle Multitenant you can have your cake and eat it:
      - Minimize CapEx: more applications per server
      - Minimize OpEx: manage many as one; standardized procedures and services; rapid provisioning
      - Maximize agility: cloning for development and testing; portability through pluggability; scalability with RAC
      - Ease of adoption: applications run unchanged. It's a pure deployment choice; neither the database backend nor the application needs to be changed.

    In future postings I'll explore various aspects in more detail. However, if you feel compelled to devour everything you can about Oracle Multitenant this very minute, have no fear. Visit the Multitenant page on OTN and explore the various resources we have available there. Among these, Oracle Distinguished Product Manager Bryn Llewellyn has written an excellent, thorough, and exhaustively detailed White Paper about Oracle Multitenant, which is available here.

    Follow me: I tweet @OraclePDB #OracleMultitenant

    Read the article

  • Functional testing in verification

    - by user970696
    Yesterday my question "How come verification does not include actual testing?" created a lot of controversy, yet did not reveal the answer to a related and very important question: does black-box functional testing done by testers belong to verification or validation?

    ISO 12207:2008 mentions testing explicitly only as a validation activity; however, it speaks about validating the requirements of the intended use. To me that is more high-level, like UAT test cases written by business users. The same ISO standard does not mention any specific verification (7.2.4.3.2) except for requirements verification, design verification, document verification, and code & integration verification. The last two can probably be thought of as unit and integration testing. But where, then, is the regular testing done by testers at the end of the phase? The book I mentioned in the original question says that verification is done by static techniques, yet on its V-model graph it describes system testing against a high-level description as verification, mentioning that it includes all kinds of testing, like functional, load, etc.

    In the IEEE standard for V&V, you can read this: "Even though the tests and evaluations are not part of the V&V processes, the techniques described in this standard may be useful in performing them." So that is different from the ISO, where validation mentions testing as the activity. Not to mention a lot of contradictory information on the net. I would really appreciate a reference to e.g. a standard in the answer, or an explanation of what I missed in the ISO. As it stands, I am unable to tell where the testers' work belongs.

    Read the article

  • Requesting quality analysis test cases ahead of implementation/change

    - by arin
    Recently I have been assigned to work on a major requirement that falls between a change request and an improvement. The previous implementation was done (badly) by a senior developer who left the company, and did so without leaving a trace of documentation. Here were my initial steps in approaching this problem:
      - Considering that the release date was fast approaching and there was no time for slip-ups, I initially asked if the requirement was a "must have". Since the requirement helped the product significantly in terms of usability, the answer was "If possible, yes".
      - Knowing the widespread use and effects of this requirement, in case it could not be finished prior to release, I asked if it would be a viable option to scrap the current state and revert back to the state prior to the ex-senior's implementation. The answer was "Most likely: no".
      - Understanding that the requirement was coming from higher management, and due to its complexity, I asked for all usability test cases to be written prior to the implementation (by QA) and given to me, to aid my comprehension of the task. This was a big no-no for the folks in management, as they failed to understand this approach.

    Knowing that I had to insist on my request and on the responsibility for this requirement, I insisted, and have fallen out of favor with some of the folks, leaving me in a state of bafflement. Basically, I was trying a test-driven approach to a high-risk, high-complexity, must-have requirement, trying to be safe rather than sorry. Is this approach wrong, or have I approached it incorrectly?

    P.S.: The change request/improvement was cancelled and the implementation was reverted to the prior state due to the complexity of the problem and the lack of time. This only happened after a two-hour-long meeting with other seniors in order to convince the aforementioned folks.

    Read the article

  • Becoming a Certified Information Professional

    - by Lance Shaw
    Yesterday, we participated with AIIM in a webinar about the Certified Information Professional (CIP) program that they are now offering. The interest level in the program is very high, as evidenced by the high turnout at the event.

    You might be asking yourself: why does the Oracle WebCenter team care about an AIIM certification program? Well, we sponsored this program because we consistently find that the more educated our customers and prospects are, the more value they are going to get out of the technology we provide. As an ECM vendor, we provide plenty of WebCenter product training and certifications to help you make the most of WebCenter technology. While these are essential and valuable, technologists who also have an operational command of the business, and of the various impacts that the flow of information can have, are even more valuable to an organization. Thinking about the management of content and information and its effect on business processes can have wide-ranging benefits, not only for your company but for your personal bottom line. And let's be honest: a customer who is looking holistically at how content is managed is going to see more opportunities to leverage that content, and in many cases this will motivate the purchase of additional product licenses.

    Now, if you are regretting the fact that you missed the webinar yesterday, never fear! It is now available for playback, and you can view it at your convenience by visiting the AIIM website. We hope you find it informative and that you can personally profit from being able to showcase your certification as an Information Professional. Additionally, we hope it will help you identify additional opportunities to leverage Oracle WebCenter in order to further reduce your operational costs and drive your business forward.

    Read the article

  • How to cover the widest range of computers when publishing?

    - by DevilWithin
    When you plan a game, or even when you have already made a game and it's time to publish, you wonder how much of your audience is covered by the game's technology demands. I'm directing this essentially at casual games, as I constantly see people with old laptops who are unable to replace them: laptops with integrated cards whose OpenGL version doesn't even support textures larger than 1024x1024. These people may be avid gamers as well, and a reasonable share of the audience to consider giving the chance to play casual games, since they cannot play any blockbusters.

    A very noticeable example of what I mean is Angry Birds. Its gameplay is merely casual (I think nobody disagrees here), and still it uses such high-resolution textures that at least OpenGL 2.0 or thereabouts is needed, which shuts out a lot of people. So, the actual question is: what is a good tradeoff for this issue? Would it be better to just sacrifice the texture resolution for everyone, but support more hardware? Would it be better to keep the high quality and just slice the textures into smaller ones, sacrificing performance a little? What else? Any ideas about this topic are welcome for discussion.
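    One concrete way to act on this at runtime, as a hedged sketch (loadHighResAssets and loadLowResAssets are hypothetical placeholders): query the driver's actual texture limit after creating the GL context and pick an asset set accordingly, rather than shipping one texture size for everyone.

      #include <GL/gl.h>
      #include <cstdio>

      void loadHighResAssets(); // hypothetical: full-quality art
      void loadLowResAssets();  // hypothetical: downscaled or pre-sliced textures

      // Must be called with a current OpenGL context.
      void chooseAssetSet()
      {
          GLint maxTex = 0;
          glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTex); // e.g. 1024 on old integrated chips
          std::printf("GL_MAX_TEXTURE_SIZE = %d\n", maxTex);

          if (maxTex >= 2048)
              loadHighResAssets();
          else
              loadLowResAssets();
      }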

    Read the article

  • What is the maximum time for a user to return to Google for the visit to be flagged up as a bounce in GA?

    - by Anonymous
    I know that Google measures bounce rates by how fast a user returns to the results page after clicking through to a website. Roughly what is the maximum duration of the visit for the user's return to be considered a bounce? i.e. under 5 seconds? Under 30 seconds?

    I'm mainly interested because it appears a lot of users clicking through my PPC adverts (AdWords) are bouncing, despite my ads having a high Quality Score and the pages being entirely related to the advert copy and tied as best I can to what I think users may be searching for with the key phrases I've selected, so the high bounce rate (100% on some keywords) seems a bit strange. If a bounce isn't determined by time, but simply by whether a user returns to the SERP after visiting my site, after any amount of time, that would make more sense; but the average duration of visit for my keywords with a 100% bounce rate in GA is 00:00:00, which suggests users immediately returned to the SERPs, which again is odd. Is my GA data being skewed by https or anything like that? Scratching my head here.

    Read the article

  • System speakers not recognized

    - by Kyle Maxwell
    Since upgrading to Xubuntu 13.10, sound has not functioned properly (e.g. screeching when playing Skype notifications). Now, however, it does not function at all. pavucontrol only shows "Dummy Output" and does not recognize the built-in speakers on my Dell Precision M4600. Possibly related: the sound indicator applet does not come up when I click on it, only showing a small white bar underneath it. I have purged and reinstalled pulseaudio. lspci -v shows:

      00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 04)
              Subsystem: Dell Precision M4600
              Flags: bus master, fast devsel, latency 0, IRQ 56
              Memory at f2560000 (64-bit, non-prefetchable) [size=16K]
              Capabilities: <access denied>
              Kernel driver in use: snd_hda_intel

      01:00.1 Audio device: NVIDIA Corporation GF106 High Definition Audio Controller (rev a1)
              Subsystem: Dell Device 14a3
              Flags: bus master, fast devsel, latency 0, IRQ 17
              Memory at f0080000 (32-bit, non-prefetchable) [size=16K]
              Capabilities: <access denied>
              Kernel driver in use: snd_hda_intel

    The "Capabilities: <access denied>" line makes me wonder if there's a permissions issue, as the Log Out applet now shows "Restart" and "Shutdown" grayed out. groups shows me in:

      kmaxwell adm dialout cdrom sudo dip plugdev fuse lpadmin netdev sambashare vboxusers

    Read the article

  • Lifecycle of an ASP.NET MVC 5 Application

    Here you can download a PDF document that charts the lifecycle of every ASP.NET MVC 5 application, from receiving the HTTP request to sending the HTTP response back to the client. It is designed both as an educational tool for those who are new to ASP.NET MVC and as a reference for those who need to drill into specific aspects of the application. The PDF document has the following features:
      - Relevant HttpApplication stages to help you understand where MVC integrates into the ASP.NET application lifecycle.
      - A high-level view of the MVC application lifecycle, where you can understand the major stages that every MVC application passes through in the request processing pipeline.
      - A detail view that drills down into the details of the request processing pipeline. You can compare the high-level view and the detail view to see how the lifecycle's details are collected into the various stages.
      - Placement and purpose of all overridable methods on the Controller object in the request processing pipeline. You may or may not need to override any one method, but it is important to understand their role in the application lifecycle so that you can write code at the appropriate lifecycle stage for the effect you intend.
      - Blown-up diagrams showing how each of the filter types (authentication, authorization, action, and result) is invoked.
      - A link to a useful article or blog from each point of interest in the detail view.

    Read the article

  • How do I find out which process is eating up my bandwidth?

    - by Bruce Connor
    I think I'm the victim of a bug here. Sometimes while I'm working (I still don't know why), my network traffic goes up to 200 KB/s and stays that way, even though I'm not doing anything internet-related. This sometimes happens to me with CPU usage too; when it does, I just run the top command to find out which process is responsible and then kill it. Problem is, I have no way of knowing which process is responsible for my high network usage. Both the resource monitor and the top command only tell me my total network usage; neither of them gives me process-specific network info. Is there another command I can use to find out which process is getting out of hand? I've already tried killing all the obvious ones (firefox, update-manager, pidgin, etc.) with no luck. So far, restarting the machine is the only way I have found of getting rid of the issue.

    EDIT (just to be clear): I've found questions here about monitoring total bandwidth usage, but, as I mentioned, that's not what I need.

    UPDATE: The command iftop gives results that disagree entirely with the information reported by System Monitor. While the latter claims there's high network traffic, the former claims there's barely 1 KB/s. Thanks.

    Read the article

  • Another good free utility - Campwood Software Source Monitor

    - by TATWORTH
    The Campwood SourceMonitor at http://www.campwoodsw.com/sourcemonitor.html says in its introduction: "The freeware program SourceMonitor lets you see inside your software source code to find out how much code you have and to identify the relative complexity of your modules. For example, you can use SourceMonitor to identify the code that is most likely to contain defects and thus warrants formal review. SourceMonitor, written in C++, runs through your code at high speed, typically at least 10,000 lines of code per second." It is indeed very fast, and it is useful as it:
      - Collects metrics in a fast, single pass through source files.
      - Measures metrics for source code written in C++, C, C#, VB.NET, Java, Delphi, Visual Basic (VB6) or HTML.
      - Includes method- and function-level metrics for C++, C, C#, VB.NET, Java, and Delphi.
      - Offers a Modified Complexity metric option.
      - Saves metrics in checkpoints for comparison during software development projects.
      - Displays and prints metrics in tables and charts, including Kiviat diagrams.
      - Operates within a standard Windows GUI or inside your scripts using XML command files.
      - Exports metrics to XML or CSV (comma-separated-value) files for further processing with other tools.

    Read the article

  • Why is Desktop Unity using the global application menu?

    - by Kazade
    It was announced in another question that the desktop version of Unity will keep the global menu by default. Here are the facts:
      - The global menu was introduced into UNE to save vertical screen space, because at netbook resolutions vertical space is limited.
      - On a modern desktop with a high resolution, there is ample vertical space, making this unnecessary.
      - On the announcement of UNE global menus, Mark Shuttleworth himself said the following: "There are outstanding questions about the usability of a panel-hosted menu on much larger screens, where the window and the menu could be very far apart."

    The benefits of a global menu don't seem to carry across to a high-resolution desktop; instead it seems to bring drawbacks (increased mouse travel, a large distance between the menu and its associated window). The other worrying factor is that applications seem to be moving away from having a menu bar, and instead of innovating on this and defining new guidelines for moving away from the menu, we are giving it prime place right at the top of the desktop. If applications continue moving away from the menu, we will have an inconsistent experience concerning where to locate application-related options/tools, depending on which app you are using (e.g. Chrome). Finally, the current global menu bar implementation doesn't work for all apps, and doesn't even work for all apps in the default install. This means that the default desktop implementation will be inconsistent.

    So, there are a bunch of reasons why moving to a global menu is a bad idea, and we need some pretty convincing arguments for why it is a good idea. What are the reasons for the global menu implementation in the desktop version of Unity?

    Read the article

  • Exalogic Elastic Cloud Software (EECS) version 2.0.1 available

    - by JuergenKress
    We are pleased to announce that as of today (May 14, 2012) the Exalogic Elastic Cloud Software (EECS) version 2.0.1 has been made generally available. This release is the culmination of over two and a half years of engineering effort from an extended team spanning 18 product development organizations on three continents, and is the most powerful, sophisticated and comprehensive Exalogic Elastic Cloud Software release to date. With this new EECS release, Exalogic customers now have an ideal platform not only for high-performance and mission-critical applications, but for standardization and consolidation of virtually all Oracle Fusion Middleware, Fusion Applications, Application Unlimited and Oracle GBU Applications. With the release of EECS 2.0.1, Exalogic is now capable of hosting multiple concurrent tenants, business applications and middleware deployments with fine-grained resource management, enterprise-grade security, unmatched manageability and extreme performance in a fully virtualized environment.

    The Exalogic Elastic Cloud Software 2.0.1 release brings important new technologies to the Exalogic platform:
      - Support for extremely high-performance x86 server virtualization via a highly optimized version of Oracle VM 3.x.
      - A rich, fully integrated Infrastructure-as-a-Service management system called Exalogic Control, which provides graphical, command-line and Java interfaces that allow Cloud Users, or external systems, to create and manage users, virtual servers, virtual storage and virtual network resources.

    Webcast Series: Rethink Your Business Application Deployment Strategy. Redefining the CRM and E-Commerce Experience with Oracle Exalogic, 7-Jun@10am PT & On-Demand: 'The Road to a Cloud-Enabled, Infinitely Elastic Application Infrastructure' (featuring Gartner Analysts).

    WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community. Please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: ExaLogic Elastic Cloud, ExaLogic, WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress, ExaLogic 2.0.1

    Read the article

  • How to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, and so I need to know how to store bitmaps in memory (24bpp/32bpp, compressed/raw, etc.). I'm not working with 3D graphics or DirectX/OpenGL rendering, so I don't need to use graphics-card-compatible bitmap formats.

    My questions:
      1. What is the "usual" or "normal" way to store bitmaps in memory (in C++ engines/projects)?
      2. How should bitmaps be stored for high-performance algorithms, so that read/write times are the fastest? (Fixed array? With or without padding? 24 bpp or 32 bpp?)
      3. How should bitmaps be stored for applications handling a lot of bitmap data, to minimize memory usage? (JPEG? Or a faster [de]compression algorithm?)

    Some possible methods:
      - Use a fixed, packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer arithmetic, with all pixels allocated in one contiguous memory chunk (could be 1-10 MB).
      - Use a form of "sparse" data storage, so each line of the bitmap is allocated separately, reusing more memory and requiring smaller contiguous memory segments.
      - Store bitmaps in compressed form (PNG, JPG, GIF, etc.) and unpack them only when needed, reducing the amount of memory used; delete the unpacked data if it is not used for 10 seconds.
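    As a point of reference for the first method, here is a minimal sketch of a packed 32-bpp bitmap in one contiguous allocation (the struct name is mine, not a standard): 32 bpp costs an extra byte per pixel over 24 bpp, but keeps every pixel word-aligned, which usually makes random access cheaper.

      #include <cstddef>
      #include <cstdint>
      #include <vector>

      // Packed 32-bpp bitmap: one contiguous allocation, row-major layout,
      // direct index arithmetic for pixel access.
      struct Bitmap32
      {
          int width;
          int height;
          std::vector<std::uint32_t> pixels; // e.g. 0xAARRGGBB, one word per pixel

          Bitmap32(int w, int h)
              : width(w), height(h),
                pixels(static_cast<std::size_t>(w) * h) {}

          std::uint32_t& at(int x, int y)
          {
              return pixels[static_cast<std::size_t>(y) * width + x];
          }
      };

    For the other two methods the trade-off flips: per-row allocation eases pressure for large contiguous blocks, and on-demand decompression trades CPU time for memory, exactly as described above.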

    Read the article

  • Sound card not detected in 13.04

    - by Ganessh Kumar R P
    I have a problem with my sound card: I don't have a volume up or down option anywhere, and in Settings -> Sound no card is detected. But when I run the command sudo aplay -l, I get the following output:

      **** List of PLAYBACK Hardware Devices ****
      Failed to create secure directory (/home/ganessh/.config/pulse): Permission denied
      card 0: MID [HDA Intel MID], device 0: STAC92xx Analog [STAC92xx Analog]
        Subdevices: 0/1
        Subdevice #0: subdevice #0
      card 1: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0]
        Subdevices: 1/1
        Subdevice #0: subdevice #0
      card 1: NVidia [HDA NVidia], device 7: HDMI 0 [HDMI 0]
        Subdevices: 1/1
        Subdevice #0: subdevice #0
      card 1: NVidia [HDA NVidia], device 8: HDMI 0 [HDMI 0]
        Subdevices: 1/1
        Subdevice #0: subdevice #0
      card 1: NVidia [HDA NVidia], device 9: HDMI 0 [HDMI 0]
        Subdevices: 1/1
        Subdevice #0: subdevice #0

    And the command lspci -v | grep -A7 -i "audio" outputs:

      00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 06)
              Subsystem: Dell Device 02a2
              Flags: bus master, fast devsel, latency 0, IRQ 48
              Memory at f0f20000 (64-bit, non-prefetchable) [size=16K]
              Capabilities: <access denied>
              Kernel driver in use: snd_hda_intel
      00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 06) (prog-if 00 [Normal decode])
      --
      02:00.1 Audio device: NVIDIA Corporation GF106 High Definition Audio Controller (rev a1)
              Subsystem: Dell Device 02a2
              Flags: bus master, fast devsel, latency 0, IRQ 17
              Memory at d3efc000 (32-bit, non-prefetchable) [size=16K]
              Capabilities: <access denied>
              Kernel driver in use: snd_hda_intel
      07:00.0 Network controller: Intel Corporation Ultimate N WiFi Link 5300

    So I assume that the drivers are properly installed, but still I don't get any option in the settings or volume control. The same card used to work well back in the 2010 versions (10.04 and 10.10). Any help is appreciated. Thanks.

    Read the article
