Search Results

Search found 18219 results on 729 pages for 'tips box'.


  • Upgrading to Ubuntu 11.04 failed

    - by Rupert
    Today Ubuntu asked me to upgrade to 11.04. The installation went completely fine until right at the end, when the following packages failed: install-info and ubuntu-standard. The installer hung, so I had to shut it down manually. Ubuntu still works fine, but it says the upgrade didn't complete properly, so I am hesitant to restart it until I have resolved the problem, in case I can't get back in. I am running Ubuntu inside the latest version of VirtualBox and was previously running version 10.10. I have tried installing install-info manually with apt-get, but I get the following error:

        Unhandled exception: [#<SystemStackError: stack level too deep>]
        /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:842:in `block in <class:Autotest>': undefined method `backtrace' for [#<SystemStackError: stack level too deep>]:Array (NoMethodError)
          from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:828:in `[]'
          from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:828:in `block in hook'
          from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:828:in `each'
          from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:828:in `any?'
          from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:828:in `hook'
          from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:344:in `rescue in run'
          from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:320:in `run'
          from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/lib/autotest.rb:241:in `run'
          from /usr/local/ruby/lib/ruby/gems/1.9.1/gems/ZenTest-4.5.0/bin/autotest:6:in `<top (required)>'
          from /usr/local/ruby/bin/autotest:19:in `load'
          from /usr/local/ruby/bin/autotest:19:in `<main>'
        dpkg: error processing install-info (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of info:
         info depends on install-info; however:
          Package install-info is not configured yet.
        dpkg: error processing info (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of ubuntu-standard:
         ubuntu-standard depends on info; however:
          Package info is not configured yet.
        dpkg: error processing ubuntu-standard (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates it's a follow-up error from a previous failure.
        No apport report written because the error message indicates it's a follow-up error from a previous failure.
        Errors were encountered while processing:
         install-info
         info
         ubuntu-standard
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Any ideas on what I should try next?

  • Tweeting about Oracle Applications Usability: Points to Consider

    - by ultan o'broin
    Here are a few pointers for anyone interested in tweeting about Oracle Applications usability or user experience (UX). These are based on my own experiences and practice, and may not necessarily reflect the views of Oracle, of course (touché, see the footer). If you are an Oracle employee and tweet about our offerings, then read up on and follow the corporate social media policy. For the record, I tweet under the following account names: @ultan, @localization, @gamifyOracle, and @usableapps. The last two are supposedly Oracle subject-dedicated, but I mix it up on occasion.

    - Fill out your Twitter account profile, and add a profile picture too. Disclose your interest. Don't leave either the profile or image blank if you want to be taken seriously (or followed by me).
    - Don't tweet from a locked-down Twitter account, as the message cannot be circulated to anyone who doesn't follow you. Open up the account if you really want to get that UX message out.
    - Stay on message. The usable apps website, Misha Vaughan's VoX blog, and the Oracle Applications blog are good sources of UX messages and information, but you can find many other product team, individual, and corporate-wide sources with a little bit of searching. Set up a Google Alert with pertinent related keywords to get a daily digest of new information right in your inbox.
    - Be original about it. Add your own insight and wit to the message, where relevant. Just circulating and RTing stock headlines adds no value to your effort or to the reader, and is somewhat lazy, in my opinion.
    - Leave room for RTing of your tweet. So, don't max out those 140 characters. Keep it under 130 if you want to be RTed without modification (or at all; I am not a fan of modifying tweets [MT], way too much effort for the medium). Remove articles and punctuation marks and use fragments, abbreviations, and so on at will to keep the tweet short enough, but leave keywords intact, as people search on those.
    - Follow any Fusion UX Advocates who are on Twitter too (you can search for these names), and not just Oracle employees. Don't just follow people you like or who you think are like-minded. Take a look at who is following or being followed by other tweeters and, er, follow up.
    - Create and socialize an easily remembered or typed hashtag, or use what's already popularized (for an event or conference, for example). We used #gamifyOracle for the applications UX gamification design jam, and other popular applications UX ones are #fusionapps and #usableapps (or at least I'm trying to popularize it). But, before you start the messaging, if you want to keep a record of the hashtag traffic, set it up with an archiving service. Twitter's own tweet lifespan is short.
    - Don't mix up hashtags (#) with Twitter handles (@) that have the same name. Sending a tweet to @gamifyOracle will just be seen by @gamifyOracle (me) and any followers we have in common. Sending it to #gamifyOracle is seen by anyone following or searching for that hashtag.
    - No dissing the competition. But there is no rule about not following them on Twitter to see the market reactions to Oracle announcements, and this can even let you tailor your own message accordingly.
    - Don't be boring. Mix it up a bit. Every 10th or so tweet, divert into other areas of interest, even personal ones. No constant "I just received K+ in this and that" or "I just checked into wherever" on foursquare pouring into the Twitterstream, please. I just don't care and will probably unfollow such people pretty quickly.

    And now, your Twitter tips and experiences with this subject? Share them in the comments...

  • How To: Spell Check InfoPath web form in SharePoint 2010

    - by Jeremy Ramos
    Originally posted on: http://geekswithblogs.net/JeremyRamos/archive/2013/11/07/how-to-spell-check-infopath-web-form-in-sharepoint-2010.aspx

    This is a sequel to my 2011 post about How To: Spell Check InfoPath Web Form in SharePoint. This time I will share how I managed to achieve spell checking in SharePoint 2010. This time round, we have changed our online forms strategy to use custom lists instead of form libraries. I thought everything would be smooth sailing, as we are using all OOTB features. So, we customised a custom list form using InfoPath and added a few Rich Text Boxes (spell check is a requirement for this specific project). All was good in the InfoPath client, including the spell checker, so, happy days, I published straight away.

    Here come the surprises. I browsed to my custom list and clicked Add New Item. This launched my form in a modal dialog format. I went to my Rich Text Boxes to check the spell checker, and voila, it's disabled! I tried hacking the FormServer.aspx and the CustomSpellCheckEntirePage.js again, but the new FormServer.aspx behaves differently from MOSS 2007's. I searched for answers in many blogs to no avail, often ending up being linked to my old blog post. I also tried placing the spell check JavaScript into a Content Editor Web Part on the item's New form and Edit form. It launches the Spell Check dialog, but it doesn't spell-check the page correctly. At this point, I decided I needed to get my project across ASAP, so enough with experimentation, and I logged a ticket with Microsoft Premier Support.

    On a call with the Support Engineer, I browsed through the custom list and to the item to demonstrate my problem. Suddenly, the Spell Check tab in the toolbar was enabled! Surprised? Not much, it's Microsoft! Anyway, to cut my story short, here is a summary of my solution:

    1. Navigate to your Custom List.
    2. In the Ribbon Toolbar, navigate to List > Customize List > Form Web Parts > Content Type Forms > (Item) New Form. This will display newifs.aspx, which is the page displayed when Add New Item is clicked. This page, just like any other SharePoint page, contains web parts. In this case, we have the InfoPath Form Web Part.
    3. Add a Content Editor Web Part (CEWP) on top of the InfoPath Form Web Part. (A blank CEWP would do for this example.)
    4. Navigate to Page and click Stop Editing.
    5. Click Add New Item again and navigate to a Rich Text Box. Tadah! The Spell Check tab is now enabled!

    Do the same steps for the (Item) Edit Form to enable spell checks when editing an item. This "no code" solution was discovered purely by accident!

  • Pictures rendered from above and below using an orthographic camera do not match

    - by Roy T.
    I'm using an orthographic camera to render slices of a model (in order to voxelize it). I render each slice both from above and below in order to determine what is inside each slice. The model I render is a simple 'T' shape constructed from two cubes. The cubes have the same dimensions and the same Y (height) coordinate. See figure 1 for a render of it in Blender. I render this model once directly from above and once directly from below. My expectation was that I would get exactly the same image (except for mirroring over the y-axis). However, when I render using a very low resolution render target (25x25), the position (in pixels) of the 'T' is different when rendered from above as opposed to rendered from below. See figures 2 and 3; the pink blocks are not part of the original rendering, but I've added them so you can easily count/see the differences. (Figure 2: the T rendered from above. Figure 3: the T rendered from below.)

    This is probably due to what I've read about pixel and texel coordinates, which might be biased to the top-left as seen from the camera. Since I'm using the same 'up' vector for both of my cameras, my bias only shows on the x-axis. I've tried to change the position of the camera and its look-at by what I thought should be half a pixel. I've tried both shifting a single camera and shifting both cameras, and while I see some effect, I am not able to get a pixel-by-pixel perfect copy from both cameras.

    Here I initialize the camera and compute what I believe to be half a pixel. boundsDimX and boundsDimZ form a slightly enlarged bounding box around the model, which I also use as the width and height of the view volume of the orthographic camera:

        Matrix projection = Matrix.CreateOrthographic(boundsDimX, boundsDimZ, 0.5f, sliceHeight + 0.5f);
        Vector3 halfPixel = new Vector3(boundsDimX / (float)renderTarget.Width, 0,
                                        boundsDimY / (float)renderTarget.Height) * 0.5f;

    This is the code where I set the camera position and camera look-ats:

        // Position camera
        if (downwards)
        {
            float cameraHeight = bounds.Max.Y + 0.501f - (sliceHeight * i);
            Vector3 cameraPosition = new Vector3
            (
                boundsCentre.X, // possibly adjust by half a pixel?
                cameraHeight,
                boundsCentre.Z
            );
            camera.Position = cameraPosition;
            camera.LookAt = new Vector3(cameraPosition.X, cameraHeight - 1.0f, cameraPosition.Z);
        }
        else
        {
            float cameraHeight = bounds.Max.Y - 0.501f - (sliceHeight * i);
            Vector3 cameraPosition = new Vector3
            (
                boundsCentre.X,
                cameraHeight,
                boundsCentre.Z
            );
            camera.Position = cameraPosition;
            camera.LookAt = new Vector3(cameraPosition.X, cameraHeight + 1.0f, cameraPosition.Z);
        }

    Main question: now that you've seen all the problems and code, how do I align both cameras so that they each render exactly the same image (mirrored along the Y axis)?

    Figure 1: the original model rendered in Blender.
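
    Editorial note: to make the half-pixel arithmetic concrete, here is a minimal sketch (in Java rather than the question's XNA/C#, and with illustrative names only). It assumes a view volume of viewWidth x viewHeight world units mapped onto a targetW x targetH render target, and it assumes the boundsDimY in the question's halfPixel calculation was meant to be boundsDimZ:

        // Sketch: size of half a pixel in world units for an orthographic view volume.
        // Assumption: viewWidth x viewHeight world units rendered to targetW x targetH pixels.
        public final class HalfPixel {
            static double halfPixelX(double viewWidth, int targetW) {
                return viewWidth / targetW * 0.5; // one pixel's world width, halved
            }

            static double halfPixelZ(double viewHeight, int targetH) {
                return viewHeight / targetH * 0.5;
            }

            public static void main(String[] args) {
                // A 25x25 target as in the question; the view extents are hypothetical.
                double viewWidth = 10.0, viewHeight = 10.0;
                int targetW = 25, targetH = 25;
                double hx = halfPixelX(viewWidth, targetW);
                double hz = halfPixelZ(viewHeight, targetH);
                // When the camera flips for the bottom view, the left-to-right pixel
                // order reverses, so a sample landing exactly on a pixel boundary can
                // round into different pixels in the two renders. Nudging both cameras
                // by the same half-pixel world offset keeps samples off the boundaries.
                System.out.printf("half pixel: x=%.4f z=%.4f%n", hx, hz);
            }
        }

    The rounding-at-boundaries explanation above is one plausible reading of the symptom, consistent with the question's own top-left texel-bias hypothesis, not a confirmed diagnosis.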

  • Enterprise Manager in EPM 11.1.2.x...a game of hide and seek!

    - by THE
    guest article: Maurice Bauhahn: Users of Oracle Hyperion Enterprise Performance Management 11.1.2.0 and 11.1.2.1 may puzzle over why the URL http://<servername>:7001/em does not conjure up Enterprise Manager Fusion Middleware Control. This powerful tool has been installed by default... but WebLogic may not have been 'Extended' to allow you to call it up (we are hopeful this 'Extend' step will not be needed with 11.1.2.2). The explanation is on pages 425 and following of this document: http://www.oracle.com/technetwork/middleware/bi-foundation/epm-tips-issues-1-72-427329.pdf

    A close look at the screen dumps in that section reveals a somewhat scary prospect, however: the non-AdminServer servlets had all failed (see the red down-arrow icons to the right of their names) after the configuration! Of course you would want to avoid that scenario! A rephrasing of the instructions might help:

    - Ensure the WebLogic AdminServer is not running (in a default scenario that would mean port 7001 is not active).
    - Ensure you have logged into the computer as the installing owner of EPM.
    - Since Enterprise Manager uses a LOT of resources, be sure that there is adequate free RAM to accommodate the added load.
    - On the machine where WebLogic AdminServer is set up (typically the Foundation Services machine), run \Oracle\Middleware\wlserver_10.3\common\bin\config (config.sh on Unix).
    - Select the 'Extend an existing WebLogic domain' option, and click the 'Next' button.
    - Select the domain being used by EPM System. Typically, the default domain is created under /Oracle/Middleware/user_projects/domains and is named EPMSystem. Click 'Next'.
    - Under 'Extend my domain automatically to support the following added products', place a check mark before 'Oracle Enterprise Manager - 11.1.1.0 [oracle_common]' to select it.
    - Continue accepting the defaults by clicking 'Next' on each page until, on the last page, you click 'Extend'. The system will grind for a few minutes while it configures (deploys?) EM.
    - Start the AdminServer.

    Sometimes there is contention in the startup order of the various servlets (resulting in some not coming up). To avoid that problem on Microsoft Windows machines, you may start and stop services via the following command line commands, analogous to those run on Linux/Unix (these more carefully space out the timings of these events):

        Ensure EPM is up:      \Oracle\Middleware\user_projects\epmsystem1\bin\start.bat
        Ensure WebLogic is up: \Oracle\Middleware\user_projects\domains\EPMSystem\bin\startWebLogic.cmd
        Shut down WebLogic:    \Oracle\Middleware\user_projects\domains\EPMSystem\bin\stopWebLogic.cmd
        Shut down EPM:         \Oracle\Middleware\user_projects\epmsystem1\bin\stop.bat

    Now you should be able to troubleshoot more successfully with the EM tool.

  • Chargeback and showback...both a 'throw back'

    - by llaszews
    I've been getting asked again by customers and partners about chargeback and showback in the cloud, so I thought I would blog my response to this question.

    Chargeback background, information, and industry analysis: cloud computing is all about shared resources. These shared resources are computer servers (including memory and CPU), network devices, hard disk storage, database servers, application servers, cooling, floor space, electricity, and more. These resources are shared by departments within a company, or by a number of companies when resources are hosted in the public or hybrid cloud. Currently, hosting providers that run other companies on their cloud platforms do not have an accurate way to measure the shared computing resources used by a specific user, let alone by a specific customer. Additionally, companies running their own cloud data centers, for private or hybrid clouds, have no way of measuring and charging back the departments in the company that are using these shared cloud resources. In both cases, the inability to determine shared resource costs and charge them back to the company, department, or user consuming those resources limits any clear measure of business benefit and impacts the company's ability to measure Return on Investment (ROI).

    An IT chargeback system is an accounting strategy that applies the costs of IT services, hardware, or software to the business unit in which they are used. This system contrasts with traditional IT accounting models in which a centralized department bears all of the IT costs in an organization and those costs are treated simply as corporate overhead. Showback involves showing the IT costs to a department or customer but not actually charging them for their IT usage. Showback is a gradual method of introducing chargeback into an enterprise; most companies implement a showback mechanism before a full chargeback system is put in place.

    Oracle chargeback product: Oracle Enterprise Manager provides tools for defining detailed chargeback plans spanning different metrics collected for each type of resource, as well as defining Cost Centers for grouping costs across multiple developers. Chargeback plans can use not only usage-based costs, but also configuration-based costs (e.g. version of the platform) or fixed costs (e.g. a flat-rate management fee). Chargeback has rich out-of-the-box reports. Trending reports show how charge and resource consumption vary over time, while summary reports show the breakdown of charges or usage by different dimensions such as Cost Center or Target Type. These reports help consumers understand how their charges relate to their consumption, and also assist the IT department with budgeting and planning activities. With BI Publisher, the reports can be made available in a variety of formats such as PDF, HTML, Word, Excel, or PowerPoint.

  • TomEE Integration in NetBeans Next

    - by Geertjan
    At JavaOne 2013, there was a lot of buzz around the TomEE server, e.g., many tweets, a nice party, and a new TomEE consulting company. For those tracking TomEE developments, it is interesting to note that the NetBeans IDE development builds have recently added TomEE support. Note: the TomEE support described here is not in NetBeans IDE 7.4, but in development builds for the next release of NetBeans IDE. For example, with NetBeans IDE development builds you're able to:

    - register TomEE as a server in the Services window (TomEE has several distributions; one can use the "with JAX-RS" one, for example)
    - create a Java EE 6 web project (e.g., Maven based) against this server
    - create JPA entities from a database
    - create JAX-RS classes from JPA entities
    - create JSF pages from JPA entities

    The IDE lets you create a new data source for TomEE and deploy it to the server. The IDE figures out the components that are already packaged in TomEE, and the fact that (unlike with regular Tomcat) it does not need to package components such as a JSF implementation, persistence provider, or JAX-RS runtime, so that the resulting WAR file is very small. The IDE can also do "deploy on save" with TomEE, so that your development cycle is very fast.

    Adam Bien blogged about how he set up TomEE some time ago, here. The official support in NetBeans IDE will be much more tightly integrated, simplifying the steps Adam describes. For example, the IDE does step 2 from Adam's blog for you, i.e., it sets up TomEE deployment roles. Moreover, it knows about all the technologies included in TomEE so that it can optimize the packaging; it knows about TomEE's persistence setup; it can work with TomEE data sources, etc. Below you see a Maven-based Java EE 6 PrimeFaces application (all entities and JSF pages generated from a database) deployed to TomEE in NetBeans IDE. And here's the management console for configuring and fine-tuning TomEE in NetBeans IDE.

    When I tried out the NetBeans IDE development build and TomEE, to see how everything fits together, I was surprised at how fast TomEE started up. Not sure what they did to it, but it seems like a server on steroids. And setting it up in NetBeans IDE was trivial. Add the simple setup of TomEE in NetBeans IDE to the many benefits that the widely praised out-of-the-box NetBeans Maven tools make possible, together with the fact that not one single plugin had to be installed to get everything described here up and running... and you have a really powerful combination of dev tools, all for free.

  • ArchBeat Link-o-Rama Top 10 for December 9-15, 2012

    - by Bob Rhubart
    You click, we listen. The following list reflects the Top 10 most popular items posted on the OTN ArchBeat Facebook page for the week of December 9-15, 2012.

    - DevOps Basics II: What is Listening on Open Ports and Files – WebLogic Essentials | Dr. Frank Munz: "Can you easily find out which WebLogic servers are listening to which port numbers and addresses?" asks Dr. Frank Munz. The good doctor has an answer—and a tech tip.
    - Using OBIEE against Transactional Schemas Part 4: Complex Dimensions | Stewart Bryson: "Another important entity for reporting in the Customer Tracking application is the Contact entity," says Stewart Bryson. "At first glance, it might seem that we should simply build another dimension called Dim – Contact, and use analyses to combine our Customer and Contact dimensions along with our Activity fact table to analyze Customer and Contact behavior."
    - SOA 11g Technology Adapters – ECID Propagation | Greg Mally: "Many SOA Suite 11g deployments include the use of the technology adapters for various activities including integration with FTP, database, and files to name a few," says Oracle Fusion Middleware A-Team member Greg Mally. "Although the integrations with these adapters are easy and feature rich, there can be some challenges from the operations perspective." Greg's post focuses on technical tips for dealing with one of these challenges.
    - Podcast: DevOps and Continuous Integration: In Part 1 of a 3-part program, panelists Tim Hall (Senior Director of product management for Oracle Enterprise Repository and Oracle's Application Integration Architecture), Robert Wunderlich (Principal Product Manager for Oracle's Application Integration Architecture Foundation Pack) and Peter Belknap (Director of product management for Oracle SOA Integration) discuss why DevOps matters and how it changes development methodologies and organizational structure.
    - Good To Know - Conflicting View Objects and Shared Entity | Andrejus Baranovskis: Oracle ACE Director Andrejus Baranovskis shares his thoughts -- and a sample application -- dealing with an "interesting ADF behavior" encountered over the weekend.
    - Cloud Deployment Models | B. R. Clouse: Looking out for the cloud newbies... "As the cloud paradigm grows in depth and breadth, more readers are approaching the topic for the first time, or from a new perspective," says B. R. Clouse. "This blog is a basic review of cloud deployment models, to help orient newcomers and neophytes."
    - Service governance morphs into cloud API management | David Linthicum: "When building and using clouds, the ability to manage APIs or services is the single most important item you can provide to ensure the success of the project," says David Linthicum. "But most organizations driving a cloud project for the first time have no experience handling a service-based architecture and don't see the need for API management until after deployment. By then, it's too late."
    - Oracle Fusion Middleware Security: Password Policy in OAM 11g R2 | Rob Otto: Rob Otto continues the Oracle Fusion Middleware A-Team "Oracle Access Manager Academy" series with a detailed look at OAM's ability to support "a subset of password management processes without the need to use Oracle Identity Manager and LDAP Sync."
    - Understanding the JSF Lifecycle and ADF Optimized Lifecycle | Steven Davelaar: Could you call that a surprise ending? Oracle WebCenter & ADF Architecture Team (A-Team) member Steven Davelaar learned a lot more than he expected while creating a UKOUG presentation entitled "What you need to know about JSF to be successful with ADF."
    - Expanding on requestaudit - Tracing who is doing what...and for how long | Kyle Hatlestad: "One of the most helpful tracing sections in WebCenter Content (and one that is on by default) is the requestaudit tracing," says Oracle Fusion Middleware A-Team architect Kyle Hatlestad. Get up close and technical in his post.

    Thought for the Day: "There is no code so big, twisted, or complex that maintenance can't make it worse." — Gerald Weinberg (Source: SoftwareQuotes.com)

  • Failing report subscriptions

    - by DavidWimbush
    We had an interesting problem while I was on holiday. (Why doesn't this stuff ever happen when I'm there?) The sysadmin upgraded our Exchange server to Exchange 2010 and everyone's subscriptions stopped. My Subscriptions showed an error message saying that the email address of one of the recipients is invalid. When you create a subscription, Reporting puts your Windows user name into the To field, and most users have no permission to edit it. By default, Reporting leaves it up to Exchange to resolve that into an email address. This only works if Exchange is set up to translate aliases or 'short names' into email addresses. It turns out this leaves Exchange open to being used as a relay, so it is disabled out of the box. You now have three options:

    1. Open up Exchange. That would be bad.
    2. Give all Reporting users the ability to edit the To field in a subscription. a) They shouldn't have to; it should just work. b) They don't really have any business subscribing anyone but themselves.
    3. Fix the report server to add the domain. This looks like the right choice, and it works for us. See below for details.

    Pre-requisites: a single email domain name, and a clear relationship between the Windows user name and the email address. E.g. if the user name is joebloggs, then joebloggs@domainname needs to be the email address or an alias of it.

    Warning: saving changes to the rsreportserver.config file will restart the Report Server service, which effectively takes Reporting down for around 30 seconds. Time your action accordingly.

    Edit the file rsreportserver.config (most probably in the folder ..\Program Files[ (x86)]\Microsoft SQL Server\MSRS10_50[.instancename]\Reporting Services\ReportServer). There's a setting called DefaultHostName, which is empty by default. Enter your email domain name without the leading '@' (for example, a hypothetical mycompany.com) and save the file. This domain name will be appended to any destination addresses that don't have a domain name of their own.

  • Android Touch Event Collision Detection

    - by chrissb
    I'm relatively new to both Java and Android, so hopefully the problem I'm having stems from something pretty minor that I've overlooked. I've got a (very early stage) game that I've started working on for Android, using Java. At this stage, when the user touches the screen, if they touched a point at which there is an enemy, the enemy's health is decreased and they become immobile (for the current implementation at least). The issue that I'm having is that the touch detection doesn't always seem to work. I've got a testing sprite set up that goes to the eventX and eventY coordinates of the touch-down event, and it always seems to collide with the enemy object. Yet the enemy doesn't always register as being hit, and sometimes a hit is registered when the sprite indicates the touch coordinates were outside of the enemy's bounding box. I realise that this probably doesn't mean much without any code, so here's what I've got so far. Be gentle, as this is literally my first attempt at something more than basic movement etc.

    First off, the MainGamePanel class registers the touch event and informs the levelManager class (which is what I set up to monitor/handle enemies):

        public boolean onTouchEvent(MotionEvent event) {
            if (event.getAction() == MotionEvent.ACTION_DOWN) {
                levelManager.handleActionDown((int)event.getX(), (int)event.getY());
                targetX = event.getX();
                targetY = event.getY();
            }
            if (event.getAction() == MotionEvent.ACTION_MOVE) {
                //the gestures
            }
            if (event.getAction() == MotionEvent.ACTION_UP) {
                //touch was released
            }
            return true;
        }

    From there, in the levelmanager class the touch event is passed on to all of the enemies within a list array:

        public static void handleActionDown(int eventX, int eventY) {
            hit = false;
            for (enemy1 en : enemy1array) {
                en.handleActionDown(eventX, eventY);
            }
        }

    The rest of the collision code is handled within the enemy's handleActionDown function:

        public void handleActionDown(int eventX, int eventY) {
            if (eventX > this.x - enemy1bitmap.getWidth()
                    && eventX < this.x + enemy1bitmap.getWidth()
                    && eventY > this.y - enemy1bitmap.getHeight()
                    && eventY < this.x + enemy1bitmap.getHeight()) {
                takeDamage(1);
                levelmanager.setHit();
            }
        }

    I should probably be using getWidth()/2 and getHeight()/2 for it to be more accurate, but I expanded the area to test this - although I've noticed no improvement. At this stage, the game's detection of whether or not the enemy is hit is spotty at best. Generally it takes two or three attempts before a collision is successfully registered, even though the sprite that is set to the eventX and eventY coordinates always indicates that the collision should have worked. Hopefully someone can steer me in the right direction here, and if more information is needed, ask away! Cheers, -Chris
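
    Editorial note: in the last snippet above, the final condition compares eventY against this.x rather than this.y, which could well explain the intermittent hits described. Below is a hedged, self-contained sketch of a corrected axis-aligned bounding-box test, including the half-width/half-height centring the author mentions; the class and field names are illustrative, mirroring the question's code rather than any real project:

        // Hypothetical corrected hit test, assuming (x, y) is the sprite's centre
        // and width/height come from the enemy bitmap.
        public class Enemy {
            private final int x, y;           // sprite centre, in screen pixels
            private final int width, height;  // e.g. enemy1bitmap.getWidth()/getHeight()

            public Enemy(int x, int y, int width, int height) {
                this.x = x; this.y = y; this.width = width; this.height = height;
            }

            public boolean contains(int eventX, int eventY) {
                int halfW = width / 2;
                int halfH = height / 2;
                return eventX > x - halfW && eventX < x + halfW
                    && eventY > y - halfH && eventY < y + halfH; // y, not x, here
            }

            public static void main(String[] args) {
                Enemy e = new Enemy(100, 200, 64, 64);
                System.out.println(e.contains(110, 210)); // true: inside the box
                System.out.println(e.contains(110, 100)); // false: y is outside
            }
        }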

  • More Maintenance Plan Weirdness

    - by AjarnMark
    I'm not a big fan of the built-in Maintenance Plan functionality in SQL Server. I like the interface in SQL 2005 better than 2000 (it looks more like building an SSIS package), but it's still a bit of a black box. You don't really know what commands are being run based on the selections you have made, and you can easily make some unwise choices without realizing it, such as shrinking your database on a regular basis. I really prefer to know exactly what commands, and with which options, are being run on my servers.

    Recently I had another very strange thing happen with a Maintenance Plan, this time in SQL 2005 SP3. I inherited this server and have done a bit of cleanup on it, but had not yet gotten around to replacing the Maintenance Plans with all my own scripts. However, one of the maintenance plans, which was just responsible for doing LOG backups, was running more frequently than that system needed, and I thought I would just tweak the schedule a bit. So I opened the Maintenance Plan and edited the properties of the Subplan, setting a new schedule, saved it, and figured all was good to go. But the next execution of the Scheduled Job that triggers the Maintenance Plan code failed with an error about the owner of the job. Specifically the error was, "Unable to determine if the owner (OldDomain\OldDBAUserID) of job MaintenancePlanName.Subplan has server access (reason: Could not obtain information about Windows NT group/user 'OldDomain\OldDBAUserID'..". I was really confused because I had previously updated all of the jobs to have current accounts as the owners. At first I thought it was just a fluke, but it happened on the next scheduled cycle, so I investigated further and, sure enough, that job had the old DBA's account listed as the owner. I fixed it and the job successfully ran to completion.

    Now, I don't really like mysteries like that, so I did some more testing and verified that, sure enough, just editing the Subplan schedule and saving the Maintenance Job caused the Scheduled Job to be recreated with the old credentials. I don't know where it is getting those credentials, but I can only assume that it is the same as the original creator of the Maintenance Plan, and for some reason it insists on using that ID for the job owner. I looked through the options in SSMS and could not find anything that would let me easily set the value that I wanted it to use. I suspect that if I did something like executing sp_changeobjectowner against the Maintenance Plan, it would use that new ID instead. I'm sure there is a good reason that it works this way, but rather than mess around with it much more, I'm just going to spend my time rolling out my replacement scripts instead. Chalk this little hidden oddity up as yet one more reason I'm not a fan of Maintenance Plans.

  • Administer, manage, monitor, and fine tune the performance of your Oracle SOA Suite 11g Service Infrastructure and SOA composite applications.

    - by JuergenKress
    Key features of the book: if you are an Oracle SOA Suite administrator, then this book is your bible. It gives you everything you need to know about all your tasks and helps you apply what you learn in your everyday life, right from the first chapter. The book walks through promoting code across environments, performance-tuning the service infrastructure, monitoring the environment, configuring security policies, managing the dehydration store, backing up and restoring environments, and so on. Packed with real-world examples from the authors' own experience, this book offers a unique insight into Oracle SOA Suite administration.

    Detailed description: the book begins with an introduction to SOA and quickly moves on to management of SOA composite applications. Readers will learn how to manage composite applications, their deployments, and their lifecycles. Equipped with this knowledge, readers will be introduced to monitoring and performance-tuning SOA Suite, monitoring instances, messages, and composite applications, managing faults and exceptions, and configuring audit levels of composite applications to include end-to-end monitoring through the use of extended logging, as well as administering and configuring all SOA Suite components. A very important aspect of administration is tuning and optimizing the infrastructure for performance, and the book offers real-world recommendations to monitor and performance-tune service engines, the underlying WebLogic server, threads and timeouts, file systems, and composite applications. It also covers detailed administration of individual service components, configuring the infrastructure MBeans using both Oracle Enterprise Manager Fusion Middleware Control and WLST-based scripts, migrating worklist preferences and BAM data across environments, and setting up Email, LDAP, and custom XPath. An administrator is always trusted with troubleshooting and root-causing problems in the infrastructure, and this book will walk you through troubleshooting approaches, such as how to identify faults and exceptions through extended logging and thread dumps, and how to find solutions to common startup problems and deployment issues. The advanced contents of this book explain the OWSM security framework and how to secure components deployed to the infrastructure, along with the details of all the groundwork needed to ready the environment. The last few chapters help you understand and deal with managing the metadata services repository and dehydration store, and backup and recovery, concluding with advanced topics such as silent/scripted installations, cloning, upgrading, patching, and high-availability installations. Packed with real-world examples and tips straight from the trenches, this book offers insights into SOA Suite administration that you will not find elsewhere. Part of our writing style in this book draws heavily on the philosophy of reuse, and as such the book provides ample executable SQL queries and WLST scripts that administrators can reuse and extend to perform most administration tasks, such as monitoring instances, processing times, and instance states, and performing automatic deployments, tuning, migration, and installation. These scripts are spread over each of the chapters in the book and can also be downloaded from here. The book is available in different formats at the following websites: paperback and eBook versions & Kindle version. It is available for order, and signed copies are available through our web site.
    SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA book, SOA Suite Administration, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

  • Ask the Readers: Would You Be Willing to Give Windows Up and Use a Different O.S.?

    - by Asian Angel
    When it comes to computers, Windows definitely rules the desktop in comparison to other operating systems. What we would like to know this week is if you would actually be willing to give up using Windows altogether and move to a different operating system on your computers. Note: this week's Ask the Readers post is posing a hypothetical situation, so please refrain from starting arguments or a flame war in the comments. Good reasoned discussion is always welcome.

    There is no doubt that Windows is the dominant operating system in use today. Everywhere you go or look, it is easy to find computers with Windows installed, such as at work, home, the library, government offices, and more. For many people it is the operating system that they know and are comfortable with, which makes changing to a different operating system less appealing. Adding to the preference for Windows (or dependency, based on your view) is the custom software that many businesses use on a daily basis. Throw in the high volume of people who depend on and use Microsoft Office as a standard for their business documents, and it is little wonder that Windows is so dominant.

    So what would you use if you did decide to take a break from, or permanently move away from, Windows? If your choice is Linux, then you have a large and wonderful variety of distributions to choose from, based on what you want out of your system. Want a distribution that is easy to work with? You could choose Ubuntu, Linux Mint, or others that are engineered to be ready to go "out of the box". Like a challenge? Perhaps Arch Linux is more your style. One of the most attractive features of all about Linux is the price... it is very hard to beat free!

    Maybe Mac OS X sounds like the perfect choice. It has a certain mystique and elegance associated with it, and many OS X fans refuse to use anything else if given a choice. Then there is the soon-to-be-released Chrome OS, with its emphasis on cloud computing. This is a system that is definitely focused on being as low-maintenance and hassle-free as possible. Quick on, quick off, minimalist, and made to be portable. All of the system's updates will occur automatically, leaving you free to work and play in the cloud. But it does have its limitations... no installing all of those custom apps that you love using on Windows or other systems... it is literally all about the browsing window and web apps.

    So there you have it. If the opportunity presented itself, would you, could you, give Windows up and use a different operating system? Would it be easy or hard for you to do? Perhaps it would not really matter, so long as you could do what you needed or wanted to do on a computer. And maybe this is the perfect time to try something new and find out... that new favorite operating system could be just an install disc away. Let us know your thoughts in the comments!

  • Ubuntu Sluggish and Graphics Problem after Nvidia Driver Update

    - by iam
    I just recently started using Ubuntu (12.04) a few weeks ago and noticed that the interface is very slow and sluggish:

    - On Dash, I have to type the entire app name and wait a few seconds before it shows up in the search box, and a bit longer before it displays search results.
    - Opening new files or applications also takes quite long and feels awkward.
    - Dragging icons or moving app windows around is not very responsive either: I have to take extra care in moving the mouse, otherwise Ubuntu does not make the correct movement or ends up doing something incorrect instead, e.g. opening the window's full-screen options or moving the file to a different folder, which is frustrating.

    My PC is a few years old already (1.7 GB RAM), so this could be a reason too, but when I check in System Monitor it's hardly ever consuming much memory. Plus, web surfing in Firefox is actually lightning fast (much more so than on Windows), so I suspect there might be something wrong with the graphics driver (mine is a GeForce 7050). I checked around System Settings and found an option to update the Nvidia driver. So I tried it and restarted, as instructed.

    Now, I got into a big problem upon restart... the login-screen window (where I have to type in the password) would take several attempts to display and finally did not manage to (it'd freeze for several seconds before there was any movement again). The background screen also kept reloading several times, and at some point the screen turned black with pixelated color strips running across the bottom 1/3 of the screen; after a long while the background screen would come up again. Eventually I'd manage to access the desktop, but the launcher, top menu bar, and app window borders would not appear. I searched around and found many other people have this same problem after updating the Nvidia driver, and on some threads the suggestion is to run "killall -u $USER" on the command line (it's the only thing among various online suggestions I could do, as at that point I could not access the Terminal without the launcher - Ctrl-Alt-T doesn't work for me). So I did that and was able to access the desktop correctly again, with launcher/menu, by creating a new account. But I would still have the same problem when logging into my original account.

    So I finally tried upgrading to 12.10, and now I can access my original account with a fully functional desktop - the launcher, menu, and window borders are all back now. However, the problem with sluggishness still remains. And now I am scared of ever having to update the Nvidia driver again! I wonder if anyone knows why updating the Nvidia driver causes this problem and whether there is a way I can update it safely in the future? I'm still not sure how to solve the problem with the sluggishness either, and I'm not sure where else to look for a solution.

  • How does rc job work / order of (contradicting) "start on ..." and "stop on ..." stanzas

    - by Binarus
    Hi, I just can't understand how Upstart's rc job definition in Natty 11.04 works. To illustrate the problem, here is the definition (empty lines and comments are left out):

        start on runlevel [0123456]
        stop on runlevel [!$RUNLEVEL]
        export RUNLEVEL
        export PREVLEVEL
        console output
        env INIT_VERBOSE
        task
        exec /etc/init.d/rc $RUNLEVEL

    Let's suppose we are currently in runlevel 2 and the rc job is stopped (that is exactly the situation after booting my box and logging in via SSH). Now, let's assume that the system switches to runlevel 3, for example due to a command like "telinit 3" given by root. What will happen to the rc job? Obviously, the rc job will be started, since it is currently stopped and the event runlevel 3 matches the start events. But from now on, things are unclear to me: according to the manual, $RUNLEVEL evaluates to the new runlevel when the job is started (that means 3 in our example). Therefore, the next stanza, "stop on runlevel [!$RUNLEVEL]", translates to "stop on runlevel [!3]"; that means we have a first stanza which will trigger the job, but the second stanza will never stop the job and seems to be useless. Since I know that the Ubuntu / Upstart people won't do useless things, I must be heavily misunderstanding something. I would be grateful for any explanation.

    While trying to understand this, an additional question came to my mind. If I had contradicting start and stop triggers, for example

        start on foo
        stop on foo

    what would happen? I swear I never will do that, but I am nevertheless very interested in how Upstart handles that on the theoretical level. Thank you very much!

    Editing the question as a reaction to geekosaur's first answer: I can see the parallelism, but it is not that easy (at least, not to me). Let's assume the job is currently still running, and a new runlevel event comes in (of course, the new runlevel is different from the current one). Then, the following should happen:

    1. The job is single instance. That means that "start on ..." won't be triggered, since the job is currently running; $RUNLEVEL is not touched.
    2. "stop on ..." will be triggered, since the new runlevel is different from $RUNLEVEL, so the job will be aborted.
    3. Now, the job is stopped and waiting. I can't see how it is restarted with the new runlevel. AFAIK, initctl emits events only once, so "start on ..." won't be triggered and the new runlevel won't be entered.

    I know that I am still misunderstanding something, and I am grateful for explanations. Thank you very much!

  • OWB 11gR2 – OMB and File Editing

    - by David Allan
    Here we will see how we can use the IDE for editing OMB scripts. The 11gR2 release is based on the common Oracle platform IDE also used by JDeveloper. It comes with a bunch of standard behavior for editing and rendering code. One of the lesser-known things is that if you drop a text file into OWB, you can edit it. So you can drop your Tcl scripts right into OWB and edit them in place, and you don't need another IDE like Eclipse just for this task. Cool, so you have the file here. There may be no line numbers; you can toggle line numbers on by right-clicking in the gutter. If we edit the file within the OWB IDE, the save is a little different from normal. OWB doesn't normally manipulate files, so things like Ctrl-S save the OWB objects, but if you edit a file, closing the file will ask if you want to save it - check it out.

    Now we enter the realm of 'he who dares'... Note that the IDE doesn't know about Tcl files out of the box, so you see above there is no syntax highlighting. The code is identified by the extension... .java is Java, .html is HTML, etc. With OWB, the OMB scripts are Tcl; we usually have a .tcl extension on these files. One of the things we can do to trick up the syntax highlighting is to simply rename the file to have a .java suffix; then all of a sudden we get syntax highlighting - see the illustration here, where side by side we see the file with a .java extension and a .tcl extension. Not ideal pretending to be .java, but it gets us a way to have something more useful than Notepad. We can then change the syntax highlighting so that we get Eclipse-like highlighting within the IDE, from the Tools > Preferences option; you then get the Eclipse-like rendering, albeit using a little tweak on the file names... Might be useful if you are doing any kind of heavy-duty OMB script development and just want a single IDE. The OMBPlus panel is then at hand for executing and testing it out.

  • Oracle Enterprise Manager Cloud Control 12c Release 3 (12.1.0.3)

    - by Ankit G
    Delighted to announce the GA of EM Cloud Control Release 3 on all supported platforms. This release includes a new 12.1.0.3 version of the platform (OMS & Agent), along with revised new versions of several plug-ins and metadata plug-ins (including a brand new metadata plug-in for Oracle Virtual Networking). This release marks yet another major and significant milestone for Enterprise Manager 12c Cloud Control product releases. A set of new plug-in versions ships alongside Oracle Enterprise Manager Cloud Control 12c Release 3 (12.1.0.3). The new plug-ins have a dependency on the 12.1.0.3 platform, and customers need to be on at least the 12.1.0.3 platform (OMS/Agent) version of the product before being able to deploy/use these plug-in versions. (In other words, the new plug-in versions cannot be deployed unless Oracle Enterprise Manager Cloud Control 12c Release 3 (12.1.0.3) is installed or upgraded to.) The Oracle Enterprise Manager Cloud Control 12c Release 3 (12.1.0.3) release includes tons of new features, along with several stability and performance bug fixes, and is available for download for all platforms from OTN.

    Installation/upgrade paths:

    - EM customers can do a fresh installation using Oracle Enterprise Manager Cloud Control 12c Release 3 (12.1.0.3) and will get the latest version of the platform, along with all the latest versions of plug-ins and metadata plug-ins, out of the box.
    - EM customers who are on Release 1 (12.1.0.1+BP1) or Release 2 (12.1.0.2), or on the older releases 11g and 10.2.0.5, can use the Oracle Enterprise Manager Cloud Control 12c Release 3 bits to upgrade directly to the latest Release 3, and the plug-ins will be automatically upgraded to the latest versions.

    The Enterprise Manager certification matrix is also now available on My Oracle Support - here.

  • Failed 12.04 installation

    - by Rob Sayer
    I tried installing Ubuntu 12.04 today. Not an upgrade, a new installation. It didn't work. My computer specs:

        Computer: Compaq Presario CQ-104CA
        OS: Windows 7 Home 64-bit
        CPU: AMD V140
        BIOS: latest
        Graphics: AMD M880G with ATI Mobility Radeon HD 4250
        Wireless: Atheros AR9285
        Internal HD: SATA

    I wasn't connected to the internet at the time... I know a number of people who have installed Ubuntu unconnected and just updated later. It seemed to go normally until I got to the part where I chose to install dual-boot Linux/Windows. Then, the screen went black and the following text appeared (I left out the [OK]'s):

        checking battery state
        starting crash report submission daemon
        starting CPU interrupts balancing daemon
        stopping system V runlevel compatibility
        starting configure network device security
        stopping configure network device security
        stopping cold plug devices
        stopping log initial device creation
        starting enable remaining boot-time encrypting devices
        starting configure network device security
        starting save udev log and update rules
        stopping save udev log and update rules
        stopping enable remaining boot-time encrypted block devices
        checking for running unattended-upgrades
        acpid: exiting
        speech-dispatcher disabled: edit /etc/default/speech-dispatcher

    At this point, the CD is ejected. Then nothing. If I press the return key, it boots Windows. I don't think that's what's supposed to happen. Thinking the CD media or DVD drive may have been faulty, I downloaded the .iso again and made a bootable USB stick, as per your instructions. This time there was no cryptic crash screen. It just booted Windows. I can't find any log files it may have left. Thinking the amd64 version may have been the wrong one, I tried downloading the x86 version. Same thing, both from CD and USB drive. Note I downloaded both files twice. I doubt it was a corrupted download. This is supposed to be a simple, transparent install. I went to the time and trouble of looking up my devices and drivers for Ubuntu beforehand, and was prepared to do some configuration, though I know someone who has the same wireless device and his worked right out of the box. But I spent over 3 hours trying to install it, with only the above to show for it.

  • Chalk Talk with John: How Does SOA Add Value to Your Enterprise?

    - by John Brunswick
    In this episode of Chalk Talk with John we revisit our town of Middleware Fields from "What Does User Experience Mean to You?" to look at demystifying the business value of SOA. Middleware Fields is an extremely eco-conscious community and has been trying to set up a commuting program for its employees. Though a good idea, they soon run into challenges ensuring that people are able to use the commuting services easily. Take a look below to see how SOA is like a transit pass for your enterprise and how it addresses common issues you may have with your enterprise systems.

    About me: Hi, I am John Brunswick, an Oracle Enterprise Architect. As an Oracle Enterprise Architect, I focus on the alignment of technical capabilities in support of business vision and objectives, as well as the overall business value of technology. Before coming to Oracle, I was a Practice Manager within BEA Systems' Business Interaction Division consulting organization, orchestrating enterprise systems in support of line-of-business goals. Follow me on Twitter and visit my site for Oracle Fusion Middleware related tips.

  • how do I get dual monitors to work properly in Ubuntu 11.10 on a Dell Latitude D630?

    - by wes cook
    I have spent a lot of time trying to get dual monitors to work on Ubuntu 11.10 on my Dell Latitude D630 (nVidia NVS 135M video card).

    - For starters, the System > Displays settings app always showed only one unknown monitor, even though I had the external Acer monitor connected.
    - So I downloaded and installed the nVidia drivers. According to what I read, I would need to use only the nVidia driver app (nVidia X Server Settings), so that's what I've done. (System > Displays continued to show only a single monitor anyway.)
    - The nVidia settings app only showed one monitor until I changed the BIOS setting to use the onboard video for the external monitor (not the dock video, which it was set to, even though I don't have a docking station).
    - The nVidia settings app now recognized both monitors. So I set up the X Server display config as a separate X screen for each monitor. My laptop screen shows up as AUO 1440x900 and my external monitor as Acer E211H 1920x1080.
    - Everything seemed like it would work, but the external monitor was just a complete white screen. The external monitor was non-functional, even though sometimes it would show the background image - still nothing would show up over there.
    - So, I checked the Enable Xinerama box.
    - Now, after logging out and back in, the wallpaper extends to both screens, but I get no taskbar at the bottom or top, no system menus, and I have to press the power button to restart or log off.
    - After experimenting with all the shells, the only one that shows the menus and taskbars when I log in is GNOME Classic.
    - So, I resign myself to the older shell.
    - Everything works fine until... I unplug the external monitor... this is a laptop, after all.
    - Anyway, after doing some work on the road, I plug back in; I still see both screens and it's functional, except...
    - Now, the laptop screen (with the taskbar and menu bar) has 4 black bars at the top that windows cannot cover. The top bar is the menu bar (with Applications, Places, the date and time, and the system menu on the right). But the next 3 bars (the same height as the top menu bar) are empty and just reduce the maximum size of windows on that screen.
    - See screenshot here: http://i39.tinypic.com/35d2kh1.png

    So...

    1. How do I get rid of those extra 3 black bars? They're taking valuable screen space.
    2. (less critical) How do I successfully use both screens in the Ubuntu or Ubuntu 2D shell?

    Read the article

  • Virtualized data centre – Part four: The design

    - by marc dekeyser
    Welcome back to the fourth post in this series! Today we will have a look at what Microsoft recommends as a "private cloud design" and what I will make of it. Whilst my own solution is based on the reference architecture, it is quite different indeed! An important thing to know is that, whilst I am using the private cloud as a reference, I am skipping most of the steps in designing a private cloud. If that is why you are here, please read the links at the end of the article and skim through my own content. A private cloud is much more process driven than just building a virtual infrastructure…

    The architecture of it all… So imagine for a minute that you have unlimited funds to build this lab of yours… You'd want redundancy on all levels and separation of each network where possible! Unfortunately we don't have that luxury and, as you saw me hinting at in the previous article, our own design will be more limited but still quite capable!

    Networking: From the networking perspective I will not have a fully redundant network; after all, this is but a lab environment! Thanks to Server 2012 I will be able to use bonding (NIC teaming) on my NICs and use LACP to improve the performance on that front (a sketch of the teaming command follows at the end of this post).

    Storage: As I mentioned in the previous article, a Synology DS1218+ will be used for iSCSI provisioning. This device has 2 NICs on board which can be bonded into one 2 Gbps interface, giving me a decent throughput and making the disks the most limiting factor in the storage design.

    Domain controllers and extra infrastructure: Server 2012 completely supports running domain controllers virtualized and has no need to actually have a reachable DC when booting… That being said, I need a remote access machine to power on the hosts (I have no need for them running 24/7) and possibly a System Center VMM 2012 box (although Server 2012 is not supported until SP1 :( ). I am undecided on whether to install those boxes separately or as virtual machines…

    Which amounts to… something like this pretty picture! [architecture diagram - not reproduced in this excerpt]

    Sources:
    Microsoft Private Cloud Solutions Repository (en-US): http://social.technet.microsoft.com/wiki/contents/articles/12131.microsoft-private-cloud-solutions-repository-en-us.aspx
    Reference Architecture: http://social.technet.microsoft.com/wiki/contents/articles/3819.reference-architecture-for-private-cloud.aspx
    Private Cloud Reference Model: http://social.technet.microsoft.com/wiki/contents/articles/4399.private-cloud-reference-model.aspx
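    As a concrete illustration of the NIC teaming mentioned above - a sketch only, with hypothetical adapter names, since the post itself doesn't show the commands - an LACP team on Server 2012 can be created from an elevated PowerShell prompt with the built-in NetLbfo cmdlet:

    PS C:\> New-NetLbfoTeam -Name "LabTeam" -TeamMembers "Ethernet","Ethernet 2" -TeamingMode LACP

    Note that the switch ports on the other end must be configured for LACP as well, or the team will not come up.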

    Read the article

  • Ask HTG: How Can I Check the Age of My Windows Installation?

    - by Jason Fitzpatrick
    Curious about when you installed Windows and how long you've been chugging along without a system refresh? Read on as we show you a simple way to see how long in the tooth your Windows installation is.

    Dear How-To Geek,
    It feels like it has been forever since I installed Windows 7 and I'm starting to wonder if some of the performance issues I'm experiencing have something to do with how long ago it was installed. It isn't crashing or anything horrible, mind you, it just feels slower than it used to, and I'm wondering if I should reinstall it to wipe the slate clean. Is there a simple way to determine the original installation date of Windows on its host machine?
    Sincerely,
    Worried in Windows

    Although you only intended to ask one question, you actually asked two. Your direct question is an easy one to answer (how to check the Windows installation date). The indirect question is, however, a little trickier (whether you need to reinstall Windows to get a performance boost). Let's start off with the easy one: how to check your installation date.

    Windows includes a handy little application just for the purpose of pulling up system information like the installation date, among other things. Open the Start Menu and type cmd in the run box (or, alternatively, press WinKey+R to pull up the run dialog and enter the same command). At the command prompt, type:

    systeminfo.exe

    Give the application a moment to run; it takes around 15-20 seconds to gather all the data. You'll most likely need to scroll back up in the console window to find the section at the top that lists operating system stats (see the one-liner at the end of this answer for a way to skip the scrolling). What you care about is Original Install Date. We've been running the machine we tested the command on since August 23, 2009. For the curious, that's one month and a day after the initial public release of Windows 7 (after we were done playing with early test releases and spent a month mucking around in the guts of Windows 7 to report on features and flaws, we ran a new clean installation and kept on trucking).

    Now, you might be asking yourself: Why haven't they reinstalled Windows in all that time? Haven't things slowed down? Haven't they upgraded hardware? The truth of the matter is, in most cases there's no need to completely wipe your computer and start from scratch to resolve issues with Windows, and if you don't bog your system down with unnecessary and poorly written software, things keep humming along. In fact, we even migrated this machine from a traditional mechanical hard drive to a newer solid-state drive back in 2011. Even though we've tested piles of software since then, the machine is still rather clean because 99% of that testing happened in a virtual machine. That's not just a trick for technology bloggers, either; virtualizing is a handy trick for anyone who wants to run a rock-solid base OS and avoid the bog-down-and-then-refresh cycle that can plague a heavily used machine.

    So while it might be the case that you've been running Windows 7 for years and heavy software installation and use has bogged your system down to the point a refresh is in order, we'd strongly suggest reading over the following How-To Geek guides to see if you can't wrangle the machine into shape without a total wipe (and, if you can't, at least you'll be in a better position to keep the refreshed machine light and zippy):

    HTG Explains: Do You Really Need to Regularly Reinstall Windows?
    PC Cleaning Apps Are a Scam: Here's Why (and How to Speed Up Your PC)
    The Best Tips for Speeding Up Your Windows PC
    Beginner Geek: How to Reinstall Windows on Your Computer
    Everything You Need to Know About Refreshing and Resetting Your Windows 8 PC

    Armed with a little knowledge, you too can keep a computer humming along until the next iteration of Windows comes along (and beyond) without the hassle of reinstalling Windows and all your apps.
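    As a small time-saver - a sketch using the standard find filter that ships with Windows, not something from the original answer - you can skip scrolling through the report and pull out just the line you care about:

    C:\> systeminfo | find "Original Install Date"

    systeminfo still takes its 15-20 seconds to gather the data, but only the matching line is printed.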

    Read the article

  • Atheros AR928X wireless connection makes a neighbouring machine drop offline

    - by funicorn
    I have an Acer laptop with an Atheros AR928X wireless card installed, supported by the ath9k driver in the Linux kernel. There are 5 other computers sharing the wireless connection via a TP-Link 150 Mbit/s wireless router. At first I found the network is a little bit slower than it is in Windows 7, which I accepted as normal. However, a very strange thing is that each time I connected to the router and downloaded stuff for a while, one of the computers running Windows 7 on my local network dropped off the router. If I run my laptop under Windows 7, everything is fine. What's even stranger is that, although the network becomes slower, only that one particular computer drops and totally freezes its connection with the router. I'm not willing to conclude it's due to an unhealthy connection from my laptop to the router; however, we have confirmed this more than once, and there is no problem with the network when I'm running Windows 7. I'm extremely confused about what's going on. As a Linux user who has run Ubuntu for over 5 years, I am aware that wireless drivers in Linux are notorious for instability and slow speed. But is it so bad that an unhealthy wireless connection can do damage to another computer on the same local network? I do see a lot of "Tx excessive retries" in the iwconfig output. But how exactly does this happen? Thanks for your help. I guess I have to use this answer box to show the outputs:

    $ sudo iwconfig
    wlan0     IEEE 802.11bgn  ESSID:"TP-LINK111"
              Mode:Managed  Frequency:2.427 GHz  Access Point: E0:05:C5:E8:A9:92
              Bit Rate=121.5 Mb/s   Tx-Power=16 dBm
              Retry long limit:7    RTS thr:off   Fragment thr:off
              Encryption key:off
              Power Management:off
              Link Quality=47/70    Signal level=-63 dBm
              Rx invalid nwid:0     Rx invalid crypt:0   Rx invalid frag:0
              Tx excessive retries:2   Invalid misc:23   Missed beacon:0

    To show what's wrong with the wireless connection, I ran iwconfig again within 3 minutes, during which time I hardly did anything and the network was close to idle:

    $ sudo iwconfig
    wlan0     IEEE 802.11bgn  ESSID:"TP-LINK111"
              Mode:Managed  Frequency:2.427 GHz  Access Point: E0:05:C5:E8:A9:92
              Bit Rate=121.5 Mb/s   Tx-Power=16 dBm
              Retry long limit:7    RTS thr:off   Fragment thr:off
              Encryption key:off
              Power Management:off
              Link Quality=48/70    Signal level=-62 dBm
              Rx invalid nwid:0     Rx invalid crypt:0   Rx invalid frag:0
              Tx excessive retries:9   Invalid misc:28   Missed beacon:0

    You can see Tx excessive retries and Invalid misc increase very quickly.

    $ sudo iwlist wlan0 modu
    wlan0     unknown modulation information.

    $ sudo iwlist wlan0 channel
    wlan0     13 channels in total; available frequencies :
              Channel 01 : 2.412 GHz
              Channel 02 : 2.417 GHz
              Channel 03 : 2.422 GHz
              Channel 04 : 2.427 GHz
              Channel 05 : 2.432 GHz
              Channel 06 : 2.437 GHz
              Channel 07 : 2.442 GHz
              Channel 08 : 2.447 GHz
              Channel 09 : 2.452 GHz
              Channel 10 : 2.457 GHz
              Channel 11 : 2.462 GHz
              Channel 12 : 2.467 GHz
              Channel 13 : 2.472 GHz
              Current Frequency:2.427 GHz (Channel 4)
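    One low-risk experiment that has helped with ath9k flakiness of this era - offered as a guess rather than a confirmed fix for the AR928X - is disabling the card's hardware crypto so that encryption is handled in software, via a module option:

    $ echo "options ath9k nohwcrypt=1" | sudo tee /etc/modprobe.d/ath9k.conf
    $ sudo modprobe -r ath9k && sudo modprobe ath9k

    (Reloading the module drops the connection for a moment; a reboot does the same job.) If the Tx excessive retries counter stops climbing afterwards, that points at the driver/card side rather than at the router.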

    Read the article

  • Media server...serving files...............for a limited time only

    - by Craig
    I'm new to Ubuntu and am seeking help with a media server I have built. I have a couple of HTPCs in my house running XBMC. I wanted to build one for the family room working double duty as an HTPC and media server, to share movies, TV shows, music, etc. on my Windows network. So, using some spare old parts I had lying around, I decided to go CRAZY and build my first Linux box. I used Ubuntu because it seemed to be the most user-friendly variant, especially for people who are new to Linux. I had to do a few things to get the media files shared properly on my network:

    1. Made sure my two media drives auto-mount every time I boot the computer by editing the "fstab" file – "sudo nano /etc/fstab"
    2. Installed Samba – "sudo apt-get install samba"
    3. Set a password for Samba – "sudo smbpasswd –a USERNAME"
    4. Edited the Samba configuration file to make sure the computer was in my network's workgroup – "sudo nano /etc/samba/smb.conf"
    5. In the file manager (not sure if that's the right name for it), right-clicked my media folders and set the sharing and permissions. The sharing was done without guest access, and permissions were set to: Owner, Group, and Others – Access: Create and Delete Files.
    6. Adjusted the Power Management settings to never put the system into sleep mode.

    I checked to see if I had access to the files from a Windows 7 machine and I did (Woo Hoo!). But when I tried to play any of my video files from the Windows machine (using VLC media player), they would only play for about 2-5 minutes and then stop with an error message saying that the file could not be accessed (Booo...). I tried playing some files through XBMC running in Windows and they worked a bit longer (about 10-15 mins), but they also stopped playing. I installed the Linux version of XBMC on the server and played the files locally with no problems. It doesn't seem to be an issue with the files themselves; it seems to be a sharing problem on my network.

    So my questions to the Ubuntu gurus out there are:

    - Did I miss adding/editing something in the Samba configuration file?
    - Did I use the right method to share my media files (file manager vs. using the terminal)?
    - Is it possible for the computer to still go to sleep without the screen going black (does that even make sense?)
    - Are there any special settings in Ubuntu that I should be using since this computer is a media server (is there a media server mode?...!...?)

    Any help on this matter would be greatly appreciated. Thanks
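    For what it's worth - a hedged sketch with a hypothetical path and user, not a diagnosis of the drop-outs - defining the share explicitly in /etc/samba/smb.conf, rather than through the file manager, makes the settings easy to inspect and adjust:

    [media]
        path = /media/storage
        browseable = yes
        read only = yes
        guest ok = no
        valid users = USERNAME

    $ testparm                      # checks smb.conf for syntax errors
    $ sudo service smbd restart     # applies the change

    On the sleep question: desktop power settings do not govern disk standby, so it is also worth checking whether the media drives themselves spin down mid-stream (hdparm's -B and -S options control that per drive).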

    Read the article

  • Using ASP.NET Membership Provider with an ACL

    - by geekrutherford
    Up until recently one of my applications has used the membership provider within ASP.NET exclusively. However, it has been proposed that while the currently defined roles are beneficial, security needs to be more granular, restricting both access to certain pages and functionality present within a given page.

    Unfortunately, the role-based security ASP.NET gives you out of the box falls down in this area. This is not due to a lack of foresight by Microsoft, but rather it was simply not designed for implementing both role-based security and any inherent ACL you may define within those roles. Mind you, some would say an ACL is independent of the role to which a user belongs and is assigned to the user directly.

    The application mentioned here has its own User object (which encapsulates the membership provider user object as a property) and its own SQL Server table to store extended information not present in the aspnet_users table. While I could have modified the aspnet membership schema to suit the application's needs, it seemed smarter to simply create a separate table with a foreign key back to the aspnet_users table.

    Since I have a separate object to store extended user information, I simply created an ACL object and exposed it as a property of my user object.

    This is all well and good, but it does not help with the SiteMapProvider and restricting access at the page level based on the user's ACL.

    The straightforward answer would be to develop some code within the databound event for the menu that checks the page title and has hardcoded logic dictating that a user must have certain permissions turned on. The problem with this approach is that it's HARDCODED!!! If you need to change access to a page you'd need to do a build and go through your normal deployment process....ugh!!!

    An alternative method, albeit not perfect, is to utilize the resourceKey property on the SiteMapNodes in the SiteMap file with the name of the permission required to view the page. Within the databound event for your menu, you iterate the SiteMapNodes in the menu's SiteMapProvider, looking for a match at the page level based on title. When a match is detected, you switch on the SiteMapNode's resourceKey (the name of the required ACL permission). The case for the resourceKey ensures the user's ACL permission is turned on and voila!!!

    This is notably not perfect in that it uses the resourceKey in a manner other than intended. Since the application is not localized, using it in the manner described is not an issue.

    Below is a sample SiteMap file with the resourceKey used as the ACL permission identifier, followed by the ItemDataBound event (this application uses the Telerik Menu control):
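    The two code listings did not survive this excerpt. As a rough reconstruction of the first one - the page names and permission keys here are hypothetical, inferred from the description above - the Web.sitemap might look like:

    <?xml version="1.0" encoding="utf-8" ?>
    <siteMap xmlns="http://schemas.microsoft.com/AspNet/SiteMap-File-1.0">
      <siteMapNode url="~/Default.aspx" title="Home">
        <!-- resourceKey repurposed as the name of the ACL permission
             required to view the page (the application is not localized) -->
        <siteMapNode url="~/Reports.aspx" title="Reports" resourceKey="CanViewReports" />
        <siteMapNode url="~/Admin.aspx" title="Administration" resourceKey="CanAdminister" />
      </siteMapNode>
    </siteMap>

    In the menu's ItemDataBound event, each node's resourceKey is then matched against the current user's ACL, and the corresponding menu item is hidden or disabled when the permission is not granted.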

    Read the article

< Previous Page | 569 570 571 572 573 574 575 576 577 578 579 580  | Next Page >