Search Results

Search found 18794 results on 752 pages for 'graphics design'.


  • dual display breaks unity

    - by dougleduck
    I find that most of the time, when I plug in my Dell 19" monitor with an HDMI lead, Unity can't decide on the configuration; it will switch between different resolutions and setups every 2-30 seconds until I either press Ctrl-Alt-Backspace or Unity crashes and I have to hard boot. Alternatively, all the window decorations and Unity get killed. My setup is 11.10 64-bit, Dell Vostro 3550, AMD Radeon HD 6630M 1GB, Intel Core i5-2520M processor 2.50 GHz with integrated graphics. Currently I don't think the Radeon graphics card is working properly, but I'm not sure.

    Read the article

  • Why is Windows 7's overall performance better than Ubuntu 11.10's?

    - by user37805
    I have an i7 2600 processor, 8 GB DDR3 RAM, and an nVidia GTX 570. Ubuntu takes 45-50 seconds to boot and 32-35 seconds to power off, while Windows 7 boots in 20-25 seconds and shuts down in 10 seconds. Both OSes have autologin enabled, in dual boot. Ubuntu is slow with preload too, doesn't show any boot splash after installing drivers, and didn't recognize my nVidia graphics card in Jockey GTK; I had to add the X-Swat repository, and that didn't work. I installed the proprietary drivers through the terminal (nvidia-common, nvidia-settings) in order to have 3D acceleration, but it doesn't make any difference to the speed. I also have a Pentium 4 PC, and Ubuntu 11.10 is way faster than Windows 7 or XP on it, also with an nVidia graphics card and preload. Here is my boot script: http://paste.ubuntu.com/924890/ (sorry, some words are in Spanish because my Ubuntu is in Spanish). I'm not using Wubi, Ubuntu has its own partition, 64-bit, and Matlab 2011 has very low performance compared to the Windows version.

    Read the article

  • Developing a cloud based app

    - by user134897
    I am a company owner that has developed a cloud based app. My code writer has told me how good he is more than once; well, better stated, he did a good job telling me he was better than everyone else in my rather small community. In the last 18 months I have spent nearly 160,000.00 dollars trying to get this company to the "making money" stage. I am now nearly broke, sitting on the edge of a brilliant marketing plan to launch a much needed cloud based app. We did launch our app last year (late 2013), and the feedback was amazing from the users. One user that signed up to use the free app stated that we needed to call him the moment our company goes public because he wants to be the first to buy stock. Now, here's my problem. We did not originally set out to develop a freemium app, we just sort of ended up there by the natural progression of the app. So, now I have an app that really needs to be scrapped and re-built. Although I do feel my code writer has displayed some brilliance in what he has done, he was extremely weak on graphics, and every time we speak he tells me there is a newer, better way to code that he is trying to learn. So, here's the million dollar question. How do I find code writers that already know the newest, best ways to write code? Or maybe better asked, what is the newest, best code writing technique? Second, is it even possible to find code writers that are good at graphics? In short, I am nearly broke and need to start over, but I do not know where to find people qualified to write it well the first time around and who have good graphics skills. I am trying to build a team of writers instead of just one person. Maybe three good at code and two good at graphics, but I am clueless as to what criteria I should use to determine if I am building the right team members. Please help, I am sure you can tell I am fairly lost by my continued rambling.

    Read the article

  • What is the relationship between OpenGL, GLX, DRI, and Mesa3D?

    - by user65308
    I am starting out doing some low-level 3D programming in Linux. I have a lot of experience using the higher level graphics API OpenInventor. I know it is not strictly necessary to be aware of how all these things fit together but I'm just curious. I know OpenGL is just a standard for graphics applications. Mesa3D seems to be an open source implementation of this standard. So where do GLX and DRI fit? Digging around on Wikipedia and all these websites, I've yet to find an explanation of exactly how it all goes together. Where does hardware acceleration happen? What do proprietary drivers have to do with this? Thanks!

    Read the article

  • Desktop in Kubuntu lacks bars and almost everything else after trying to apply a Plymouth fix for proprietary drivers

    - by Michcioperz
    I'm running Kubuntu 13.10, and my computer's graphics card is an Nvidia GT 520. I tried to fix the Plymouth problem with the proprietary graphics driver using an old script I found on the internet (it was coded for Natty; now that I think about it, I shouldn't have done it). I ran that script and rebooted my computer. The first incorrect behaviour I noticed was that Plymouth displayed its text splash on only a quarter of the screen, though when I pressed the Esc key the logs displayed correctly. The main problem appeared when I logged in. All my desktop's windows lack their title bars, can't be closed with Alt-F4, and don't appear on the task bar. I tried fixing it by reinstalling GRUB a few times and purging /etc/grub.d and /boot/grub, but only the text splash issue was fixed. How can I fix this?

    Read the article

  • Can you run OpenGL 2.0 on modern machines?

    - by thePalindrome
    I'm looking to get into 3D with OpenGL, using SDL to help with other stuff. I found plenty of good tutorials (Lazy Foo is a big one), but "Learning Modern 3D Graphics Programming" uses a newer version of OpenGL that I'm not able to run! I opened up OpenGL Extensions Viewer, and I can only run OpenGL 2.0, whereas most tutorials target higher versions. I ask because I've heard that just about everything in 2.* got deprecated in the newer versions, so I'm worried that my code might not work. I'm looking at a few other tutorials, but I'm so used to SDL that those just confuse me... So should I bother trying to do graphics programming now, or should I just wait until I can upgrade my computer?

    Read the article

  • Android map performance with > 800 overlays of KML data

    - by span
    I have a shape file which I have converted to a KML file that I wish to read coordinates from and then draw paths between the coordinates on a MapView. With the help of this great post: How to draw a path on a map using kml file? I have been able to read the KML into an ArrayList of "Placemarks". This great blog post then showed how to take a list of GeoPoints and draw a path: http://djsolid.net/blog/android---draw-a-path-array-of-points-in-mapview However, the example in the above post only draws one path between some points, and since I have many more paths than that I am running into some performance problems. I'm currently adding a new RouteOverlay for each of the separate paths. This results in me having over 800 overlays when they have all been added. This has a performance hit and I would love some input on what I can do to improve it. Here are some options I have considered: 1) Try to add all the points to a List which can then be passed into a class that will extend Overlay. In that new class perhaps it would be possible to add and draw the paths in a single Overlay layer? I'm not sure how to implement this though, since the paths are not always intersecting and they have different start and end points (a rough sketch of this idea follows the code below). At the moment I'm adding each path, which has several points, to its own list and then I add that to an Overlay. That results in over 700 overlays... 2) Simplify the KML or SHP. Instead of having over 700 different paths, perhaps there is some way to merge them into 100 paths or fewer? Since a lot of paths intersect at some point, it should be possible to modify the original SHP file so that it merges all intersections. Since I have never worked with these kinds of files before, I have not been able to find a way to do this in QGIS. If someone knows how to do this I would love some input on that. Here is a link to the group of shape files if you are interested: http://danielkvist.net/cprg_bef_cbana_polyline.shp http://danielkvist.net/cprg_bef_cbana_polyline.shx http://danielkvist.net/cprg_bef_cbana_polyline.dbf http://danielkvist.net/cprg_bef_cbana_polyline.prj Anyway, here is the code I'm using to add the Overlays. Many thanks in advance.
    RoutePathOverlay.java

    package net.danielkvist;

    import java.util.List;

    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.graphics.Paint;
    import android.graphics.Path;
    import android.graphics.Point;
    import android.graphics.RectF;

    import com.google.android.maps.GeoPoint;
    import com.google.android.maps.MapView;
    import com.google.android.maps.Overlay;
    import com.google.android.maps.Projection;

    public class RoutePathOverlay extends Overlay {

        private int _pathColor;
        private final List<GeoPoint> _points;
        private boolean _drawStartEnd;

        public RoutePathOverlay(List<GeoPoint> points) {
            this(points, Color.RED, false);
        }

        public RoutePathOverlay(List<GeoPoint> points, int pathColor, boolean drawStartEnd) {
            _points = points;
            _pathColor = pathColor;
            _drawStartEnd = drawStartEnd;
        }

        private void drawOval(Canvas canvas, Paint paint, Point point) {
            Paint ovalPaint = new Paint(paint);
            ovalPaint.setStyle(Paint.Style.FILL_AND_STROKE);
            ovalPaint.setStrokeWidth(2);
            int _radius = 6;
            RectF oval = new RectF(point.x - _radius, point.y - _radius, point.x + _radius, point.y + _radius);
            canvas.drawOval(oval, ovalPaint);
        }

        public boolean draw(Canvas canvas, MapView mapView, boolean shadow, long when) {
            Projection projection = mapView.getProjection();
            if (shadow == false && _points != null) {
                Point startPoint = null, endPoint = null;
                Path path = new Path();
                // We are creating the path
                for (int i = 0; i < _points.size(); i++) {
                    GeoPoint gPointA = _points.get(i);
                    Point pointA = new Point();
                    projection.toPixels(gPointA, pointA);
                    if (i == 0) { // This is the start point
                        startPoint = pointA;
                        path.moveTo(pointA.x, pointA.y);
                    } else {
                        if (i == _points.size() - 1) // This is the end point
                            endPoint = pointA;
                        path.lineTo(pointA.x, pointA.y);
                    }
                }

                Paint paint = new Paint();
                paint.setAntiAlias(true);
                paint.setColor(_pathColor);
                paint.setStyle(Paint.Style.STROKE);
                paint.setStrokeWidth(3);
                paint.setAlpha(90);
                if (getDrawStartEnd()) {
                    if (startPoint != null) {
                        drawOval(canvas, paint, startPoint);
                    }
                    if (endPoint != null) {
                        drawOval(canvas, paint, endPoint);
                    }
                }
                if (!path.isEmpty())
                    canvas.drawPath(path, paint);
            }
            return super.draw(canvas, mapView, shadow, when);
        }

        public boolean getDrawStartEnd() {
            return _drawStartEnd;
        }

        public void setDrawStartEnd(boolean markStartEnd) {
            _drawStartEnd = markStartEnd;
        }
    }

    MyMapActivity

    package net.danielkvist;

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.Iterator;

    import android.graphics.Color;
    import android.os.Bundle;
    import android.util.Log;

    import com.google.android.maps.GeoPoint;
    import com.google.android.maps.MapActivity;
    import com.google.android.maps.MapView;

    public class MyMapActivity extends MapActivity {

        /** Called when the activity is first created. */
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);
            MapView mapView = (MapView) findViewById(R.id.mapview);
            mapView.setBuiltInZoomControls(true);
            String url = "http://danielkvist.net/cprg_bef_cbana_polyline_simp1600.kml";
            NavigationDataSet set = MapService.getNavigationDataSet(url);
            drawPath(set, Color.parseColor("#6C8715"), mapView);
        }

        /**
         * Does the actual drawing of the route, based on the geo points provided in
         * the nav set
         *
         * @param navSet
         *            Navigation set bean that holds the route information, incl. geo pos
         * @param color
         *            Color in which to draw the lines
         * @param mMapView01
         *            Map view to draw onto
         */
        public void drawPath(NavigationDataSet navSet, int color, MapView mMapView01) {
            ArrayList<GeoPoint> geoPoints = new ArrayList<GeoPoint>();
            Collection overlaysToAddAgain = new ArrayList();
            for (Iterator iter = mMapView01.getOverlays().iterator(); iter.hasNext();) {
                Object o = iter.next();
                Log.d(BikeApp.APP, "overlay type: " + o.getClass().getName());
                if (!RouteOverlay.class.getName().equals(o.getClass().getName())) {
                    overlaysToAddAgain.add(o);
                }
            }
            mMapView01.getOverlays().clear();
            mMapView01.getOverlays().addAll(overlaysToAddAgain);

            int totalNumberOfOverlaysAdded = 0;
            for (Placemark placemark : navSet.getPlacemarks()) {
                String path = placemark.getCoordinates();
                if (path != null && path.trim().length() > 0) {
                    String[] pairs = path.trim().split(" ");
                    String[] lngLat = pairs[0].split(","); // lngLat[0]=longitude lngLat[1]=latitude lngLat[2]=height
                    try {
                        if (lngLat.length > 1 && !lngLat[0].equals("") && !lngLat[1].equals("")) {
                            GeoPoint startGP = new GeoPoint(
                                    (int) (Double.parseDouble(lngLat[1]) * 1E6),
                                    (int) (Double.parseDouble(lngLat[0]) * 1E6));
                            GeoPoint gp1;
                            GeoPoint gp2 = startGP;
                            geoPoints = new ArrayList<GeoPoint>();
                            geoPoints.add(startGP);
                            for (int i = 1; i < pairs.length; i++) {
                                lngLat = pairs[i].split(",");
                                gp1 = gp2;
                                if (lngLat.length >= 2 && gp1.getLatitudeE6() > 0 && gp1.getLongitudeE6() > 0
                                        && gp2.getLatitudeE6() > 0 && gp2.getLongitudeE6() > 0) {
                                    // for GeoPoint, first:latitude, second:longitude
                                    gp2 = new GeoPoint(
                                            (int) (Double.parseDouble(lngLat[1]) * 1E6),
                                            (int) (Double.parseDouble(lngLat[0]) * 1E6));
                                    if (gp2.getLatitudeE6() != 22200000) {
                                        geoPoints.add(gp2);
                                    }
                                }
                            }
                            totalNumberOfOverlaysAdded++;
                            mMapView01.getOverlays().add(new RoutePathOverlay(geoPoints));
                        }
                    } catch (NumberFormatException e) {
                        Log.e(BikeApp.APP, "Cannot draw route.", e);
                    }
                }
            }
            Log.d(BikeApp.APP, "Total overlays: " + totalNumberOfOverlaysAdded);
            mMapView01.setEnabled(true);
        }

        @Override
        protected boolean isRouteDisplayed() {
            // TODO Auto-generated method stub
            return false;
        }
    }

    Edit: There are of course some more files I'm using but that I have not posted. You can download the complete Eclipse project here: http://danielkvist.net/se.zip
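    One way to sketch the first option above (drawing all of the paths from one single Overlay instead of 700+ separate ones) is shown below. This is a rough, hypothetical sketch rather than a drop-in fix: the class name MultiRoutePathOverlay is invented for illustration, and it only reuses the android.graphics and com.google.android.maps classes already imported in the code above. Because each route starts with its own moveTo(), the routes do not need to share start or end points or intersect.

    package net.danielkvist;

    import java.util.List;

    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.graphics.Paint;
    import android.graphics.Path;
    import android.graphics.Point;

    import com.google.android.maps.GeoPoint;
    import com.google.android.maps.MapView;
    import com.google.android.maps.Overlay;
    import com.google.android.maps.Projection;

    // Hypothetical single overlay that draws every route in one draw() pass.
    public class MultiRoutePathOverlay extends Overlay {

        private final List<List<GeoPoint>> _routes; // one inner list per path
        private final Paint _paint;

        public MultiRoutePathOverlay(List<List<GeoPoint>> routes) {
            _routes = routes;
            _paint = new Paint();
            _paint.setAntiAlias(true);
            _paint.setColor(Color.RED);
            _paint.setStyle(Paint.Style.STROKE);
            _paint.setStrokeWidth(3);
            _paint.setAlpha(90);
        }

        @Override
        public void draw(Canvas canvas, MapView mapView, boolean shadow) {
            if (shadow || _routes == null) {
                return;
            }
            Projection projection = mapView.getProjection();
            Point point = new Point();
            Path path = new Path(); // all routes are collected into one Path
            for (List<GeoPoint> route : _routes) {
                for (int i = 0; i < route.size(); i++) {
                    projection.toPixels(route.get(i), point);
                    if (i == 0) {
                        path.moveTo(point.x, point.y); // start a new sub-path per route
                    } else {
                        path.lineTo(point.x, point.y);
                    }
                }
            }
            if (!path.isEmpty()) {
                canvas.drawPath(path, _paint); // one drawPath call for all routes
            }
        }
    }

    In drawPath() you would then collect each placemark's geoPoints into a single List<List<GeoPoint>> and add one MultiRoutePathOverlay to mMapView01.getOverlays() at the end, instead of adding a RoutePathOverlay per path.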

    Read the article

  • SQL SERVER – Top 10 “Ease of Use” Features of expressor Studio

    - by pinaldave
    expressor Studio is a new data integration platform that is being marketed as the easiest-to-use tool of its kind.  But “easy to use” can be a relative term – an expert can find a very complex system easy, but a beginner might be stumped.  A recent article online discussed exactly what makes expressor Studio so easy to use, and here is my view on this subject.
    Simple Installation – There is one pop-up for one .exe file, and nothing more.  You can’t get much simpler than this.  It is also in the familiar Windows design, so there should be no surprises.
    No 3rd Party Software Dependency – Have you ever tried to download software, only to be slowed down by the need to download a compatible system to run the program, and another to read the user manual, and so on?  expressor Studio was designed specifically to avoid this problem.
    Microsoft Office-Like Ribbon Bar and Menus – As mentioned before, everything is in the familiar Windows design, from the pop-up windows to the toolbars and menus.  There should be no learning curve for using this program, or even for simply navigating around a new system.
    General Development Design Interface – This software has been designed to be simple and straightforward.  Projects can be arranged in a simple “tree” design that is totally collapsible and can easily be added to or “trimmed” with a click of a button.  It is meant to be logical and easy to follow.
    Integrated Contextual Help – This is a fancy way of saying that you can practically yell “help!” if you do get stuck on something.  Solving a problem is as simple as highlighting and hitting F1 for contextual help.
    Visual Indicators and Messages – Wouldn’t it be nice to know exactly where something has gone wrong before trying to complete a project?  expressor Studio has a built-in system to catch mistakes and highlight them in a bright color, flash a warning message, and even disable functions before you can continue – and possibly lose hours of work.
    Property Inputs and Selectors – Every operator will have a list of requirements that need to be filled in.  But don’t worry; you won’t have to make stuff up to fill in the boxes.  Each one will have a drop-down menu with options to choose from – but not too many as to be confusing.
    Connection Wizards – Configuring connections can be the hardest part of a project.  But not with the expressor Studio connection wizard.  A familiar, Windows-style menu will walk you through connections so quickly you’ll forget what trouble it used to be.
    Templates – With large, complex projects, a majority of your time is often spent simply setting up the files and inputting data.  But expressor Studio allows you to create one file and then save it as a template, saving you hours of boring data input.
    Extension Manager – Let’s say that you need a little more functionality or some new features in your program. A lot of software requires you to download complex plug-ins that need to be decompressed and installed.  However, expressor Studio has extended its system with an Extension Manager, which allows for quick and easy installation of the functionality you need, without the need to download and decompress.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

  • Kanban Tools Review

    - by GeekAgilistMercenary
    The first session on Sunday was on collaboration and why it is so hard, and the session that followed, which was a perfect follow-up, was on Kanban.  In that second session two online SaaS-style tools were mentioned: AgileZen and LeanKit.  I decided right then and there that I would throw together some first impressions and set up some sample projects.  I did this by setting up an account in each and creating the projects.
    AgileZen
    Account Creation – Setting up the initial account required an e-mail verification, which is understandable.  Within a few seconds it was mailed out and I was logged in.
    Setting Up the Kanban Board – The initial setup of the board was pretty easy.  I maybe clicked around an extra few times, but overall everything I needed to use the tool was immediately available.  The representation of everything was very similar to what one expects from a real Kanban Board too.  This is a HUGE plus, especially if a team is smart and places this tool in a centrally viewable area to allow for visibility. Each of the board items is just like a Post-it, being blue, grey, green, pink, or one of a few other colors.  Dragging them onto each swim lane on the board was flawless, making changes throughout the work super easy and intuitive. The other thing I really liked about AgileZen is that the Kanban Board had the swim lanes set up immediately.  One can change them, but when you know you immediately need a Ready Lane, a Working Lane, and a Complete Lane, it is nice to just have them right in front of you in the interface.  In addition, the Backlog is simply a little tab on the left-hand side.  This is perfect for the Backlog Queue: out of the way, with the focus on the primary items. Once I got the items onto the board I was easily able to get back to the actual work at hand versus playing around with the tool.  The fact that it was so easy to use, with a fast and easy UX and overall a great layout, put me back to work on things I needed to do versus sitting and playing with the tool.  That, in the end, is the key to using these tools.
    LeanKit Kanban
    Account Creation – Setting up the account got me straight into the online tool, which I thought was pretty cool.
    Setting Up the Kanban Board – Setting up the Kanban Board within LeanKit was a bit of trouble.  There were multiple UX issues in regard to process and intuitiveness.  LeanKit basically forces one to design the whole board first, making no assumptions about how the board should look.  The swim lanes, in my humble opinion, should be set up immediately, without any manipulation, with the most common lanes: ready, working, and complete. The other UX hiccup I had a problem with is that as soon as I managed to get the swim lanes into place, I wanted to remove the redundant Backlog Lane.  The Backlog Lane, or Backlog Bucket, was one I had accidentally added as a regular lane.  Then on top of that I screwed up and added an item inside the lane, which then prevented me from deleting the lane.  I had to go back out of the lane manipulation, remove the item, and then remove the excess lane.
    Summary – LeanKit wasn't a bad interface, it just wasn't as good as AgileZen.  The AgileZen interface was simply better UX design overall.  AgileZen also presents a much better graphical user interface design altogether.  It is much closer to what the Kanban Board would look like if it were a physical Kanban Board.
    Since one of the HUGE reasons for Kanban is to increase visibility, the fact that the design is similar to a real Kanban Board is actually a pretty big deal. This is an image (click for larger) that shows the two Kanban Boards side by side.  The one on the left is AgileZen and the one on the right is LeanKit. Original Entry

    Read the article

  • Windows Azure Use Case: Agility

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: Agility in this context is defined as the ability to quickly develop and deploy an application. In theory, the speed at which your organization can develop and deploy an application on available hardware is identical to what you could deploy in a distributed environment. But in practice, this is not always the case. Having an option to use a distributed environment can be much faster for the deployment and even the development process. Implementation: When an organization designs code, they are essentially becoming a Software-as-a-Service (SaaS) provider to their own organization. To do that, the IT operations team becomes the Infrastructure-as-a-Service (IaaS) to the development teams. From there, the software is developed and deployed using an Application Lifecycle Management (ALM) process. A simplified view of an ALM process is as follows: Requirements Analysis Design and Development Implementation Testing Deployment to Production Maintenance In an on-premise environment, this often equates to the following process map: Requirements Business requirements formed by Business Analysts, Developers and Data Professionals. Analysis Feasibility studies, including physical plant, security, manpower and other resources. Request is placed on the work task list if approved. Design and Development Code written according to organization’s chosen methodology, either on-premise or to multiple development teams on and off premise. Implementation Code checked into main branch. Code forked as needed. Testing Code deployed to on-premise Testing servers. If no server capacity available, more resources procured through standard budgeting and ordering processes. Manual and automated functional, load, security, etc. performed. Deployment to Production Server team involved to select platform and environments with available capacity. If no server capacity available, standard budgeting and procurement process followed. If no server capacity available, systems built, configured and put under standard organizational IT control. Systems configured for proper operating systems, patches, security and virus scans. System maintenance, HA/DR, backups and recovery plans configured and put into place. Maintenance Code changes evaluated and altered according to need. In a distributed computing environment like Windows Azure, the process maps a bit differently: Requirements Business requirements formed by Business Analysts, Developers and Data Professionals. Analysis Feasibility studies, including budget, security, manpower and other resources. Request is placed on the work task list if approved. Design and Development Code written according to organization’s chosen methodology, either on-premise or to multiple development teams on and off premise. Implementation Code checked into main branch. Code forked as needed. Testing Code deployed to Azure. Manual and automated functional, load, security, etc. performed. Deployment to Production Code deployed to Azure. Point in time backup and recovery plans configured and put into place.(HA/DR and automated backups already present in Azure fabric) Maintenance Code changes evaluated and altered according to need. This means that several steps can be removed or expedited. 
    It also means that the business function requesting the application can be held directly responsible for the funding of that request, speeding the process further since the IT budgeting process may not be involved in the Azure scenario. An additional benefit is the “Azure Marketplace”. In effect, this becomes an app store for enterprises to select pre-defined code and data applications to mesh or bolt in to their current code, possibly saving development time. Resources: Whitepaper download - What is ALM?  http://go.microsoft.com/?linkid=9743693  Whitepaper download - ALM and Business Strategy: http://go.microsoft.com/?linkid=9743690  LiveMeeting Recording on ALM and Windows Azure (registration required, but free): http://www.microsoft.com/uk/msdn/visualstudio/contact-us.aspx?sbj=Developing with Windows Azure (ALM perspective) - 10:00-11:00 - 19th Jan 2011

    Read the article

  • InSync12 and Australia Visits: UX is Global, Regional, Everywhere!

    - by ultan o'broin
    I attended the Australian Oracle User Group (AUSOUG) and Quest International User Group's InSync12 event in Melbourne, Australia: the user group conference for Oracle products in the ANZ region. I demoed Oracle Fusion Applications and then presented how Oracle crafted the world-class Fusion Apps user experience (UX). I explained the Oracle user experience design pattern strategy of uptake for all apps, not just Fusion, and what our UX pattern externalization strategy means for customers, partners, and ADF developers. A great conference with lots of energy; the InSync12 highlights for me were Oracle's Senior Vice President Cliff Godwin’s fast-moving Oracle E-Business Suite (EBS) roadshow with the killer Oracle Endeca user experience uptake, and Oracle ADF product outreachmeister Chris Muir’s (@chriscmuir) session on the Oracle ADF Mobile solution and his hands-on mobile app development, showing how existing ADF/JDev skills can build a secure, code-once-deploy-to-many-devices hybrid app solution in minutes. Cliff Godwin shows off the Oracle Endeca integration with Oracle E-Business Suite. Chris Muir talked the talk and then walked the walk with Oracle ADF Mobile. Applications UX was mixing it up with the crowd at InSync12 too, showing off cool mobile UX solutions, gathering data for future innovations, and engaging with EBS, JD Edwards, and PeopleSoft apps customers and partners. User conferences such as InSync12 are an important part of our Oracle Applications UX user-centered design process, giving real apps users the opportunity to make real inputs and a way for us to watch and to listen to their needs and wants and get views on current and emerging UX too. Eric Stilan (@icondaddy) of Applications UX uses an iPad to gather feedback on the latest UX designs from conference attendees. While in Melbourne, I also visited impressive Oracle partner Callista for a major ADF and UX pow-wow, and was the, er, star of a very proactive event hosted by another partner, Park Lane Information Technology (coordinated by Bambi Price (@bambiprice) of ODTUG), where I explained what UX is about, and how partners and customers can engage, participate, and deploy that Applications UX scientific insight to advantage for their entire business. I also paired up with Oracle Australia in Sydney to visit key customers while there, and back at Oracle in Melbourne I spoke with sales consultants and account managers about regional opportunities and UX strategy, and came away with an understanding of what makes the Oracle market tick in Australia. Mobile worker solution development and user experience is hot news in Australia, and this was a great opportunity to team up with Chris Muir and show how the alignment of the twin stars of UX design patterns and ADF technology enables developers to make great-looking, usable apps that really sparkle. Our UX design patterns--or functional (UI) patterns, to use the developer world language--mean that developers now have not only a great tool set to build apps on Oracle ADF/FMW but also proven, tested usability solutions to solve common problems that they can apply in the IDE too. In all, a whirlwind UX visit, packed with events and delivery opportunities, and all too short a time in the wonderful city of Melbourne. I need to get back there soon! For those who need a reminder, there's a website explaining how to get involved with, and participate in, Applications User Experience (including the Oracle Usability Advisory Board) events and programs.
Thank you to AUSOUG, Quest, InSync, Callista, Park Lane IT, everyone at Oracle Australia, Chris Muir, and all the other people who came together to make this a productive visit. Stay tuned for more UX developments and engagements in the region on the Oracle VoX blog and Usable Apps website too!

    Read the article

  • Countdown of Top 10 Reasons to Never Ever Use a Pie Chart

    - by Tony Wolfram
    Pie charts are evil. They represent much of what is wrong with the poor design of many websites and software applications. They're also ineffective, misleading, and inaccurate. Using a pie chart as your graph of choice to visually display important statistics and information demonstrates either a lack of knowledge, laziness, or poor design skills. Figure 1: A floating, tilted, 3D pie chart with a shadow trying (poorly) to show usage statistics within a graphics application.   Of course, pie charts in and of themselves are not evil. This blog is really about designers making poor decisions for all the wrong reasons. In order for a pie chart to appear on a web page, somebody chose it over the other alternatives, and probably thought they were doing the right thing. They weren't. Using a pie chart is almost always a bad design decision. Figure 2: Pie chart from an Oracle Reports User Guide.   A pie chart does not do the job of effectively displaying information in an elegant visual form.  Being circular, pie charts use up too much space while not allowing their labels to line up. Bar charts, line charts, and tables do a much better job. Expert designers, statisticians, and business analysts have documented their many failings, and strongly urge software and report designers not to use them. It's obvious to them that the pie chart has too many inherent defects to ever be used effectively. Figure 3: Demonstration of how comparing data between multiple pie charts is difficult.   Yet pie charts are still used frequently in today's software applications, financial reports, and websites, often on the opening page as a symbol of how the data inside is represented. In an attempt to get a flashy, colorful graphic to break up boring text, designers will often settle for a pie chart that looks like Pac-Man, a colored spinning wheel, or a 3D floating alien spaceship.     Figure 4: Best use of a pie chart I've found yet.   Why is the pie chart so popular? Through its constant use and iconic representation as the classic chart, the idea persists that it must be a good choice, since everyone else is still using it. Like a virus or an urban legend, no amount of vaccine or debunking will slow down the use of pie charts, which seem to be resistant to logic and common sense. Even the new iPad from Apple showcases the pie chart as one of its options.     Figure 5: Screen shot of the new iPad showcasing pie charts. Regardless of the futility of trying to rid the planet of this often-used poor design choice, I now present to you my top 10 reasons why you should never, ever use a pie chart again.
    Number 10 - Pie Charts Just Don't Work When Comparing Data
    Number 9 - You Have A Better Option: The Sorted Horizontal Bar Chart
    Number 8 - The Pie Chart is Always Round
    Number 7 - Some Genius Will Make It 3D
    Number 6 - Legends and Labels are Hard to Align and Read
    Number 5 - Nobody Has Ever Made a Critical Decision Using a Pie Chart
    Number 4 - It Doesn't Scale Well to More Than 2 Items
    Number 3 - A Pie Chart Causes Distortions and Errors
    Number 2 - Everyone Else Uses Them: Debunking the "Urban Legend" of Pie Charts
    Number 1 - Pie Charts Make You Look Stupid and Lazy

    Read the article

  • Where can you find the Oracle Applications User Experience team in the next several months?

    - by mvaughan
    By Misha Vaughan, Applications User Experience
    November is one of my favorite times of year at Oracle. The blast of OpenWorld work is over, and it’s time to get down to business and start taking our messages and our work on the road out to the user groups. We’re in the middle of planning all of that right now, so we decided to provide a snapshot of where you can see us and hear about the Oracle Applications User Experience – whether it’s Fusion Applications, PeopleSoft, or what we’re planning for the next generation of Oracle Applications.
    On the road with Apps UX... In December, you can find us at UKOUG 2012 in Birmingham, UK:
    UKOUG, UK Oracle User Group Conference 2012, December 3 – 5, 2012, ICC, Birmingham, UK
    In March, we will be at Alliance 2013 in Indianapolis, and our fingers are crossed for OBUG Connect 2013 in Antwerp:
    Alliance 2013, March 17 - 20, 2013, Indianapolis, Indiana
    OBUG Benelux Connect 2013, March 26, 2013, Antwerp, Belgium
    In April, you will see us at COLLABORATE13 in Denver:
    Collaborate13, April 7 - April 11, 2013, Denver, Colorado
    And in June, we round out the kick-off to summer at OHUG 2013 in Dallas and Kscope13 in New Orleans:
    OHUG 2013, June 9 - 13, 2013, Dallas, Texas
    ODTUG Kscope13, June 23 - 27, 2013, New Orleans, LA
    The Labs & Demos: As always, a hallmark of our team is our mobile usability labs. If you haven’t seen them, they are a great way for customers and partners to get a peek at what Oracle is working on next, and a chance for you to provide your candid perspective. Based on the interest and enthusiasm from customers last year at Collaborate, we are adding more demo stations to our user group presence in the year ahead. If you want to see some of the work we are doing first-hand but don’t have a lot of time, the demo stations are a great way to get a quick update on the latest wow factor we are researching. I can promise that you will see whatever we think is new and interesting at the demo stations first. Oracle OpenWorld 2012 Apps UX demo station.
    For Applications Developers: More and more, I get asked the question, “How do I build an application that looks like Fusion?” My answer is Fusion Applications Design Patterns. You can find out more about how Fusion Applications developers can leverage ADF and the user experience best practices we developed for Fusion at sessions led by Ultan O’Broin, Director of Global User Experience, in the year ahead. Ultan O'Broin, on Fusion Design Patterns. Building mobile applications is also top of mind these days. If you want to understand how Oracle is approaching this strategy, check out our session on mobile user experience design patterns with Mobile ADF.  In many cases, this will be presented by Lynn Rampoldi-Hnilo, Senior Manager of Mobile User Experiences, and in a few cases our ever-ready traveler Ultan O’Broin will be on deck. Lynn Rampoldi-Hnilo, on Mobile User Experience Design Patterns.
    Applications User Experiences: Fusion Applications continues to evolve, and you will see the new face of Fusion Applications at our executive sessions in the year ahead, which are led by Vice President Jeremy Ashley or a hand-picked presenter, such as one of our Fusion User Experience Advocates.  Edward Roske, CEO of InterRel Consulting and Fusion User Experience Advocate. As always, our strategy is to take our lessons learned and spread them across the Applications product lines.
    A great example is the enhancements coming in the PeopleSoft user experience, which you can hear about from Harris Kravatz, Senior Manager, PeopleSoft User Experience.
    Fusion Applications Extensibility: We can’t talk about Fusion Applications without talking about how to make it look like your business. If tailoring Fusion Applications is a question in your mind, and it should be, you should hit one of these sessions. These sessions will be led by our own Killian Evers, Senior Director, Tim Dubois, User Experience Architect, and some well-trained Fusion User Experience Advocates.
    Find out more: If you want to stay on top of where and when we will be, you can always sign up for our newsletter or check out the events page of usableapps.

    Read the article

  • How Do You Actually Model Data?

    Since the 1970s, developers, analysts, and DBAs have been able to represent concepts and relations in the form of data through the use of generic symbols.  But what is data modeling?  The first time I actually heard this term I could not understand why anyone would want to display a computer on a fashion show runway. Hey, what do you expect? At that time I was a freshman in community college, and obviously this was a long time ago.  I have since had the chance to learn what data modeling truly is by using it. Data modeling is a process of breaking down information and/or requirements into common categories called objects. Once objects start being defined, relationships start to form based on dependencies found amongst other existing objects.  Currently, there are several tools on the market that help data designers actually map out objects and their relationships through the use of symbols and lines.  These diagrams allow designs to be reviewed from several perspectives so that designers can ensure that they have the optimal data design for their project and that the design is flexible enough to allow for potential changes and/or extension in the future. Additionally, these basic models can always be further refined to show different levels of detail depending on the target audience, through the use of three different types of models.
    Conceptual Data Model (CDM) – Conceptual Data Models include all key entities and relationships, giving a viewer a high-level understanding of attributes. Conceptual data models are created by gathering and analyzing information from various sources pertaining to a project during the typical planning phase of a project.
    Logical Data Model (LDM) – Logical Data Models are conceptual data models that have been expanded to include implementation details pertaining to the data they will store. Additionally, this model typically represents an organization’s business requirements and business rules by defining various attribute data types and relationships regarding each entity. This additional information can be directly translated to the Physical Data Model, which reduces the actual time needed to implement it.
    Physical Data Model (PDM) – Physical Data Models are transformed Logical Data Models that include the necessary tables, columns, relationships, and database properties for the creation of a database. This model also allows for considerations regarding performance, indexing, and denormalization that are applied through database rules and data integrity.
    Further evidence of why we actually use models in modern application/database development can be seen in the benefits that data modeling provides for data modelers and for projects themselves. Benefits of data modeling, according to Applied Information Science:
    Abstraction allows data designers to separate concepts and ideas from hard facts in the form of data. This gives the data designers the ability to express general concepts and/or ideas in a generic form through the use of symbols to represent data items and the relationships between the items.
    Transparency through the use of data models allows complex ideas to be translated into simple symbols so that the concept can be understood from all viewpoints and limits the amount of confusion and misunderstanding.
    Effectiveness in regards to tuning a model for acceptable performance while maintaining affordable operational costs. In addition, it allows systems to be built on a solid foundation in terms of data.
    I shudder at the thought of a world without data modeling. Think about it: data is everywhere in our lives. Data modeling allows for optimizing a design for performance and for reducing duplication. If one were to design a database without data modeling, I would think that the first thing to be impacted would be database performance, due to a poorly designed database, and there would be a greater chance of unnecessary data duplication, which would also play into excessive query times because unneeded records would need to be processed. You could say that a data designer designing a database without a model is like a box of chocolates: you will never know what kind of database you will get until after it is built.

    Read the article

  • Oracle Fusion Supply Chain Management (SCM) Designs May Improve End User Productivity

    - by Applications User Experience
    By Applications User Experience on March 10, 2011 Michele Molnar, Senior Usability Engineer, Applications User Experience The Challenge: The SCM User Experience team, in close collaboration with product management and strategy, completely redesigned the user experience for Oracle Fusion applications. One of the goals of this redesign was to increase end user productivity by applying design patterns and guidelines and incorporating findings from extensive usability research. But a question remained: How do we know that the Oracle Fusion designs will actually increase end user productivity? The Test: To answer this question, the SCM Usability Engineers compared Oracle Fusion designs to their corresponding existing Oracle applications using the workflow time analysis method. The workflow time analysis method breaks tasks into a sequence of operators. By applying standard time estimates for all of the operators in the task, an estimate of the overall task time can be calculated. The workflow time analysis method has been recently adopted by the Applications User Experience group for use in predicting end user productivity. Using this method, a design can be tested and refined as needed to improve productivity even before the design is coded. For the study, we selected some of our recent designs for Oracle Fusion Product Information Management (PIM). The designs encompassed tasks performed by Product Managers to create, manage, and define products for their organization. (See Figure 1 for an example.) In applying this method, the SCM Usability Engineers collaborated with Product Management to compare the new Oracle Fusion Applications designs against Oracle’s existing applications. Together, we performed the following activities: Identified the five most frequently performed tasks Created detailed task scenarios that provided the context for each task Conducted task walkthroughs Analyzed and documented the steps and flow required to complete each task Applied standard time estimates to the operators in each task to estimate the overall task completion time Figure 1. The interactions on each Oracle Fusion Product Information Management screen were documented, as indicated by the red highlighting. The task scenario and script provided the context for each task.  The Results: The workflow time analysis method predicted that the Oracle Fusion Applications designs would result in productivity gains in each task, ranging from 8% to 62%, with an overall productivity gain of 43%. All other factors being equal, the new designs should enable these tasks to be completed in about half the time it takes with existing Oracle Applications. Further analysis revealed that these performance gains would be achieved by reducing the number of clicks and screens needed to complete the tasks. Conclusions: Using the workflow time analysis method, we can expect the Oracle Fusion Applications redesign to succeed in improving end user productivity. The workflow time analysis method appears to be an effective and efficient tool for testing, refining, and retesting designs to optimize productivity. The workflow time analysis method does not replace usability testing with end users, but it can be used as an early predictor of design productivity even before designs are coded. We are planning to conduct usability tests later in the development cycle to compare actual end user data with the workflow time analysis results. Such results can potentially be used to validate the productivity improvement predictions. 
Used together, the workflow time analysis method and usability testing will enable us to continue creating, evaluating, and delivering Oracle Fusion designs that exceed the expectations of our end users, both in the quality of the user experience and in productivity. (For more information about studying productivity, refer to the Measuring User Productivity blog.)
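    To make the arithmetic of the workflow time analysis method concrete, here is a small illustrative sketch in Java. The operator names, the per-operator time values, and the walkthrough counts are invented for demonstration only; they are not the standard estimates or the actual PIM task data used in the study.

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class WorkflowTimeAnalysis {

        // Assumed (illustrative) standard time estimates per operator, in seconds.
        static final Map<String, Double> OPERATOR_SECONDS = new LinkedHashMap<>();
        static {
            OPERATOR_SECONDS.put("click", 1.1);
            OPERATOR_SECONDS.put("keystroke", 0.3);
            OPERATOR_SECONDS.put("mentalStep", 1.2);
            OPERATOR_SECONDS.put("screenLoad", 2.0);
        }

        // Sum the time estimates over every operator occurrence in a task walkthrough.
        static double taskSeconds(Map<String, Integer> operatorCounts) {
            return operatorCounts.entrySet().stream()
                    .mapToDouble(e -> OPERATOR_SECONDS.getOrDefault(e.getKey(), 0.0) * e.getValue())
                    .sum();
        }

        public static void main(String[] args) {
            // Hypothetical walkthroughs of the same task in the existing and redesigned UI.
            Map<String, Integer> existingDesign = Map.of("click", 14, "keystroke", 40, "mentalStep", 10, "screenLoad", 6);
            Map<String, Integer> fusionDesign = Map.of("click", 7, "keystroke", 40, "mentalStep", 6, "screenLoad", 3);

            double before = taskSeconds(existingDesign);
            double after = taskSeconds(fusionDesign);
            System.out.printf("Existing: %.1f s, Redesign: %.1f s, predicted gain: %.0f%%%n",
                    before, after, 100 * (before - after) / before);
        }
    }

    The predicted productivity gain is simply the percentage reduction in estimated task time between the two walkthroughs.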

    Read the article

  • Reducing Deadlocks - not a DBA issue?

    - by steveh99999
    As a DBA, I'm involved on an almost daily basis in troubleshooting 'SQL Server' performance issues. Often, this troubleshooting soon veers away from 'it's a SQL Server issue' to instead become a wider application/database design/coding issue. One common perception with SQL Server is that deadlocking is an application design issue - and is fixed by recoding... I see this reinforced by MCP-type questions/scenarios where the answer to prevent deadlocking is simply to change the order in which tables are accessed in code... Whilst this is correct, I do think this has led to a situation where many 'operational' or 'production support' DBAs, when faced with a deadlock, are happy to throw the issue over to developers without analysing the issue further... A couple of 'war stories' on deadlocks which I think are interesting: Case One: I had an issue recently on a third-party application that I support on SQL 2008.  This particular third-party application has an unusual support agreement where the customer is allowed to change the index design on the third-party provided database.  However, we are not allowed to alter application code or modify table structure. This third-party application is also known to encounter occasional deadlocks – indeed, I have documentation from the vendor that up to 50 deadlocks per day is not unusual! So, as a DBA I have to support an application which in my opinion has too many deadlocks - but I cannot influence the design of the tables or stored procedures for the application. This should be the classic 'blame the third-party developers' scenario, and hope this issue gets addressed in a future application release - i.e. we could wait years for this to be resolved and implemented in our production environment... But, as DBAs can change the index layout, is there anything I could still do to reduce the deadlocks in the application? I initially used SQL trace flag 1222 to write deadlock detection output to the SQL error log – using this I was able to identify one table heavily involved in the deadlocks. When I examined the table definition, I was surprised to see it was a heap – i.e. no clustered index existed on the table. Using SQL Profiler to see the locking behaviour and plan for the query involved in the deadlock, I was able to confirm a table scan was being performed. By creating an appropriate clustered index, it was possible to produce a more efficient plan and better locking behaviour. So, fewer locks, held for less time = less possibility of deadlocks. I'm still unhappy about the overall number of deadlocks on this system - but that's something to be discussed further with the vendor. Case Two: a system which hadn't changed for months suddenly started seeing deadlocks on a regular basis. I love the 'nothing's changed' scenario, as it gives me the opportunity to appear wise and say 'nothing's changed on this system, except the data'... This particular deadlock occurred on a table which had been growing rapidly. By using DBCC SHOW_STATISTICS, the DBA team were able to see that the deadlocks seemed to be occurring shortly after auto-update stats had regenerated the table statistics using its default sampling behaviour. As a quick fix, we were able to schedule a nightly UPDATE STATISTICS WITH FULLSCAN on the table involved in the deadlock - thus greatly reducing the potential for stats to be updated via auto_update_stats, and consequently reducing the potential for a bad plan to be generated based on an unrepresentative sample of the data.
This reduced the possibility of a deadlock occurring.  Not a perfect solution by any means, but quick, easy to implement, and needed no application code changes. This fix gave us some 'breathing space'  to properly fix the code during the next scheduled application release.   The moral of this post - don't dismiss deadlocks as issues that can only be fixed by developers...

    Read the article

  • Retrofit Certification

    - by Bill Evjen
    Impact of Regulations on Cabin Systems Installation John Courtright, Structural Integrity Engineering There is “heightened” FAA attention to technical issues related to IFE and Wi-Fi Systems Installations The Aging Aircraft Safety Rule – EWIS & Damage Tolerance Analysis The Challenge: Maximize Flight Safety While Minimizing Costs Issue Papers & Testing, Testing, Testing The role of Airworthiness Directives (ADs) on the design of many IFE systems and all antenna systems. Goal is safety AND cost-effective maintenance intervals and inspection techniques The STC Process Briefly Stated Type Certifications (TC) Supplemental Type Certifications (STC) The STC Process Project Specific Certification Plan (PSCP) Managed by FAA Aircraft Certification Office (ACO) Type of Project (Electrical/Mechanical Systems or Structural) Specific Type of Aircraft Being Modified Schedule Design & Installation Location What does the STC Plan (PSCP) Cover? System Description – What does the system do? System qualification – Are the components qualified? Certification requirements – What FARs are applicable? Installation detail – what is being modified? Prototype installation – What is new? Functional hazard Assessment (FHA) – is it safe? EZAP-EWIS Requirements – Any aging aircraft issues? Certification Data – How is compliance achieved? Delegation and FAA involvement – Who is doing the work? Proposed certification schedule – When is the installation? Certification documentation – What the FAA Expects to see Cabin Systems Certification Concerns In addition to meeting the requirements for DO-160, Cabin System Certification needs to address issues related to: Power management: Generally, IFE and Wi-Fi Systems are classified as “Non-Essential Equipment” from a certification viewpoint. Connected to “non-essential” power buses Must be able to shed IFE & Wi-Fi Systems in a smoke/fire event or Other electrical emergency (FAA Policy 00-111-160) FAA is more relaxed with testing wi-fi. It used to be that you had to have 150 seats with laptops running wi-fi, but now it is down to around 50. Aging aircraft concerns – electrical and structural Issue papers addressing technical concerns involving: “Structural Certification Criteria for Large Antenna Installations” Antenna “Vibration/Buffeting Compliance Criteria” DO-160 : Environmental Test Procedures DO 160 – “Environmental Conditions and Test Procedures for Airborne Equipment”, Issued by RTCA Provides guidance to equipment manufacturers as to testing requirements Temperature: –40C to +55C Vibration and Shock Contaminant susceptibility – fluids and dust Electro-magnetic Interference Cabin systems are generally classified as “non-essential” Swissair 111 crashed (in part) due to non-standard wiring practices. EWIS Design Implications Installation design must take EWIS Requirements into account. This generally means: Aircraft surveys are needed to identify proper wire routing Ensure existing wiring diagrams are correct Identify primary/Secondary/Tertiary bus locations Verify proper separation of wire bundles exists Required separation from fuel quantity indicator system (FQIS) to prevent fuel tank ignition Enhanced Zonal Analysis Procedure (EZAP) Performed EZAP was developed by the Aging Transport Systems Rulemaking Advisory Committee (ATSRAC) EZAP is the method for analyzing airplane zones with an emphasis on evaluating wiring systems and the existence of combustibles in the cabin.
Certification Considerations for Wi-Fi Systems Electrical – All existing DO 160 testing required Issue papers required Onboard EMI testing – any interference with aircraft systems when multiple wi-fi users are logged on? Vibration/Buffeting compliance criteria – what is the effect of the antenna on aircraft flight characteristics? Structural certification criteria – what are the stress loads on the aircraft at the antenna location and what is the impact on maintenance inspection criteria for the airline? Damage tolerance analysis required Goal – minimize maintenance inspection intervals

    Read the article

  • Web Developer - How to enhance my skillset?

    - by atif089
    First of all, pardon my English; I am not a native English speaker. I have been a web developer for the past 4 years. In these 4 years I have spent my time on the internet learning things. My current skillset comprises HTML, CSS, PHP, MySQL, and jQuery (I say jQuery rather than JS because I am good at using jQuery and bad with plain JavaScript). The above things seemed like an easier part of my life, as I learned them quickly. But now I would really like to enhance my skillset, and I am pretty confused about which way to move ahead, considering that I have to learn things using the web and references on my own. Design: My first option is design. Shall I get started with design and start using Adobe Illustrator, Photoshop, Flash, and Flex? Design along with my previous skills looks like a money maker to me, as the two are closely related where web design is concerned. It's also easier to learn the first two, and I hope I can find tutorials for the last two as well. Marketing: A lot of my existing clients asked me if I do SEO, so this looked like a good field as well. I cannot estimate the scope of SEO, but I assume it has a long future. Since I am business-minded as well and there are a lot of tutorials around, should I start with SEO, SEM, social media, PPC, or whatever it consists of? Software development: The most complex and (perhaps) hardest path, but the easiest way to find a decent job in my location. If I go for software development, which platform should I ideally go after? Should it be C# for Windows development, ASP.NET (which once again builds on my skill set), J2EE (there are a lot of jobs for J2EE developers here), or plain C and C++? Also, is it difficult to learn programming languages right from Hello World using the internet? I have no clue how I learned PHP, but I am sort of a pro now, while these other languages seem like a disaster to me. I can't figure out whether it's because PHP is easier or because there were a lot of tutorials around for PHP. Anyway, is it possible to learn software development right from Hello World using the web? Database / server (Linux) / network administration: Seems like a job with decent pay but fewer openings, and a bit harder to learn online (not sure). Which is the right track for me to move ahead on? P.S. Age is not a constraint for me as I am between 20 and 21, and I come from an IT background. I know a few basics of C (up to structures), C++ (up to objects; I was not able to understand templates), core Java (some basics and OOP concepts), RDBMS, Visual Basic 6 (used this long back), and UNIX (a bunch of commands like who, finger, chmod, ls, and a bit of bash). Or is there anything else that I left out? I need you guys to please give me some feedback and the reason why I should select that field.

    Read the article

  • Notes - Part II - Play with JavaFX

    - by Silviu Turuga
    Open the project from the last lesson. Double-click on NotesUI.fxml; this will open the JavaFX Scene Builder. On the left side you have an area called Hierarchy; from there, press Del (or Shift+Backspace on Mac) to delete the Button and the Label. You'll receive a warning that some components have been assigned an fx:id; click Delete, as we don't need them anymore. Resize the AnchorPane to have enough room for our design, e.g. 820x550 px. From the top left, pick the container called Accordion and drag it over the AnchorPane design. Then choose a List View from Controls and drag it inside the Accordion. You'll notice that by default the Accordion has 2 TitledPanes, and you can switch between them by clicking on their names. I'll leave you the pleasure of doing the rest in order to get the following result.  Here is the list of objects used. Save it and then return to NetBeans. Run the application; it should run without any issue. If you click on the buttons they are all functional, but nothing happens, because we didn't link them with any action. We'll see this in the next episode. Now, let's play a little bit with the application and try to resize it… Have you noticed the behavior? If the form is too small, some objects aren't visible; if it is too large, there is too much space. That's for sure something your users won't like, and as a programmer you have to care about this. From NetBeans, double-click NotesUI.fxml to return to the JavaFX Scene Builder. Select the TextField at the bottom left of Notes, the one where I put the text Category, and then on the right side of the JavaFX Scene Builder you'll notice a panel called Inspector. Choose Layout and then click on the dotted lines to the left and bottom of the square, as you see in the image below. This will make the text field always keep the same distance from the left and bottom no matter the size of the form. Save and run the application. Note that whenever the form changes its height, the Category TextField keeps the same distance from the bottom. Select the Accordion and do the same steps, but also check the top dotted line, because we want the Accordion to have the same height as the main form. I'll leave you the pleasure of doing the same for the rest of the components. It's very important to design an application that can be resized by the user while, at the same time, all the buttons stay in place. The last step is to make sure our application does not get smaller than a certain size, as this would hide parts of our layout. So select the AnchorPane, and from Inspector go to Layout and note down the Width and Height. Go back to NetBeans, open the file Main.java, and add the following code just after stage.setScene(scene); (around line 26): stage.setMinWidth(820); stage.setMinHeight(550); Use your own width and height. This will prevent the user from reducing the width or height of the application to a value that would hide parts of the layout. So now you should have done most of the design part, and next time we'll see how we can enter some data into our newly created application… Note: in case you missed something, here are the source files of the project up to this point.
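    For reference, here is a minimal sketch of what the start() method in Main.java might look like with those two lines in place. It assumes the FXML file name and stage setup from the earlier lessons of this series, so treat it as an illustration rather than the exact project code:

    import javafx.application.Application;
    import javafx.fxml.FXMLLoader;
    import javafx.scene.Parent;
    import javafx.scene.Scene;
    import javafx.stage.Stage;

    public class Main extends Application {

        @Override
        public void start(Stage stage) throws Exception {
            Parent root = FXMLLoader.load(getClass().getResource("NotesUI.fxml"));
            Scene scene = new Scene(root);
            stage.setScene(scene);
            stage.setMinWidth(820);   // the Width noted from the AnchorPane
            stage.setMinHeight(550);  // the Height noted from the AnchorPane
            stage.setTitle("Notes");
            stage.show();
        }

        public static void main(String[] args) {
            launch(args);
        }
    }

    The two setMin* calls simply clamp the window so it can never shrink below the AnchorPane size you noted in the Scene Builder.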

    Read the article

  • Video and LAN Cards are not recognized on Intel DG41RQ MB

    - by artyom
    Hello. My 32-bit Debian Etch with the Etch-n-Half kernel 2.6.24 and the Etch-n-Half video driver xserver-xorg-video-intel 2.2.1-1~etchnhalf2 does not recognize my on-board LAN or video. I can only work with the VESA display and an additional PCI LAN card (the one I wanted to use for a cross-over cable). Output of lspci:
        00:00.0 Host bridge: Intel Corporation 4 Series Chipset DRAM Controller (rev 03)
        00:01.0 PCI bridge: Intel Corporation 4 Series Chipset PCI Express Root Port (rev 03)
        00:02.0 VGA compatible controller: Intel Corporation 4 Series Chipset Integrated Graphics Controller (rev 03)
        00:1b.0 Audio device: Intel Corporation 82801G (ICH7 Family) High Definition Audio Controller (rev 01)
        00:1c.0 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 1 (rev 01)
        00:1c.1 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 2 (rev 01)
        00:1d.0 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #1 (rev 01)
        00:1d.1 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #2 (rev 01)
        00:1d.2 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #3 (rev 01)
        00:1d.3 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #4 (rev 01)
        00:1d.7 USB Controller: Intel Corporation 82801G (ICH7 Family) USB2 EHCI Controller (rev 01)
        00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1)
        00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01)
        00:1f.1 IDE interface: Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 01)
        00:1f.2 IDE interface: Intel Corporation 82801GB/GR/GH (ICH7 Family) SATA IDE Controller (rev 01)
        00:1f.3 SMBus: Intel Corporation 82801G (ICH7 Family) SMBus Controller (rev 01)
        04:01.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)
    According to the motherboard spec:
    Video: Intel G41 Express Chipset Graphics and Memory Controller Hub (GMCH)
    LAN: Realtek 8111D Gigabit Ethernet Controller
    When I run dpkg-reconfigure xserver-xorg it suggests the i810 driver, but X does not start with it, nor with the intel driver; only vesa works. The on-board LAN doesn't show up in lspci at all; the Realtek entry above is the external 100 Mb PCI card. Where should I start?

    Read the article

  • Chipset fan on the fritz - compressed air hasn't fixed anything - is there anything I can do?

    - by Anthony
    Yesterday, my computer started to make an annoying whining noise. Knowing that this is likely a fan issue, I opened the case and tried to determine which fan was causing it. I got some compressed air and cleaned out the dust around the fan (and the rest of the computer while I was at it), but this hasn't fixed the issue. Now, if it were just any fan, I would probably just replace it - they're relatively cheap after all. However, this is a special fan. (Aside: for what it's worth, I feel bad that the graphics card blocks part of the fan, but it is the only slot the graphics card fits in, so I had no choice.) After pulling out my motherboard user guide, it looks like this is a fan placed directly on top of the chipset. To be perfectly honest, I have no clue what the purpose of the chipset is - but it sounds important. After some quick research, I see that it is responsible for providing the bridge between my CPU, RAM and graphics, among other things. A quick search at Newegg tells me that chipset fans can be bought at pretty reasonable prices (under 20 dollars). Is it practical to replace this fan? It is an old computer as computers go, and I wouldn't be terribly upset to upgrade the motherboard and processor, so perhaps this is a sign.
    Hardware Specs:
    Motherboard: Asus A8N-E
    Chipset: NVIDIA nForce4 Ultra

    Read the article

  • PC powers off at random times

    - by Timo Huovinen
    Short version: after experiencing some problems with motherboard batteries, my PC started to power off at random times. The power-off is instant and sudden, and the machine does not restart afterwards. I need help figuring out the cause.
    Facts:
    - Powers off when the PC is playing games
    - Powers off when the PC is idle
    - Powers off when the PC is in safe mode
    - Powers off when the PC is in the BIOS
    - Powers off when the PC is booted from a Windows installation USB
    - Replaced the motherboard battery several times
    - Replaced the 650W PSU with a 750W PSU
    - Replaced the RAM
    - Swapped the RAM between slots
    - Re-applied thermal paste to the CPU
    - Checked whether the motherboard touches the case
    - Nothing is overclocked
    PC specs:
    - OS: Windows 7 Ultimate SP1
    - RAM: Kingston 1333MHz 4GB stick
    - CPU: AMD Phenom II x4 955
    - Mobo: Gigabyte 88GMA-UD2H rev 2.2
    - Motherboard battery: CR2032 3V
    - HDD: 500GB Seagate ST3500418AS ATA Device
    - Graphics: ATI/AMD Radeon HD 6870
    Very long version: around 10 months ago I built a brand new gaming PC. Around 6 months ago its time setting in Windows started resetting to the year 2010. I swapped the motherboard battery for a new one of the exact same size, shape and voltage, and the problem disappeared... for around 2 weeks. Then the same problem happened again and the time got reset, so I swapped the battery again; the problem was gone for good and everything was great for about 3 months. Then another problem started: the PC began to power off suddenly and without warning at completely random times - sometimes the PC works for an hour, sometimes 5 minutes. I read on the forums that it might be the PSU, the motherboard battery, RAM, HDD, the graphics card, the CPU, the motherboard, drivers, a virus, grounding issues, or something short-circuiting - basically it could be anything. I spent some days researching and decided to rule out a virus first: I reset the CMOS, cleared all BIOS settings and reinstalled Windows 7 after a full format of the HDD, but the random power-offs kept happening. I then disabled the restart-on-error option in Windows and looked at the event log for error events, but they did not help me figure out the problem:
    - The Network List Service depends on Network Location Awareness; the dependency group failed to start
    - Source: Kernel-Power, Event ID 41, Task Category 63
    - Source: Disk, Event ID 11, Task Category None - "The driver detected a controller error on device disk"
    I took the PC apart, every little piece, re-applied some expensive thermal paste to the CPU, and double-checked that none of the parts touch the PC case. The problem was gone and the PC no longer powered off randomly. I re-attached the graphics card and all was good for 4 months... then the power-off problem appeared again, but at longer intervals - the PC would shut down once every 2 days on average, at random points in time, sometimes while idle all day long, sometimes while running Crysis 2. I checked the CPU temperature, because I know that AMD CPUs have a built-in protection mechanism that switches off the PC if the CPU gets too hot: the system temperature was 50C and the CPU 45C after running the PC all day long (I did not test for temperature spikes; I don't know how to do that). Originally the PSU that powered the PC was 650W and had one 4-pin cable to power the CPU; I replaced it with a new 750W PSU which has two 4-pin cables for the CPU, but the problem remained.
    I removed the graphics card and let the motherboard use the built-in one, but the PC kept powering off at random times. I took the PC apart completely again, re-applied thermal paste to the CPU, added lots of insulation, and checked again and again for any possible short circuit, but the problem remained. It stayed like that for some months: I replaced the battery a couple more times, changed lots of options in Windows, and tried everything I could, but it kept powering off, so I stopped using the PC as much as I used to, just living with the occasional random power-off - until a couple of days ago, when it started powering off almost immediately after being powered on. I replaced the RAM with a brand new stick, but that did not help. I took the PC apart again and checked everything for a possible cause; I found some small scratches on the very edge of the motherboard, to the left of the PCI Express x16 slot. That might cause the problem, I thought, but the scratches look very superficial, not deep at all - and if they did harm the motherboard, wouldn't it fail to start at all? And why did it start powering off a while ago, then suddenly stop? The scratches could not have vanished. I ran chkdsk \d but the PC powered off when it was at 75%. I removed the hard disks and the graphics card; while I was fiddling with the BIOS settings, the PC suddenly shut down as I was just looking at the BIOS version. This makes me realize it is not caused by the HDD, Windows, drivers or the graphics card. I cleared the CMOS again and updated the BIOS from F5 to F6f beta, but that did not help - it even seems the PC powers off sooner now. The shutdown also happened while I was booted from a Windows 7 installation USB, sitting in the repair console. I removed one of the cables powering the CPU, so now only one 4-pin cable powers it, and the PC worked for 30 minutes after doing that, which makes me think it's the CPU overheating - because it gets less power, it overheats slower?
    The things I am still considering:
    - CPU overheating (it does not seem to overheat - maybe false readings?)
    - Motherboard short-circuiting (faulty motherboard?)
    I desperately need some advice on what is faulty: is it a faulty motherboard, an overheating CPU, or maybe something else? I have been breaking my head over this problem for 6 months. I'm not sure if this is a good place to ask; if it is not, please tell me where I can get some experienced help.
    More info: I have also discovered a mysterious piece that seems to have fallen out of the motherboard: i119.photobucket.com/albums/o126/yurikolovsky/strangepiece.jpg - what is it? It also looks like the date and time get reset each time the PC powers off. I found another forum post, tomshardware.co.uk/forum/…, except I don't have the Integrated Peripherals > USB Keyboard Function option in my BIOS.
    Comments summary (as requested by a moderator):
    Q. Tell me, if the computer restarts, is it immediate? Does it take a second and then restart? Do you see (BSOD) or hear (PSU, short circuit) anything suspicious when it happens? After reading through it, it remains likely that the mainboard is faulty. - JohannesM
    A. Immediate power-off: all the fans stop instantly, all the lights turn off instantly, no sound or anything, and it remains off until I turn it back on. Thanks for the feedback - a faulty motherboard is what I fear.
    Q. Try stress-testing the system with Prime95 and see if errors or shutdowns occur when the CPU is under full load. - speakr
    A. The Prime95 heat stress test peaked CPU temperature at 60C after 5 minutes; the PC powered off after 30 minutes, in the middle of the test, with no errors. Neither the Prime95 heat test nor the stress test with low RAM usage (small or in-place FFTs) reports any errors when run for 10-60 minutes. The power-off does not seem to be affected by Prime95 at all, which makes me wonder whether it is a CPU or motherboard issue at all.
    Q. I had similar random/intermittent problems with my old board. It gave one of a few different symptoms: the keyboard and/or mouse would die, and/or the RAM wouldn't work, and/or it would shut down. It was in bad shape. One problem was that my old PSU had literally burned the connector (browned around the pins); another was that a broken lead inside the layers of the PCB would work sometimes if it happened to be hot or if I bent the board - by jamming a hunk of wood behind it. I managed to keep the board alive for several years, but eventually nothing I did would make it work correctly anymore. - Synetech
    A. I will try that as a last resort, OK? ;)
    Q. Have you tried a different power cord, surge protector, or outlet (on a different circuit)? It's worth a shot just to ensure it's not subpar wiring or a weak circuit (dips in power may cause a shutdown if the PSU can't pull enough juice from the wall). - Kyle
    A. Yes, I attached the PC to an entirely different outlet on a different circuit and the problem persists. After connecting it to a different outlet and starting the PC, it gave me 3 long beeps and 1 short one, then immediately proceeded to boot up normally.
    Q. Re-check your mainboard manual and all PSU connections to your mainboard to be sure that nothing is missing (e.g. the 12V ATX 4-pin/6-pin connector). If you can provoke shutdowns with Prime95, then consider buying new hardware - a stable system should run Prime95 for 24h without any errors. Prime95 mentions errors in the log when they occur and gives a summary after the stress test is stopped manually (e.g. "0 errors, 0 warnings" if all is fine). - speakr
    A. Re-checked; there are no more PSU connectors I can physically connect, except the one ATX 4-pin (there are 2 that power the CPU) that I disconnected on purpose. I have reconnected it, but the problem persists.
    Q. With one PC I had a short circuit. The power button on the front plate had its cables soldered but not insulated, and the contacts were very close to the metal case. A heavier touch was enough to cause a shutdown; the PC's vibration could be enough. - ott--
    A. Yes, it seems to switch off with even the lightest touch. I switched on the PC, then pulled out the front-panel power cable that connects to the motherboard so the power button no longer works; after 5 minutes of running like that, with the power button completely disconnected and the PC just sitting idle, it powered off again. I don't think it's the power button.
    Q. I wonder if you dare to operate the components outside the case, that is, remove the motherboard, power supply and disk (just put the motherboard on a wooden desk). Don't bend the adapters when running like that. - ott--
    A. Yes, I do dare to do that, but only tomorrow - too tired/late right now.

    Read the article

  • NVIDIA GeForce GT750M drivers for Ubuntu 13.10 on iMac

    - by Eugene B
    I have just got a new iMac 14.3 with a 21.5" display. It has an Intel i7 processor and an NVIDIA GeForce GT750M graphics card (Device ID: 0x00fe9). I have installed Ubuntu 13.10 on it (dual boot with rEFIt). Everything works fine apart from graphics: it is clear that the graphics are not accelerated, and no drivers are suggested in the "Additional drivers" section. I have tried to:
    - Install the nvidia drivers from the repository.
    - Download the nvidia drivers and install them manually.
    - Follow instructions here.
    - Follow instructions here.
    - Follow instructions here.
    - Install bumblebee.
    - Play around with the xorg.conf file.
    - Blacklist nouveau.
    As you may guess, nothing worked. The weirdest thing is that when I boot from a LiveCD, it picks up the screen resolution and everything else correctly, yet when I install the system to my hard drive it clearly does not have the proper drivers. Does anyone have any suggestion on what to do?

    Read the article

  • Diagnosing PCI issues

    - by dtsazza
    I'm upgrading a PC for a friend, and have run into a problem with upgrading the motherboard. I've been assembling custom PCs for the best part of a decade now, so I'm happy enough with the basics at the very least. The motherboard, CPU and graphics card were all replaced at once - after this was done, the machine POSTs, but the PCI wireless card and the PCI-E graphics card do not seem to be recognised by the system at all. There is no trace of them anywhere in the BIOS, the POST output, or Windows. I booted into Linux and ran lspci, which also showed no sign of them. What is the best way to go about diagnosing this? Is it likely/feasible that the motherboard's PCI bus is simply defective and the board needs to be RMAed? Are there any other common gotchas that might cause these symptoms? For reference, the components in question are:
    CPU: Celeron E1400
    Motherboard: Gigabyte GA-G31M-ES2L
    Graphics card: TBC (a low-end card from a couple of years ago; it worked flawlessly before the mobo change)
    PCI WNIC: Edimax 7128G
    Thanks in advance for any help.

    Read the article

  • Troubleshooting: Monitor never turns on, system fans running, DVD-ROM does not open.

    - by Wesley
    Hi all, here are my specs beforehand:
    ECS P4VXASD2+ (V5.0) motherboard, FSB 533MHz
    Intel Pentium 4 2.40A GHz Prescott, Socket 478
    2x 256MB PC2100 DDR RAM, 2x 256MB PC133 SDRAM
    CoolMax 350W PSU
    DVD-ROM - will edit with brand & model
    128MB ATi Radeon 9800 Pro AGP
    No hard drive
    So, I just put those parts together today and tried to power it up, with the monitor connected to the Radeon 9800 in the AGP slot (the mobo does not have a VGA port). After turning it on, the CPU fan, graphics fan and system fan all spin up. However, the monitor remains in standby mode, despite being plugged in. Also, after pushing the button on the DVD-ROM drive, it does not open. I've used the DVD-ROM drive before with absolutely no issues. The graphics card was slightly buggy when I put it in another machine, which had been left outside in winter weather for 3 months (still, that computer's integrated graphics worked fine). The CMOS battery was replaced and the jumpers are all set correctly. Now, I'm wondering whether the motherboard, CPU, PSU or GPU is the problem. What can I do to test which part is at fault? Just to clarify, I don't have a hard drive, so I usually boot Ubuntu from the disc drive. Anyways, thanks in advance!

    Read the article
