Search Results

Search found 26808 results on 1073 pages for 'storing information'.


  • SEO for maps-based websites that require user interaction

    - by j0nes
    I have a website that basically shows a lot of locations worldwide on a Google Maps-like interface. The map itself is built with the Leaflet library and OpenStreetMap tiles. On the map I show a marker at each location I have. Clicking a marker opens a popup window that shows additional information and contains links to "detail" pages for that location. I fetch the location data for the current viewport with an AJAX call to my server, so the additional information is not available in the HTML page itself. The detail pages are the pages my users are interested in.

    My normal users load the map, search for the location they are interested in, click on a marker and then click a link in the popup window. For search engines, however, this might look different. Because this navigation pattern relies on user interaction, I think they might not be able to find the detail pages.

    My questions:

    1. Are search engines able to follow a navigation path like the one outlined above?
    2. How can I improve the navigation for search engines? (For example, showing textual links below the map, sitemaps, ...)
    3. How important are internal links for SEO?
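
    For illustration of the "textual links below the map" idea, here is a rough servlet sketch that renders a plain, crawlable HTML list of links to the detail pages; the Location and LocationRepository helpers and the URL scheme are invented for the example:

        import java.io.IOException;
        import java.io.PrintWriter;
        import java.util.List;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Hypothetical sketch: render a plain HTML list of links to the location
        // detail pages so crawlers can reach them without executing the AJAX
        // marker navigation. Location and LocationRepository are assumed helpers.
        public class LocationIndexServlet extends HttpServlet {

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                resp.setContentType("text/html;charset=UTF-8");
                PrintWriter out = resp.getWriter();
                out.println("<html><body><ul>");
                List<Location> locations = new LocationRepository().findAll();
                for (Location location : locations) {
                    // One ordinary anchor per detail page; crawlers follow these directly.
                    out.println("<li><a href=\"/locations/" + location.getId() + "\">"
                            + location.getName() + "</a></li>");
                }
                out.println("</ul></body></html>");
            }
        }

    The same list of URLs can also be emitted as an XML sitemap; the point is simply that every detail page gets at least one plain link that does not depend on JavaScript.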

    Read the article

  • Can't install Skype on Ubuntu 12.04 64 bit

    - by Gabriel Alvim
    I've tried many different ways. I downloaded the package from the Skype website, which gave me the error "Cannot install ia-32-libs". I then followed the instructions at https://help.ubuntu.com/community/Skype and here is what I got:

        W: Failed to fetch http://archive.canonical.com/ubuntu/dists/precise/partner/Packages/binary-amd64/Packages 404 Not Found [IP: 91.189.92.191 80]
        W: Failed to fetch http://archive.canonical.com/ubuntu/dists/precise/partner/Packages/binary-i386/Packages 404 Not Found [IP: 91.189.92.191 80]
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    I even tried this command line:

        sudo apt-get install lib32stdc++6 lib32asound2 ia32-libs libc6-i386 lib32gcc1 skype

    And this is what I got:

        Error: need a repository as argument
        pandora@ubuntu:~$ sudo apt-get install lib32stdc++6 lib32asound2 ia32-libs libc6-i386 lib32gcc1 skype
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have requested
        an impossible situation or if you are using the unstable distribution that
        some required packages have not yet been created or been moved out of Incoming.
        The following information may help to resolve the situation:
        The following packages have unmet dependencies:
         ia32-libs : Depends: ia32-libs-multiarch
         skype : Depends: skype-bin
        E: Unable to correct problems, you have held broken packages.

    I just don't know what to do anymore; if I can't use Skype, I might as well not use Ubuntu at all. Please, someone help.

    Read the article

  • Really impossible to have Gimp 2.8 on Ubuntu 11.10?

    - by ubuntico
    For days I have been trying to find a solution and repositories to install Gimp 2.8 on Ubuntu 11.10, and each time I get the same error. I tried to update pango via:

        sudo apt-get install pango-graphite
        [sudo] password for xxx:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        pango-graphite is already the newest version.

    Then I also tried:

        sudo apt-get install libgdk-pixbuf2*
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Note, selecting 'libgdk-pixbuf2.0-0' for regex 'libgdk-pixbuf2*'
        Note, selecting 'libgdk-pixbuf2.0-dev' for regex 'libgdk-pixbuf2*'
        Note, selecting 'libgdk-pixbuf2.0-doc' for regex 'libgdk-pixbuf2*'
        Note, selecting 'libgdk-pixbuf2-ruby' for regex 'libgdk-pixbuf2*'
        Note, selecting 'libgdk-pixbuf2-ruby1.8' for regex 'libgdk-pixbuf2*'
        Note, selecting 'libgdk-pixbuf2-ruby1.8-dbg' for regex 'libgdk-pixbuf2*'
        Note, selecting 'libgdk-pixbuf2.0-common' for regex 'libgdk-pixbuf2*'
        libgdk-pixbuf2-ruby is already the newest version.
        libgdk-pixbuf2-ruby1.8 is already the newest version.
        libgdk-pixbuf2-ruby1.8-dbg is already the newest version.
        libgdk-pixbuf2.0-doc is already the newest version.
        libgdk-pixbuf2.0-0 is already the newest version.
        libgdk-pixbuf2.0-dev is already the newest version.
        libgdk-pixbuf2.0-common is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 31 not upgraded.

    And I still get the error :(. Please help, as I do not want to upgrade to Ubuntu 12.04 just to have Gimp. Is this really mission impossible?

    Read the article

  • Naming your website longname.com vs shortcatchy.net vs shortcatchy.info

    - by jskye
    I'm designing a website that will basically be a social network for sharing information. I have the domain $$$$d.net and the same domain $$$$d.info, where $$$$ is a word (that runs into the d) pertaining to the purpose of the site. The .com of this domain was already taken, but its owners have nothing showing (just a "site not reached" error in Google) and they don't seem to be trying to sell it either. I also have the long name of the site, $$$$------&&&&&&&&&.com, where the words $$$$ and &&&&&&&&& would contribute relevant SEO to the site. In fact the word $$$$------ would also, if a one-letter spelling mistake is recognised at all by Google, which I doubt but am unsure about. As a brand name, though, the $$$$------ word still works relevantly.

    Which do you think is the better choice to use?

    1. The short, catchy name with .info, for relevance to information.
    2. The .net, which is more familiar than .info but maybe slightly less relevant (though I think .net still works, because as I said it will be a social networking site).
    3. The long .com domain, which has more SEO value plus a pun, albeit on a spelling mistake.

    I know it's kind of a subjective question, and also hard to answer without knowing the name (which I've obfuscated because I'm only in the initial design stage), but nevertheless I'm interested in what some of you guys think.

    Read the article

  • SOA Specialization update

    - by Jürgen Kress
    SOA Specialization is taking off. More and more customers are asking for Specialized Partners, so make sure you start your own Specialization. To align the number of required Oracle Service-Oriented Architecture Certified Implementation Specialists, we reduced the requirement from 4 to 2 consultants. For details on Specialization, please see the SOA & Application Grid Specialization Guide and the SOA & Application Grid Specialization Checklist.

    Thanks to all the partners who became SOA Specialized in 2010: Accenture & Infosys Technologies Limited & Atos Origin & CedarCrestone, Inc. & FUJITSU & OPITZ CONSULTING GmbH & Zensar Technologies & ECS Team & Zirous Inc. Is your company missing? Make sure you add the SOA Specialization information to your solutions catalog.

    For more information on SOA Specialization and the SOA Partner Community, please feel free to register at www.oracle.com/goto/emea/soa (OPN account required).

    Technorati Tags: SOA Specialization, SOA, soacommunity, OPN, Oracle, Jürgen Kress, solutions catalog

    Read the article

  • "unresolvable problem" error when upgrading from 12.04 to 14.04

    - by flyingfisch
    So I have solved this issue, but now I have another problem:

        An unresolvable problem occurred while calculating the upgrade.
        This can be caused by:
         * Upgrading to a pre-release version of Ubuntu
         * Running the current pre-release version of Ubuntu
         * Unofficial software packages not provided by Ubuntu
        If none of this applies, then please report this bug using the command
        'ubuntu-bug ubuntu-release-upgrader-core' in a terminal.

    I am not upgrading to a pre-release version of Ubuntu and I am not running a pre-release either. I have unchecked all my third-party packages using Ubuntu Software Manager (Edit > Software Sources...). What else might be wrong?

    UPDATE: After doing sudo update-manager -d and sudo apt-get update; sudo apt-get dist-upgrade as per JimB's post, and then running sudo do-release-upgrade, here is what I get:

        Err http://extras.ubuntu.com trusty/main Translation-en
        Err http://extras.ubuntu.com trusty/main Translation-en_US
        Err http://extras.ubuntu.com trusty/main Translation-en
        Ign http://extras.ubuntu.com trusty/main Translation-en_US
        Ign http://extras.ubuntu.com trusty/main Translation-en
        Fetched 0 B in 0s (0 B/s)
        Checking package manager
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        Calculating the changes
        Calculating the changes
        Could not calculate the upgrade
        An unresolvable problem occurred while calculating the upgrade.
        This can be caused by:
         * Upgrading to a pre-release version of Ubuntu
         * Running the current pre-release version of Ubuntu
         * Unofficial software packages not provided by Ubuntu
        If none of this applies, then please report this bug using the command
        'ubuntu-bug ubuntu-release-upgrader-core' in a terminal.
        Restoring original system state
        Aborting
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        === Command detached from window (Mon Aug 18 23:53:10 2014) ===
        === Command terminated with exit status 1 (Mon Aug 18 23:53:10 2014) ===

    Read the article

  • How Can We Create Blackbox Logs for Nginx?

    - by Alan Gutierrez
    There's an article out there, "Profiling LAMP Applications with Apache's Blackbox Logs", that describes how to create a log that records a lot of detailed information missing from the common and combined log formats. This information is supposed to help you resolve performance issues. As the author notes, "While the common log-file format (and the combined format) are great for hit tracking, they aren't suitable for getting hardcore performance data." The article describes a "blackbox" log format, like a blackbox flight recorder on an aircraft, that gathers the information used to profile server performance which is missing from the hit-tracking log formats: keep-alive status, remote port, child processes, bytes sent, etc.

        LogFormat "%a/%S %X %t \"%r\" %s/%>s %{pid}P/%{tid}P %T/%D %I/%O/%B" blackbox

    I'm trying to recreate as much of that format as I can for Nginx, and would like help filling in the blanks. Here's what the Nginx blackbox format would look like; the unmapped Apache directives have question marks after their names:

        access_log blackbox '$remote_addr/$remote_port X? [$time_local] "$request"'
                            's?/$status $pid/0 T?/D? I?/$bytes_sent/$body_bytes_sent'

    Here's a table of the variables I've been able to map from the Nginx documentation:

        %a      = $remote_addr     - The IP address of the remote client.
        %S      = $remote_port     - The port of the remote client.
        %X      = ?                - Keep-alive status.
        %t      = $time_local      - The start time of the request.
        %r      = $request         - The first line of the request, containing method verb, path and protocol.
        %s      = ?                - Status before any redirections.
        %>s     = $status          - Status after any redirections.
        %{pid}P = $pid             - The process id.
        %{tid}P = N/A              - The thread id, which is non-applicable to Nginx.
        %T      = ?                - The time in seconds to handle the request.
        %D      = $request_time    - The time in milliseconds to handle the request.
        %I      = ?                - The count of bytes received including headers.
        %O      = $bytes_sent      - The count of bytes sent including headers.
        %B      = $body_bytes_sent - The count of bytes sent excluding headers, but with a 0 for none instead of '-'.

    I'm looking for help filling in the missing variables, or confirmation that the missing variables are in fact unavailable in Nginx.

    Read the article

  • Criteria strings, how many different criteria can be entered to retrieve specific data?

    - by Janet
    For our membership database we are currently using an old DOS program, "Arclist". The program is old, but the one feature we desperately need in a database program is the ability to enter multiple criteria at one time, for a "one-time" extraction of the data meeting all the various criteria entered in what I call a "criteria string". An example might be extracting only those records with zip codes matching (67893, 54235, 54323, 54201, 54302, 54303, 54301, 67894, 67895). Another set of criteria might be to omit records via a "not equal to" test on one field while also extracting records matching criteria in another field. So we would want records "not equal to" a value in one field, but whose information equals the requested information in another field.
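
    For comparison, in a SQL-backed membership system this kind of one-off pull is a single query that combines an IN list with a "not equal to" test. The sketch below is purely illustrative; the JDBC URL, table and column names are made up:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        // Illustrative only: pull members whose zip code is in a given set AND
        // whose type is NOT a given value, in one pass. Names are invented.
        public class CriteriaExtract {
            public static void main(String[] args) throws Exception {
                String sql = "SELECT * FROM members "
                           + "WHERE zip IN (?, ?, ?) AND member_type <> ?";
                try (Connection conn = DriverManager.getConnection("jdbc:example:membership");
                     PreparedStatement stmt = conn.prepareStatement(sql)) {
                    stmt.setString(1, "67893");
                    stmt.setString(2, "54235");
                    stmt.setString(3, "54323");
                    stmt.setString(4, "LAPSED");   // the "not equal to" criterion
                    try (ResultSet rs = stmt.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getString("last_name"));
                        }
                    }
                }
            }
        }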

    Read the article

  • Retrieving snapshots of game statistics

    - by SatheeshJM
    What is a good architecture for storing game statistics, so that I can retrieve snapshots of it at various moments? Say I have a game, and the user's statistics initially are:

        { hours_played = 0, games_played = 0, no_of_times_killed = 0, }

    When the user purchases something for the first time from within the game, the stats are:

        { hours_played = 10, games_played = 2, no_of_times_killed = 5, }

    And when he purchases something for the second time, the stats are:

        { hours_played = 20, games_played = 4, no_of_times_killed = 10, }

    Let me name the events "purchase1" and "purchase2". How do I model my statistics so that, at any point in the future, I will be able to retrieve the snapshot of the statistics at the time when "purchase1" was fired? Similarly for "purchase2" and any other event I will add. Hope that made sense.
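
    One simple in-memory approach, assuming the stats fit in a map and event names such as "purchase1" are unique, is to freeze a copy of the current stats under the event name whenever an event fires. The class and method names below are illustrative, not from the original post:

        import java.util.HashMap;
        import java.util.Map;

        // Minimal sketch: keep the live stats in one map and store an immutable
        // copy of it under the event name whenever an event fires.
        public class StatsTracker {

            // Current, mutable statistics.
            private final Map<String, Integer> current = new HashMap<>();

            // Event name -> frozen copy of the stats at the moment the event fired.
            private final Map<String, Map<String, Integer>> snapshots = new HashMap<>();

            public void set(String stat, int value) {
                current.put(stat, value);
            }

            // Record a snapshot, e.g. recordEvent("purchase1").
            public void recordEvent(String eventName) {
                snapshots.put(eventName, new HashMap<>(current));
            }

            // Retrieve the stats exactly as they were when the event fired.
            public Map<String, Integer> snapshotAt(String eventName) {
                return snapshots.get(eventName);
            }

            public static void main(String[] args) {
                StatsTracker t = new StatsTracker();
                t.set("hours_played", 10);
                t.set("games_played", 2);
                t.set("no_of_times_killed", 5);
                t.recordEvent("purchase1");

                t.set("hours_played", 20);
                t.set("games_played", 4);
                t.set("no_of_times_killed", 10);
                t.recordEvent("purchase2");

                System.out.println(t.snapshotAt("purchase1")); // stats as of purchase1
            }
        }

    If the snapshots must survive restarts, the same idea extends to appending each event together with its frozen stats to a table or log file (essentially event sourcing), and replaying or querying that log later.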

    Read the article

  • SOLVED BleachBit: How to Completely Clear URL History in Firefox?

    - by tSquirrel
    14.04 / Firefox 29.0. I've been using BleachBit to clear usage/file history, and for the most part it works great. However, it doesn't seem to clear the website hostnames out of the URL bar at all. These addresses are not bookmarked. Also, the full URL isn't preserved, just the hostname.

    1. Visit http://www.bluesnews.com/some_random_URL_string
    2. Exit Firefox
    3. Run BleachBit, with ALL Firefox options selected
    4. Restart Firefox
    5. Check history: completely empty, other than bookmarked sites. www.bluesnews is NOT bookmarked
    6. Type "blue", which Firefox automatically completes as "http://www.bluesnews.com/"

    Alternate step #3: use Firefox's built-in "Clear History" and select ALL entries with a time frame of "Everything". Same result as above.

    My inquiry in the BleachBit forums hasn't been responded to. I found Dan's proposed solution, however changing autocomplete in about:config only turns off the function; it doesn't actually stop storing URLs.

    SOLVED - See my comment in the "Answer" response from Tim

    Read the article

  • Oracle E-Business Suite is Helping to Save Lives at the National Marrow Donor Program

    - by Di Seghposs
    To improve the management of its life-saving operations, the National Marrow Donor Program recently modernized its financial and procurement operations by upgrading to Oracle E-Business Suite 12.1.   As the global leader in bone marrow and umbilical cord blood transplants, the NMDP manages a complex ecosystem of donor, patient, hospital, and biological data. “Maintaining accurate data and having an efficient matching process is essential, particularly as our global database of bone marrow patients grows and donor lists expand,” says Bruce Schmaltz, director of finance/controller. “We rely on the Oracle E-Business Suite to ensure our procurement and financial management processes meet the highest standards, enabling our growing non-profit to work swiftly and efficiently to help improve and save lives.” As the non-profit organization and its registry grew larger, NMDP needed a modern platform to store and integrate its financial information and complicated procurement process. It selected Oracle E-Business Suite for its ability to fit seamlessly into NMDP’s enterprise architecture. NMDP initially implemented Oracle E-Business Suite release 12 by leveraging Oracle Business Accelerators, which are rapid implementation tools and templates that help reduce implementation time and costs. With Oracle Financial Management and Oracle Procurement, NMDP has streamlined back-office processes and integrated its procure-to-pay business processes by leveraging industry leading accounts payable, accounts receivable, and general ledger modules. NMDP is currently rolling out Oracle Hyperion Performance Management applications and plans to implement Oracle Order Management and Oracle Advanced Pricing by the end of 2012. Read more details about NMDP’s modernization efforts.  For more updates on Oracle Financial Management Solutions, view our November 2012 Oracle Information InDepth Financial Management newsletter. Subscribe Now. 

    Read the article

  • UNESCO, J-ISIS, and the JavaFX 2.2 WebView

    - by Geertjan
    J-ISIS, the newly developed Java version of the UNESCO generalized information storage and retrieval system for bibliographic information, continues to be under heavy development and code refactoring in its open source repository. Read more about J-ISIS and its NetBeans Platform basis here. Soon a new version will be available for testing, and it would be cool to see the application in action at that time. Currently it looks as follows, though note that the menu bar is under development and many menus you see there will be replaced or removed soon.

    About one aspect of the application, the browser, Jean-Claude Dauphin, its project lead, wrote me the following:

    "The DJ-Native Swing JWebBrowser has been a nice solution for getting a Java web browser for most popular platforms. But the Java integration has always produced, from time to time, some strange behavior (like losing the focus on the other components after clicking on the browser window, overlapping of windows, etc.), most probably because of mixing heavyweight and lightweight components and also because of our incompetency in solving the issues. Thus, recently we changed to the JavaFX 2.2 WebView. The integration with Java is fine and we have got rid of all the DJ-Native Swing problems. However, we have lost some features which were given for free with the native browsers, such as downloading resources in different formats and opening them in the right application."

    This is a pretty cool step forward, i.e., the JavaFX integration. It also confirms for me something I've heard other people saying too: the JavaFX WebView component is a perfect low-threshold entry point for Swing developers feeling their way into the world of JavaFX.
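
    For Swing developers curious what that hookup looks like, here is a minimal, generic sketch of hosting a JavaFX WebView inside a Swing window via JFXPanel; it is the standard pattern, not code taken from J-ISIS, and the URL is just an example:

        import javax.swing.JFrame;
        import javax.swing.SwingUtilities;
        import javafx.application.Platform;
        import javafx.embed.swing.JFXPanel;
        import javafx.scene.Scene;
        import javafx.scene.web.WebView;

        // Sketch: a Swing frame whose content is a JavaFX WebView.
        public class WebViewInSwing {
            public static void main(String[] args) {
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        JFrame frame = new JFrame("WebView in Swing");
                        final JFXPanel fxPanel = new JFXPanel();   // bridges Swing and JavaFX
                        frame.add(fxPanel);
                        frame.setSize(800, 600);
                        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                        frame.setVisible(true);

                        // Scene graph work must happen on the JavaFX application thread.
                        Platform.runLater(new Runnable() {
                            public void run() {
                                WebView webView = new WebView();
                                webView.getEngine().load("http://www.unesco.org/");
                                fxPanel.setScene(new Scene(webView));
                            }
                        });
                    }
                });
            }
        }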

    Read the article

  • What are the hard and fast rules for Cache Control?

    - by Metalshark
    Confession: the sites I maintain have different rules for Cache-Control, mostly based on the default configuration of the server, followed up with recommendations from the Page Speed and YSlow Firefox plug-ins and the Network Resources view in Google's Speed Tracer. Cache-Control is set to private or public depending on what they say to do, ETag/Last-Modified headers are only tinkered with if YSlow suggests there is something wrong, and Vary: Accept-Encoding seems necessary when manually gzipping files for Amazon CloudFront.

    When reading through the material on the different options and what they do, there seems to be conflicting information, rules for broken proxies and cargo-cult configurations. Any of the official information provided by the analysis tools mentioned above is quite inaccessible, as it deals with each topic individually instead of as a unified strategy (so there is no cross-referencing of techniques). For example, it seems to make no sense that the speed analysis tools rate a site with ETags the same as a site without them if they are meant to help with caching.

    What are the hard and fast rules for a platform-agnostic Cache-Control strategy?

    EDIT: A link through to Jeff Atwood's article explains caching in superb depth. For the record, though, here are the hard and fast rules:

    1. If the file is compressed using gzip, etc., use "Cache-Control: private", as a proxy may return the compressed version to a client that does not support it (the browser cache will hold files marked this way, though). Also remember to include "Vary: Accept-Encoding" to say that it is compressible.
    2. Use Last-Modified in conjunction with ETag; belt-and-braces usage provides both validators, and since ETag is based on file contents instead of modification time alone, using both covers all bases. NOTE: AOL's PageTest has a carte blanche approach against ETags for some reason.
    3. If you are using Apache on more than one server to host the same content, then remove the implicitly declared inode from ETags by excluding it from the FileETag directive (i.e. "FileETag MTime Size") unless you are genuinely using the same live filesystem.
    4. Use "Cache-Control: public" wherever you can; this means that proxy servers (and the browser cache) will return your content even if the rest of the page needs HTTP authentication, etc.
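
    As a concrete (and deliberately simplified) illustration of applying such rules in a servlet container, here is a sketch of a filter that stamps caching headers onto responses; the URL patterns and the 30-day max-age are arbitrary example values, not recommendations:

        import java.io.IOException;
        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Sketch: mark static assets as publicly cacheable and make compressed
        // responses vary on Accept-Encoding.
        public class CacheHeaderFilter implements Filter {

            public void init(FilterConfig config) throws ServletException { }

            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest request = (HttpServletRequest) req;
                HttpServletResponse response = (HttpServletResponse) res;

                String uri = request.getRequestURI();
                if (uri.endsWith(".css") || uri.endsWith(".js") || uri.endsWith(".png")) {
                    // Cacheable by browsers and shared proxies for 30 days (example value).
                    response.setHeader("Cache-Control", "public, max-age=2592000");
                } else {
                    // Dynamic pages: let only the browser cache them.
                    response.setHeader("Cache-Control", "private");
                }
                // If the response may be served gzipped, caches must key on encoding.
                response.setHeader("Vary", "Accept-Encoding");

                chain.doFilter(req, res);
            }

            public void destroy() { }
        }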

    Read the article

  • Clean toshiba a300d 14r

    - by pask
    Hi all, I'm not able to open my notebook, a Toshiba A300D-14R. Googling, I haven't found any useful information. This is the official page: http://it.computers.toshiba-europe.com/innovation/jsp/SUPPORTSECTION/discontinuedProductPage.do?service=IT&PRODUCT_ID=1056713&DISC_MODEL=1#0 Does anyone have an idea of how to open it? The fan goes crazy on power-on and I suppose it is full of dust, so I need to clean the fan. Is there a guide or any useful information for doing this? Thanks, A

    Read the article

  • How do you debug why Windows is slow?

    - by aaron
    I've got Vista Biz, and when my machine chugs I think it is because of paging, but I never know how to verify this. Procexp doesn't seem to provide useful information, because it appears that nothing is going on when the chugs happen. Perfmon seems like it has the counters I need, but I'm never sure which counters I should add to cover the information I want. For perfmon, I prefer numbers that are percentages, so I can gauge load. Here are the counters I have up, but they don't always seem to correlate with the chugs:

    - % Disk Time (logical)
    - Page Faults/sec (an indicator of lots of paging activity)
    - Processor \ % Privileged Time

    Read the article

  • Set secondary receiver in PayPal Chained Payment after the initial transaction

    - by CJxD
    I'm running a service whereby customers seek the services of 'freelancers' through our web platform. The customer will make a 'bid' which is immediately taken from their accounts as security. Once the job is completed, the customer marks it as accepted and the bid gets distributed to the freelancer(s) as a reward. After initially storing these rewards in the accounts of the freelancers and relying on MassPay to sort out paying them later, I realised that your business needs to be turning over at least £5000/month before MassPay is switched on. Instead, I was referred to Delayed Chained Payments in PayPal's Adaptive Payments API. This allows the customer to pay the primary receiver (my business) before the payment is later triggered to be sent to the secondary receivers (the freelancers). However, at the time that the customer initiates this transaction, you must understand that nobody yet knows who will receive the reward. So, before I program this whole Adaptive Payments system, is it even possible to change or add the secondary receivers after the customer has paid? If not, what can I do?

    Read the article

  • Situations that require protecting files against tampering when stored on a users computer

    - by Joel
    I'm making a 'Pokémon Storage System' with a client/server model, and as part of that I was thinking of storing an inventory file on the user's computer which I do not wish to be edited except by my program. An alternative would be to store the inventory file on the server instead and control its editing by sending commands to the server. But I was wondering: are there any situations which require files to be stored on a user's computer where editing would be undesirable, and if so, how do you protect the files? I was thinking AES with some sort of checksum?
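
    If the goal is only to detect edits rather than hide the contents, a keyed MAC over the file is usually a better fit than "AES with a checksum". Below is a minimal javax.crypto sketch, with the big caveat that any key shipped inside the client can be extracted, which is why the server-side approach above is the only robust option; file names and the demo key are illustrative:

        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.security.MessageDigest;
        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;

        // Sketch: write/verify an HMAC-SHA256 tag so casual edits to the
        // inventory file are detected. Key handling here is illustrative only.
        public class InventorySigner {

            private final SecretKeySpec key;

            public InventorySigner(byte[] secret) {
                this.key = new SecretKeySpec(secret, "HmacSHA256");
            }

            public byte[] sign(byte[] data) throws Exception {
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(key);
                return mac.doFinal(data);
            }

            public boolean verify(byte[] data, byte[] expectedTag) throws Exception {
                // Constant-time comparison to avoid timing leaks.
                return MessageDigest.isEqual(sign(data), expectedTag);
            }

            public static void main(String[] args) throws Exception {
                InventorySigner signer =
                        new InventorySigner("demo-secret".getBytes(StandardCharsets.UTF_8));
                byte[] inventory = Files.readAllBytes(Paths.get("inventory.dat"));
                byte[] tag = signer.sign(inventory);
                Files.write(Paths.get("inventory.dat.sig"), tag);

                // Later: re-read both files and check the tag before trusting the data.
                System.out.println(signer.verify(inventory, tag));
            }
        }

    A scheme like this only raises the bar against casual tampering; anything that must actually be trusted should live on, or be validated by, the server.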

    Read the article

  • In MVC , DAO should be called from Controller or Model

    - by tito
    I have seen various arguments against the DAO being called from the Controller class directly, and also against the DAO being called from the Model class. In fact, I personally feel that if we are following the MVC pattern, the controller should not be coupled with the DAO; the Model class should invoke the DAO from within, and the controller should invoke the model class. Why? Because we can then decouple the model class from the web application and expose its functionality in various ways, for example to a REST service. If we write the DAO invocation in the controller, it would not be possible for a REST service to reuse that functionality, right? I have summarized both approaches below.

    Approach #1:

        public class CustomerController extends HttpServlet {
            protected void doPost(....) {
                Customer customer = new Customer("xxxxx", "23", 1);
                new CustomerDAO().save(customer);
            }
        }

    Approach #2:

        public class CustomerController extends HttpServlet {
            protected void doPost(....) {
                Customer customer = new Customer("xxxxx", "23", 1);
                customer.save(customer);
            }
        }

        public class Customer {
            ...........
            public void save(Customer customer) {   // must be public to be callable from the controller
                new CustomerDAO().save(customer);
            }
        }

    Note: here is a definition of the Model. Model: the model manages the behavior and data of the application domain, responds to requests for information about its state (usually from the view), and responds to instructions to change state (usually from the controller). In event-driven systems, the model notifies observers (usually views) when the information changes so that they can react.

    I would appreciate an expert opinion on this, because I see many people using either #1 or #2. So which one is it?
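
    To make the reuse argument concrete, here is a hypothetical JAX-RS resource that calls the same model method as the servlet in Approach #2; the endpoint path, annotations and response handling are invented for illustration:

        import javax.ws.rs.POST;
        import javax.ws.rs.Path;
        import javax.ws.rs.core.Response;

        // Hypothetical REST endpoint reusing the Customer model from Approach #2.
        // Because persistence lives behind Customer.save(), no servlet controller
        // code has to be duplicated here.
        @Path("/customers")
        public class CustomerResource {

            @POST
            public Response create() {
                Customer customer = new Customer("xxxxx", "23", 1);
                customer.save(customer);   // model delegates to CustomerDAO internally
                return Response.status(Response.Status.CREATED).build();
            }
        }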

    Read the article

  • How can I store all my level data in a single file instead of spread out over many files?

    - by Jon
    I am currently generating my level data and saving it to disk to ensure that any modifications done to the level are saved. I store "chunks" of 2048x2048 pixels in a file. Whenever the player moves over a section that doesn't have a file associated with the position, a new file is created. This works great and is very fast. My issue is that as you play, the file count gets larger and larger. I'm wondering what techniques can be used to reduce the file count without taking a performance hit. I am interested in how you would store/seek/update this data efficiently in a single file instead of multiple files.
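
    One common approach, sketched below under the assumption that each serialized chunk fits in a fixed-size slot, is a single "region" file holding a grid of chunks, so individual chunks can be read or rewritten in place with seek(). Names and sizes are illustrative:

        import java.io.IOException;
        import java.io.RandomAccessFile;

        // Sketch: keep many chunk records in one "region" file and seek directly to
        // a chunk's slot instead of creating one file per chunk.
        public class RegionFile implements AutoCloseable {

            private static final int SLOT_BYTES = 64 * 1024;  // 4-byte length prefix + payload
            private final RandomAccessFile file;
            private final int chunksPerRow;                   // e.g. a 32 x 32 grid per region

            public RegionFile(String path, int chunksPerRow) throws IOException {
                this.file = new RandomAccessFile(path, "rw");
                this.chunksPerRow = chunksPerRow;
            }

            private long slotOffset(int chunkX, int chunkY) {
                return (long) (chunkY * chunksPerRow + chunkX) * SLOT_BYTES;
            }

            public void writeChunk(int chunkX, int chunkY, byte[] data) throws IOException {
                if (data.length > SLOT_BYTES - 4) {
                    throw new IllegalArgumentException("chunk does not fit in its slot");
                }
                file.seek(slotOffset(chunkX, chunkY));
                file.writeInt(data.length);    // real payload length stored inside the slot
                file.write(data);
            }

            public byte[] readChunk(int chunkX, int chunkY) throws IOException {
                file.seek(slotOffset(chunkX, chunkY));
                int length = file.readInt();   // throws EOFException if never written
                byte[] data = new byte[length];
                file.readFully(data);
                return data;
            }

            @Override
            public void close() throws IOException {
                file.close();
            }
        }

    A variable-size variant keeps an offset/length index at the front of the file instead of fixed slots, at the cost of occasionally compacting the file when chunks grow.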

    Read the article

  • How can I disable logging in Tomcat 7?

    - by WilliamMayor
    I have a Tomcat 7 server running in a VM that has very little disk space (20G). Over the course of a few days Tomcat will fill the space with logging info (usually about 15G before it runs out). I've tried turning down the log level (from INFO to SEVERE) in the logging.properties file, I've also tried sending the log info to /dev/null. It doesn't seem to work as I still get a full log directory after no time at all. Can I put a file size limit on the log files? Is something overriding the properties I'm setting? Where can I find this information? My Google Fu just returns information about logging from within an application using JULI.

    Read the article

  • Computer cables explained

    - by Robert English
    I've noticed lately that places to learn about power supply cables, peripherals and fans aren't that easy to find. There's very little information available that gives detailed explanations of which cables are used inside a computer, and what I found was very dated and often lacked detail. For someone planning their first build it would be great to have this explained all in one place, like here. Important things to know about cables and connections in a computer:

    - What are their names?
    - Where do they connect to, and why?
    - What voltages do they typically output?
    - Changing voltages for overclocking?

    Please reference PSU cables (fully modular, semi-modular and non-modular; 24-pin, 20+4-pin, etc.), SATA (I, II, III), Molex, etc.

    EDIT: Forgot to mention that any information about PSU rails would also be appreciated :)

    Read the article

  • When I type apt-get -f install, I get an error message

    - by gene
    xserver-xorg-core (2:1.11.4-0ubuntu10.8) breaks xserver-xorg-video-5 and is installed.

    Also, I cannot upgrade my software; it says that the package system is broken, with the detailed information:

        The following packages have unmet dependencies:
        xserver-xorg-core: Depends: xserver-common (>= 2:1.11.4-0ubuntu10.8) but 2:1.11.4-0ubuntu10.8 is installed

    When I issue sudo apt-get update, the output seems fine (sorry, the output has too many links, which I cannot post); the source is http://archive.ubuntu.com

        Reading package lists... Done

    When I issue sudo apt-get dist-upgrade, the output is:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies:
         xserver-xorg-core : Breaks: xserver-xorg-video-5
        E: Unmet dependencies. Try using -f.

    When I issue sudo apt-get -f install, the output is:

        dpkg: dependency problems prevent configuration of xserver-xorg-video-radeon:
         xserver-xorg-core (2:1.11.4-0ubuntu10.8) breaks xserver-xorg-video-5 and is installed.
         xserver-xorg-video-radeon (1:6.12.1-0ubuntu2) provides xserver-xorg-video-5.
        dpkg: error processing xserver-xorg-video-radeon (--configure): dependency problems - leaving unconfigured
        No apport report written because the error message indicates it's a followup error from a previous failure.
        Errors were encountered while processing:
         xserver-xorg-video-radeon
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • The Start of a Blog

    - by dbradley
    So, here's my new blog up and running. Who am I, and what am I planning to write here?

    First off, here's a little about me. I'm a recent graduate from university (coming up to a year ago since I finished), having studied Software Engineering on a four-year course where the third year was an industrial placement. During the industrial placement I went to work for a company called Adfero in a "Technical Consultant" role, as well as a junior "Information Systems Developer". Once I completed my placement I went back to complete my final year, but also continued in my developer role two or three days a week with the company.

    Working part time while at uni always seems like a great idea until you get half way through the year. For me the problem was not so much a lack of time, but rather a lack of interest in the course content, having got a chance at working on real projects in a live environment. Most people who have been graduated a little while also find this: when looking back at uni work, it seems much more trivial from a problem-solving point of view, and I found the key to uni work to actually be your ability to prove, through how you talk about something, that you comprehensively understand the basics.

    After completing uni I returned full time to Adfero, purely in the developer role, which is where I've now been for almost a year. I've now also taken on the title of "Information Systems Architect", where I'm working on some of the more high-level design problems within the products.

    What I'm wanting to share on this blog is some of the interesting things I've learnt myself over the last year, the things they don't teach you in uni, and pretty much anything else I find interesting! My personal favorite areas are text indexing, search, and particularly good software engineering design: good design combined with good code makes the first step towards a well-written, maintainable piece of software. Hopefully I'll also be able to share a few of the products I've worked on, the mistakes I've made and the software problems I've inherited from previous developers and had to heavily refactor.

    Read the article
