Search Results

Search found 56854 results on 2275 pages for 'generic error'.


  • Metro: Promises

    - by Stephen.Walther
    The goal of this blog entry is to describe the Promise class in the WinJS library. You can use promises whenever you need to perform an asynchronous operation such as retrieving data from a remote website or a file from the file system. Promises are used extensively in the WinJS library.

    Asynchronous Programming

    Some code executes immediately; some code requires time to complete or might never complete at all. For example, retrieving the value of a local variable is an immediate operation. Retrieving data from a remote website takes longer or might not complete at all. When an operation might take a long time to complete, you should write your code so that it executes asynchronously: instead of waiting for an operation to complete, you should start the operation and then do something else until you receive a signal that the operation is complete.

    An analogy: some telephone customer service lines require you to wait on hold – listening to really bad music – until a customer service representative is available. This is synchronous programming and very wasteful of your time. Some newer customer service lines enable you to enter your telephone number so that a customer service representative can call you back when one becomes available. This approach is much less wasteful of your time because you can do useful things while waiting for the callback.

    There are several patterns that you can use to write code which executes asynchronously. The most popular pattern in JavaScript is the callback pattern: when you call a function which might take a long time to return a result, you pass a callback function to it. For example, the following code (which uses jQuery) includes a function named getFlickrPhotos() which returns photos from the Flickr website that match a set of tags (such as "dog" and "funny"):

        function getFlickrPhotos(tags, callback) {
            $.getJSON(
                "http://api.flickr.com/services/feeds/photos_public.gne?jsoncallback=?",
                { tags: tags, tagmode: "all", format: "json" },
                function (data) {
                    if (callback) {
                        callback(data.items);
                    }
                }
            );
        }

        getFlickrPhotos("funny, dogs", function (data) {
            $.each(data, function (index, item) {
                console.log(item);
            });
        });

    The getFlickrPhotos() function includes a callback parameter. When you call getFlickrPhotos(), you pass a function to the callback parameter, and that function gets executed when getFlickrPhotos() finishes retrieving the list of photos from the Flickr web service. In the code above, the callback function simply iterates through the results and writes each result to the console. Using callbacks is a natural way to perform asynchronous programming with JavaScript: instead of waiting for an operation to complete, sitting there and listening to really bad music, you can get a callback when the operation is complete.

    Using Promises

    The CommonJS website defines a promise like this (http://wiki.commonjs.org/wiki/Promises): "Promises provide a well-defined interface for interacting with an object that represents the result of an action that is performed asynchronously, and may or may not be finished at any given point in time. By utilizing a standard interface, different components can return promises for asynchronous actions and consumers can utilize the promises in a predictable manner." A promise provides a standard pattern for specifying callbacks. In the WinJS library, when you create a promise, you can specify three callbacks: a complete callback, a failure callback, and a progress callback.
    Promises are used extensively in the WinJS library. The methods in the animation library, the control library, and the binding library all use promises. For example, the xhr() method included in the WinJS base library returns a promise; it wraps calls to the standard XmlHttpRequest object in a promise. The following code illustrates how you can use the xhr() method to perform an Ajax request which retrieves a file named photos.txt:

        var options = { url: "/data/photos.txt" };

        WinJS.xhr(options).then(
            function (xmlHttpRequest) {
                console.log("success");
                var data = JSON.parse(xmlHttpRequest.responseText);
                console.log(data);
            },
            function (xmlHttpRequest) {
                console.log("fail");
            },
            function (xmlHttpRequest) {
                console.log("progress");
            }
        );

    The WinJS.xhr() method returns a promise. The Promise class includes a then() method which accepts three callback functions: a complete callback, an error callback, and a progress callback:

        Promise.then(completeCallback, errorCallback, progressCallback)

    In the code above, three anonymous functions are passed to the then() method. The three callbacks simply write a message to the JavaScript Console; the complete callback also dumps all of the data retrieved from the photos.txt file.

    Creating Promises

    You can create your own promises by creating a new instance of the Promise class. The constructor for the Promise class requires a function which accepts three parameters: a complete, an error, and a progress function parameter. For example, the code below illustrates how you can create a method named wait10Seconds() which returns a promise. The progress function is called every second, and the complete function is not called until 10 seconds have passed:

        (function () {
            "use strict";

            var app = WinJS.Application;

            function wait10Seconds() {
                return new WinJS.Promise(function (complete, error, progress) {
                    var seconds = 0;
                    var intervalId = window.setInterval(function () {
                        seconds++;
                        progress(seconds);
                        if (seconds > 9) {
                            window.clearInterval(intervalId);
                            complete();
                        }
                    }, 1000);
                });
            }

            app.onactivated = function (eventObject) {
                if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
                    wait10Seconds().then(
                        function () { console.log("complete"); },
                        function () { console.log("error"); },
                        function (seconds) { console.log("progress:" + seconds); }
                    );
                }
            };

            app.start();
        })();

    All of the work happens in the constructor function for the promise. The window.setInterval() method is used to execute code every second. Every second, the progress() callback is called. Once 10 seconds have passed, the complete() callback is called and the clearInterval() method is called. When you execute the code above, you can see the output in the Visual Studio JavaScript Console.

    Creating a Timeout Promise

    In the previous section, we created a custom promise which uses the window.setInterval() method to complete the promise after 10 seconds. We really did not need to create a custom promise, because the Promise class already includes a static method for returning promises which complete after a certain interval. The code below illustrates how you can use the timeout() method, which returns a promise that completes after a certain number of milliseconds:

        WinJS.Promise.timeout(3000).then(
            function () { console.log("complete"); },
            function () { console.log("error"); },
            function () { console.log("progress"); }
        );

    In the code above, the promise completes after 3 seconds (3000 milliseconds).
    The promise returned by the timeout() method does not support progress events. Therefore, the only message written to the console is the message "complete" after 3 seconds.

    Canceling Promises

    Some promises, but not all, support cancellation. When you cancel a promise, the promise's error callback is executed. For example, the following code uses the WinJS.xhr() method to perform an Ajax request; however, immediately after the Ajax request is made, the request is cancelled.

        // Specify Ajax request options
        var options = { url: "/data/photos.txt" };

        // Make the Ajax request
        var request = WinJS.xhr(options).then(
            function (xmlHttpRequest) { console.log("success"); },
            function (xmlHttpRequest) { console.log("fail"); },
            function (xmlHttpRequest) { console.log("progress"); }
        );

        // Cancel the Ajax request
        request.cancel();

    When you run the code above, the message "fail" is written to the Visual Studio JavaScript Console.

    Composing Promises

    You can build promises out of other promises; in other words, you can compose promises. There are two static methods of the Promise class which you can use for this: the join() method and the any() method. When you join promises, the composed promise is complete when all of the joined promises are complete. When you use the any() method, the composed promise is complete when any of the promises completes. The following code illustrates how to use the join() method. A new promise is created out of two timeout promises, and the new promise does not complete until both of the timeout promises complete:

        WinJS.Promise.join([WinJS.Promise.timeout(1000), WinJS.Promise.timeout(5000)])
            .then(function () { console.log("complete"); });

    The message "complete" will not be written to the JavaScript Console until both promises passed to the join() method complete, so the message won't be written for 5 seconds (5,000 milliseconds). The any() method, in contrast, completes when any promise passed to it completes:

        WinJS.Promise.any([WinJS.Promise.timeout(1000), WinJS.Promise.timeout(5000)])
            .then(function () { console.log("complete"); });

    The code above writes the message "complete" to the JavaScript Console after 1 second (1,000 milliseconds) – immediately after the first promise completes and before the second promise completes.

    Summary

    The goal of this blog entry was to describe WinJS promises. First, we discussed how promises enable you to easily write code which performs asynchronous actions, and you learned how to use a promise when performing an Ajax request. Next, we discussed how you can create your own promises; you learned how to create a new promise by providing a constructor function with complete, error, and progress parameters. Finally, you learned about several advanced methods of promises: how to use the timeout() method to create promises which complete after an interval of time, how to cancel promises, and how to compose promises from other promises.
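    To close the loop on composition, here is one extra sketch – not from the walkthrough above, but built only from the methods it describes – which uses any() to give an Ajax request a deadline: the composed promise completes as soon as either the xhr() request finishes or a 5 second timeout elapses.

        var options = { url: "/data/photos.txt" };

        // Completes when either the request or the 5 second timeout finishes first.
        WinJS.Promise.any([WinJS.xhr(options), WinJS.Promise.timeout(5000)])
            .then(function () {
                console.log("request completed or 5 seconds elapsed");
            });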

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #003

    - by pinaldave
    Here is a list of curated articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorites and listed them here with additional notes. Let me know which of the following is your favorite article from memory lane.

    2006

    This was the first year of my blogging, and I was learning lots of new things as I went. I was indeed an infant in blogging a few years ago; however, as time passed, I learned a lot. This was a year of experiments and new learning.

    2007

    Working as a full-time DBA, I often encountered various errors, and I started to learn how to avoid those errors and to document them.

    ERROR Msg 5174 Each file size must be greater than or equal to 512 KB
    Whenever I see this error I wonder why someone is trying to create a database that is extremely small. Anyway, it does not matter what I think – I keep on seeing this error often in the industry. The solution to the error is equally interesting: just create a larger database.

    Dilbert Humor
    This was my very first encounter with database humor, and I started to love it. It does not matter how many times we read this cartoon, it does not get old.

    Generate Script with Data from Database – Database Publishing Wizard
    Generating a schema script with data is one of the most frequently performed tasks among SQL Server data professionals, and there are many ways to do it. In the above article I demonstrated how we can use the Database Publishing Wizard to accomplish it. It was new to me at that time, but I still have not seen much adoption of it in the industry. Here is one of my videos where I demonstrate how we can generate data with schema.

    2008

    Delete Backup History – Cleanup Backup History
    Deleting backup history is important too, but it should be done carefully. If this is not carried out at regular intervals, there is a good chance that MSDB will be filled up with all the old history. Every organization is different – some would like to keep the history for 30 days and some for a year – but there should be some limit, and one should regularly archive the database backup history.

    South Asia MVP Open Days 2008
    This was my very first year as a Microsoft MVP. The event was a big blast indeed, and the fun was incredible. After this event I have attended many different MVP events, but the fun and learning this particular event offered were amazing, and just like me, many others are not able to forget it. Here are other links related to the event:
    South Asia MVP Open Day 2008 – Goa
    South Asia MVP Open Day 2008 – Goa – Day 1
    South Asia MVP Open Day 2008 – Goa – Day 2
    South Asia MVP Open Day 2008 – Goa – Day 3

    2009

    Enable or Disable Constraint
    This is a very simple script, but I personally keep forgetting it, so I blogged it. To this day I keep referencing it again and again, as sometimes a very little thing is hard to remember.

    Policy Based Management – Create, Evaluate and Fix Policies
    This article covers the most spectacular feature of SQL 2008 – policy-based management – and how configuring SQL Server with the policy-based management architecture can make a powerful difference. Policy-based management is loaded with several advantages: it can help you implement various policies for reliable configuration of the system, and it provides additional administrative assistance to DBAs, helping them effortlessly manage various tasks of SQL Server across the enterprise.
    SQLPASS 2009 – My Very First SQLPASS Experience
    Just brilliant! I had never experienced such a thing in my life. SQL, SQL and SQL – SQL all around! I am listing my own reasons here in order of importance to me:
    Networking with SQL fellows and experts
    Putting a face to the name or avatar
    Learning and improving my SQL skills
    Understanding the structure of the largest SQL Server professional association
    Attending my favorite training sessions
    Since then I have never missed this event a single time. It is my favorite event and something that keeps me going. Here are additional posts related to SQLPASS 2009:
    SQL PASS Summit, Seattle 2009 – Day 1
    SQL PASS Summit, Seattle 2009 – Day 2
    SQL PASS Summit, Seattle 2009 – Day 3
    SQL PASS Summit, Seattle 2009 – Day 4

    2010

    Get All the Information of Database using sys.databases
    Even though we believe that we know everything about our databases, there is a lot we do not know. This little script lets us see many details about our databases that we may not be familiar with. Run it on your server today and see how well you know your databases.

    Reducing CXPACKET Wait Stats for High Transactional Database
    While engaged in a performance tuning consultation for a client, a situation occurred where they were facing a lot of CXPACKET wait stats. The client asked me if I could help them reduce this huge number of wait stats. I usually receive this kind of request from other clients as well, but the important thing to understand is whether this question has any merit or benefit, or not. I discuss this in the article – a bit long, but insightful for sure.

    Error related to Database in Use
    There are many database management operations in SQL Server which require exclusive access to the database, and it is not always possible to get it. When a database is online in SQL Server, applications or system threads often access it. This means the database cannot be accessed exclusively, and the operations which require this access throw an error. There is a very easy method to overcome this minor issue – a single-line script can give you exclusive access to the database.

    Difference between DATETIME and DATETIME2
    Developers have found the root reason of the problem when dealing with date functions: when date-time values are converted (implicitly or explicitly) between different data types, some precision can be lost, so the results do not match as expected. In this blog post I go over the very interesting details of, and differences between, DATETIME and DATETIME2.

    History of SQL Server Database Encryption
    I recently met Michael Coles and Rodney Landrum, the authors of the one-of-a-kind book Expert SQL Server 2008 Encryption, at SQLPASS in Seattle. During the conversation we ended up discussing how Microsoft is evolving encryption technology, and the same discussion led to the history of encryption tools in SQL Server. Michael pointed me to page 18 of his book and explicitly gave me permission to reproduce the relevant part of the history from it.

    2011

    Functions FIRST_VALUE and LAST_VALUE with OVER clause and ORDER BY
    Sometimes an interesting feature and a smart audience make a total difference. For the last two days, I had been writing about the SQL Server 2012 features FIRST_VALUE and LAST_VALUE. I created a puzzle which was very interesting and got many people to attempt to resolve it.
    It was based on the following two articles:
    Introduction to FIRST_VALUE and LAST_VALUE
    Introduction to FIRST_VALUE and LAST_VALUE with OVER clause
    I even provided a hint about how one can solve this problem. The best part was that many people solved the problem without using the hints! Try your luck!

    A Real Story of Book Getting 'Out of Stock' to A 25% Discount Story Available
    This is a great problem, and everybody would love to have it. We had it, and we loved it. Our book went out of stock within 48 hours of release, and the shelves were empty. We faced many issues and learned many valuable lessons. Some problems we were able to avoid in the future, and some we are still facing, as those problems have no solutions. However, since that day our books have never gone out of stock. This was an inspiring learning story for us, and I am confident that you will love to read it as well.

    Introduction to LEAD and LAG – Analytic Functions Introduced in SQL Server 2012
    SQL Server 2012 introduces the new analytic functions LEAD() and LAG(). These functions access data from a subsequent row (for LEAD) or a previous row (for LAG) in the same result set without the use of a self-join. It would be very difficult to explain this in words, so I attempt a small example to explain these functions. I had a fantastic time writing this blog post, and I am very confident that when you read it, you will like it.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • What are your suggestions on learning how to think?

    - by Jonathan Khoo
    First of all, this is not the generic 'make me a better programmer' question, even though the outcome of asking it might seem similar. On Programmers.SE, I've read and seen these get closed here, here, here, here, and here. We all know there is a multitude of generic suggestions for honing your programming skills (e.g. reading SO, reading recommended books, following blogs, getting involved in open-source projects, etc.). This is not what I'm after. I also acknowledge the active readership on this web site and am hoping it works in my favour by yielding some great answers. From reading correspondence here, there appears to be a vast number of experienced people who are working, or have worked, in programming-related fields, and most of you can convey thoughts in an eloquent, concise manner.

    I've recently noticed the distinction between someone who's capable of programming and a programmer who can really think. I refuse to believe that in order to become a great programmer, we simply submit ourselves to a lifetime of sponge-like behaviour (i.e. absorbing everything related to our field by reading, listening, watching, etc.). I would even state that if you know every single programming concept and can solve problem X faster than everyone around you, but you can't think, you're enormously limiting yourself – you're just a fast robot. I like to believe there's a whole other facet of being a great programmer which is unrelated to how much you know about programming: how well you can intertwine new concepts and apply them to your programming profession or hobby. I haven't seen anyone delve into, or address, this facet of the human mind and programming. (Yes, it's also possible that I haven't looked hard enough – sorry if that's the case.)

    So for anyone who has spent any time thinking about what I've mentioned above – or maybe it's everyone here, because I'm a little behind in my personal/professional development – what are your suggestions on learning how to think? Aside from the usual reading, what else have you done to be better than the other people in your/our field?

    Read the article

  • E-Business Integration with SSO using AccessGate

    - by user774220
    Moving away from the legacy Oracle SSO, Oracle E-Business Suite (EBS) came up with EBS AccessGate as the way forward to provide Single Sign-On with Oracle Access Manager (OAM). As opposed to an AccessGate in OAM terminology, EBS AccessGate has no specific connection with OAM with respect to configuration. Instead, EBS AccessGate uses the header variables sent from the SSO system to create the native user session, like any other SSO-enabled web application.

    E-Business Suite Integration with Oracle Access Manager

    It is a known fact that E-Business Suite requires Oracle Internet Directory (OID) as the user repository to enable Single Sign-On. This is because E-Business Suite needs to be registered with OID for Single Sign-On. Additionally, E-Business Suite uses "orclguid" in OID to map the Single Sign-On user to the corresponding local user profile. During authentication, EBS AccessGate expects the SSO system to return the orclguid and the EBS username (stored as a user attribute in the SSO user store) in two header variables, USER_ORCLGUID and USER_NAME respectively. The following diagram depicts the authentication flow once the SSO system returns the EBS username and orclguid after successful authentication:

    Topic to brainstorm: EBS AccessGate as a generic SSO enablement solution for E-Business Suite

    Even though EBS AccessGate is suggested as an integration approach between OAM and Oracle E-Business Suite, this section attempts to look at EBS AccessGate as a generic solution approach for providing SSO to Oracle E-Business Suite using any web SSO solution. From the above points, the only dependency on the SSO system is that it should be able to return the corresponding orclguid from the OID which is configured with E-Business Suite. This can be achieved by a variety of approaches:

    By using the same OID referred to by E-Business Suite as the Single Sign-On user store.

    If the SSO system is using a different user store, then either:
    Use DIP or OIM to sync orclguid from the E-Business Suite OID to the SSO user store, or
    Use OVD to provide an LDAP view where orclguid from the E-Business Suite OID is part of the user entity in the user store referred to by the SSO system.

    Read the article

  • Tutorial: Getting Started with the NoSQL JavaScript / Node.js API for MySQL Cluster

    - by Mat Keep
    Tutorial authored by Craig Russell and JD Duncan

    The MySQL Cluster team are working on a new NoSQL JavaScript connector for MySQL. The objectives are simplicity and high performance for JavaScript users:
    - it allows end-to-end JavaScript development, from the browser to the server and now to the world's most popular open source database
    - it provides native "NoSQL" access to the storage layer without going first through SQL transformations and parsing.

    Node.js is a complete web platform built around JavaScript designed to deliver millions of client connections on commodity hardware. With the MySQL NoSQL Connector for JavaScript, Node.js users can easily add data access and persistence to their web, cloud, social and mobile applications. While the initial implementation is designed to plug and play with Node.js, the actual implementation doesn't depend heavily on Node, potentially enabling wider platform support in the future.

    Implementation

    The architecture and user interface of this connector are very different from other MySQL connectors in a major way: it is an asynchronous interface that follows the event model built into Node.js. To make it as easy as possible, we decided to use a domain object model to store the data. This allows users to query data from the database and get back a fully-instantiated object to work with, instead of having to deal with rows and columns of the database. The domain object model can have any user behavior that is desired, with the NoSQL connector providing the data from the database. To make it as fast as possible, we use a direct connection from the user's address space to the database. This approach means that no SQL (pun intended) is needed to get to the data, and no SQL server sits between the user and the data. The connector is being developed to be extensible to multiple underlying database technologies, including direct, native access to both the MySQL Cluster "ndb" and InnoDB storage engines. The connector integrates the MySQL Cluster native API library directly within the Node.js platform itself, enabling developers to seamlessly couple their high performance, distributed applications with a high performance, distributed persistence layer delivering 99.999% availability. The following sections take you through how to connect to MySQL, how to query the data, and how to get started.

    Connecting to the database

    A Session is the main user access path to the database. You can get a Session object directly from the connector using the openSession function:

        var nosql = require("mysql-js");
        var dbProperties = {
            "implementation" : "ndb",
            "database" : "test"
        };
        nosql.openSession(dbProperties, null, onSession);

    The openSession function calls back into the application upon creating a Session. The Session is then used to create, delete, update, and read objects.

    Reading data

    The Session can read data from the database in a number of ways. If you simply want the data from the database, you provide a table name and the key of the row that you want. For example, consider this schema:

        create table employee (
          id int not null primary key,
          name varchar(32),
          salary float
        ) ENGINE=ndbcluster;

    Since the primary key is a number, you can provide the key as a number to the find function:

        var onSession = function(err, session) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          session.find('employee', 0, onData);
        };

        var onData = function(err, data) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          console.log('Found: ', JSON.stringify(data));
          // ... use data in application
        };

    If you want to have the data stored in your own domain model, you tell the connector which table your domain model uses by specifying an annotation, and pass your domain model to the find function:

        var annotations = new nosql.Annotations();

        var Employee = function(id, name, salary) {
          this.id = id;
          this.name = name;
          this.salary = salary;
          this.giveRaise = function(percent) {
            this.salary *= (1 + percent); // raise by the given fraction
          };
        };

        annotations.mapClass(Employee, {'table' : 'employee'});

        var onSession = function(err, session) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          session.find(Employee, 0, onData);
        };

    Updating data

    You can update the emp instance in memory, but to make the raise persistent, you need to write it back to the database using the update function:

        var onData = function(err, emp) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          console.log('Found: ', JSON.stringify(emp));
          emp.giveRaise(0.12); // gee, thanks!
          session.update(emp); // oops, session is out of scope here
        };

    Using JavaScript can be tricky because it does not have the concept of block scope for variables. You can create a closure to handle these variables, or use a feature of the connector to remember your variables. The connector API takes a fixed number of parameters and returns a fixed number of result parameters to the callback function, but the connector will keep track of extra variables for you and return them to the callback. So in the above example, change the onSession function to remember the session variable, and you can refer to it in the onData function:

        var onSession = function(err, session) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          session.find(Employee, 0, onData, session);
        };

        var onData = function(err, emp, session) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          console.log('Found: ', JSON.stringify(emp));
          emp.giveRaise(0.12); // gee, thanks!
          session.update(emp, onUpdate); // session is now in scope
        };

        var onUpdate = function(err, emp) {
          if (err) {
            console.log(err);
            // ... error handling
          }
        };

    Inserting data

    Inserting data requires a mapped JavaScript user function (constructor) and a session. Create a variable and persist it:

        var onSession = function(err, session) {
          var data = new Employee(999, 'Mat Keep', 20000000);
          session.persist(data, onInsert);
        };

    Deleting data

    To remove data from the database, use the session remove function. You use an instance of the domain object to identify the row you want to remove; only the key field is relevant:

        var onSession = function(err, session) {
          var key = new Employee(999);
          session.remove(key, onDelete);
        };

    More extensive queries

    We are working on the implementation of more extensive queries along the lines of the criteria query API. Stay tuned.

    How to evaluate

    The MySQL Connector for JavaScript is available for download from labs.mysql.com. Select the build: MySQL-Cluster-NoSQL-Connector-for-Node-js. You can also clone the project on GitHub. Since it is still early in development, feedback is especially valuable (so don't hesitate to leave comments on this blog, or head to the MySQL Cluster forum). Try it out and see how easy (and fast) it is to integrate MySQL Cluster into your Node.js platforms. You can learn more about other previewed functionality of MySQL Cluster 7.3 here.
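    Stitching the snippets above together, here is one possible end-to-end sketch. It assumes the employee table and the callback signatures shown earlier, relies on the mapping registered via mapClass being picked up by the session (as in the examples above), and uses a closure rather than the extra-parameter mechanism to keep session in scope; the inserted values are arbitrary examples.

        var nosql = require("mysql-js");

        var Employee = function (id, name, salary) {
          this.id = id;
          this.name = name;
          this.salary = salary;
        };

        var annotations = new nosql.Annotations();
        annotations.mapClass(Employee, { 'table': 'employee' });

        var dbProperties = { "implementation": "ndb", "database": "test" };

        nosql.openSession(dbProperties, null, function (err, session) {
          if (err) { console.log(err); return; }
          // Insert one row, then read it back through the domain model.
          session.persist(new Employee(1, 'Alice', 50000), function (err) {
            if (err) { console.log(err); return; }
            session.find(Employee, 1, function (err, emp) {
              if (err) { console.log(err); return; }
              console.log('Read back: ', JSON.stringify(emp));
            });
          });
        });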

    Read the article

  • How can I be sure that professional programming is not for me?

    - by user17766
    Hello everybody. I love programming, developing projects for a hobby and learning new concepts, yet I am struggling more and more in my current job despite having learned many things well. I can hardly even understand my assigned tasks, and I am asking why things are getting so hard for me. Maybe it is not my fault? Our architect doesn't spend enough time explaining the complicated sides of the project to us – or maybe I am just not smart enough to understand them quickly. Does our architect even know what kind of hell he is creating? Seeing 3-level generic types and 4-5 levels of generic inheritance in the domain model objects really makes me think so; it looks like abusing concepts more than reducing complexity. I suspect he hasn't experienced such a big project before, as he is getting confused by the project's problems. Maybe I am not in the right company? Maybe I am not a good programmer? Maybe I am really stupid? Being good at programming concepts is not enough to deal with a big project's complications, so should someone tell me that, even as a good programmer, I still have to put in a lot of effort to adapt myself to any big project? I also had other bad experiences at a previous job, but my professional experience amounts to only a few months, whereas I spent 2 years learning and coding for fun, and I can honestly say that I have solid skills in OOP, design patterns and coding standards, plus deep knowledge of the language I currently use. Sometimes I think about leaving professional programming, working some unrelated job, and programming just as a hobby. Waiting for your suggestions and insights.

    Read the article

  • Using a portable USB monitor in Ubuntu 13.04 (AOC e1649Fwu - DisplayLink)

    Having access to a little bit of IT hardware extravaganza isn't that easy here in Mauritius, for exactly two reasons – either it is simply not available, or it is expensive like nowhere else. Well, by chance I came across an advert by a local hardware supplier, and their offer of the week caught my attention – a portable USB monitor. Sounds cool, and the specs are okay as well: it's completely driven via USB 2.0, it's light, the dimensions would fit into my laptop bag, and the resolution of 1366 x 768 pixels is okay for a second screen. Long story, short ending: I called them, only to learn that they were out of stock – how convenient! Well, as usual I left some contact details and got the regular 'We'll call you back' answer. Surprisingly, I didn't receive a phone call as promised, and after starting to complain via social media networks they finally came back to me with new units available – and *drum-roll* still the same price tag as promoted (and free delivery on top, as one of their employees lives in Flic en Flac). Guess it was a no-brainer to get at least one unit to fool around with. In the worst case it might end up as an image frame on the shelf or so...

    The usual suspects... Ubuntu first!

    Of course, the packaging mentions only Windows or Mac OS as supported operating systems, and without any hesitation I hooked up the device on my main machine running Ubuntu 13.04. Result: blackout... Hm, not exactly the situation I was looking for, but okay – it can't be too difficult to get this piece of hardware up and running. Following the output of syslogd (or dmesg if you prefer), the device has been recognised successfully, but we got stuck in the initialisation phase:

        Oct 12 08:17:23 iospc2 kernel: [69818.689137] usb 2-4: new high-speed USB device number 5 using ehci-pci
        Oct 12 08:17:23 iospc2 kernel: [69818.800306] usb 2-4: device descriptor read/64, error -32
        Oct 12 08:17:24 iospc2 kernel: [69819.043620] usb 2-4: New USB device found, idVendor=17e9, idProduct=4107
        Oct 12 08:17:24 iospc2 kernel: [69819.043630] usb 2-4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
        Oct 12 08:17:24 iospc2 kernel: [69819.043636] usb 2-4: Product: e1649Fwu
        Oct 12 08:17:24 iospc2 kernel: [69819.043642] usb 2-4: Manufacturer: DisplayLink
        Oct 12 08:17:24 iospc2 kernel: [69819.043647] usb 2-4: SerialNumber: FJBD7HA000778
        Oct 12 08:17:24 iospc2 kernel: [69819.046073] hid-generic 0003:17E9:4107.0008: hiddev0,hidraw5: USB HID v1.10 Device [DisplayLink e1649Fwu] on usb-0000:00:1d.7-4/input1
        Oct 12 08:17:24 iospc2 mtp-probe: checking bus 2, device 5: "/sys/devices/pci0000:00/0000:00:1d.7/usb2/2-4"
        Oct 12 08:17:24 iospc2 mtp-probe: bus: 2, device: 5 was not an MTP device
        Oct 12 08:17:30 iospc2 kernel: [69825.411220] [drm] vendor descriptor length:17 data:17 5f 01 00 15 05 00 01 03 00 04
        Oct 12 08:17:30 iospc2 kernel: [69825.498778] udl 2-4:1.0: fb1: udldrmfb frame buffer device
        Oct 12 08:17:30 iospc2 kernel: [69825.498786] [drm] Initialized udl 0.0.1 20120220 on minor 1
        Oct 12 08:17:30 iospc2 kernel: [69825.498909] usbcore: registered new interface driver udl

    The device has been recognised as a USB device without any question, and it is listed properly:

        # lsusb
        ...
        Bus 002 Device 005: ID 17e9:4107 DisplayLink
        ...

    A quick and dirty research on the net gave me some hints towards the udlfb framebuffer device for USB DisplayLink devices. By default this kernel module is blacklisted

        $ less /etc/modprobe.d/blacklist-framebuffer.conf | grep udl
        #blacklist udl
        blacklist udlfb

    and it is recommended to load it manually.
    So, unloading the whole udl stack and giving udlfb a shot:

        Oct 12 08:22:31 iospc2 kernel: [70126.642809] usbcore: registered new interface driver udlfb

    But still no reaction on the external display, which supposedly should have been on and green.

    Display okay? Test run on Windows

    Just to be on the safe side and to exclude any hardware-related defects or whatsoever – you never know what happened during delivery – I moved the display to a new position on the opposite side of my laptop, installed the display drivers first in Windows Vista (I know, I know...) as recommended in the manual, and then finally hooked it up on that machine. Tada! The display has been recognised correctly, and I have a proper choice between cloning and extending my desktop.

    Testing whether the display is working properly – using Windows Vista

    Okay, good to know that there is nothing wrong on the hardware side, just software...

    Back to Ubuntu – kernel too old

    Some more research on Google, and various hits recommend that, since the original DisplayLink driver has been merged into recent kernel development, one should manually upgrade the kernel image (and both header) packages for Ubuntu. At least kernel 3.9 or higher would be necessary, so I went to this URL: http://kernel.ubuntu.com/~kernel-ppa/mainline/ and downloaded all the good stuff from the v3.9-raring directory. The installation itself is easy going via dpkg:

        $ sudo dpkg -i linux-image-3.9.0-030900-generic_3.9.0-030900.201304291257_amd64.deb
        $ sudo dpkg -i linux-headers-3.9.0-030900_3.9.0-030900.201304291257_all.deb
        $ sudo dpkg -i linux-headers-3.9.0-030900-generic_3.9.0-030900.201304291257_amd64.deb

    As with any kernel upgrade it is necessary to restart the system in order to use the new one. Said and done:

        $ uname -r
        3.9.0-030900-generic

    And now connecting the external display gives me the following output in /var/log/syslog:

        Oct 12 17:51:36 iospc2 kernel: [ 2314.984293] usb 2-4: new high-speed USB device number 6 using ehci-pci
        Oct 12 17:51:36 iospc2 kernel: [ 2315.096257] usb 2-4: device descriptor read/64, error -32
        Oct 12 17:51:36 iospc2 kernel: [ 2315.337105] usb 2-4: New USB device found, idVendor=17e9, idProduct=4107
        Oct 12 17:51:36 iospc2 kernel: [ 2315.337115] usb 2-4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
        Oct 12 17:51:36 iospc2 kernel: [ 2315.337122] usb 2-4: Product: e1649Fwu
        Oct 12 17:51:36 iospc2 kernel: [ 2315.337127] usb 2-4: Manufacturer: DisplayLink
        Oct 12 17:51:36 iospc2 kernel: [ 2315.337132] usb 2-4: SerialNumber: FJBD7HA000778
        Oct 12 17:51:36 iospc2 kernel: [ 2315.338292] udlfb: DisplayLink e1649Fwu - serial #FJBD7HA000778
        Oct 12 17:51:36 iospc2 kernel: [ 2315.338299] udlfb: vid_17e9&pid_4107&rev_0129 driver's dlfb_data struct at ffff880117e59000
        Oct 12 17:51:36 iospc2 kernel: [ 2315.338303] udlfb: console enable=1
        Oct 12 17:51:36 iospc2 kernel: [ 2315.338306] udlfb: fb_defio enable=1
        Oct 12 17:51:36 iospc2 kernel: [ 2315.338309] udlfb: shadow enable=1
        Oct 12 17:51:36 iospc2 kernel: [ 2315.338468] udlfb: vendor descriptor length:17 data:17 5f 01 00 15 05 00 01 03 00 04
        Oct 12 17:51:36 iospc2 kernel: [ 2315.338473] udlfb: DL chip limited to 1500000 pixel modes
        Oct 12 17:51:36 iospc2 kernel: [ 2315.338565] udlfb: allocated 4 65024 byte urbs
        Oct 12 17:51:36 iospc2 kernel: [ 2315.343592] hid-generic 0003:17E9:4107.0009: hiddev0,hidraw5: USB HID v1.10 Device [DisplayLink e1649Fwu] on usb-0000:00:1d.7-4/input1
        Oct 12 17:51:36 iospc2 mtp-probe: checking bus 2, device 6: "/sys/devices/pci0000:00/0000:00:1d.7/usb2/2-4"
        Oct 12 17:51:36 iospc2 mtp-probe: bus: 2, device: 6 was not an MTP device
        Oct 12 17:51:36 iospc2 kernel: [ 2315.426583] udlfb: 1366x768 @ 59 Hz valid mode
        Oct 12 17:51:36 iospc2 kernel: [ 2315.426589] udlfb: Reallocating framebuffer. Addresses will change!
        Oct 12 17:51:36 iospc2 kernel: [ 2315.428338] udlfb: 1366x768 @ 59 Hz valid mode
        Oct 12 17:51:36 iospc2 kernel: [ 2315.428343] udlfb: set_par mode 1366x768
        Oct 12 17:51:36 iospc2 kernel: [ 2315.430620] udlfb: DisplayLink USB device /dev/fb1 attached. 1366x768 resolution. Using 4104K framebuffer memory

    Okay, that looks more promising, but still only blackout on the external screen... And yes, due to my previous modifications I had swapped the blacklisted kernel modules:

        $ less /etc/modprobe.d/blacklist-framebuffer.conf | grep udl
        blacklist udl
        #blacklist udlfb

    Silly me! Okay, back to the original situation in which udl is allowed and udlfb blacklisted. Now the logging looks similar to this, and the screen shows those maroon-brown and azure-blue horizontal bars described on other online resources:

        Oct 15 21:27:23 iospc2 kernel: [80934.308238] usb 2-4: new high-speed USB device number 5 using ehci-pci
        Oct 15 21:27:23 iospc2 kernel: [80934.420244] usb 2-4: device descriptor read/64, error -32
        Oct 15 21:27:24 iospc2 kernel: [80934.660822] usb 2-4: New USB device found, idVendor=17e9, idProduct=4107
        Oct 15 21:27:24 iospc2 kernel: [80934.660832] usb 2-4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
        Oct 15 21:27:24 iospc2 kernel: [80934.660838] usb 2-4: Product: e1649Fwu
        Oct 15 21:27:24 iospc2 kernel: [80934.660844] usb 2-4: Manufacturer: DisplayLink
        Oct 15 21:27:24 iospc2 kernel: [80934.660850] usb 2-4: SerialNumber: FJBD7HA000778
        Oct 15 21:27:24 iospc2 kernel: [80934.663391] hid-generic 0003:17E9:4107.0008: hiddev0,hidraw5: USB HID v1.10 Device [DisplayLink e1649Fwu] on usb-0000:00:1d.7-4/input1
        Oct 15 21:27:24 iospc2 mtp-probe: checking bus 2, device 5: "/sys/devices/pci0000:00/0000:00:1d.7/usb2/2-4"
        Oct 15 21:27:24 iospc2 mtp-probe: bus: 2, device: 5 was not an MTP device
        Oct 15 21:27:25 iospc2 kernel: [80935.742407] [drm] vendor descriptor length:17 data:17 5f 01 00 15 05 00 01 03 00 04
        Oct 15 21:27:25 iospc2 kernel: [80935.834403] udl 2-4:1.0: fb1: udldrmfb frame buffer device
        Oct 15 21:27:25 iospc2 kernel: [80935.834416] [drm] Initialized udl 0.0.1 20120220 on minor 1
        Oct 15 21:27:25 iospc2 kernel: [80935.836389] usbcore: registered new interface driver udl
        Oct 15 21:27:25 iospc2 kernel: [80936.021458] [drm] write mode info 153

    Next, it's time to enable the display for our needs. This can be done either via the UI or the console, just as you prefer.

    Adding the external USB display under Linux isn't an issue after all... Settings Manager => Display

    Personally, I like the console. With the help of xrandr we get the screen identifier first

        $ xrandr
        Screen 0: minimum 320 x 200, current 3200 x 1080, maximum 32767 x 32767
        LVDS1 connected 1280x800+0+0 (normal left inverted right x axis y axis) 331mm x 207mm
        ...
        DVI-0 connected 1366x768+0+0 (normal left inverted right x axis y axis) 344mm x 193mm
           1366x768       60.0*+

    and then give it the usual shot with auto-configuration, letting the system decide what's best for your hardware:

        $ xrandr --output DVI-0 --off
        $ xrandr --output DVI-0 --auto

    And there we go... cloned output of the main display:

    New kernel, new display... The external USB display works out of the box with a Linux kernel > 3.9.0. Despite what a good number of resources say, it is absolutely not necessary to create a Device or Screen section in one of the Xorg.conf files.
    That information belongs to the past and is not valid on kernel 3.9 or higher.

    Same hardware but Windows 8

    Of course, I wanted to know how the latest incarnation from Redmond would handle the new hardware... Flawless! The most interesting aspect here: I deliberately did not use the driver installation medium. And I was right... not too long afterwards, a dialog with the EULA of DisplayLink appeared on the main screen, and after confirming it, it took only some more seconds until the external USB monitor was ready to rumble. Well, and not only that one... but see for yourself. This time Windows 8 was the easiest solution after all.

    Resume

    I can highly recommend this type of hardware to anyone who asks me. Although its screen measures 15.6", it is actually lighter than my Samsung Galaxy Tab 10.1, and it still fits into my laptop bag without any issues. From now on... no more single screen while developing software on the road!
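    For quick reference, the whole procedure that worked here boils down to a handful of commands (package names as downloaded above – adjust them to whatever current mainline build you pick):

        # Install a >= 3.9 mainline kernel (files from kernel.ubuntu.com, see above)
        sudo dpkg -i linux-image-3.9.0-030900-generic_3.9.0-030900.201304291257_amd64.deb
        sudo dpkg -i linux-headers-3.9.0-030900_3.9.0-030900.201304291257_all.deb
        sudo dpkg -i linux-headers-3.9.0-030900-generic_3.9.0-030900.201304291257_amd64.deb
        sudo reboot

        # After the reboot: udl must NOT be blacklisted, udlfb stays blacklisted
        grep udl /etc/modprobe.d/blacklist-framebuffer.conf

        # Plug in the monitor, find its output name and enable it
        xrandr
        xrandr --output DVI-0 --auto   # DVI-0 was the output name on this machine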

    Read the article

  • Extreme Optimization Numerical Libraries for .NET – Part 1 of n

    - by JoshReuben
    While many of my colleagues are fascinated with constructing the ultimate ViewModel or ServiceBus, I feel that this kind of plumbing code is re-invented far too many times – at some point in the near future, it will be out-of-the-box standard infrastructure. How many times have you been to a customer site and built a different variation of the same kind of code framework? How many times can you abstract Prism or reliable and discoverable WCF communication? As the bar is raised for what's bundled with the framework and more tasks become declarative, automated and configurable, information systems will expose a higher level of abstraction, forcing software engineers to focus on more advanced computer science and algorithmic tasks. I've spent the better half of the past decade building skills in .NET and expanding my mathematical horizons by working through the Schaum's guides. In this series I am going to examine how these skill sets come together in the implementation provided by Extreme Optimization. Download the trial version here: http://www.extremeoptimization.com/downloads.aspx

    Overview

    The library implements a set of algorithms for: linear algebra, complex numbers, numerical integration and differentiation, solving equations, optimization, random numbers, regression, ANOVA, statistical distributions, and hypothesis tests. EONumLib combines three libraries in one, organized in a consistent namespace hierarchy:

    Mathematics Library – Extreme.Mathematics namespace
    Vector and Matrix Library – Extreme.Mathematics.LinearAlgebra namespace
    Statistics Library – Extreme.Statistics namespace

    System requirements: .NET Framework 4.0

    Mathematics Library

    The classes are organized into the following namespace hierarchy:

    Extreme.Mathematics – common data types, exception types, and delegates.
    Extreme.Mathematics.Calculus – numerical integration and differentiation of functions.
    Extreme.Mathematics.Curves – points, lines and curves, including polynomials and Chebyshev approximations; curve fitting and interpolation.
    Extreme.Mathematics.Generic – generic arithmetic & linear algebra.
    Extreme.Mathematics.EquationSolvers – root finding algorithms.
    Extreme.Mathematics.LinearAlgebra – vectors, matrices, matrix decompositions, solvers for simultaneous linear equations and least squares.
    Extreme.Mathematics.Optimization – multi-dimensional function optimization + linear programming.
    Extreme.Mathematics.SignalProcessing – one- and two-dimensional discrete Fourier transforms.
    Extreme.Mathematics.SpecialFunctions

    Read the article

  • How do I enable sound with the "linux-virtual" kernel?

    - by Ola Tuvesson
    I've been trying to enable sound for the linux-virtual kernel, as I want to run an ultra-slim Ubuntu server under VirtualBox but need audio. The resource usage difference between virtual and generic/server is surprisingly large, with the virtual-kernel system using 80 MB less RAM after a clean boot (130 MB vs 210 MB), and I really want to squeeze every clock cycle and available byte I can out of the system. Besides, the virtual kernel has some additional optimisations enabled specifically for virtual machines (or so I am told). Now, I have compiled my own kernel a few times in the past, for example to include the Intel-PHC module (for improved power management on ThinkPads), so the concept is not entirely alien to me, but I've run into a strange problem which I'm hoping someone can help explain: when I do a diff between the config files for linux-generic and linux-virtual, there are precious few differences, and certainly none which pertain to sound support; there are really only five or six lines which differ, and they're mainly to do with I/O timing, sleep states and priorities. What gives? I expected the differences to be extensive, and that I would be able to identify the options that enabled audio by looking at them, but my problem doesn't seem to be related to the config file at all (yes, I know about the sound drivers section – it is identical between the two kernel configs). Am I looking in the wrong place? Many thanks!
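    (For what it's worth, the comparison described above can be run roughly like this – the kernel version strings below are placeholders and need to be adjusted to the kernels actually installed:)

        # Compare the two shipped configs; version strings are examples only.
        diff /boot/config-3.8.0-19-generic /boot/config-3.8.0-19-virtual

        # Confirm that the sound (ALSA) options really are identical in both:
        grep '^CONFIG_SND' /boot/config-3.8.0-19-generic | sort > snd-generic.txt
        grep '^CONFIG_SND' /boot/config-3.8.0-19-virtual | sort > snd-virtual.txt
        diff snd-generic.txt snd-virtual.txt && echo "sound configs identical"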

    Read the article

  • Introduction to Extended Events

    - by extended_events
    For those fighting with all the Extended Events terminology, let's step back and have a small overall introduction to Extended Events. This post will give you a simplified end-to-end view through some of the elements in Extended Events.

    Before we start, let's review the first Extended Events objects that we are going to use:

    - Events: The SQL Server code is populated with event calls that, by default, are disabled. Adding events to a session enables those event calls. Once enabled, they will execute the set of functionality defined by the session.
    - Target: This is an Extended Events object that can be used to log event information.

    Also, it is important to understand the following Extended Events concept:

    - Session: A server object created by the user that defines functionality to be executed every time a set of events happens.

    It's time to write a small "Hello World" using Extended Events. This will help understand the above terms. We will use:

    - Event sqlserver.error_reported: This event gets fired every time an error happens in the server.
    - Target package0.asynchronous_file_target: This target stores the event data on disk.
    - Session: We will create a session that sends all the error_reported events to the file target.

    Before we get started, a quick note: don't run this script in a production environment. Even though we are just going to raise very low severity user errors, we don't want to introduce noise in our servers.

        -- TRIES TO ELIMINATE PREVIOUS SESSIONS
        BEGIN TRY
            DROP EVENT SESSION test_session ON SERVER
        END TRY
        BEGIN CATCH
        END CATCH
        GO

        -- CREATES THE SESSION
        CREATE EVENT SESSION test_session ON SERVER
        ADD EVENT sqlserver.error_reported
        ADD TARGET package0.asynchronous_file_target
        -- CONFIGURES THE FILE TARGET
            (set filename = 'c:\temp\data1.xel', metadatafile = 'c:\temp\data1.xem')
        GO

        -- STARTS THE SESSION
        ALTER EVENT SESSION test_session ON SERVER STATE = START
        GO

        -- GENERATES AN ERROR
        RAISERROR (N'HELLO WORLD', -- Message text.
                   1, -- Severity,
                   1, 7, 3, N'abcde'); -- Other parameters
        GO

        -- STOPS LISTENING FOR THE EVENT
        ALTER EVENT SESSION test_session ON SERVER STATE = STOP
        GO

        -- REMOVES THE EVENT SESSION FROM THE SERVER
        DROP EVENT SESSION test_session ON SERVER
        GO

        -- READS THE EVENT DATA FROM THE FILE TARGET
        select CAST(event_data as XML) as event_data
        from sys.fn_xe_file_target_read_file
        ('c:\temp\data1*.xel','c:\temp\data1*.xem', null, null)

    This query will output the event data with our first hello world in the Extended Events format:

        <event name="error_reported" package="sqlserver" id="100" version="1" timestamp="2010-02-27T03:08:04.210Z"><data name="error"><value>50000</value><text /></data><data name="severity"><value>1</value><text /></data><data name="state"><value>1</value><text /></data><data name="user_defined"><value>true</value><text /></data><data name="message"><value>HELLO WORLD</value><text /></data></event>

    More on parsing event data in this post: Reading event data 101.

    Now let's move on to the other Extended Events objects:

    - Actions: These Extended Events objects get executed before events are published (stored in buffers to be transferred to the targets). Currently they are used to add additional data (like the T-SQL statement related to an event, the session, or the user) or to generate a mini dump.
    - Predicates: Predicates are logical expressions that specify which events to fire (e.g. only listen to errors with a severity greater than 16). They are composed of two Extended Events objects:
      - Predicate comparators: Define an operator for a pair of values. Examples:
        severity > 16
        error_message = 'Hello World!!'
      - Predicate sources: These are values that can also be used by the predicates. They are generic data that isn't usually provided in the event (similar to the actions). Example:
        sqlserver.username = 'Tintin'
      As logical expressions, predicates can be combined using the logical operators (and, or, not). Note: each pair always has to have an event field or predicate source first, and then a value.

    Let's do another small example. We will trigger errors, but we will only capture the ones that have severity = 2 and whose error message != 'filter'. To verify this we will use the sql_text action, which attaches the SQL statement to the event data:

        -- TRIES TO ELIMINATE PREVIOUS SESSIONS
        BEGIN TRY
            DROP EVENT SESSION test_session ON SERVER
        END TRY
        BEGIN CATCH
        END CATCH
        GO

        -- CREATES THE SESSION
        CREATE EVENT SESSION test_session ON SERVER
        ADD EVENT sqlserver.error_reported
            (ACTION (sqlserver.sql_text) WHERE severity = 2 and (not (message = 'filter')))
        ADD TARGET package0.asynchronous_file_target
        -- CONFIGURES THE FILE TARGET
            (set filename = 'c:\temp\data2.xel', metadatafile = 'c:\temp\data2.xem')
        GO

        -- STARTS THE SESSION
        ALTER EVENT SESSION test_session ON SERVER STATE = START
        GO

        -- THIS EVENT WILL BE FILTERED BECAUSE SEVERITY != 2
        RAISERROR (N'PUBLISH', 1, 1, 7, 3, N'abcde');
        GO
        -- THIS EVENT WILL BE FILTERED BECAUSE MESSAGE = 'FILTER'
        RAISERROR (N'FILTER', 2, 1, 7, 3, N'abcde');
        GO
        -- THIS ERROR WILL BE PUBLISHED
        RAISERROR (N'PUBLISH', 2, 1, 7, 3, N'abcde');
        GO

        -- STOPS LISTENING FOR THE EVENT
        ALTER EVENT SESSION test_session ON SERVER STATE = STOP
        GO

        -- REMOVES THE EVENT SESSION FROM THE SERVER
        DROP EVENT SESSION test_session ON SERVER
        GO

        -- READS THE EVENT DATA FROM THE FILE TARGET
        select CAST(event_data as XML) as event_data
        from sys.fn_xe_file_target_read_file
        ('c:\temp\data2*.xel','c:\temp\data2*.xem', null, null)

    This last statement will output one event with the following data:

        <event name="error_reported" package="sqlserver" id="100" version="1" timestamp="2010-03-05T23:15:05.481Z">
          <data name="error">
            <value>50000</value>
            <text />
          </data>
          <data name="severity">
            <value>2</value>
            <text />
          </data>
          <data name="state">
            <value>1</value>
            <text />
          </data>
          <data name="user_defined">
            <value>true</value>
            <text />
          </data>
          <data name="message">
            <value>PUBLISH</value>
            <text />
          </data>
          <action name="sql_text" package="sqlserver">
            <value>-- THIS ERROR WILL BE PUBLISHED
        RAISERROR (N'PUBLISH', 2, 1, 7, 3, N'abcde');
        </value>
            <text />
          </action>
        </event>

    If you see more events, check whether you have deleted the previous event files. If not, delete them by running

        -- Deletes previous event files
        EXEC SP_CONFIGURE
        GO
        EXEC SP_CONFIGURE 'xp_cmdshell', 1
        GO
        RECONFIGURE
        GO
        XP_CMDSHELL 'del c:\temp\data*.xe*'
        GO

    or delete them manually.

    More info on events: Extended Event Events
    More info on targets: Extended Event Targets
    More info on sessions: Extended Event Sessions
    More info on actions: Extended Event Actions
    More info on predicates: Extended Event Predicates

    Read the article

  • VBO and shaders confusion, what's their connection?

    - by Jeffrey
    Considering OpenGL 2.1 VBOs and GLSL 1.20 shaders:

    When creating an entity like "Zombie", is it good to initialize just the VBO buffer with the data once and do N glDrawArrays() calls, one for each of N zombies? Is there a more efficient way? (With a single call we cannot pass different uniforms to the shader to calculate an offset; see point 3.)

    When dealing with logical objects (player, tree, cube, etc.), should I always use the same shader, or should I customize (or be able to customize) the shaders for each object? Considering an entity class, should I create and define the shader at object initialization?

    When having a movable object such as a human, is there any more powerful way to deal with its coordinates than to initialize its VBO at 0,0 and define a uniform offset to pass to the shader to calculate its real position?

    Could you give an example of Data Oriented Design applied to creating a generic zombie class? Is the following good?

    ZombieList class:

        class ZombieList {
            GLuint vbo;                      // generic zombie vertex model
            std::vector<color> colors;       // objects' default colors
            std::vector<texture> textures;   // objects' textures
            std::vector<vector3D> positions; // objects' positions
        public:
            unsigned int create(); // return object id
            void move(unsigned int objId, vector3D offset);
            void rotate(unsigned int objId, float angle);
            void setColor(unsigned int objId, color c);
            void setPosition(unsigned int objId, vector3D pos);
            void setTexture(unsigned int, unsigned int);
            ...
            void update(Player*); // move towards player, attack if near
        };

    Example:

        Player p;
        ZombieList zl;
        unsigned int first = zl.create();
        zl.setPosition(first, vector3D(50, 50));
        zl.setTexture(first, texture("zombie1.png"));
        ...
        while (running) { // main loop
            ...
            zl.update(&p);
            zl.draw(); // draw every zombie
        }
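    (Regarding point 3, here is a minimal sketch of the uniform-offset idea – names such as program, vbo and vertexCount are placeholders, and the GL context plus vertex attribute setup are assumed to exist elsewhere. The matching GLSL 1.20 vertex shader would declare "uniform vec3 offset;" and add it to gl_Vertex before transforming.)

        // One shared VBO for the zombie model; each zombie is drawn with its
        // own "offset" uniform, i.e. one glDrawArrays() call per zombie.
        void drawZombies(GLuint program, GLuint vbo, GLsizei vertexCount,
                         const std::vector<vector3D>& positions) {
            glUseProgram(program);
            GLint offsetLoc = glGetUniformLocation(program, "offset");
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            // (vertex attribute pointers are assumed to be set up once, elsewhere)
            for (std::size_t i = 0; i < positions.size(); ++i) {
                glUniform3f(offsetLoc, positions[i].x, positions[i].y, positions[i].z);
                glDrawArrays(GL_TRIANGLES, 0, vertexCount);
            }
        }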

    Read the article

  • On the frontier between work and home

    - by MPelletier
    I think we've all been there: you hear someone say "hey, wouldn't it be nice if platform X had feature Y?" You look around (on SO!), and the feature really doesn't exist, even though it probably would be useful in many contexts. So it's pretty generic. Your mind wanders for a bit. "How tough would it be? Well, it'd probably be just a snippet. And an ad-hoc function. And maybe a wrapper." And boom, before you know it, you've spent a dozen hours of your free time implementing a FooFeature that's really neat and generic. The kind of code you might not even have the time to spit-and-polish at work; a bit rushed and not so documented. So now you wonder, "wouldn't this be useful to others?" And you've got your blog, maybe a CodeProject account, and your colleague who asked if FooFeature exists might, haphazardly, come across that blog entry, had it existed before they asked you. On the other hand, there's the NDA agreement. It's sort of vague and general. It doesn't forbid you from coding at home, but it's clear on sharing company code: that's a big NO. But this isn't company code. Or is it? Or will it be? So, what do you do with code (that's more than just a snippet) you wrote in your off time, with universality in mind, but from an idea that came from work, and that will most likely be used at work? Can it be published?

    Read the article

  • jQuery CSS Property Monitoring Plug-in updated

    - by Rick Strahl
    A few weeks back I had talked about the need to watch properties of an object and be able to take action when certain values changed. The need for this arose out of wanting to build generic components that could 'attach' themselves to other objects. One example is a drop shadow - if I add a shadow behavior to an object I want the shadow to be pinned to that object, so when that object moves the shadow moves with it, and when the panel is hidden the shadow hides with it - automatically, without having to explicitly hook up monitoring code to the panel. For example, in my shadow plug-in I can now do something like this (where el is the element that has the shadow attached and sh is the shadow):

    if (!exists)  // if shadow was created
        el.watch("left,top,width,height,display", function() {
            if (el.is(":visible"))
                $(this).shadow(opt);  // redraw
            else
                sh.hide();
        }, 100, "_shadowMove");

    The code now monitors several properties, and if any of them change the provided function is called. So when the target object is moved, hidden or resized, the watcher function is called and the shadow can be redrawn, or hidden in the case of visibility going away. So if you run any of the following code:

    $("#box")
        .shadow()
        .draggable({ handle: ".blockheader" });

    // drag around the box - shadow should follow

    // hide the box - shadow should disappear with box
    setTimeout(function() { $("#box").hide(); }, 4000);

    // show the box - shadow should come back too
    setTimeout(function() { $("#box").show(); }, 8000);

    This can be very handy functionality when you're dealing with objects or operations that you need to track generically and there are no native events for them. For example, with a generic shadow object that attaches itself to any other element there's no way that I know of to track whether the object has been moved or hidden, either via some UI operation (like dragging) or via code. While some UI operations like jQuery.ui.draggable allow events to fire when the mouse is moved, nothing of the sort exists if you modify locations in code. And even tracking the object in drag mode is hardly generic behavior - a generic shadow implementation can't know when dragging is hooked up. So the watcher provides an alternative that basically gives an Observer-like pattern that notifies you when something you're interested in changes. In the watcher hookup code (in the shadow() plugin) above, a check is made whether the object is visible; if it is, the shadow is redrawn, otherwise the shadow is hidden. The first parameter is a list of CSS properties to be monitored, followed by the function that is called. The function called receives this as the element that's been changed and receives two parameters: the array of watched properties with their current values, plus an index to the property that caused the change function to fire. How does it work? When I wrote about this last time I started out with a simple timer that would poll for changes at a fixed interval with setInterval(). A few folks commented that there are DOM events - DOMAttrModified in Mozilla and onpropertychange in IE - that provide notification whenever a property changes, which is much more efficient and smoother than the setInterval approach I used previously. On browsers that support these events (Firefox and IE basically - WebKit has the DOMAttrModified event but it doesn't appear to work) the shadow effect is instant - no 'drag behind' of the shadow.
Running on a browser that doesn't support these events still uses setInterval(), and the shadow movement is slightly delayed, which looks sloppy. There are a few additional changes to this code - it now supports monitoring multiple CSS properties, so a single watcher can monitor a host of CSS properties rather than needing one watcher per property, which is easier to work with. For display purposes position, bounds and visibility will be common properties to watch. Here's what the new version looks like:

$.fn.watch = function (props, func, interval, id) {
    /// <summary>
    /// Allows you to monitor changes in specific
    /// CSS properties of an element by polling the value.
    /// When the value changes a function is called.
    /// The function called is called in the context
    /// of the selected element (ie. this)
    /// </summary>
    /// <param name="props" type="String">CSS Properties to watch sep. by commas</param>
    /// <param name="func" type="Function">
    /// Function called when the value has changed.
    /// </param>
    /// <param name="interval" type="Number">
    /// Optional interval for browsers that don't support DOMAttrModified or propertychange events.
    /// Determines the interval used for setInterval calls.
    /// </param>
    /// <param name="id" type="String">A unique ID that identifies this watch instance on this element</param>
    /// <returns type="jQuery" />
    if (!interval)
        interval = 200;
    if (!id)
        id = "_watcher";

    return this.each(function () {
        var _t = this;
        var el$ = $(this);
        var fnc = function () { __watcher.call(_t, id) };
        var data = {
            id: id,
            props: props.split(","),
            func: func,
            vals: [props.split(",").length],
            fnc: fnc,
            origProps: props,
            interval: interval
        };
        $.each(data.props, function (i) { data.vals[i] = el$.css(data.props[i]); });
        el$.data(id, data);
        hookChange(el$, id, data.fnc);
    });

    function hookChange(el$, id, fnc) {
        el$.each(function () {
            var el = $(this);
            if (typeof (el.get(0).onpropertychange) == "object")
                el.bind("propertychange." + id, fnc);
            else if ($.browser.mozilla)
                el.bind("DOMAttrModified." + id, fnc);
            else
                el.data(id).itId = setInterval(fnc, interval); // store the handle so unwatch can clear it
        });
    }

    function __watcher(id) {
        var el$ = $(this);
        var w = el$.data(id);
        if (!w) return;
        var _t = this;
        if (!w.func) return;

        // must unbind or else unwanted recursion may occur
        el$.unwatch(id);

        var changed = false;
        var i = 0;
        for (i; i < w.props.length; i++) {
            var newVal = el$.css(w.props[i]);
            if (w.vals[i] != newVal) {
                w.vals[i] = newVal;
                changed = true;
                break;
            }
        }
        if (changed)
            w.func.call(_t, w, i);

        // rebind event
        hookChange(el$, id, w.fnc);
    }
}

$.fn.unwatch = function (id) {
    this.each(function () {
        var el = $(this);
        var fnc = el.data(id).fnc;
        try {
            if (typeof (this.onpropertychange) == "object")
                el.unbind("propertychange." + id, fnc);
            else if ($.browser.mozilla)
                el.unbind("DOMAttrModified." + id, fnc);
            else
                clearInterval(el.data(id).itId); // clear the stored interval handle
        }
        // ignore if element was already unbound
        catch (e) { }
    });
    return this;
}

There are basically two jQuery functions - watch and unwatch.

jQuery.fn.watch(props, func, interval, id)
Starts watching an element for changes in the properties specified.
props: The CSS properties that are to be watched for changes. If any of the specified properties changes, the function specified in the second parameter is fired.
func (watchData, index): The function fired in response to a changed property. Receives this as the changed element and an object that represents the watched properties and their respective values.
The first parameter is passed in this structure: { id: id, props: [], func: func, vals: [] }; the second parameter is the index of the changed property, so data.props[i] or data.vals[i] gets the property that changed and its current value.
interval: The interval for setInterval() on browsers that don't support property watching in the DOM, in milliseconds.
id: An optional id that identifies this watcher. Required only if multiple watchers might be hooked up to the same element. The default is _watcher if not specified.

jQuery.fn.unwatch(id)
Unhooks watching of the element by disconnecting the event handlers.
id: Optional watcher id that was specified in the call to watch. This value can be omitted to use the default value of _watcher.

You can also grab the latest version of the code for this plug-in, as well as the shadow, in the full library at: http://www.west-wind.com:8080/svn/jquery/trunk/jQueryControls/Resources/ww.jquery.js watcher has no other dependencies although it lives in this larger library. The shadow plug-in depends on watcher. © Rick Strahl, West Wind Technologies, 2005-2011

    Read the article

  • Gnome-Network-Manager Config File Migration

    - by Jorge
    I think I have an issue with gnome-network-manager. I used to have a lot of connections configured: wireless, wired and VPN. After upgrading to 12.04 (from 11.10) I lost every configuration. I realized that the configs that used to be saved in $HOME/.gconf/system/networking/connections are now saved in /etc/NetworkManager/system-connections/. I don't know how to migrate my settings to the new config file format. Can anybody help me?

    jorge@thinky:~$ sudo lshw -C network
      *-network
           description: Ethernet interface
           product: 82566MM Gigabit Network Connection
           vendor: Intel Corporation
           physical id: 19
           bus info: pci@0000:00:19.0
           logical name: eth0
           version: 03
           serial: 00:1f:e2:14:5a:9b
           capacity: 1Gbit/s
           width: 32 bits
           clock: 33MHz
           capabilities: pm msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
           configuration: autonegotiation=on broadcast=yes driver=e1000e driverversion=1.5.1-k firmware=0.3-0 latency=0 link=no multicast=yes port=twisted pair
           resources: irq:46 memory:fe000000-fe01ffff memory:fe025000-fe025fff ioport:1840(size=32)
      *-network
           description: Wireless interface
           product: PRO/Wireless 4965 AG or AGN [Kedron] Network Connection
           vendor: Intel Corporation
           physical id: 0
           bus info: pci@0000:03:00.0
           logical name: wlan0
           version: 61
           serial: 00:21:5c:32:c2:e5
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
           configuration: broadcast=yes driver=iwl4965 driverversion=3.2.0-23-generic-pae firmware=228.61.2.24 ip=192.168.2.103 latency=0 link=yes multicast=yes wireless=IEEE 802.11abgn
           resources: irq:47 memory:df3fe000-df3fffff
    jorge@thinky:~$ lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description: Ubuntu 12.04 LTS
    Release: 12.04
    Codename: precise
    jorge@thinky:~$ uname -a
    Linux thinky 3.2.0-23-generic-pae #36-Ubuntu SMP Tue Apr 10 22:19:09 UTC 2012 i686 i686 i386 GNU/Linux
    jorge@thinky:~$ dpkg -l | grep -i firm
    ii linux-firmware 1.79 Firmware for Linux kernel drivers
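    A hedged sketch of the keyfile format used under /etc/NetworkManager/system-connections/ in 12.04 (the connection name, SSID and passphrase below are illustrative); NetworkManager ignores files there unless they are owned by root with mode 600:

    [connection]
    id=MyHomeWifi
    type=802-11-wireless

    [802-11-wireless]
    ssid=MyHomeWifi
    mode=infrastructure
    security=802-11-wireless-security

    [802-11-wireless-security]
    key-mgmt=wpa-psk
    psk=mysecretpassphrase

    [ipv4]
    method=auto

    After creating such a file, run sudo chmod 600 /etc/NetworkManager/system-connections/MyHomeWifi and restart network-manager so it picks the file up.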

    Read the article

  • B2B communication using IBM MQ

    - by Dheeraj Kumar M
    Oracle B2B 11g provides the out-of-the-box ability to connect to IBM MQ to exchange messages. This support is provided via the JMS offering of Oracle B2B, and it is an addition to B2B's existing stack of communication capabilities with trading partners. There are 2 ways of connecting to IBM MQ using B2B: 1. Credential based connectivity 2. .bindings based connectivity. As a pre-requisite to connect to IBM MQ, the following libraries must be provided on the classpath: a. com.ibm.mqjms.jar b. dhbcore.jar c. com.ibm.mq.jar d. com.ibm.mq.jmqi.jar e. mqcontext.jar f. com.ibm.mq.pcf.jar g. com.ibm.mq.commonservices.jar h. com.ibm.mq.headers.jar i. fscontext.jar j. jms.jar. Add the above jars to the domain library directory, usually located at $DOMAIN_DIR/lib. The jars located in this ($DOMAIN_DIR/lib) directory will be picked up and added dynamically to the end of the server classpath at server startup. For eg. /user_projects/domains//lib/ Alternatively, the above jars can also be added as part of setDomainEnv.sh.

    Credential based connectivity:
    Outbound: Configure the trading partner delivery channel to use the "Generic JMS" protocol.
    Inbound: Configure the internal delivery channel to use the "Generic JMS" protocol with the following details:
    Parameter Name - Description
    Destination Name - MQ Queue Name
    Connection Factory - MQ Queue Manager Name
    Destination Provider - java.naming.factory.initial=com.ibm.mq.jms.context.WMQInitialContextFactory;java.naming.provider.url=<host>:<QM Listen port>/<MQ Channel Name>;
    User Name - MQ User Name
    Password - MQ password

    .bindings based connectivity:
    As a pre-requisite, get/generate the .bindings file on the MQ server. This can be done by the MQ administrator. Set the following values in the respective delivery channel for outbound / inbound:
    Parameter Name - Description
    Destination Name - MQ Queue Name
    Connection Factory - MQ Queue Manager Name
    Destination Provider - java.naming.factory.initial=com.ibm.mq.jms.context.WMQInitialContextFactory;java.naming.provider.url=file:///<location of .bindings file>;
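    To illustrate the credential-based parameters above, here is a hedged Java/JMS sketch of the equivalent direct JNDI lookup; the host, port, channel, queue manager name and credentials are all illustrative values, not part of the original article:

    import java.util.Hashtable;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class MqConnectSketch {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<String, String>();
            // Mirrors the Destination Provider value from the table above
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "com.ibm.mq.jms.context.WMQInitialContextFactory");
            env.put(Context.PROVIDER_URL, "mqhost:1414/SYSTEM.DEF.SVRCONN"); // <host>:<port>/<channel>
            Context ctx = new InitialContext(env);

            // Connection Factory = the MQ Queue Manager name
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("QM1");

            // User Name / password as configured on the delivery channel
            Connection conn = cf.createConnection("Tintin", "secret");
            conn.start();
            conn.close();
        }
    }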

    Read the article

  • How to know which partition is which?

    - by user206870
    Well, I was just wondering which partition belongs to which system. On my computer I have Windows 7 and two Ubuntu systems (it was an accident, which is why I need to know which partition is which). So how do I know which one is which? PS: here's the output:

    jp@jp-Satellite-L555D:~$ sudo update-grub
    [sudo] password for jp:
    Generating grub.cfg ...
    Found linux image: /boot/vmlinuz-3.11.0-12-generic
    Found initrd image: /boot/initrd.img-3.11.0-12-generic
    Found memtest86+ image: /boot/memtest86+.bin
    Found Windows 7 (loader) on /dev/sda1
    Found Windows 7 (loader) on /dev/sda2
    Found Windows Recovery Environment (loader) on /dev/sda3
    Found Ubuntu 13.10 (13.10) on /dev/sda7
    done
    jp@jp-Satellite-L555D:~$ sudo fdisk -l

    Disk /dev/sda: 250.1 GB, 250059350016 bytes
    255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xf6f5148e

       Device Boot      Start        End     Blocks  Id  System
    /dev/sda1   *         2048    3074047    1536000  27  Hidden NTFS WinRE
    /dev/sda2          3074048  213421022  105173487+  7  HPFS/NTFS/exFAT
    /dev/sda3        469676032  488396799    9360384  17  Hidden HPFS/NTFS
    /dev/sda4        213422078  469676031  128126977   5  Extended
    /dev/sda5        300185600  463910911   81862656  83  Linux
    /dev/sda6        463912960  469676031    2881536  82  Linux swap / Solaris
    /dev/sda7        213422080  300185599   43381760  83  Linux

    Partition table entries are not in disk order

    Thanks to whoever can answer this. Another quick question: what is the extended partition?
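    One hedged way to tell the two Ubuntu installs apart, using the device names from the fdisk output above (/mnt/probe is an arbitrary mount point, not from the original question): mount each Linux partition read-only and read its release file.

    sudo mkdir -p /mnt/probe
    sudo mount -o ro /dev/sda5 /mnt/probe
    cat /mnt/probe/etc/lsb-release   # shows which Ubuntu release lives on sda5
    sudo umount /mnt/probe
    sudo mount -o ro /dev/sda7 /mnt/probe
    cat /mnt/probe/etc/lsb-release   # and which lives on sda7
    sudo umount /mnt/probe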

    Read the article

  • The problems with Avoiding Smurf Naming classes with namespaces

    - by Daniel Koverman
    I pulled the term smurf naming from here (number 21). To save anyone not familiar the trouble, Smurf naming is the act of prefixing a bunch of related classes, variables, etc. with a common prefix, so you end up with "a SmurfAccountView passes a SmurfAccountDTO to the SmurfAccountController", etc. The solution I've generally heard for this is to make a smurf namespace and drop the smurf prefixes. This has generally served me well, but I'm running into two problems. I'm working with a library with a Configuration class. It could have been called WartmongerConfiguration but it's in the Wartmonger namespace, so it's just called Configuration. I likewise have a Configuration class which could be called SmurfConfiguration, but it is in the Smurf namespace so that would be redundant. There are places in my code where Smurf.Configuration appears alongside Wartmonger.Configuration, and typing out fully qualified names is clunky and makes the code less readable. It would be nicer to deal with a SmurfConfiguration and (if it was my code and not a library) WartmongerConfiguration. I have a class called Service in my Smurf namespace which could have been called SmurfService. Service is a facade on top of a complex Smurf library which runs Smurf jobs. SmurfService seems like a better name because Service without the Smurf prefix is so incredibly generic. I can accept that SmurfService was already a generic, useless name and taking away smurf merely made this more apparent. But it could have been named Runner, Launcher, etc. and it would still "feel better" to me as SmurfLauncher, because I don't know what a Launcher does, but I know what a SmurfLauncher does. You could argue that what a Smurf.Launcher does should be just as apparent as a Smurf.SmurfLauncher, but I could see Smurf.Launcher being some kind of class related to setup rather than a class that launches smurfs. If there is an open-and-shut way to deal with either of these, that would be great. If not, what are some common practices to mitigate their annoyance?
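    For the first problem, a namespace alias is a common mitigation; a hedged C# sketch (the class members are illustrative, not from the original question):

    using SmurfConfig = Smurf.Configuration;
    using WartConfig = Wartmonger.Configuration;

    namespace Smurf
    {
        public class Launcher
        {
            // Both Configuration types can now coexist without full qualification
            private readonly SmurfConfig smurfSettings;
            private readonly WartConfig wartmongerSettings;

            public Launcher(SmurfConfig smurf, WartConfig wart)
            {
                smurfSettings = smurf;
                wartmongerSettings = wart;
            }
        }
    }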

    Read the article

  • E-Business Suite - Cloning Basics & AMP Cloning - EMEA/APAC

    - by Annemarie Provisero
    ADVISOR WEBCAST: E-Business Suite - Cloning Basics & AMP Cloning - EMEA/APAC PRODUCT FAMILY: EBS – ATG - Utilities July 19, 2011 at 10:00 am CET, 01:30 pm India, 05:00 pm Japan, 06:00 pm Australia This 1.5-hour session is recommended for technical and functional users who are interested in getting a generic overview of the Cloning functionality available in the E-Business Suite release. We are going to talk about the generic Cloning options and will then go into depth about the cloning scenario when using AMP (Applications Management Pack) within the Enterprise Manager. TOPICS WILL INCLUDE: Cloning Overview Rapidclone steps in detail Rapidclone limitations EM Grid Setup with AMP for Cloning Advantages of Cloning with AMP Cloning Procedures available with AMP Monitoring Clone Operation A few things to remember before Cloning A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session ------------------------------------------------------------------------------------------------------------- The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support Note that all links require access to My Oracle Support.

    Read the article

  • 12.04.1 no audio through HDMI

    - by JoJo
    having a bit of an issue with getting audio to go through HDMI. Here are the base specs: OS: Ubuntu Desktop 12.04.1 x64 CPU: AMD A10-5800K 3.8G 4M FM2 R Mobo: MSI FM2-A75MA-E35 OS: Ubuntu 12.04.1 LTS Vid Card: (integrated on CPU) AMD Radeon HD 7660D. HDMI sound works fine under Win7 (after mobo and vid drivers are installed), so it's not physically broken. Audio through the normal headphone jacks works fine under Ubuntu. Looking at the audio panel, there is no HDMI output at all. aplay -l also reports only:

    card 0: Generic [HD-Audio Generic], device 0: ALC887-VD Analog [ALC887-VD Analog]
      Subdevices: 1/1
      Subdevice #0: subdevice #0

    In Additional Drivers there are two versions: ATI/AMD PROPRIETARY FGLRX GRAPHICS DRIVER and ATI/AMD PROPRIETARY FGLRX GRAPHICS DRIVER (post-release update). The first installs, but the problem persists. I do get more resolutions to pick from. The second does not install, reporting a failed installation and to find details at /var/log/jockey.log. Looked at the log, and it's insanely long; if necessary I can get it to you guys. Did more research and some had luck by manually installing the drivers, so I tried to give that a shot by following this: https://help.ubuntu.com/community/BinaryDriverHowto/ATI#Manually_installing_Catalyst_12.6 starting at 3.1 Manually installing Catalyst 12.6. I immediately had 2 issues: (1) the AMD website does not provide any drivers for Linux, and (2) the following command did not work: sudo sh amd-driver-installer-12-6-x86.x86_64.run --buildpkg Ubuntu/precise sh: 0: Can't open amd-driver-installer-12-6-x86.x86_64.run Some other posts suggested updating "alsa-drivers", but the install command for the newer version also failed. I forget the exact error, but it was similar to the above: cannot open / cannot find. Any help would be greatly appreciated!
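    A hedged set of first checks once a driver attempt is in place (the device numbers below are illustrative; HDMI is often exposed as subdevice 3 on the first card, but verify locally):

    aplay -l                          # an HDMI device should appear here once the driver exposes it
    alsamixer -c 0                    # make sure any HDMI/S-PDIF channel isn't muted (MM -> 00)
    speaker-test -D plughw:0,3 -c 2   # try the HDMI subdevice directly; adjust 0,3 to match aplay -l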

    Read the article

  • Second monitor not detected

    - by configurator
    Note: I've seen this question quite a lot, but in all the cases I could find with answers, the answer was either "I don't know" or "use nvidia-settings (which is irrelevant to me)." I'm using Intel Sandybridge Desktop graphics, with a P8H61-M LE motherboard. How do I get Ubuntu to detect my second monitor? Clicking "Detect Displays" here doesn't do anything. Here's some system info: $ lspci | grep VGA 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) $ uname -a Linux clyde 3.5.0-13-generic #13-Ubuntu SMP Tue Aug 28 08:31:47 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux $ hardinfo [copied from the UI] -Display- Resolution : 1920x1080 pixels Vendor : The X.Org Foundation Version : 1.12.3 -Monitors- Monitor 0 : 1920x1080 pixels -Extensions- BIG-REQUESTS Composite DAMAGE DOUBLE-BUFFER DPMS DRI2 GLX Generic Event Extension MIT-SCREEN-SAVER MIT-SHM RANDR RECORD RENDER SECURITY SGI-GLX SHAPE SYNC X-Resource XC-MISC XFIXES XFree86-DGA XFree86-VidModeExtension XINERAMA XInputExtension XKEYBOARD XTEST XVideo -OpenGL- Vendor : Intel Open Source Technology Center Renderer : Mesa DRI Intel(R) Sandybridge Desktop Version : 3.0 Mesa 8.1-devel Direct Rendering : Yes I've tried upgrading everything from ppa:xorg-edgers/ppa and ppa:glasen/intel-driver. I've also installed various tools I've found in other threads (e.g. hardinfo) but they weren't really helpful to me as I don't know what to make of the data. How do I get Ubuntu to detect my second monitor?
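    When the GUI's Detect Displays does nothing, probing from a terminal with xrandr is a reasonable next step; the output names below (HDMI1, VGA1) are illustrative and vary by driver:

    xrandr --query                                 # lists every output the driver can see
    xrandr --output HDMI1 --auto --right-of VGA1   # enable a detected output next to the first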

    Read the article

  • Optimization ended up in casting an object at each method call

    - by Aybe
    I've been doing some optimization for the following piece of code:

    public void DrawLine(int x1, int y1, int x2, int y2, int color)
    {
        _bitmap.DrawLineBresenham(x1, y1, x2, y2, color);
    }

    After profiling, about 70% of the time was spent getting a context for drawing and disposing of it. I ended up sketching the following overload:

    public void DrawLine(int x1, int y1, int x2, int y2, int color, BitmapContext bitmapContext)
    {
        _bitmap.DrawLineBresenham(x1, y1, x2, y2, color, bitmapContext);
    }

    Up to here, no problems: all the user has to do is pass a context, and performance is really great since a context is created/disposed only once (previously it was a thousand times per second). The next step was to make it generic, in the sense that it doesn't depend on a particular framework for rendering (besides .NET obviously). So I wrote this method:

    public void DrawLine(int x1, int y1, int x2, int y2, int color, IDisposable bitmapContext)
    {
        _bitmap.DrawLineBresenham(x1, y1, x2, y2, color, (BitmapContext)bitmapContext);
    }

    Now every time a line is drawn the generic context is cast, which was unexpected for me. Are there any approaches for fixing this design issue? Note: _bitmap is a WriteableBitmap from WPF, BitmapContext is from the WriteableBitmapEx library, and DrawLineBresenham is an extension method from WriteableBitmapEx.
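    One hedged alternative: instead of passing an IDisposable per call, hide the framework behind a small interface and let an adapter hold the strongly typed context, so the framework-specific cast happens once at construction rather than per line. A sketch assuming WriteableBitmapEx's GetBitmapContext(); the type names are illustrative:

    using System;
    using System.Windows.Media.Imaging; // WriteableBitmap + WriteableBitmapEx extensions

    public interface ILineRenderer : IDisposable
    {
        void DrawLine(int x1, int y1, int x2, int y2, int color);
    }

    public sealed class WpfLineRenderer : ILineRenderer
    {
        private readonly WriteableBitmap _bitmap;
        private readonly BitmapContext _context; // acquired once, not per call

        public WpfLineRenderer(WriteableBitmap bitmap)
        {
            _bitmap = bitmap;
            _context = bitmap.GetBitmapContext(); // the only framework-specific spot
        }

        public void DrawLine(int x1, int y1, int x2, int y2, int color)
        {
            // no cast here: the context field is already strongly typed
            _bitmap.DrawLineBresenham(x1, y1, x2, y2, color, _context);
        }

        public void Dispose()
        {
            _context.Dispose();
        }
    }

    Callers then depend only on ILineRenderer, and a renderer for another framework is a second small adapter rather than another cast.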

    Read the article

  • E-Business Suite - Cloning Basics & AMP Cloning - US

    - by Annemarie Provisero
    ADVISOR WEBCAST: E-Business Suite - Cloning Basics & AMP Cloning - US PRODUCT FAMILY: EBS – ATG - Utilities July 20, 2011 at 17:00 UK / 18:00 CET / 09:00 am Pacific / 10:00 am Mountain / 12:00 Eastern This 1.5-hour session is recommended for technical and functional users who are interested in getting a generic overview of the Cloning functionality available in the E-Business Suite release. We are going to talk about the generic Cloning options and will then go into depth about the cloning scenario when using AMP (Applications Management Pack) within the Enterprise Manager. TOPICS WILL INCLUDE: Cloning Overview Rapidclone steps in detail Rapidclone limitations EM Grid Setup with AMP for Cloning Advantages of Cloning with AMP Cloning Procedures available with AMP Monitoring Clone Operation A few things to remember before Cloning A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session ------------------------------------------------------------------------------------------------------------- The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support Note that all links require access to My Oracle Support.

    Read the article

  • Reboot failure after upgrade from 8.04 LTS to 10.04 LTS

    - by Alan Fietz
    I bought our computer from Freegeeks with Ubuntu 8.04 installed. I upgraded from Ubuntu 8.04 to 10.04 on Thursday November 10. I have an ASUS P4P800SE with dual Intel P4@3GHZ. Installation messages were:
    - Error loading Nautilus config info
    - Replaced customized /etc/login.defs
    - Replaced customized /etc/dhcp3/dhclient.conf
    - 189 packages removed
    - WARNING: Failed to read mirror file
    When I rebooted, the usual ASUS screen appeared, then "Loading GRUB", then "starting Up...", then "starting Up..." again, then a blank screen (the monitor went dormant). I rebooted, started GRUB and selected: version 10.04.3 LTS, kernel 2.6.32-35 generic. I got the same results. I rebooted, started GRUB and selected kernel 2.6.24-29 generic. Here's what was displayed:

    udevd [875]: error getting socket: Invalid argument
    libudev: udev_monitor_new_from_netlink: error getting socket: Invalid argument
    Segmentation fault
    **Gave up waiting for root device** Common problems:
    - Boot args (cat /proc/cmdline)
      - Check rootdelay=
      - Check root=
    - Missing modules (cat /proc/modules)
    **ALERT! /dev/disk/by_vvid/c59c6361 etc... does not exist. Dropping to a shell.**

    Then BusyBox v1.13.3 started with the following prompt: (initramfs) _ But my typing did not appear on the screen. It appears the hard drive cannot be found. Any suggestion on how to remedy this? Thank you.
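    A hedged set of checks for the "Gave up waiting for root device" symptom, run from a live CD (the steps assume the root device UUID in the boot config no longer matches the disk, which is a common cause after upgrades):

    sudo blkid                      # list the UUIDs the partitions actually have
    ls -l /dev/disk/by-uuid/        # compare with the UUID GRUB is waiting for
    # if they differ, mount the installed root, fix /etc/fstab there,
    # then from the installed system (chrooted if necessary):
    sudo update-grub                # regenerate grub.cfg with the correct UUIDs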

    Read the article

  • Remap keyboard Ubuntu 12.04; Asus Q500A

    - by hydroxide
    I have an Asus Q500A with Win8 and Ubuntu 12.04 64-bit; Linux kernel 3.8.0-32-generic. I am using gnome-panel and xserver-xorg-lts-raring. I have been experiencing problems with the keyboard shortcuts since the fresh install. Fn+F10 is supposed to mute my system, but instead it repeatedly presses d. Fn+F11 is volume down, but it presses c. Fn+F12 is volume up; it presses b repeatedly. Most of the other on-board shortcuts, such as adjusting screen and LED brightness, work most of the time, but sometimes press other letters repeatedly. Also, sometimes my Ctrl key gets held down for no reason. Everything works fine in Windows. I have tried installing all recommends and sudo dpkg-reconfigure -a to reconfigure all packages, which did not solve my problem. I have tried using KeyTouch editor to edit keymaps, navigating to /usr/share/X11/xkb/keymap; when I try opening any of these files it says the file contains no keyboard element. I think if I were just able to remap my keyboard it might solve my issues; otherwise, if anyone knows where I can get Asus drivers for 12.04, please let me know. Apparently I didn't have all repositories enabled. I executed the following commands and am trying the updates they give me. Getting linux kernel 3.8.0-33-generic as well as a bunch of other packages.

    sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu $(lsb_release -sc) universe"
    sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu $(lsb_release -sc) main universe restricted multiverse"
    sudo add-apt-repository "deb http://archive.canonical.com/ubuntu $(lsb_release -sc) partner"
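    If driver updates don't pan out, the Fn keys can be remapped by hand; a hedged sketch using xev/xmodmap (keycodes vary per machine, so check what each key actually emits with xev first):

    xev                                             # press Fn+F10 and note the keycode it reports
    xmodmap -e 'keycode 121 = XF86AudioMute'        # 121/122/123 are common evdev codes; verify locally
    xmodmap -e 'keycode 122 = XF86AudioLowerVolume'
    xmodmap -e 'keycode 123 = XF86AudioRaiseVolume'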

    Read the article

  • ATI Radeon HD 6870 Driver fails to install default-policy.sh does not support version

    - by Rogue Coder
    I'm running Ubuntu 11.04 Beta, with everything updated completely. I'm using Ubuntu Classic, because Unity fails to run, supposedly because of my video card. The drivers for the Radeon HD 6870 series are apparently lacking, but I found a post stating the newest version has full support for Ubuntu Natty Narwhal. That post is slightly old, so I grabbed 11.3 for Ubuntu x86 off the ATI website. When I run the installation program, I receive the following error:

    > ./ati-driver-installer-11-3-x86.x86_64.run
    Created directory fglrx-install.uREFoO
    Verifying archive integrity... All good.
    Uncompressing ATI Catalyst(TM) Proprietary Driver-8.831.2.........
    =====================================================================
     ATI Technologies Catalyst(TM) Proprietary Driver Installer/Packager
    =====================================================================
    Error: ./default_policy.sh does not support version
    default:v2:i686:lib::none:2.6.38-8-generic-pae:; make sure that the
    version is being correctly set by --iscurrentdistro
    =====================================================================
     ATI Technologies Catalyst(TM) Proprietary Driver Installer/Packager
    =====================================================================
    Error: ./default_policy.sh does not support version
    default:v2:i686:lib::none:2.6.38-8-generic-pae:; make sure that the
    version is being correctly set by --iscurrentdistro
    Removing temporary directory: fglrx-install.uREFoO

    I would love to get the latest ATI drivers working so that I can try out Unity!
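    A hedged workaround often suggested for this default_policy.sh version error is to skip the installer's distribution auto-detection and build distro packages explicitly; the Ubuntu/natty target is an assumption based on the 11.04 release name, not something confirmed by the post above:

    chmod +x ati-driver-installer-11-3-x86.x86_64.run
    sudo sh ati-driver-installer-11-3-x86.x86_64.run --buildpkg Ubuntu/natty
    sudo dpkg -i fglrx*.deb   # install the packages the installer generates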

    Read the article

< Previous Page | 610 611 612 613 614 615 616 617 618 619 620 621  | Next Page >