Search Results

Search found 17278 results on 692 pages for 'directory conventions'.


  • Is there any good reason for private methods' existence in C# (and OOP in general)?

    - by Piotr Lopusiewicz
    I don't mean to troll, but I really don't get it. Why would language designers allow private methods instead of some naming convention (see __ in Python)? I searched for the answer and the usual arguments are: a) to make the implementation cleaner/avoid a long vertical list of methods in IDE autocompletion, b) to announce to the world which methods are the public interface and which may change and are just there for implementation purposes, c) readability. OK, so all of those could be achieved by naming all private methods with a __ prefix, or by a "private" keyword which doesn't have any implications other than being information for the IDE (don't put those in autocompletion) and other programmers (don't use it unless you really must). Hell, one could even require an unsafe-like keyword to access private methods to really discourage this. I am asking this because I work with some C# code and I keep changing private methods to public for test purposes, as many in-between private methods (like string generators for XML serialization) are very useful for debugging purposes (like writing some part of a string to a log file etc.). So my question is: is there anything which is achieved by access restriction but couldn't be achieved by naming conventions without restricting the access?
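    A short sketch of the contrast the question is drawing (the class and method names are invented for illustration): the private keyword is enforced by the compiler, while an underscore prefix is only a hint to other programmers.

        public class XmlReportBuilder
        {
            public string Build()
            {
                // Public surface: callable from anywhere, including test code.
                return WrapInEnvelope("...");
            }

            // Compiler-enforced: calling this from outside the class is a compile error,
            // which is why it has to be made public (or internal) before a test can reach it.
            private string WrapInEnvelope(string body)
            {
                return "<report>" + body + "</report>";
            }

            // Convention-only alternative: still callable from anywhere, tests included;
            // the __ prefix merely signals "implementation detail, use at your own risk".
            public string __BuildFragment(string part)
            {
                return "<fragment>" + part + "</fragment>";
            }
        }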

    Read the article

  • What is the "x = x || {}" technique in JavaScript - and how does it affect this IIFE?

    - by Micky Hulse
    First, a pseudo code example: ;(function(foo){ foo.init = function(baz) { ... } foo.other = function() { ... } return foo; }(window.FOO = window.FOO || {})); Called like so: FOO.init(); My question: What is the technical name/description of: window.FOO = window.FOO || {}? I understand what the code does... See below for my reason(s) for asking. Reason for asking: I'm calling the passed in global like so: ;(function(foo){ ... foo vs. FOO, anyone else potentially confused? ... }(window.FOO = window.FOO || {})); ... but I just don't like calling that lowercase "foo", considering that the global is called capitalized FOO... It just seems confusing. If I knew the technical name of this technique, I could say: ;(function(technicalname){ ... do something with technicalname, not to be confused with FOO ... }(window.FOO = window.FOO || {})); I've seen a recent (awesome) example where they called it "exports": ;(function(exports){ ... }(window.Lib = window.Lib || {})); I guess I'm just trying to standardize my coding conventions... I'd like to learn what the pros do and how they think (that's why I'm asking here)!
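    The expression is usually described as a short-circuit or default-value idiom: || returns its first truthy operand, so the existing global is reused if it is already defined and a fresh object is created otherwise. A minimal sketch (the parameter name is a free choice, since it is only a local alias for the global):

        window.FOO = window.FOO || {};        // reuse the existing namespace object, or create one

        ;(function (ns) {                     // "ns" is just a local alias for window.FOO
            ns.init = function () { /* ... */ };
            ns.other = function () { /* ... */ };
        }(window.FOO = window.FOO || {}));

        FOO.init();                           // the global itself is still called FOO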

    Read the article

  • PHP Deleting a file 24 hours after being uploaded

    - by user3742063
    I've made a simple script that allows users to upload HTML files to a web directory on my server. However, I'd like each file to be deleted 24 hours after it was uploaded - 24 hours for each file, not 24 hours for the entire directory. Here is my code so far... Thank you for your help. :)

        <?php
        $target = "users/";
        $target = $target . basename($_FILES['uploaded']['name']);
        $ok = 1;
        if (move_uploaded_file($_FILES['uploaded']['tmp_name'], $target)
            && ($_FILES["uploaded"]["type"] == "text/html")) {
            echo "File: " . $_FILES["uploaded"]["name"] . "<br />";
            echo "Type: " . $_FILES["uploaded"]["type"] . "<br />";
            echo "Size: " . ($_FILES["uploaded"]["size"] / 1024) . " Kb<br />";
            echo "Location: /users/" . $_FILES["uploaded"]["name"];
        } else {
            echo "Sorry, " . $_FILES["uploaded"]["name"] . " is not a valid HTML document. Please try again.";
            // Delete the file if it was moved into place but failed the type check.
            if (file_exists($target)) {
                unlink($target);
            }
        }
        ?>
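    One common approach, as a minimal sketch (not part of the original post): instead of scheduling a delete per upload, run a small cleanup script periodically, e.g. hourly from cron, that removes anything in users/ older than 24 hours. The directory name comes from the question; the file name and schedule are illustrative.

        <?php
        // cleanup.php - run periodically, e.g. via: 0 * * * * php /path/to/cleanup.php
        $dir    = "users/";
        $cutoff = time() - 24 * 60 * 60;            // 24 hours ago

        foreach (glob($dir . "*") as $file) {
            if (is_file($file) && filemtime($file) < $cutoff) {
                unlink($file);                      // uploaded more than 24 hours ago
            }
        }
        ?>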

    Read the article

  • What are advantages of using a one-to-one table relationship? (MySQL)

    - by byronh
    What are advantages of using a one-to-one table relationship as opposed to simply storing all the data in one table? I understand and make use of one-to-many, many-to-one, and many-to-many all the time, but implementing a one-to-one relationship seems like a tedious and unnecessary task, especially if you use naming conventions for relating (php) objects to database tables. I couldn't find anything on the net or on this site that could supply a good real-world example of a one-to-one relationship. At first I thought it might be logical to separate 'users', for example, into two tables, one containing public information like an 'about me' for profile pages and one containing private information such as login/password, etc. But why go through all the trouble of using unnecessary JOINS when you can just choose which fields to select from that table anyway? If I'm displaying the user's profile page, obviously I would only SELECT id,username,email,aboutme etc. and not the fields containing their private info. Anyone care to enlighten me with some real-world examples of one-to-one relationships?
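    A minimal sketch of how a one-to-one split can look, using invented table and column names: the sensitive or rarely-read columns live in a second table whose primary key is also a foreign key to users, so each user can have at most one matching row.

        CREATE TABLE users (
            id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            username VARCHAR(50)  NOT NULL,
            aboutme  TEXT
        );

        -- One-to-one: the child table's primary key doubles as the foreign key.
        CREATE TABLE user_credentials (
            user_id       INT UNSIGNED NOT NULL PRIMARY KEY,
            email         VARCHAR(255) NOT NULL,
            password_hash CHAR(60)     NOT NULL,
            CONSTRAINT fk_user_credentials_user FOREIGN KEY (user_id) REFERENCES users (id)
        );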

    Read the article

  • How can I call these urls in jquery to display content on one page?

    - by Thorbis Website Design
    OK, I figured out the jQuery part but not all of the parameters; can anyone help me figure out the parameters for each URL string? This is the jQuery I figured out; also, would this work better than the approach in the answer below? $.get('adminajax.php', {'action':'getUsers'}, function(data){ $('#users .users').html(data); }); He sent me this in an email: You can specify a page by adding: p=[page #] You can specify a file and it will add a checkbox next to the user which will be checked if the user has permission to download: file=[file location] adminajax.php?action=createDirectory&directory=[new directory location] adminajax.php?action=setAvailability&user=[username]&file=[filelocation]&available=[true or false] I'm trying to get it to display in these HTML tags: <div id="files"> <b>Files:</b> <ul class="files"></ul> </div> <div id="file_options"> <b>Options:</b> </div> <div id="users"> <b>Users:</b> <ul class="users"></ul> </div>
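    A minimal sketch of how the other documented URLs translate into $.get calls with a data object; the parameter names come from the email quoted above, while the selectors and the example user/file/directory values are illustrative.

        // List users on page 2, with checkboxes for a particular file
        $.get('adminajax.php', { action: 'getUsers', p: 2, file: 'files/report.html' }, function (data) {
            $('#users .users').html(data);
        });

        // Create a new directory
        $.get('adminajax.php', { action: 'createDirectory', directory: 'files/newfolder' }, function (data) {
            $('#file_options').html(data);
        });

        // Grant (or revoke) a user's permission to download a file
        $.get('adminajax.php', {
            action: 'setAvailability',
            user: 'someuser',
            file: 'files/report.html',
            available: true
        }, function (data) {
            $('#file_options').html(data);
        });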

    Read the article

  • How can I get sikuli-ide to work?

    - by ayckoster
    I installed sikuli-ide with sudo apt-get install sikuli-ide Everything was fine until I tried to start it from the terminal. I typed sikuli-ide But the only response I got was [info] locale: en_US The application was not started, furthermore there is no desktop file and sikuli-ide does not show up in Dash Home. I guess there is something wrong with the package. I run Ubuntu 12.10 64bit. I tried to install it (Sikuli-X-1.0rc3 (r905)-linux-x86_64.zip) from their page, now the IDE starts, but when I try to execute a simple script I get the following error: [error] Stopped [error] An error occurs at line 1 [error] Error message: Traceback (most recent call last): File "", line 1, in File "/home/ayckoster/opt/Sikuli-IDE/sikuli-script.jar/Lib/sikuli/__init__.py", line 3, in File "/home/ayckoster/opt/Sikuli-IDE/sikuli-script.jar/Lib/sikuli/Sikuli.py", line 22, in java.lang.UnsatisfiedLinkError: /home/ayckoster/opt/Sikuli-IDE/libs/libVisionProxy.so: libml.so.2.1: cannot open shared object file: No such file or directory at java.lang.ClassLoader$NativeLibrary.load(Native Method) at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1935) at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1860) at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1821) at java.lang.Runtime.load0(Runtime.java:792) at java.lang.System.load(System.java:1059) at com.wapmx.nativeutils.jniloader.NativeLoader.loadLibrary(NativeLoader.java:44) at org.sikuli.script.Finder.(Finder.java:33) at java.lang.Class.forName0(Native Method) at java.lang. Class.forName(Class.java:264) at org.python.core.Py.loadAndInitClass(Py.java:895) at org.python.core.Py.findClassInternal(Py.java:830) at org.python.core.Py.findClassEx(Py.java:881) at org.python.core.packagecache.SysPackageManager.findClass(SysPackageManager.java:133) at org.python.core.packagecache.PackageManager.findClass(PackageManager.java:28) at org.python.core.packagecache.SysPackageManager.findClass(SysPackageManager.java:122) at org.python.core.PyJavaPackage.__findattr_ex__(PyJavaPackage.java:137) at org.python.core.PyObject.__findattr__(PyObject.java:863) at org.python.core.imp.import_name(imp.java:849) at org.python.core.imp.importName(imp.java:884) at org.python.core.ImportFunction.__call__(__builtin__.java:1220) at org.python.core.PyObject.__call__(PyObject.java:357) at org.python.core.__builtin__.__import__(__builtin__.java:1173) at org.python.core.imp.importFromAs(imp.java:978) at org.python.core.imp.importFrom(imp.java:954) at sikuli.Sikuli$py.f$0(/home/ayckoster/opt/Sikuli-IDE/siku li-script.jar/Lib/sikuli/Sikuli.py:211) at sikuli.Sikuli$py.call_function(/home/ayckoster/opt/Sikuli-IDE/sikuli-script.jar/Lib/sikuli/Sikuli.py) at org.python.core.PyTableCode.call(PyTableCode.java:165) at org.python.core.PyCode.call(PyCode.java:18) at org.python.core.imp.createFromCode(imp.java:386) at org.python.core.util.importer.importer_load_module(importer.java:109) at org.python.modules.zipimport.zipimporter.zipimporter_load_module(zipimporter.java:161) at org.python.modules.zipimport.zipimporter$zipimporter_load_module_exposer.__call__(Unknown Source) at org.python.core.PyBuiltinMethodNarrow.__call__(PyBuiltinMethodNarrow.java:47) at org.python.core.imp.loadFromLoader(imp.java:513) at org.python.core.imp.find_module(imp.java:467) at org.python.core.PyModule.impAttr(PyModule.java:100) at org.python.core.imp.import_next(imp.java:715) at org.python.core.imp.import_name(imp.java:824) at org.python.core.imp.importName(imp.java:884) at 
org.python.core.ImportFunction.__call__(__builtin__.java:1220) at org.python.core.PyObject.__call__(PyObject.java:357) at org.python.core.__builtin__.__import__(__builtin__.java:1173) at org.python.core.imp.importAll(imp.java:998) at sikuli$py.f$0(/home/ayckoster/opt/Sikuli-IDE/sikuli-script.jar/Lib/sikuli/__init__.py:3) at sikuli$py.call_function(/home/ayckoster/opt/Sikuli-IDE/sikuli-script.jar/Lib/sikuli/__init__.py) at org.python.core.PyTableCode.call(PyTableCode.java:165) at org.python.core.PyCode.call(PyCode.java:18) at org.python.core.imp.createFromCode(imp.java:386) at org.python.core.util.importer.importer_load_module(importer.java:109) at org.python.modules.zipimport.zipimporter.zipimporter_load_module(zipimporter.java:161) at org.python.modules.zipimport.zipimporter$zipimporter_load_module_exposer.__call__(Unknown Source) at org.python.core.PyBuiltinMethodNarrow.__call__(PyBuiltinMethodNarrow.java:47) at org.python.core.imp.loadFromLoader(imp.java:513) at org.python.core.imp.find_module(imp.java:467) at org.python.core.imp.import_next(imp.java:713) at or g.python.core.imp.import_name(imp.java:824) at org.python.core.imp.importName(imp.java:884) at org.python.core.ImportFunction.__call__(__builtin__.java:1220) at org.python.core.PyObject.__call__(PyObject.java:357) at org.python.core.__builtin__.__import__(__builtin__.java:1173) at org.python.core.imp.importAll(imp.java:998) at org.python.pycode._pyx2.f$0(:1) at org.python.pycode._pyx2.call_function() at org.python.core.PyTableCode.call(PyTableCode.java:165) at org.python.core.PyCode.call(PyCode.java:18) at org.python.core.Py.runCode(Py.java:1261) at org.python.core.Py.exec(Py.java:1305) at org.python.util.PythonInterpreter.exec(PythonInterpreter.java:206) at org.sikuli.script.ScriptRunner.runPython(ScriptRunner.java:61) at org.sikuli.ide.SikuliIDE$ButtonRun.runPython(SikuliIDE.java:1572) at org.sikuli.ide.SikuliIDE$ButtonRun$1.run(SikuliIDE.java:1677) java.lang.UnsatisfiedLinkError: java.lang.UnsatisfiedLinkError: /home/ayckoster/opt/Sikuli-IDE/libs/libVisionProxy.so: libml.so.2.1: cannot open shared object file: No such file or directory If I try to use the click() method from the gui it fails. So I created my own click method and it look like this: This cannot be executed and produces the error above.

    Read the article

  • SQLAuthority News – Speaking Sessions at TechEd India – 3 Sessions – 1 Panel Discussion

    - by pinaldave
    Microsoft Tech-Ed India 2010 is considered the major technology event of the year for IT professionals and developers. This event will feature a comprehensive forum in which to learn, connect, explore, and evolve the current technologies we have today. I would recommend this event to you since here you will learn about today's cutting-edge trends, thereby enhancing your work profile and getting ahead of the rest. But the most important benefit of all might be the networking opportunity that you can attain by attending the forum. You can build personal connections with various Microsoft experts and peers that will last even far beyond this event! It also feels good to let you know that I will be speaking at this year's event! So, here are the sessions that await you in this mega-forum. Session 1: True Lies of SQL Server – SQL Myth Buster Date: April 12, 2010 Time: 11:15pm – 11:45pm In this 30-minute demo session, I am going to briefly demonstrate a few SQL Server myths and their resolutions, backed up with demos. This demo session is a must-attend for all developers and administrators who come to the event. This is going to be a very quick yet fun session. Session 2: Master Data Services in Microsoft SQL Server 2008 R2 Date: April 12, 2010 Time: 2:30pm-3:30pm SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft's platform appeal. This session provides an in-depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables consistent decision making by allowing you to create, manage and propagate changes from a single master view of your business entities. The MDS Master Data hub, its vital component, helps ensure reporting consistency across systems and deliver faster, more accurate results across the enterprise. We will talk about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise. Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing Date: April 14, 2010 Time: 5:00pm-6:00pm Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use the spatial functionality in the next version of SQL Server to build and optimize spatial queries. This session outlines how to use the new geography data type to store geodetic spatial data and perform operations on it, use the new geometry data type to store planar spatial data and perform operations on it, take advantage of new spatial indexes for high-performance queries, use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications, and much more. Panel Discussion: Harness the Power of the Web – SEO and Technical Blogging Date: April 12, 2010 Time: 5:00pm-6:00pm Here you will learn lots of tricks and tips about SEO and technical blogging from various industry technical blogging experts. This event will surely be one of the most important tech conventions of 2010. TechEd is going to be a very busy time for tech developers and enthusiasts, since every evening there will be a fun session to attend.
If you are interested in any of the above topics for every session, I suggest that you visit each of them as you will learn so many things about the topic to be discussed. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology Tagged: TechEd, TechEdIn

    Read the article

  • Book Review: Middleware Management with Oracle Enterprise Manager Grid Control 10g R5

    - by olaf.heimburger
    If you are familiar with the Oracle Database and Middleware stack, chances are that you have come across Enterprise Manager. It comes in many versions for the database or the middleware and differs in its features. If you meet someone who talks about Enterprise Manager, it may well be that this person is talking about something completely different - Enterprise Manager Grid Control. Enterprise Manager Grid Control is the Oracle product for the data center that monitors all databases and middleware components as well as operating systems. Since the database part is taken for granted, it needs some additional steps to get into the world of centralized middleware management. That's what this book is for - bringing you into the world of middleware management. The Authors This book is written by Debu Panda, former Product Management Director of the Oracle Fusion Middleware Management development team, and Arvind Maheshwari, Senior Software Development Manager of the Oracle Enterprise Manager development team. The Book Oracle Enterprise Manager conceptually works for many different management areas. As a user you often think of managing databases with it. This is a wide area and deserves another book. The least-known area is middleware management, and that's what the book aims for. The first 3 chapters cover the key features of Enterprise Manager Grid Control, Installing Enterprise Manager Grid Control, and Enterprise Manager Key Concepts and Subsystems - the foundation you need to understand the whole software and the following chapters. Read them in order and you are well prepared for the next 10 chapters on managing the various bits and pieces in your data center. The list of bits and pieces is always a surprise, no matter how often you open the book. You can manage Oracle WebLogic Server, Oracle Application Server, Oracle Forms and Reports Services, SOA Suite 10g, Oracle Service Bus 10g, Oracle Internet Directory, Oracle Virtual Directory, Oracle Access Manager, Oracle Identity Manager, Oracle Identity Federation, Oracle Coherence Cluster, non-Oracle middleware like Apache, Tomcat, JBoss, IBM WebSphere, and much, much more. The chapters for these components can be read in any order you like; you only need the foundation chapters, and then you can continue with the parts that are in your data center. Once you are done with them, don't forget to read the last chapter, Best Practices for Managing Middleware Components using Enterprise Manager. Read it, understand it, and implement it in your organization. This will save you valuable time and budget. Recommendation This book is mainly written for Enterprise Manager newbies and saves you a lot of time that you would otherwise spend going through the standard product documentation. All chapters are reasonably short and tell you exactly what you need to know to get started. Nothing more and nothing less. That's the beauty of it and why I love it. Due to its limited size it won't cover everything you'd like to know, but it gets you started and interested in more insights - the rest is the job of the product documentation. The Details Title Middleware Management with Oracle Enterprise Manager Grid Control 10g R5 Authors Debu Panda and Arvind Maheshwari Paperback 310 pages ISBN 13 978-1-847198-34-1

    Read the article

  • OBIEE 11.1.1 - Introduction to OBIEE 11g Full Sample App

    - by user809526
    Isn't it nice to discover OBIEE 11g around a nice "How To" catalog of features? to observe OBI and Essbase relationships at work? to discover TimesTen? The OBIEE 11g Full Sample App (FSA) is a comprehensive collection of examples designed to demonstrate the latest Oracle BIEE 11g capabilities and design best practices: Enhanced visualizations as Geo-spacial maps and interactive dashboards, Action Framework,  BI Publisher, Scorecard and Strategy Management, Mobile style sheets, Semantic layer modeling, Multi-source federation, Integration with products such as Essbase, Oracle OLAP, ODM, TimesTen, ODI and more The FSA is intended to be comprehensive, it is big (see CAVEAT below). The FSA is not an Oracle product, it is a good will free deployment of OBIEE/Essbase designed to exemplify OBIEE features, infrastructure and security around the Fusion Middleware components. Its contents and code are distributed free for demonstrative purposes only. It is neither maintained nor supported by Oracle as a licensed product. The OBIEE Full Sample App is independent of the default Sample App that comes with the OBIEE product. BENEFITS The FSA helps as a demonstrator of OBIEE 11g best practices, a tutorial, an environment "Test & Scrap", a SR bench (regression, conflicts), a tuning bench, a quick ready made POC seed for projects, a security options environment, ... The FSA - Is organized around a catalog of functional features - Has been deployed over 1000 times, it should be stable RELEASE The Full Sample App (V107) is bound to OBIEE 11.1.1.5 and Essbase 11.1.2.1 (November 2011). The FSA release dates are independent of the Product GA date (OBIEE). In early December 2011, a new functional Patch (V110) is released. It is easily applied (in less than 15 mins) on top of OBIEE SampleApp 11.1.1.5 (V107). The patch (V110) includes additional functional examples:        1. Web Catalog Statistics Application: Provides detailed insight into your web catalog content, dormant catalog objects, webcat impact analysis for metadata changes and more        2. Data inflation Scripts: A set of simple SQL procedures to quickly inflate SampleApp Fact and Dimension data to millions of records in a few minutes        3. Public Content Extensions Framework: A patching framework for public examples and contributions leveraging SampleApp        4. Additional report examples (including bridge report, external chart integrations) and bug fixes DISTRIBUTION as VBox image (November 2011) The ready made VBox image is designed to run on Virtual Box. It can be converted to VMware (see another BLOG). 1/ http://www.oracle.com/technetwork/middleware/bi-foundation/obiee-samples-167534.html VBox Image Deployment Guide Sampleapp_v107_GA.ovf - VBox image key file The above http URL provides the user:password for the ftp URLs below. 2/ ftp://user:[email protected]/static/SampleAppV107/ 12 "7-zip" files Sampleapp_v107_GA_7_20.7z.001 -> .012 We recommend 7-zip file manager for unzipping (http://www.7-zip.org/). Select Unzip here option, it will create the contents under a directory named "SampleApp_10722". On Windows, it is important to download and save zip file under the root directory (e.g. C:\ or D:\) because of possible long pathnames. 3/ ftp://user:[email protected]/static/SampleAppV107/Unzipped_Version/ 4 files Sampleapp_v107_GA-disk[1234].vmdk Important note: Check the provided checksums (md5sum). Please do it! 
DISTRIBUTION as Installation files for existing OBI 11.1.1.5 (November 2011) http://www.oracle.com/technetwork/middleware/bi-foundation/obiee-samples-167534.html Install files Deployment Guide SampleApp_10722_1.zip - 198 MB CAVEAT Many computers have RAM chips problems that keep often silent ... until you manipulate big files. It is strongly advised you run some memory check program eg MEMTEST in GRUB boot manager. Running md5sum repeatedly onto the very same big file must be consistent [same result], else a hardware memory problem is suspected. For Virtual Box, you should most likely enable VT-X (Vanderpool) hardware virtualization in BIOS. A free disk space of 80 GB is required to perform safely the VBox image installation. A Virtual Machine of minimum 6 to 7 GB memory fits the needs of combining OBIEE and Essbase execution.

    Read the article

  • ASP.NET Web API - Screencast series with downloadable sample code - Part 1

    - by Jon Galloway
    There's a lot of great ASP.NET Web API content on the ASP.NET website at http://asp.net/web-api. I mentioned my screencast series in original announcement post, but we've since added the sample code so I thought it was worth pointing the series out specifically. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team. So - let's watch them together! Grab some popcorn and pay attention, because these are short. After each video, I'll talk about what I thought was important. I'm embedding the videos using HTML5 (MP4) with Silverlight fallback, but if something goes wrong or your browser / device / whatever doesn't support them, I'll include the link to where the videos are more professionally hosted on the ASP.NET site. Note also if you're following along with the samples that, since Part 1 just looks at the File / New Project step, the screencast part numbers are one ahead of the sample part numbers - so screencast 4 matches with sample code demo 3. Note: I started this as one long post for all 6 parts, but as it grew over 2000 words I figured it'd be better to break it up. Part 1: Your First Web API [Video and code on the ASP.NET site] This screencast starts with an overview of why you'd want to use ASP.NET Web API: Reach more clients (thinking beyond the browser to mobile clients, other applications, etc.) Scale (who doesn't love the cloud?!) Embrace HTTP (a focus on HTTP both on client and server really simplifies and focuses service interactions) Next, I start a new ASP.NET Web API application and show some of the basics of the ApiController. We don't write any new code in this first step, just look at the example controller that's created by File / New Project. using System; using System.Collections.Generic; using System.Linq; using System.Net.Http; using System.Web.Http; namespace NewProject_Mvc4BetaWebApi.Controllers { public class ValuesController : ApiController { // GET /api/values public IEnumerable<string> Get() { return new string[] { "value1", "value2" }; } // GET /api/values/5 public string Get(int id) { return "value"; } // POST /api/values public void Post(string value) { } // PUT /api/values/5 public void Put(int id, string value) { } // DELETE /api/values/5 public void Delete(int id) { } } } Finally, we walk through testing the output of this API controller using browser tools. There are several ways you can test API output, including Fiddler (as described by Scott Hanselman in this post) and built-in developer tools available in all modern browsers. For simplicity I used Internet Explorer 9 F12 developer tools, but you're of course welcome to use whatever you'd like. A few important things to note: This class derives from an ApiController base class, not the standard ASP.NET MVC Controller base class. They're similar in places where API's and HTML returning controller uses are similar, and different where API and HTML use differ. A good example of where those things are different is in the routing conventions. In an HTTP controller, there's no need for an "action" to be specified, since the HTTP verbs are the actions. 
We don't need to do anything to map verbs to actions; when a request comes in to /api/values/5 with the DELETE HTTP verb, it'll automatically be handled by the Delete method in an ApiController. The comments above the API methods show sample URLs and HTTP verbs, so we can test out the first two GET methods by browsing to the site in IE9, hitting F12 to bring up the tools, and entering /api/values in the URL. That sample action returns a list of values. To get just one value back, we'd browse to /api/values/5. That's it for Part 1. In Part 2 we'll look at getting data (beyond hardcoded strings) and start building out a sample application.
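    A minimal sketch (not from the screencast) of exercising the same two GET actions from script rather than the browser address bar; the expected results match the hardcoded strings in the controller above.

        $.getJSON('/api/values', function (values) {
            console.log(values);     // ["value1", "value2"]
        });

        $.getJSON('/api/values/5', function (value) {
            console.log(value);      // "value"
        });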

    Read the article

  • Oracle WebCenter Portlet Debugging

    - by Alexander Rudat
    Introduction: This article describes how to debug portlets that are already deployed to a WebLogic server, using Oracle JDeveloper 11g.
    Overview: These are the basic steps involved in remote debugging WebCenter portlets deployed in WebLogic: configuration of WebLogic to support remote debugging, configuration of the portlet project in JDeveloper, and the actual debugging of the portlet.
    Configuration of WebLogic: To start the WebLogic server in debugging mode, there are a couple of configuration changes that need to be made to the WebLogic domain where the portlet is deployed. First we need to edit the JVM options of the WebLogic server startup script where the portlet is deployed. Normally startManagedWebLogic.cmd is used to start this managed server. This startup script is located in the %MIDDLEWARE_HOME%\user_projects\domains\<domain_name>\bin directory, where %MIDDLEWARE_HOME% is the installation directory of WebLogic. Add the following line before the set JAVA_OPTIONS= line: set REMOTE_DEBUG_JAVA_OPTIONS=-Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,address=4000,server=y,suspend=n Change the set JAVA_OPTIONS= line to read like the one below: set JAVA_OPTIONS=%SAVE_JAVA_OPTIONS% %ADF_JAVA_OPTIONS% %REMOTE_DEBUG_JAVA_OPTIONS% After these changes, save the startup script, start the managed server and make sure that you have access to the admin console (for example http://localhost:7001/console). Finally we need to check that HTTP tunneling is enabled on the managed server. To do this, log in to the admin console, select the managed server and select the Protocols tab. Be sure that Enable Tunneling is selected.
    Configuration of the portlet: First let's create a new run configuration specifically for remote debugging. Double-click the project where your portlets are developed. In the Project Properties select the Run/Debug/Profile page. Click New... to create a new run configuration. In the Create Run Configuration dialog enter Remote Debugging for the name of the run configuration. Leave the Copy Settings From selection at Default and click OK to create the new run configuration. Once the Remote Debugging run configuration is created, select it in the Run Configurations and click Edit... to bring up the Edit Run Configuration dialog. In the Launch Settings page click the Remote Debugging checkbox to enable remote debugging for this run configuration. Finally select the Remote page and verify that the Protocol is set to Attach to JPDA and the port matches the port specified earlier when configuring WebLogic for remote debugging (defaults to 4000).
    Actual debugging of the portlet: To start the remote debugging profile, right-click on your portlet project and select Start Remote Debugger. JDeveloper now asks for the host and port; if your WebLogic server is installed locally, you can apply the default settings. Set a breakpoint in your Java code and run the portal (WebCenter) application where the portlet is used. That's all - now you are able to debug the portlet's Java code.
    Hope you will find all the errors in your portlet :-) References: http://www.oracle.com/technology/products/jdev/howtos/weblogic/remotedebugwls.html

    Read the article

  • ASP.NET MVC for the php/asp noob

    - by dotjosh
    I was talking to a friend today, who's foremost a PHP developer, about his thoughts on Umbraco, and he said "Well they're apparently working feverishly on the new version of Umbraco, which will be MVC... which I still don't know what that means, but I know you like it." I ended up giving him a ground-up explanation of ASP.NET MVC, so I'm posting this so he can link this to his friends and for anyone else who finds it useful. The whole goal was to be as simple as possible, not being focused on proper syntax. Model-View-Controller (or MVC) is just a pattern that is used for handling UI interaction with your backend. In a typical web app, you can imagine the *M*odel as your database model, the *V*iew as your HTML page, and the *C*ontroller as the class in between. MVC handles your web request differently than your typical php/asp app. In your php/asp app, your URL maps directly to a php/asp file that contains html, mixed with database access code and redirects. In an MVC app, your URL route is mapped to a method on a class (the controller). The body of this method can do some database access and THEN decide which *V*iew (html/aspx page) should be displayed; putting the controller in charge and not the view... a clear separation of concerns that provides better reusability and generally promotes cleaner code. Mysite.com, a quick example: Let's say you hit the following url in your application: http://www.mysite.com/Product/ShowItem?Id=4 To avoid tedious configuration, MVC uses a lot of conventions by default. For instance, the above url in your app would automatically make MVC search for a .net class with the name "Product" and a method named "ShowItem" based on the pattern of the url. So if you name things properly, your method would automatically be called when you entered the above url (see the route sketch below). Additionally, it would automatically map/hydrate the "int id" parameter that was in your querystring, matched by name. Product.cs public class Product : Controller{    public ViewResult ShowItem(int id)    {        return View();    }} From this point you can write the code in the body of this method to do some database access and then pass a "bag" (also known as the ViewData) of data to your chosen *V*iew (html page) to use for display. The view (html) ONLY needs to be worried about displaying the flattened data that it's been given in the best way it can; this allows the view to be reused throughout your application as *just* a view, and not be coupled to HOW the data for that view gets loaded. Product.cs public class Product : Controller{    public ViewResult ShowItem(int id)    {        var database = new Database();        var item = database.GetItem(id);        ViewData["TheItem"] = item;        return View();    }} Again by convention, since the class' method name is "ShowItem", it'll search for a view named "ShowItem.aspx" by default, and pass the ViewData bag to it to use. ShowItem.aspx <html>     <body>      <%        var item =(Item)ViewData["TheItem"]       %>       <h1><%= item.FullProductName %></h1>     </body></html> BUT WAIT! WHY DOES MICROSOFT HAVE TO DO THINGS SO DIFFERENTLY!? They aren't... here are some other frameworks you may have heard of that use the same pattern in their own way: Ruby On Rails, Grails, Spring MVC, Struts, Django
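    A minimal sketch of where that convention comes from: the default route that ASP.NET MVC registers (typically in Global.asax.cs). The defaults shown are the stock ones; in the /Product/ShowItem?Id=4 example above, the id value would actually be picked up from the query string by name.

        routes.MapRoute(
            "Default",                                    // route name
            "{controller}/{action}/{id}",                 // /Product/ShowItem maps to the Product controller's ShowItem()
            new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );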

    Read the article

  • Backup and Transfer Foobar2000 to a New Computer

    - by Mysticgeek
    If you are a fan of Foobar2000 you undoubtedly have tweaked it to the point where you don't want to set it all up again on a new machine. Here we look at how to transfer Foobar2000 settings to a new Windows 7 machine. Note: For this article we are transferring Foobar2000 settings from one Windows 7 machine to another over a network running Windows Home Server.  Foobar2000 Foobar2000 is an awesome music player which is highly customizable and which we've covered previously. Here we take a look at how it's set up on the current machine. It's nothing flashy, but it's set up for our needs and includes a lot of components and playlists.   Backup Files Rather than wasting time setting everything up again on a new machine, we can back up the important files and replace them on the new machine. First type or copy the following into the Explorer address bar. %appdata%\foobar2000 Now copy all of the files in the folder and store them on a network drive or some type of removable media or device. New Machine Now you can install the latest version of Foobar2000 on your new machine. You can go with a Standard install as we will be replacing our backed-up configuration files anyway. When it launches, it will be set with all the defaults…and we want what we had back. Browse to the following on the new machine… %appdata%\foobar2000 Delete all of the files in this directory… Then replace them with the ones we backed up from the other machine. You'll also want to navigate to C:\Program Files\Foobar2000 and replace the existing Components folder with the backed-up one. When you get the screen telling you there are already files of the same name, select Move and Replace, and check the box Do this for the next 6 conflicts. Now we're back in business! Everything is exactly as it was on the old machine. In this example, we were moving the Foobar2000 files from a computer on the same home network. All the music is coming from a directory on our Windows Home Server so the paths hadn't changed. If you're moving these files to a computer on another network… say your work computer, you'll need to adjust where the music folders point to. Windows XP If you're setting up Foobar2000 on an XP machine, you can enter the following into the Run line. %appdata%\foobar2000 Then copy your backed-up files into the Foobar2000 folder, and remember to swap out the Components folder in C:\Program Files\Foobar2000. Confirm to replace the files and folders by clicking Yes to All… Conclusion This method worked perfectly for us on our home network setup. There might be some other things that will need a bit of tweaking, but overall the process is quick and easy. There are a lot of cool things you can do with Foobar2000, like ripping an audio CD to FLAC. If you're a fan of Foobar2000 or considering switching to it, we will be covering more awesome features in future articles. 
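    A minimal sketch (not part of the original article) of doing the backup copy from a Command Prompt instead of Explorer; the network destination paths are illustrative.

        rem Back up the Foobar2000 settings folder to a network share
        robocopy "%APPDATA%\foobar2000" "\\homeserver\backup\foobar2000" /E

        rem Back up the Components folder as well
        robocopy "C:\Program Files\Foobar2000\Components" "\\homeserver\backup\foobar2000-components" /E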
Download Foobar2000 – Windows Only

    Read the article

  • IPS Facets and Info files

    - by mkupfer
    One of the unusual things about IPS is its "facet" feature. For example, if you're a developer using the foo library, you don't install a libfoo-dev package to get the header files. Intead, you install the libfoo package, and your facet.devel setting controls whether you get header files. I was reminded of this recently when I tried to look at some documentation for Emacs Org mode. I was surprised when Emacs's Info browser said it couldn't find the top-level Info directory. I poked around in /usr/share but couldn't find any info files. $ ls -l /usr/share/info ls: cannot access /usr/share/info: No such file or directory Was I was missing a package? $ pkg list -a | egrep "info|emacs" editor/gnu-emacs 23.1-0.175.0.0.0.2.537 i-- editor/gnu-emacs/gnu-emacs-gtk 23.1-0.175.0.0.0.2.537 i-- editor/gnu-emacs/gnu-emacs-lisp 23.1-0.175.0.0.0.2.537 --- editor/gnu-emacs/gnu-emacs-no-x11 23.1-0.175.0.0.0.2.537 --- editor/gnu-emacs/gnu-emacs-x11 23.1-0.175.0.0.0.2.537 i-- system/data/terminfo 0.5.11-0.175.0.0.0.2.1 i-- system/data/terminfo/terminfo-core 0.5.11-0.175.0.0.0.2.1 i-- text/texinfo 4.7-0.175.0.0.0.2.537 i-- x11/diagnostic/x11-info-clients 7.6-0.175.0.0.0.0.1215 i-- $ Hmm. I didn't have the gnu-emacs-lisp package. That seemed an unlikely place to stick the Info files, and pkg(1) confirmed that the info files were not there: $ pkg contents -r gnu-emacs-lisp | grep info usr/share/emacs/23.1/lisp/info-look.el.gz usr/share/emacs/23.1/lisp/info-xref.el.gz usr/share/emacs/23.1/lisp/info.el.gz usr/share/emacs/23.1/lisp/informat.el.gz usr/share/emacs/23.1/lisp/org/org-info.el.gz usr/share/emacs/23.1/lisp/org/org-jsinfo.el.gz usr/share/emacs/23.1/lisp/pcvs-info.el.gz usr/share/emacs/23.1/lisp/textmodes/makeinfo.el.gz usr/share/emacs/23.1/lisp/textmodes/texinfo.el.gz $ Well, if I have what look like the right packages but don't have the right files, the next thing to check are the facets. The first check is whether there is a facet associated with the Info files: $ pkg contents -m gnu-emacs | grep usr/share/info dir facet.doc.info=true group=bin mode=0755 owner=root path=usr/share/info file [...] chash=[...] facet.doc.info=true group=bin mode=0444 owner=root path=usr/share/info/mh-e-1 [...] file [...] chash=[...] facet.doc.info=true group=bin mode=0444 owner=root path=usr/share/info/mh-e-2 [...] [...] Yes, they're associated with facet.doc.info. Now let's look at the facet settings on my desktop: $ pkg facet FACETS VALUE facet.locale.en* True facet.locale* False facet.doc.man True facet.doc* False $ Oops. I've got man pages and various English documentation files, but not the Info files. Let's fix that: # pkg change-facet facet.doc.info=True Packages to update: 970 Variants/Facets to change: 1 Create boot environment: No Create backup boot environment: Yes Services to change: 1 DOWNLOAD PKGS FILES XFER (MB) Completed 970/970 181/181 9.2/9.2 PHASE ACTIONS Install Phase 226/226 PHASE ITEMS Image State Update Phase 2/2 PHASE ITEMS Reading Existing Index 8/8 Indexing Packages 970/970 # Now we have the info files: $ ls -F /usr/share/info a2ps.info dir@ flex.info groff-2 regex.info aalib.info dired-x flex.info-1 groff-3 remember ...

    Read the article

  • Do NOT change "Copy Local" on project references to false, unless you understand the consequences

    - by Michael Freidgeim
    To optimize the performance of a Visual Studio build, I've found multiple recommendations to change the CopyLocal property for dependent DLLs to false, e.g.: From http://stackoverflow.com/questions/690033/best-practices-for-large-solutions-in-visual-studio-2008 - "CopyLocal? For sure turn this off." From http://stackoverflow.com/questions/280751/what-is-the-best-practice-for-copy-local-and-with-project-references - "Always set the Copy Local property to false and enforce this via a custom msbuild step." From http://codebetter.com/patricksmacchia/2007/06/20/benefit-from-the-c-and-vb-net-compilers-perf/ - "My advice is to always set 'Copy Local' to false." Some time ago we tried to change the setting to false, and found that it causes problems for deployment of top-level projects. Recently I followed the suggestion and changed the settings for middle-level projects. It didn't cause immediate issues, but I was warned by Readify consultant Colin Savage about possible errors during deployments. I didn't undo the changes immediately, and we found a few issues during testing. There are many scenarios where you need to leave 'Copy Local' set to True. The concerns are highlighted in some Stack Overflow answers, but they have a small number of votes.
    Top-level projects: set copy local = true. First of all, it doesn't work correctly for top-level projects, i.e. executables or web sites. As pointed out in the answer http://stackoverflow.com/a/6529461/52277, for all the references in the one at the top, set copy local = true. Alternatively you have to change the output directory as described in http://www.simple-talk.com/dotnet/.net-framework/partitioning-your-code-base-through-.net-assemblies-and-visual-studio-projects/ - if you set 'Copy Local = false', VS will, unless you tell it otherwise, place each assembly alone in its own .\bin\Debug directory. Because of this, you will need to configure VS to place assemblies together in the same directory. To do so, for each VS project, go to VS > Project Properties > Build tab > Output path, and set the Output path to ..\bin\Debug for the debug configuration, and ..\bin\Release for the release configuration.
    Second-level dependencies: set copy local = true. Another example where CopyLocal=false fails at run time is when the top-level assembly doesn't directly reference one of the indirect dependencies. E.g. top-level assembly A has a reference to assembly B with CopyLocal=true, but assembly B has a reference to assembly C with CopyLocal=false. Most likely assembly C will be missing at runtime and will cause errors. See http://stackoverflow.com/questions/602765/when-should-copy-local-be-set-to-true-and-when-should-it-not?lq=1 - "Copy local is important for deployment scenarios and tools. As a general rule you should use CopyLocal=True", and from the same question: "Unfortunately there are some quirks and CopyLocal won't necessarily work as expected for assembly references in secondary assemblies structured as shown below. MainApp.exe MyLibrary.dll ThirdPartyLibrary.dll (if in the GAC CopyLocal won't copy to MainApp bin folder) This makes xcopy deployments difficult."
    Dependencies loaded via reflection: set copy local = true. E.g. the user can see the error "System.Reflection.ReflectionTypeLoadException: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information." The fix for the issue is recommended in http://stackoverflow.com/a/6200173/52277: "I solved this issue by setting the Copy Local attribute of my project's references to true." In general, the problems with investigating deployment issues may outweigh the benefits of reduced build time. Setting Copy Local to false without considering deployment issues is not a good idea.
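    For reference, a minimal sketch of what the IDE setting corresponds to in the project file: in a .csproj, "Copy Local" is stored as the Private metadata on a reference (the assembly and project names below are illustrative).

        <ItemGroup>
          <Reference Include="ThirdPartyLibrary">
            <Private>False</Private>   <!-- Copy Local = false: not copied to the output folder -->
          </Reference>
          <ProjectReference Include="..\MyLibrary\MyLibrary.csproj">
            <Private>True</Private>    <!-- Copy Local = true: copied to this project's output folder -->
          </ProjectReference>
        </ItemGroup>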

    Read the article

  • Why is my machine unable to mount my SMB drives ("CIFS VFS: Error connecting to socket. Aborting operation", return code -115)?

    - by downbeat
    I have a machine running Precise (12.04 x64), and I cannot mount my SMB drives (I have 3, we'll call them public, private and download). It used to work (a week or two ago) and I didn't touch fstab! The machine hosting the shares is a commercial NAS, and I'm not seeing anything that would indicate it's an issue with the NAS. I have an older machine which I updated to Precise at the same time (both fresh installed, not dist-upgrade), so should have a very similar configuration. It is not having any problems. I am not having problems on windows machines/partitions either, only one of my Precise machines. The two machines are using identical entries in fstab and identical /etc/samba/smb.conf files. I don't think I've ever changed smb.conf (has never mattered before). My fstab entries all basically look like this: //10.1.1.111/public /media/public cifs credentials=/home/downbeat/.credentials,iocharset=utf8,uid=downbeat,gid=downbeat,file_mode=0644,dir_mode=0755 0 0 Here's the dmesg output on boot: [ 51.162198] CIFS VFS: Error connecting to socket. Aborting operation [ 51.162369] CIFS VFS: cifs_mount failed w/return code = -115 [ 51.194106] CIFS VFS: Error connecting to socket. Aborting operation [ 51.194250] CIFS VFS: cifs_mount failed w/return code = -115 [ 51.198120] CIFS VFS: Error connecting to socket. Aborting operation [ 51.198243] CIFS VFS: cifs_mount failed w/return code = -115 There are no other errors I see in the dmesg output. Originally when I ran 'testparm -s', the output contained these lines ERROR: lock directory /var/run/samba does not exist ERROR: pid directory /var/run/samba does not exist Here's the samba related programs I have installed: $ dpkg --list|grep -i samba ii libpam-winbind 2:3.6.3-2ubuntu2.3 Samba nameservice and authentication integration plugins ii libwbclient0 2:3.6.3-2ubuntu2.3 Samba winbind client library ii nautilus-share 0.7.3-1ubuntu2 Nautilus extension to share folder using Samba ii python-smbc 1.0.13-0ubuntu1 Python bindings for Samba clients (libsmbclient) ii samba-common 2:3.6.3-2ubuntu2.3 common files used by both the Samba server and client ii samba-common-bin 2:3.6.3-2ubuntu2.3 common files used by both the Samba server and client ii winbind 2:3.6.3-2ubuntu2.3 Samba nameservice integration server $ dpkg --list|grep -i smb ii dmidecode 2.11-4 SMBIOS/DMI table decoder ii libsmbclient 2:3.6.3-2ubuntu2.3 shared library for communication with SMB/CIFS servers ii python-smbc 1.0.13-0ubuntu1 Python bindings for Samba clients (libsmbclient) ii smbclient 2:3.6.3-2ubuntu2.3 command-line SMB/CIFS clients for Unix ii smbfs 2:5.1-1ubuntu1 Common Internet File System utilities - compatibility package $ dpkg --list|grep -i cifs ii cifs-utils 2:5.1-1ubuntu1 Common Internet File System utilities ii libsmbclient 2:3.6.3-2ubuntu2.3 shared library for communication with SMB/CIFS servers ii smbclient 2:3.6.3-2ubuntu2.3 command-line SMB/CIFS clients for Unix I originally noticed that my other machine had "libpam-winbind" and "nautilus-share" installed and the machine with the issue did not. Installing those two packages solved my errors with 'testparm -s', but did not fix my issue. Finally, I tried to purge and reinstall these packages smbclient smbfs cifs-utils samba-common samba-common-bin Still no luck. Again, it used to work; now it doesn't. Very similarly configured machine works (but some packages are out of date on the working machine). 
The NAS has only one interface/IP address, nmblookup works to find it's IP from it's hostname (from the machine with the issue) and it responds to a ping. Please any help would be great. I've been searching on AskUbuntu, SuperUser, ubuntuforums and plain old search engines for a week now and it's driving me crazy!

    Read the article

  • Use Any Folder For Your Ubuntu Desktop (Even a Dropbox Folder)

    - by Trevor Bekolay
    By default, Ubuntu creates a folder called Desktop in your home directory that gets displayed on your desktop. What if you want to use something else, like your Dropbox folder? Here we look at how to use any folder for your desktop. Not only can you change your desktop folder, you can change the location of any other folder Ubuntu creates for you in your home folder, like Documents or Music – and this works in any Linux distribution using the Gnome desktop manager. In this example, we're going to change desktop to show our Dropbox folder. Open your home folder in a File Browser by clicking on Places > Home Folder. In the Home Folder, open the .config folder. By default, .config is hidden, so you may have to show hidden folders (temporarily) by clicking on View > Show Hidden Files. Then open the .config folder by double-clicking on it. Now open the user-dirs.dirs file… If double-clicking on it does not open it in a text editor, right-click on it and choose Open with Other Application… and find a text editor like Gedit. Change the entry associated with XDG_DESKTOP_DIR to the folder you want to be shown as your desktop. In our case, this is $HOME/Dropbox. Note: The "~" shortcut for the home directory won't work in this file (use $HOME for that), but an absolute path (i.e. a path starting with "/") will work. Feel free to change the locations of the other folders as well. Save and close user-dirs.dirs. At this point you can either log off and then log back on to get your desktop back, or open a terminal window Applications > Accessories > Terminal and enter: killall nautilus Nautilus (the file manager in Gnome) will restart itself and display your newly chosen folder as the desktop! This is a cool trick to use any folder for your Ubuntu desktop. What did you use as your desktop folder? Let us know in the comments!
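    For reference, the relevant line in ~/.config/user-dirs.dirs ends up looking roughly like this after the edit (the Dropbox path is the example used above; the Documents line just shows that the other entries follow the same pattern):

        # Desktop now points at the Dropbox folder instead of $HOME/Desktop
        XDG_DESKTOP_DIR="$HOME/Dropbox"
        XDG_DOCUMENTS_DIR="$HOME/Documents"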

    Read the article

  • How do I install DB2 ODBC?

    - by Justin
    I have been trying, with no success, to install a IBM DB2 ODBC driver so that my PHP server can connect to a database. I've tried installing the db2_connect and get all sorts of problems, I tried install I Access for Linux and the RPM did not install right nor did using alien breed any useful results. I've also tried the DB2 Runtime v8.1, no success. If I attempt to run the rpm it claims I need dependencies that I can't find in apt-get. Yum is also not very helpful as it appears I don't have any repositories installed or lists... Running the simple RPM gives me this result in terminal: # rpm -ivh iSeriesAccess-7.1.0-1.0.x86_64.rpm rpm: RPM should not be used directly install RPM packages, use Alien instead! rpm: However assuming you know what you are doing... error: Failed dependencies: /bin/ln is needed by iSeriesAccess-7.1.0-1.0.x86_64 /sbin/ldconfig is needed by iSeriesAccess-7.1.0-1.0.x86_64 /bin/rm is needed by iSeriesAccess-7.1.0-1.0.x86_64 /bin/sh is needed by iSeriesAccess-7.1.0-1.0.x86_64 libc.so.6()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libc.so.6(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libc.so.6(GLIBC_2.3)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libdl.so.2()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libdl.so.2(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libgcc_s.so.1()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libm.so.6()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libm.so.6(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libodbcinst.so.1()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libodbc.so.1()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libpthread.so.0()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libpthread.so.0(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libpthread.so.0(GLIBC_2.3.2)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 librt.so.1()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 librt.so.1(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libstdc++.so.6()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libstdc++.so.6(CXXABI_1.3)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libstdc++.so.6(GLIBCXX_3.4)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 Using alien and running the dkpg gives me thes headaque: $ alien iSeriesAccess-7.1.0-1.0.x86_64.rpm --scripts # dpkg -i iseriesaccess_7.1.0-2_amd64.deb (Reading database ... 127664 files and directories currently installed.) Preparing to replace iseriesaccess 7.1.0-2 (using iseriesaccess_7.1.0-2_amd64.deb) ... Unpacking replacement iseriesaccess ... post uninstall processing for iSeriesAccess 1.0...upgrade /var/lib/dpkg/info/iseriesaccess.postrm: line 8: [: upgrade: integer expression expected Setting up iseriesaccess (7.1.0-2) ... post install processing for iSeriesAccess 1.0...configure iSeries Access ODBC Driver has been deleted (if it existed at all) because its usage count became zero odbcinst: Driver installed. Usage count increased to 1. Target directory is /etc odbcinst: Driver installed. Usage count increased to 3. Target directory is /etc Processing triggers for libc-bin ... ldconfig deferred processing now taking place So it seems the files installed right, well my odbc driver shows up but db2cli.ini is no where to be found. So several questions. Is there a better alternative to connect php to db2, say an ubuntu package I can just install? 
Can someone direct me to the steps that make my Ubuntu server work well with the RPM so I can build my DB2 instance? Also remember I'm connecting to an iSeries remotely. I'm not using the DB2 Express-C thing, even though I did try it to get the db2 PHP functions to work. And I don't have Zend, but I think I have every other package from the Ubuntu repositories. Help, thank you!

    Read the article

  • How Exactly Is One Linux OS “Based On” Another Linux OS?

    - by Jason Fitzpatrick
    When reviewing different flavors of Linux, you’ll frequently come across phrases like “Ubuntu is based on Debian” but what exactly does that mean? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader PLPiper is trying to get a handle on how Linux variants work: I’ve been looking through quite a number of Linux distros recently to get an idea of what’s around, and one phrase that keeps coming up is that “[this OS] is based on [another OS]“. For example: Fedora is based on Red Hat Ubuntu is based on Debian Linux Mint is based on Ubuntu For someone coming from a Mac environment I understand how “OS X is based on Darwin”, however when I look at Linux Distros, I find myself asking “Aren’t they all based on Linux..?” In this context, what exactly does it mean for one Linux OS to be based on another Linux OS? So, what exactly does it mean when we talk about one version of Linux being based off another version? The Answer SuperUser contributor kostix offers a solid overview of the whole system: Linux is a kernel — a (complex) piece of software which works with the hardware and exports a certain Application Programming Interface (API), and binary conventions on how to precisely use it (Application Binary Interface, ABI) available to the “user-space” applications. Debian, RedHat and others are operating systems — complete software environments which consist of the kernel and a set of user-space programs which make the computer useful as they perform sensible tasks (sending/receiving mail, allowing you to browse the Internet, driving a robot etc). Now each such OS, while providing mostly the same software (there are not so many free mail server programs or Internet browsers or desktop environments, for example) differ in approaches to do this and also in their stated goals and release cycles. Quite typically these OSes are called “distributions”. This is, IMO, a somewhat wrong term stemming from the fact you’re technically able to build all the required software by hand and install it on a target machine, so these OSes distribute the packaged software so you either don’t need to build it (Debian, RedHat) or they facilitate such building (Gentoo). They also usually provide an installer which helps to install the OS onto a target machine. Making and supporting an OS is a very complicated task requiring a complex and intricate infrastructure (upload queues, build servers, a bug tracker, and archive servers, mailing list software etc etc etc) and staff. This obviously raises a high barrier for creating a new, from-scratch OS. For instance, Debian provides ca. 37k packages for some five hardware architectures — go figure how much work is put into supporting this stuff. Still, if someone thinks they need to create a new OS for whatever reason, it may be a good idea to use an existing foundation to build on. And this is exactly where OSes based on other OSes come into existence. For instance, Ubuntu builds upon Debian by just importing most packages from it and repackaging only a small subset of them, plus packaging their own, providing their own artwork, default settings, documentation etc. Note that there are variations to this “based on” thing. 
For instance, Debian fosters the creation of "pure blends" of itself: distributions which use Debian rather directly and just add a bunch of packages and other stuff only useful for rather small groups of users, such as those working in education, medicine, the music industry, etc. Another twist is that not all these OSes are based on Linux. For instance, Debian also provides FreeBSD and Hurd kernels. They have quite tiny user groups, but anyway. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
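    A quick aside: on a running system, the os-release file usually records both the distribution and what it derives from, which makes the relationship easy to spot. The exact fields vary by release, but on an Ubuntu machine the output looks roughly like this:
    $ cat /etc/os-release
    NAME="Ubuntu"
    ID=ubuntu
    ID_LIKE=debian    # the distribution this one is derived from
    Treat that output as illustrative; older releases may only ship /etc/lsb-release.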

    Read the article

  • Mirroring git and mercurial repos the lazy way

    - by Greg Malcolm
    I maintain Python Koans, mirrored on both GitHub using git and Bitbucket using mercurial. I get pull requests from both repos, but it turns out keeping the two repos in sync is pretty easy. Here is how it's done... Assuming I'm starting again on a clean laptop, first I clone both repos ~/git $ hg clone https://bitbucket.org/gregmalcolm/python_koans ~/git $ git clone git@github.com:gregmalcolm/python_koans.git python_koans2 The only thing that makes a folder a git or mercurial repository is the .hg folder in the root of python_koans and the .git folder in the root of python_koans2. So I just need to move the .git folder over into the python_koans folder I'm using for mercurial: ~/git $ rm -rf python_koans/.git ~/git $ mv python_koans2/.git python_koans ~/git $ ls -la python_koans total 48 drwxr-xr-x 11 greg staff 374 Mar 17 15:10 . drwxr-xr-x 62 greg staff 2108 Mar 17 14:58 .. drwxr-xr-x 12 greg staff 408 Mar 17 14:58 .git -rw-r--r-- 1 greg staff 34 Mar 17 14:54 .gitignore drwxr-xr-x 13 greg staff 442 Mar 17 14:54 .hg -rw-r--r-- 1 greg staff 48 Mar 17 14:54 .hgignore -rw-r--r-- 1 greg staff 365 Mar 17 14:54 Contributor Notes.txt -rw-r--r-- 1 greg staff 1082 Mar 17 14:54 MIT-LICENSE -rw-r--r-- 1 greg staff 5765 Mar 17 14:54 README.txt drwxr-xr-x 10 greg staff 340 Mar 17 14:54 python 2 drwxr-xr-x 10 greg staff 340 Mar 17 14:54 python 3 That's about it! Now git and mercurial are tracking files in the same folder. Of course you will still need to set up your .gitignore to ignore mercurial's dotfiles and .hgignore to ignore git's dotfiles or there will be squabbling in the backseat. ~/git $ cd python_koans/ ~/git/python_koans $ cat .gitignore *.pyc *.swp .DS_Store answers .hg <-- Ignore mercurial ~/git/python_koans $ cat .hgignore syntax: glob *.pyc *.swp .DS_Store answers .git <-- Ignore git Because both my mirrors are identical as far as tracked files are concerned, I won't yet see anything if I check statuses at this point: ~/git/python_koans $ git status # On branch master nothing to commit (working directory clean) ~/git/python_koans $ hg status ~/git/python_koans But how about if I accept a pull request from the Bitbucket (mercurial) site? ~/git/python_koans $ hg status ~/git/python_koans $ git status # On branch master # Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded. # # Changed but not updated: # (use "git add <file>..." to update what will be committed) # (use "git checkout -- <file>..." to discard changes in working directory) # # modified: python 2/koans/about_decorating_with_classes.py # modified: python 2/koans/about_iteration.py # modified: python 2/koans/about_with_statements.py # modified: python 3/koans/about_decorating_with_classes.py # modified: python 3/koans/about_iteration.py # modified: python 3/koans/about_with_statements.py Mercurial doesn't have any changes to track right now, but git has changes. Commit and push them up to GitHub and balance is restored to the force: ~/git/python_koans $ git commit -am "Merge from bitbucket mirror: 'gpiancastelli - Fix for issue #21 and some other tweaks'" [master 79ca184] Merge from bitbucket mirror: 'gpiancastelli - Fix for issue #21 and some other tweaks' 6 files changed, 78 insertions(+), 63 deletions(-) ~/git/python_koans $ git push origin master Or just use hg-git? 
The GitHub developers have actually published a plugin for automatic mirroring: http://hg-git.github.com. I haven't used it because when I tried it a couple of years ago I was having problems getting all the parts to play nice with each other. It probably works fine now, though.
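    For anyone curious, a rough sketch of what the hg-git route looks like, assuming the plugin is installed and the repository URL is swapped for your own: enable the extension in ~/.hgrc and push straight at the git remote.
    [extensions]
    hggit =
    $ hg push git+ssh://git@github.com/yourname/your_repo.git
    Whether it plays nicer with current git and mercurial versions than it did back then is worth verifying before trusting it with a real mirror.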

    Read the article

  • How do I fix a corrupted harddrive after failed upgrade?

    - by Nil
    The problem originated when I was trying to fix this problem. Things went horribly, horribly wrong and I ended up with a new problem altogether. The last thing I did was run sudo apt-get install and that caused my system to freeze. I restarted my computer and it would not boot from the harddrive. I ran a copy of Ubuntu 12.10 from a flashdrive that I had and ran gparted to see if my partitions were all there. It returned this message: Invalid partition table on /dev/sda -- wrong signature 5208. The drive appeared as a 2TiB unallocated drive with an error. The drive had 4 partitions before (plus random unallocated space). There was a fat32 partition, an ext4 partition which contained ubuntu 13.04/13.10 (I don't even know which one at this point), an extended partition which contained a swap partition for my ubuntu partition (I was meaning to move that ubuntu partition into the extended partition, never got around to it), and another partition (I don't remember how I formatted it). I should also mention this is a 1TB harddrive. So in short, I have a corrupted partition table on my primary harddrive from which I boot from, how can I fix this? I tried mounting the drive with sudo mount /dev/sda1 /media/ubuntu then I changed my directory to said folder and tried to list files and this monstrosity happened: $ ls ls: cannot access ??w?j^?.: Input/output error ls: cannot access ??(? ?x?.|: Input/output error ls: cannot access 6W_@?)?._??: Input/output error ls: cannot access HB0v???.A}?: Input/output error ls: cannot access ???.?X: Input/output error ls: cannot access t)?.+?l: Input/output error ls: cannot access ?h@ ?.@ : Input/output error ls: cannot access >? @??.???: Input/output error ls: cannot access m???.??: Input/output error ls: cannot access @ if??a?: Input/output error ls: cannot access ?M!vN$?.??n: Input/output error ls: cannot access ?o? ??.Bm`: Input/output error ls: cannot access ?:I??? M. : Input/output error ls: cannot access W??.??: Input/output error ls: cannot access ?: Input/output error ls: cannot access ?W?s??: Input/output error ls: cannot access ?v?k?.???: Input/output error ls: cannot access 5?$<N??: Input/output error .x????.??i: Input/output error ls: cannot access je????.j?1: Input/output error XjD?.???: Input/output error ls: cannot access W??n???.?: Input/output error ls: cannot access ?^x.$"?: Input/output error ls: cannot access !??*!??j.??: Input/output error ls: cannot access '-??k?^?.???: Input/output error ls: cannot access b?w?w?b.\??: Input/output error ls: cannot access o????"z.??B: Input/output error ls: cannot access ??b?h.?3-: Input/output error ls: cannot access ??.$7: Input/output error ls: cannot access )??K.bk: Input/output error ls: cannot access s??z?.?(?: Input/output error ls: cannot access ?F@?0?.@?: Input/output error .?D: Input/output error .??: Input/output error ls: cannot access?????. @: Input/output error ls: cannot access ?/?? ?.??: No such file or directory ls: cannot access rk?p4q(?.?k: Input/output error This looks promising. 
This is the output of fdisk -l $ sudo fdisk -l /dev/sda Warning: ignoring extra data in partition table 5 Warning: ignoring extra data in partition table 5 Warning: ignoring extra data in partition table 5 Warning: invalid flag 0x5208 of partition table 5 will be corrected by w(rite) Disk /dev/sda: 2199.0 GB, 2199023132672 bytes 255 heads, 63 sectors/track, 267349 cylinders, total 4294967056 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x44fdfe06 Device Boot Start End Blocks Id System /dev/sda1 113305600 894715903 390705152 c W95 FAT32 (LBA) /dev/sda2 894715904 1489307647 297295872 83 Linux /dev/sda3 1489309694 1497307135 3998721 5 Extended /dev/sda4 1497309184 1953523711 228107264 7 HPFS/NTFS/exFAT /dev/sda5 ? 3013257822 3688738171 337740175 aa Unknown
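    Whatever repair route is attempted, a sensible first step is to snapshot the damaged table and the start of the disk so the attempt is reversible. A rough sketch, assuming /dev/sda is the affected drive and there is another disk to write the backups to:
    $ sudo sfdisk -d /dev/sda > sda-table.backup               # dump the partition table as sfdisk currently sees it
    $ sudo dd if=/dev/sda of=sda-start.img bs=512 count=2048   # raw copy of the MBR and the sectors around it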

    Read the article

  • Can't build gcc anymore since upgrade to 11.10

    - by Raphael R.
    On Monday I upgraded from Ubuntu 11.04 (my initial installation) to 11.10 and now I can't build gcc from source anymore. Since I forgot to uninstall the gcc package before the upgrade, Ubuntu replaced my 4.7.0 compiler with its stable 4.6.1. So I tried to build the SVN sources again, but it fails. I've most recently tried it with SVN revision 180193. After some time, the build fails with the following message: /home/raphael/devel/gcc/build/./gcc/xgcc -B/home/raphael/devel/gcc/build/./gcc/ -B/usr/i686-pc-linux-gnu/bin/ -B/usr/i686-pc-linux-gnu/lib/ -isystem /usr/i686-pc-linux-gnu/include -isystem /usr/i686-pc-linux-gnu/sys-include -g -O2 -O2 -I. -I. -I../../src/gcc -I../../src/gcc/. -I../../src/gcc/../include -I../../src/gcc/../libdecnumber -I../../src/gcc/../libdecnumber/bid -I../libdecnumber -I../../src/gcc/../libgcc -g -O2 -DIN_GCC -W -Wall -Wwrite-strings -Wcast-qual -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -isystem ./include -fPIC -g -DHAVE_GTHR_DEFAULT -DIN_LIBGCC2 -fbuilding-libgcc -fno-stack-protector -I. -I. -I../.././gcc -I../../../src/libgcc -I../../../src/libgcc/. -I../../../src/libgcc/../gcc -I../../../src/libgcc/../include -I../../../src/libgcc/config/libbid -DENABLE_DECIMAL_BID_FORMAT -DHAVE_CC_TLS -DUSE_TLS -o _ashldi3.o -MT _ashldi3.o -MD -MP -MF _ashldi3.dep -DL_ashldi3 -c ../../../src/libgcc/../gcc/libgcc2.c \ -fvisibility=hidden -DHIDE_EXPORTS In file included from /usr/include/stdio.h:28:0, from ../../../src/libgcc/../gcc/tsystem.h:88, from ../../../src/libgcc/../gcc/libgcc2.c:29: /usr/include/features.h:323:26: fatal error: bits/predefs.h: File or directory not found. I've configured it with: ~/devel/gcc/build$ ../src/configure --prefix=/usr --enable-languages=c++ And I run make with: ~/devel/gcc/build$ make -j4 Just to be sure, I did a rm -rf * in the build directory in case there's some broken stuff inside. Didn't help, though. That's the backstory. I tried to fix it and searched for bits/predefs.h. It's inside /usr/include/i386-linux-gnu. I temporarily fixed the problem by doing ~/devel/gcc/build$ C_INCLUDE_PATH=/usr/include/i386-linux-gnu make -j4 Which is only temporary, because now gcc complains that it can't find crti.o, which I can find in /usr/lib/i386-linux-gnu. Now I could also set C_LIBRARY_PATH (which actually doesn't work), but I feel like I'm fighting the system here. Also, even if it succeeds, my newly built compiler would also not know about the i386-linux-gnu stuff. So I would have to set C_LIBRARY_PATH and C_INCLUDE_PATH before every build of every project I have. I could add it to my .bashrc but that subverts the system even more. So, how do I tell the build process: that there are additional include/lib directories, and that it should build a gcc which respects them too? Edit: I forgot to include the command which causes the above error message. Also I can think of another solution: copy the stuff from /usr/include/i386-linux-gnu to /usr/include (same thing for /usr/lib/i386-linux-gnu to /usr/lib). But that doesn't feel right, either. Finally, the system's gcc 4.6.1 can compile other applications just fine, except mine, which use C++11 features not present in the 4.6 series.
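    A rough sketch of the environment-variable route extended to the library side; this only works around the multiarch layout rather than teaching the build about it, and the paths assume a 32-bit Ubuntu 11.10 install:
    ~/devel/gcc/build$ C_INCLUDE_PATH=/usr/include/i386-linux-gnu LIBRARY_PATH=/usr/lib/i386-linux-gnu make -j4
    LIBRARY_PATH (rather than C_LIBRARY_PATH) is the variable the gcc driver consults when it searches for startup files such as crti.o.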

    Read the article

  • Apache virtual host does not work properly

    - by Jori
    I have read a lot of information all over the Internet regarding this subject, and cannot figure out what I'm doing wrong. I'm trying to host two websites under different names locally under Windows 7 with Apache's Virtual Hosting functionality. This is what I have done already: In the httpd.conf file I uncommented the following line, so that the virtual host configuration file will be included in the main configuration sequence. # Virtual hosts Include conf/extra/httpd-vhosts.conf This is how I edited my httpd-vhosts.conf: # # Virtual Hosts # # If you want to maintain multiple domains/hostnames on your # machine you can setup VirtualHost containers for them. Most configurations # use only name-based virtual hosts so the server doesn't need to worry about # IP addresses. This is indicated by the asterisks in the directives below. # # Please see the documentation at # <URL:http://httpd.apache.org/docs/2.2/vhosts/> # for further details before you try to setup virtual hosts. # # You may use the command line option '-S' to verify your virtual host # configuration. # # Use name-based virtual hosting. # NameVirtualHost *:80 # # VirtualHost example: # Almost any Apache directive may go into a VirtualHost container. # The first VirtualHost section is used for all requests that do not # match a ServerName or ServerAlias in any <VirtualHost> block. # #<VirtualHost *:80> # ServerAdmin [email protected] # DocumentRoot "C:/apache/docs/dummy-host.localhost" # ServerName dummy-host.localhost # ServerAlias www.dummy-host.localhost # ErrorLog "logs/dummy-host.localhost-error.log" # CustomLog "logs/dummy-host.localhost-access.log" common #</VirtualHost> # #<VirtualHost *:80> # ServerAdmin [email protected] # DocumentRoot "C:/apache/docs/dummy-host2.localhost" # ServerName dummy-host2.localhost # ErrorLog "logs/dummy-host2.localhost-error.log" # CustomLog "logs/dummy-host2.localhost-access.log" common #</VirtualHost> <VirtualHost *:80> ServerName arterieur DocumentRoot "J:/webcontent/www20" <Directory "J:/webcontent/www20"> Order allow,deny Allow from all </Directory> </VirtualHost> As you can see, I commented the VirtualHost examples out and added my own (just one for this example). I am also sure that J:\webcontent\www20 exists. Lastly, I edited the Windows hosts file located at C:\Windows\System32\drivers\etc\hosts; now it looks like this: # Copyright (c) 1993-2009 Microsoft Corp. # # This is a sample HOSTS file used by Microsoft TCP/IP for Windows. # # This file contains the mappings of IP addresses to host names. Each # entry should be kept on an individual line. The IP address should # be placed in the first column followed by the corresponding host name. # The IP address and the host name should be separated by at least one # space. # # Additionally, comments (such as these) may be inserted on individual # lines or following the machine name denoted by a '#' symbol. # # For example: # # 102.54.94.97 rhino.acme.com # source server # 38.25.63.10 x.acme.com # x client host # localhost name resolution is handled within DNS itself. # 127.0.0.1 localhost # ::1 localhost 127.0.0.1 arterieur Then I restarted Apache with the Apache Service Monitor, and it gave me the following fatal error: "The requested operation has failed!". I tried to look at the apache/logs/error.log file, but it did not log anything; I guess it only logs errors after startup. Does anyone know what I'm doing wrong?
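    One check worth running before another restart (it is even mentioned in the comment block of httpd-vhosts.conf quoted above): have Apache validate the configuration and dump the parsed virtual hosts from a command prompt. The exact path to httpd.exe depends on the install, so adjust as needed:
    > httpd.exe -t    # syntax check of httpd.conf and every file it includes
    > httpd.exe -S    # show how the virtual hosts were actually parsed
    If -t reports a problem, it is usually the same problem the Service Monitor hides behind "The requested operation has failed!".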

    Read the article

  • Fake It Easy On Yourself

    - by Lee Brandt
    I have been using Rhino.Mocks pretty much since I started being a mockist-type tester. I have been very happy with it for the most part, but a year or so ago, I got a glimpse of some tests using Moq. I thought the little bit I saw was very compelling. For a long time, I had been using: 1: var _repository = MockRepository.GenerateMock<IRepository>(); 2: _repository.Expect(repo=>repo.SomeCall()).Return(SomeValue); 3: var _controller = new SomeKindaController(_repository); 4:  5: ... some exercising code 6: _repository.AssertWasCalled(repo => repo.SomeCall()); I was happy with that syntax. I didn't go looking for something else, but what I saw was: 1: var _repository = new Mock<IRepository>(); And I thought, "That looks really nice!" The code was very expressive and easier to read than the Rhino.Mocks syntax. I have gotten so used to the Rhino.Mocks syntax that it made complete sense to me, but to developers I was mentoring in mocking, it was sometimes too obtuse. So I thought I would write some tests using Moq as my mocking tool. But I discovered something ugly once I got into it. The way Mocks are created makes Moq very easy to read, but that only gives you a Mock, not the object itself, which is what you'll need to pass to the exercising code. So this is what it ends up looking like: 1: var _repository = new Mock<IRepository>(); 2: _repository.Setup(repo => repo.SomeCall()).Returns(SomeValue); 3: var _controller = new SomeKindaController(_repository.Object); 4: ... some exercising code 5: _repository.Verify(repo => repo.SomeCall()); Two things jump out at me: 1) when I set up my mocked calls, do I set it on the Mock or the Mock's "object"? and 2) What am I verifying on SomeCall? Just that it was called? That it is available to call? Dealing with two objects, a "Mock" and an "Object", made me have to consider naming conventions. Should I always call the mock _repositoryMock and the object _repository? So I went back to Rhino.Mocks. It is the most widely used framework, and showing others how to use it is easier because there is one natural object to use, the _repository. Then I came across a blog post from Patrik Hägne, and that led me to a post about FakeItEasy. I went to the Google Code site and when I saw the syntax, I got very excited. Then I read the wiki page where Patrik stated why he wrote FakeItEasy, and it mirrored my own experience. So I began to play with it a bit. So far, I am sold. The syntax is VERY easy to read and the fluent interface is super discoverable. It basically looks like this: 1: var _repository = A.Fake<IRepository>(); 2: A.CallTo(() => _repository.SomeMethod()).Returns(SomeValue); 3: var _controller = new SomeKindaController(_repository); 4: ... some exercising code 5: A.CallTo(() => _repository.SomeMethod()).MustHaveHappened(); Very nice. But is it mature? It's only been around a couple of years, so will I be giving up something that I use a lot because it hasn't been implemented yet? It doesn't seem so. As I read more examples and posts from Patrik, he has some pretty complex scenarios. He even has support for VB.NET! So if you are looking for a mocking framework that looks and feels very natural, try out FakeItEasy!

    Read the article

  • mythbuntu 12 - lirc device doesn't appear to even exist

    - by FrustratedWithFormsDesigner
    I'm trying to get a new installation of Mythbuntu working. So far, everything is OK except the remote. The sensor for the remote is on my Hauppauge WinTV HVR 1250. First I tried to run irw to see what was being picked up by the sensor: $ irw connect: No such file or directory Then trying to run lircd gives: $ lircd start$ lircd start lircd: can't open or create /var/run/lirc/lircd.pid I look for any lirc devices and find there are none: $ ls /dev/li* ls: cannot access /dev/li*: No such file or directory Just to be sure, I check in /proc/bus/input/devices, which shows me two powerbuttons (not sure why), kbd and mouse dev, and the audio devs. Nothing for the IR receiver on the tuner card (which I thought was strange because shouldn't the tuner show up here?). $ cat /proc/bus/input/devices I: Bus=0019 Vendor=0000 Product=0001 Version=0000 N: Name="Power Button" P: Phys=PNP0C0C/button/input0 S: Sysfs=/devices/LNXSYSTM:00/device:00/PNP0C0C:00/input/input0 U: Uniq= H: Handlers=kbd event0 B: PROP=0 B: EV=3 B: KEY=10000000000000 0 I: Bus=0019 Vendor=0000 Product=0001 Version=0000 N: Name="Power Button" P: Phys=LNXPWRBN/button/input0 S: Sysfs=/devices/LNXSYSTM:00/LNXPWRBN:00/input/input1 U: Uniq= H: Handlers=kbd event1 B: PROP=0 B: EV=3 B: KEY=10000000000000 0 I: Bus=0003 Vendor=099a Product=7202 Version=0111 N: Name="Wireless Keyboard/Mouse" P: Phys=usb-0000:00:10.1-2/input0 S: Sysfs=/devices/pci0000:00/0000:00:10.1/usb8/8-2/8-2:1.0/input/input2 U: Uniq= H: Handlers=sysrq kbd event2 B: PROP=0 B: EV=120013 B: KEY=1000000000007 ff9f207ac14057ff febeffdfffefffff fffffffffffffffe B: MSC=10 B: LED=7 I: Bus=0003 Vendor=099a Product=7202 Version=0111 N: Name="Wireless Keyboard/Mouse" P: Phys=usb-0000:00:10.1-2/input1 S: Sysfs=/devices/pci0000:00/0000:00:10.1/usb8/8-2/8-2:1.1/input/input3 U: Uniq= H: Handlers=kbd mouse0 event3 B: PROP=0 B: EV=1f B: KEY=4837fff072ff32d bf54444600000000 70001 20c100b17c000 267bfad9415fed 9e168000004400 10000002 B: REL=143 B: ABS=100000000 B: MSC=10 I: Bus=0000 Vendor=0000 Product=0000 Version=0000 N: Name="HD-Audio Generic Line" P: Phys=ALSA S: Sysfs=/devices/pci0000:00/0000:00:14.2/sound/card0/input4 U: Uniq= H: Handlers=event4 B: PROP=0 B: EV=21 B: SW=2000 I: Bus=0000 Vendor=0000 Product=0000 Version=0000 N: Name="HD-Audio Generic Front Mic" P: Phys=ALSA S: Sysfs=/devices/pci0000:00/0000:00:14.2/sound/card0/input5 U: Uniq= H: Handlers=event5 B: PROP=0 B: EV=21 B: SW=10 I: Bus=0000 Vendor=0000 Product=0000 Version=0000 N: Name="HD-Audio Generic Rear Mic" P: Phys=ALSA S: Sysfs=/devices/pci0000:00/0000:00:14.2/sound/card0/input6 U: Uniq= H: Handlers=event6 B: PROP=0 B: EV=21 B: SW=10 I: Bus=0000 Vendor=0000 Product=0000 Version=0000 N: Name="HD-Audio Generic Front Headphone" P: Phys=ALSA S: Sysfs=/devices/pci0000:00/0000:00:14.2/sound/card0/input7 U: Uniq= H: Handlers=event7 B: PROP=0 B: EV=21 B: SW=4 I: Bus=0000 Vendor=0000 Product=0000 Version=0000 N: Name="HD-Audio Generic Line-Out" P: Phys=ALSA S: Sysfs=/devices/pci0000:00/0000:00:14.2/sound/card0/input8 U: Uniq= H: Handlers=event8 B: PROP=0 B: EV=21 B: SW=40 According to dmesg, the driver was registered, but it doesn't look like any devices was associated with the driver: $ dmesg | grep irc [ 10.631162] lirc_dev: IR Remote Control driver registered, major 249 So far, I've seen a number of forum pages suggesting that I use some trick to create a link between /dev/lirc and some other device that is the REAL IR sensor, like /dev/event5, but those cases assume that the real device is shown from /proc/bus/input/devices, 
and I don't see any such device there. Any suggestions on how to fix or further diagnose this?
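    A couple of checks that sometimes narrow this down, assuming the HVR-1250's IR port is handled by the cx23885 driver (adjust the grep if dmesg names a different module):
    $ dmesg | grep -i cx23885    # did the tuner driver load, and does it mention an IR/RC port?
    $ ls /sys/class/rc/          # rc-core devices, if the receiver registered one
    $ ls /dev/input/by-path/     # IR receivers often appear as an extra input device here
    If nothing IR-related shows up in any of these, the kernel never created a receiver device, and no amount of lirc configuration will find one.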

    Read the article
