Search Results

Search found 44956 results on 1799 pages for 'type checking'.

Page 303 of 1799

  • How to Find Out Which Devices Are Supported By Solaris 11

    - by rickramsey
    Image of monks gathering on the steps of the main hall in the Tashilhunpo Monastery is courtesy of Alison Whitear Travel Photography. In his update of Brian Leonard's original Taking Your First Steps With Oracle Solaris, Glynn Foster walks you through the most basic steps required to get a version of Oracle Solaris 11 operational:
    - Installing Solaris (VirtualBox, bare metal, or multi-boot)
    - Managing users (root role, sudo command)
    - Managing services with SMF (svcs and svcadm)
    - Connecting to the network (with SMF, or manually via dladm and ipadm)
    - Figuring out the directory structure
    - Updating software (with the IPS GUI or the pkg command)
    - Managing package repositories
    - Creating and managing additional boot environments
    One of the things you'll have to consider as you install Solaris 11 on an x86 system is whether Solaris has the proper drivers for the devices on your system. In the section titled "Installing On Bare Metal as a Standalone System," Glynn shows you how to use the Device Driver utility that's included with the Graphical Installer. However, if you want to get that information before you start installing Solaris 11 on your x86 system, you can consult the x86 Device List that's part of the Oracle Solaris Hardware Compatibility List (HCL). Here's how:
    1. Open the Device List.
    2. Scroll down to the table.
    3. Open the "Select Release" pull-down menu and pick "Solaris 11 11/11."
    4. From the "Select Device Type" pull-down menu, pick the device type, or "All."
    The table will list all the devices of that type that are supported by Solaris 11, including PCI ID and vendor. In the coming days the Solaris Hardware Compatibility List will be updated with more Solaris 11 content. Stay tuned. - Rick Ramsey
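    To give a flavor of the administration steps in that list, here is a minimal, hedged sketch of the commands involved (the service and datalink names are illustrative, not from the article):

        # Managing services with SMF
        svcs -a                              # list all service instances and their states
        svcadm restart svc:/network/ssh:default

        # Connecting to the network manually
        dladm show-phys                      # list physical datalinks
        ipadm create-ip net0                 # assumes a datalink named net0
        ipadm create-addr -T dhcp net0/v4

        # Updating software with IPS
        pkg update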

    Read the article

  • Is there a way to communicate or measure levels of abstraction?

    - by hydroparadise
    I'll be the first to say that this question is a bit... out there. But here are a couple of questions I bear in mind: Is abstraction continuous or discrete? Is there a single unit of abstraction? I'm not sure those questions are truly answerable or even really make sense. My naive answer would be something along the lines of arbitrarily discrete, but not necessarily having a single unit of measure. Here's what I mean... Take a Black Labrador; one abstraction that could be made is that a Black Lab is a type of animal. [Animal]<--[Black Lab] A Black Lab is also a type of Dog. [Dog]<--[Black Lab] One way to establish a degree of abstraction is by comparing the two abstractions. We could say that [Animal] is more abstract than [Dog] with respect to a Black Lab. It just so happens [Animal] can also be used as an abstraction of [Dog], so we might end up with something like [Animal]<--[Dog]<--[Black Lab] With the model above, one might be inclined to say that there are two hops of abstraction to get from [Black Lab] to [Animal]. But you can't exactly tell somebody they need one level of abstraction and reasonably expect they will come up with [Dog], given they aren't explicitly given the options above. If I needed to tell somebody in a single email that they needed an abstract class without knowing what that abstract class is, is there a way to communicate a degree of abstraction such that they might end up on Dog instead of Animal? As a side note, what area of study might this type of analysis fall under?
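    To make the hops concrete, the chain above can be sketched as classes, where each extends is one hop of abstraction (a hypothetical illustration, not from the question):

        // [Animal] <-- [Dog] <-- [BlackLab]: one hop of abstraction per "extends"
        class Animal { String describe() { return "an animal"; } }
        class Dog extends Animal { @Override String describe() { return "a dog"; } }
        class BlackLab extends Dog { @Override String describe() { return "a black lab"; } }

        public class AbstractionDemo {
            public static void main(String[] args) {
                Animal viaTwoHops = new BlackLab();  // the concrete type viewed at two hops of abstraction
                Dog viaOneHop = new BlackLab();      // the same type viewed at one hop
                System.out.println(viaTwoHops.describe() + " / " + viaOneHop.describe());
            }
        }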

    Read the article

  • Correct For Loop Design

    - by Yttrill
    What is the correct design for a for loop? Felix currently uses

        if len a > 0 do
          for var i in 0 upto len a - 1 do
            println a.[i];
          done
        done

    which is inclusive of the upper bound. This is necessary to support the full range of values of a typical integer type. However, the for loop shown does not support zero-length arrays (hence the special test), nor will the subtraction of 1 work convincingly if the length of the array is equal to the number of integers. (I say convincingly because it may be that 0 - 1 = maxval: this is true in C for unsigned int, but are you sure it is true for unsigned char without thinking carefully about integral promotions?) The actual implementation of the for loop by my compiler does correctly handle 0, but this requires two tests to implement the loop:

        continue:
          if not (i <= bound) goto break
          body
          if i == bound goto break
          ++i
          goto continue
        break:

    Throw in the hand-coded zero check in the array example and three tests are needed. If the loop were exclusive it would handle zero properly, avoiding the special test, but there'd be no way to express the upper bound of an array of maximum size. Note that the C way of doing this:

        for (i = 0; predicate(i); increment(i))

    has the same problem. The predicate is tested after the increment, but the terminating increment is not universally valid! There is a general argument that a simple exclusive loop is enough: promote the index to a large type to prevent overflow, and assume no one will ever loop to the maximum value of this type... but I'm not entirely convinced: if you promoted to C's size_t and looped from the second-largest value to the largest, you'd get an infinite loop!
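    As a sketch of the usual workaround in C, the increment can be guarded by the termination test, so the counter never runs past an inclusive bound and never overflows, even when the bound is the type's maximum value (a hypothetical example, not from the post):

        #include <stdio.h>
        #include <limits.h>

        int main(void) {
            unsigned char lo = 0, hi = UCHAR_MAX;   /* inclusive bounds: the full range */
            if (lo <= hi) {                         /* the one unavoidable zero-trip test */
                unsigned char i = lo;
                for (;;) {
                    printf("%u\n", (unsigned)i);    /* loop body */
                    if (i == hi) break;             /* test BEFORE increment: no overflow */
                    ++i;
                }
            }
            return 0;
        }

    This form needs only one test per iteration plus the single up-front emptiness check, rather than the two tests per iteration described above.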

    Read the article

  • Javascript autotab function not working on iPad or iPhone [migrated]

    - by freddy6
    I have this piece of html code:

        <form name="postcode" method="post" onsubmit="return OnSubmitForm();">
          <input class="postcode" maxlength="1" size="1" name="c" onKeyup="autotab(this, document.postcode.o)" />
          <input class="postcode" maxlength="1" size="1" name="o" onKeyup="autotab(this, document.postcode.d)" />
          <input class="postcode" maxlength="1" size="1" name="d" onKeyup="autotab(this, document.postcode.e)" />
          <input class="postcode" maxlength="1" size="1" name="e" />
          <br />
        </form>

    which uses this javascript:

        <script>
        /* Auto tabbing script - By JavaScriptKit.com http://www.javascriptkit.com
           This credit MUST stay intact for use */
        function autotab(original, destination) {
          if (original.getAttribute && original.value.length == original.getAttribute("maxlength"))
            destination.focus();
        }
        </script>
        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js" type="text/javascript"></script>
        <script src="js/scripts.js" type="text/javascript"></script>
        <script type="text/javascript">
        function OnSubmitForm() {
          if (document.postcode.operation[0].checked == true) {
            document.postcode.action = "plans.php";
          }
          else if (document.postcode.operation[1].checked == true) {
            document.postcode.action = "plans_gas.php";
          }
          else if (document.postcode.operation[2].checked == true) {
            document.postcode.action = "plans_duel.php";
          }
          return true;
        }
        </script>

    As soon as you enter one character into one of the text boxes, it automatically tabs across to the next text box. This works fine on a PC or Mac, in Safari and also in all other browsers. But when viewing the webpage on an iPad or iPhone (using Safari), the auto tabbing function does not work. Any ideas on how to make the auto tab work on these mobile devices?
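    One hedged workaround (my suggestion, not from the post): iOS Safari of that era reportedly did not fire keyup reliably for soft-keyboard input, while the input event does fire; the sketch below reuses the form's existing field names:

        // Hypothetical workaround: 'input' fires on iOS soft-keyboard edits where 'keyup' may not.
        function bindAutotab(original, destination) {
          original.addEventListener('input', function () {
            if (original.value.length == original.getAttribute('maxlength')) {
              destination.focus();
            }
          });
        }
        var f = document.postcode;
        bindAutotab(f.c, f.o);
        bindAutotab(f.o, f.d);
        bindAutotab(f.d, f.e);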

    Read the article

  • How to reload google dfp in ajax content? [migrated]

    - by cj333
    Google DFP supports ads in Ajax content, but if I put all the code in the main page it always shows the same ads, even when turning the page reloads the Ajax content. I read some articles, for example http://productforums.google.com/forum/#!msg/dfp/7MxNjJk46DQ/4SAhMkh2RU4J, but my code does not work. Any working code suggestions? Thanks. Main page code:

        <script type='text/javascript'>
        $(document).ready(function(){
          $('#next').live('click', function(){
            var num = $(this).html();
            $.ajax({
              url: "album-slider.php",
              dataType: "html",
              type: 'POST',
              data: 'photo=' + num,
              success: function(data){
                $("#slider").center();
                googletag.cmd.push(function() {
                  googletag.defineSlot('/1*******/ads-728-90', [728, 90], 'div-gpt-ad-1**********-' + num).addService(googletag.pubads());
                  googletag.pubads().enableSingleRequest();
                  googletag.enableServices();
                });
              }
            });
          });
        });
        </script>

    album-slider.php:

        <!-- ads-728-90 -->
        <div id='div-gpt-ad-1**********-<?php echo $_GET['photo']; ?>' style='width:728px; height:90px;'>
          <script type='text/javascript'>
            googletag.cmd.push(function() {
              googletag.display('div-gpt-ad-1**********-<?php echo $_GET['photo']; ?>');
            });
          </script>
        </div>
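    A hedged suggestion (my sketch, not from the thread): define the slot once and call refresh() on it for each new Ajax page, rather than calling defineSlot() again with a new div every time; redefining slots on every load is a common cause of stale ads. The div id below is illustrative:

        // Define the slot once, at page load.
        var adSlot;
        googletag.cmd.push(function () {
          adSlot = googletag.defineSlot('/1*******/ads-728-90', [728, 90], 'div-gpt-ad-slider')
                            .addService(googletag.pubads());
          googletag.pubads().enableSingleRequest();
          googletag.enableServices();
          googletag.display('div-gpt-ad-slider');
        });

        // After each successful Ajax load, request fresh creative for the same slot.
        function reloadSliderAd() {
          googletag.cmd.push(function () {
            googletag.pubads().refresh([adSlot]);
          });
        }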

    Read the article

  • Boost your infrastructure with Coherence into the Cloud

    - by Nino Guarnacci
    Authors: Nino Guarnacci & Francesco Scarano. The original article can be found at this URL: http://blogs.oracle.com/slc/coherence_into_the_cloud_boost. Thinking about the enterprise cloud, many possible configurations and new opportunities in enterprise environments come to mind. The customer needs that drive this new trend are often very different, but they are almost always united by two main objectives:
    - elasticity of infrastructure, both hardware and software, with investments that track the progressive needs of the current infrastructure;
    - innovation and economy.
    A concrete use case that I worked on recently demanded the fulfillment of exactly these two basic requirements of economy and innovation. The client needed to manage a variety of data caches that can process complex queries and parallel computational operations, while keeping the caches in a consistent state across the different server instances on which the application was installed. In addition, the customer was looking for a solution that would allow him to handle the load peaks likely during certain times of the year. For this reason, the customer required a replication site to which part of the requests could be routed during peak periods; the desire, however, was to avoid tying up investments in owned hardware/software architectures, so we were asked to look for a solution based on cloud technologies and architectures already offered by the market. Coherence can already address the requirement of large caches spread between different nodes in the cluster, providing technology for search and parallel computing that makes simultaneous use of all hardware infrastructure resources. Moreover, thanks to its "Push Replication" functionality, which can replicate and update the information contained in the cache even to a site hosted in the cloud, it satisfies the need for a resilient infrastructure that can also rely on nodes temporarily hosted in cloud architectures. There are different types of configurations that can be realized using the "Push Replication" functionality of Coherence:
    - Active-Passive (Hub and Spoke)
    - Active-Active
    - Multi-Master
    - Centralized Replication
    Whereas the architecture of this particular project consists of two sites (Site 1 and Site Cloud), of which only Site 1 is enabled to write into the cache, it was decided to adopt an Active-Passive (Hub and Spoke) configuration. If the requirement should change over time, it will be particularly easy to change this into an Active-Active configuration. Although very simple, the small sample in this post, inspired by the specific project, is effective for better understanding the features and capabilities of Coherence and its configurations. Let's create two distinct Coherence clusters, located miles apart in two different domain contexts, one of them "hosted" at home (on premise) and the other one hosted by any cloud provider on the network (or just the same laptop, to test it :)). These two clusters, which we call Site 1 and Site Cloud, will contain the necessary information, and a simple client will be able to insert data only into Site 1. On both sites a listener will be subscribed, listening for changes to specific objects within the various caches.
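    As a rough illustration of such a listener, here is a minimal sketch against the standard com.tangosol.util API; the class name is hypothetical and this is not the article's actual Shell code:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.MapEvent;
        import com.tangosol.util.MapListener;

        public class PassiveCacheWatcher implements MapListener {
            public void entryInserted(MapEvent evt) { System.out.println("inserted: " + evt.getNewValue()); }
            public void entryUpdated(MapEvent evt)  { System.out.println("updated: "  + evt.getNewValue()); }
            public void entryDeleted(MapEvent evt)  { System.out.println("deleted: "  + evt.getOldValue()); }

            public static void main(String[] args) throws InterruptedException {
                NamedCache cache = CacheFactory.getCache("passive-cache");
                cache.addMapListener(new PassiveCacheWatcher());  // fires for every change replicated into the cache
                Thread.sleep(Long.MAX_VALUE);                     // keep the process alive to receive events
            }
        }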
    To implement these features, you need four simple classes:
    - CachedResponse.java: the POJO class that will be inserted into the cache; it contains useful information about the hypothetical navigation links.
    - ResponseSimulatorHelper.java: a link simulator, which has the task of randomly creating objects of type CachedResponse to be added into the caches.
    - CacheCommands.java: the model of our example; it is responsible for receiving instructions from the controller and performing basic operations against the cache, such as inserting, deleting, updating and listening to objects within the cache.
    - Shell.java: our controller, which issues the commands to be executed against the caches of the two sites.
    So, in summary, we execute the Java class Shell, asking it to put 100 objects of type CachedResponse into the cache through the Java class CacheCommands; the simulator ResponseSimulatorHelper will randomly create new instances of CachedResponse objects. Finally, the Shell class will listen for events occurring within the cache on Site Cloud, while insertions and deletions are performed on Site 1. Now we create the configurations of the two respective sites/clusters, Site 1 and Site Cloud. For Site 1 we define a cache of type "distributed" with read-write features, using the cache store class for "push replication", a functionality offered by the "incubator" project of Oracle Coherence. For Site Cloud we likewise define a "distributed" cache type, with the TCP proxy feature enabled so that it can receive updates from Site 1.
    - Coherence cache config XML file for the "storage node" on "Site 1": site1-prod-cache-config.xml
    - Coherence cache config XML file for the "storage node" on "Site Cloud": site2-prod-cache-config.xml
    For the two "Shell" clients, which will connect respectively to the two clusters, we have provided two simple access configurations:
    - Coherence cache config XML file for the Shell on "Site 1": site1-shell-prod-cache-config.xml
    - Coherence cache config XML file for the Shell on "Site Cloud": site2-shell-prod-cache-config.xml
    Now we just have to put everything together and run our tests.
    To start at least one "storage" node (which holds the data) for the "Site Cloud", we can run the standard class provided OOTB by Oracle Coherence, com.tangosol.net.DefaultCacheServer, with the following parameters and values:

        -Xmx128m -Xms64m
        -Dcom.sun.management.jmxremote
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dtangosol.coherence.distributed.localstorage=true
        -Dtangosol.coherence.cacheconfig=config/site2-prod-cache-config.xml
        -Dtangosol.coherence.clusterport=9002
        -Dtangosol.coherence.site=SiteCloud

    To start at least one "storage" node (which holds the data) for "Site 1", we can again run the standard class provided by Coherence, com.tangosol.net.DefaultCacheServer, with the following parameters and values:

        -Xmx128m -Xms64m
        -Dcom.sun.management.jmxremote
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dtangosol.coherence.distributed.localstorage=true
        -Dtangosol.coherence.cacheconfig=config/site1-prod-cache-config.xml
        -Dtangosol.coherence.clusterport=9001
        -Dtangosol.coherence.site=Site1

    Then we start the first client "Shell" for the "Site Cloud", launching the Java class it.javac.Shell with these parameters and values:

        -Xmx64m -Xms64m
        -Dcom.sun.management.jmxremote
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dtangosol.coherence.distributed.localstorage=false
        -Dtangosol.coherence.cacheconfig=config/site2-shell-prod-cache-config.xml
        -Dtangosol.coherence.clusterport=9002
        -Dtangosol.coherence.site=SiteCloud

    Finally, we start the second client "Shell" for "Site 1", launching a new instance of the class it.javac.Shell with the following parameters and values:

        -Xmx64m -Xms64m
        -Dcom.sun.management.jmxremote
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dtangosol.coherence.distributed.localstorage=false
        -Dtangosol.coherence.cacheconfig=config/site1-shell-prod-cache-config.xml
        -Dtangosol.coherence.clusterport=9001
        -Dtangosol.coherence.site=Site1

    And now, let's execute some tests to validate and better understand our configuration.
    TEST 1
    The purpose of this test is to load objects into the "Site 1" cache and see how many objects are cached on the "Site Cloud". Within the "Shell" launched with parameters to access "Site 1", write and run the command:

        load test/100

    Within the "Shell" launched with parameters to access the "Site Cloud", write and run the command:

        size passive-cache

    Expected result: if all is OK, the first "Shell" has loaded 100 objects into a cache named "test"; consequently, the "push-replication" functionality has updated the "Site Cloud" by sending the 100 objects to the second cluster, where they will have been put into a corresponding cache, which we named "passive-cache".
    TEST 2
    The purpose of this test is to listen to the delete and add events happening on "Site 1" that are replicated into the cache on the "Site Cloud".
    In the "Shell" launched with parameters to access the "Site Cloud", write and run the command:

        listen passive-cache/name like '%'

    (or a "cohql" query with your preferred parameters). In the "Shell" launched with parameters to access "Site 1", write and run the following commands:

        load test/10
        load test2/20
        delete test/50

    Expected result: if all is OK, the "Shell" on the Site Cloud lets us listen to all the add and delete events within the cache "passive-cache" whose objects satisfy the query condition "name like '%'" (i.e., every object in the cache; you could change the tests and create different queries). Through the Shell on "Site 1" we launched the commands to add and delete objects in different caches (test and test2). With the "Shell" running on the "Site Cloud" we got the evidence (displayed, printed, or in a log file) that its cache was filled with the events and related objects generated by the commands executed from the "Shell" on "Site 1", thanks to the "push-replication" feature. Other tests can be performed, such as, for example, subscribing to the events on "Site 1" too, using different "cohql" queries, or changing the cache configuration, to effectively demonstrate both the potential and the versatility produced by these different configurations, even in the cloud, as in our case. More information on how to configure Coherence "Push Replication" can be found in the Oracle Coherence Incubator project documentation at the following link: http://coherence.oracle.com/display/INC10/Home More information on the Oracle Coherence "In-Memory Data Grid" can be found at the following link: http://www.oracle.com/technetwork/middleware/coherence/overview/index.html To download and run the complete sources and configurations of the example explained in this post, click here to download them; then download the latest available version of the Push Replication Pattern library implementation from the Oracle Coherence Incubator site, along with the related and required version of Oracle Coherence. For simplicity, the .jars required to execute the example (which can be found in the Push Replication Pattern download and the Coherence distribution download) are: activemq-core-5.3.1.jar activemq-protobuf-1.0.jar aopalliance-1.0.jar coherence-commandpattern-2.8.4.32329.jar coherence-common-2.2.0.32329.jar coherence-eventdistributionpattern-1.2.0.32329.jar coherence-functorpattern-1.5.4.32329.jar coherence-messagingpattern-2.8.4.32329.jar coherence-processingpattern-1.4.4.32329.jar coherence-pushreplicationpattern-4.0.4.32329.jar coherence-rest.jar coherence.jar commons-logging-1.1.jar commons-logging-api-1.1.jar commons-net-2.0.jar geronimo-j2ee-management_1.0_spec-1.0.jar geronimo-jms_1.1_spec-1.1.1.jar http.jar jackson-all-1.8.1.jar je.jar jersey-core-1.8.jar jersey-json-1.8.jar jersey-server-1.8.jar jl1.0.jar kahadb-5.3.1.jar miglayout-3.6.3.jar org.osgi.core-4.1.0.jar spring-beans-2.5.6.jar spring-context-2.5.6.jar spring-core-2.5.6.jar spring-osgi-core-1.2.1.jar spring-osgi-io-1.2.1.jar

    Read the article

  • Content farms, scraper sites, aggregators: real-world examples? [closed]

    - by Marco Demaio
    Content farms, scrapers, aggregators: real-world examples? Could you please clarify for me: efreedom.com is a scraper site, not a content farm? Because it simply copies and pastes content from Stack Overflow. ehow.com and squidoo.com are content farms? They don't copy and paste content; they just generate fresh new user-generated content, but too much and too quickly. expert-exchange.com is NOT a content farm or a scraper site, right?! It's simply that many people (and me too) hate it (they also wrote to Matt Cutts) because it shows up high in Google while providing a useless question with no answer. There are also many sites that act as 'content aggregators in the form of specialized directories' (let's call them CASD); I don't know how else to define them. Do they have a specific definition? Anyway, are these CASD content farms, scraper sites, or what else? Basically these CASD search for all sites of the same type, e.g. "restaurant websites"; they copy and paste the contents found on "Restaurant A" and create on their aggregator site a new page called "Restaurant A"; then they do the same for all websites of the same type, thus creating a sort of directory of restaurants. Later on, these CASD also send an email to the owner of "Restaurant A" (usually the email is on the website) with a user and password to let him modify/update his own page on the CASD site. Later on, these CASD might ask the owner of "Restaurant A" for money because they bring him traffic; otherwise they remove his page from the aggregator. Someone could call these simply directories, but I think a directory is different, because it is something you ask to be added to by filling in a form, not something that steals content from your existing site without specific acceptance from the site's owner. I also really wonder how Google will sort out this mess of sites packed with content that show up more and more, and everywhere, in search results.

    Read the article

  • How to use Nintex Reusable Workflow Template

    - by ybbest
    If you would like to re-use your workflow logic over more than one list or library, you can create a reusable workflow template. Here are the steps:
    1. Go to site settings and create a reusable workflow template.
    2. Select the content type you would like the template to be bound to and give the workflow a title.
    3. Create your workflow the same way as you did for a list workflow and publish your workflow.
    4. Finally, you need to add your workflow to the list on which you would like to run it.
    5. Go to workflow settings and add a workflow.
    6. Select the content type and configure the workflow as below.
    7. After you have done this, your workflow will run as usual.
    Note:
    1. You cannot conditionally start your workflow.
    2. Your workflow is not automatically bound to the list when you add the content type to the list; you need to configure it manually, as shown in steps 4-6.

    Read the article

  • What is the structure of NetworkManager's system-connections files?

    - by Oyks Livede
    Could anyone list the complete structure of the configuration files which NetworkManager stores for known networks in /etc/NetworkManager/system-connections? Sample (filename askUbuntu):

        [connection]
        id=askUbuntu
        uuid=81255b2e-bdf1-4bdb-b6f5-b94ef16550cd
        type=802-11-wireless

        [802-11-wireless]
        ssid=askUbuntu
        mode=infrastructure
        mac-address=00:08:CA:E6:76:D8

        [ipv6]
        method=auto

        [ipv4]
        method=auto

    I would like to create some of them on my own using a script. However, before doing so I would like to know every possible option. Furthermore, this structure seems to somehow resemble the information you can get over the D-Bus for active connections:

        # $active_setting_path is e.g. /org/freedesktop/NetworkManager/Settings/2
        dbus-send --system --print-reply \
            --dest=org.freedesktop.NetworkManager \
            "$active_setting_path" \
            org.freedesktop.NetworkManager.Settings.Connection.GetSettings

    will tell you:

        array [
          dict entry(
            string "802-11-wireless"
            array [
              dict entry( string "ssid" variant array of bytes "askUbuntu" )
              dict entry( string "mode" variant string "infrastructure" )
              dict entry( string "mac-address" variant array of bytes [ 00 08 ca e6 76 d8 ] )
              dict entry( string "seen-bssids" variant array [ string "02:1A:11:F8:C5:64" string "02:1A:11:FD:1F:EA" ] )
            ]
          )
          dict entry(
            string "connection"
            array [
              dict entry( string "id" variant string "askUbuntu" )
              dict entry( string "uuid" variant string "81255b2e-bdf1-4bdb-b6f5-b94ef16550cd" )
              dict entry( string "timestamp" variant uint64 1383146668 )
              dict entry( string "type" variant string "802-11-wireless" )
            ]
          )
          dict entry(
            string "ipv4"
            array [
              dict entry( string "addresses" variant array [ ] )
              dict entry( string "dns" variant array [ ] )
              dict entry( string "method" variant string "auto" )
              dict entry( string "routes" variant array [ ] )
            ]
          )
          dict entry(
            string "ipv6"
            array [
              dict entry( string "addresses" variant array [ ] )
              dict entry( string "dns" variant array [ ] )
              dict entry( string "method" variant string "auto" )
              dict entry( string "routes" variant array [ ] )
            ]
          )
        ]

    I can create new settings files using the D-Bus (AddSettings() in /org/freedesktop/NetworkManager/Settings), passing this type of input, so explaining this structure to me and telling me all possible options will also help. AFAIK, this is a Dictionary{String, Dictionary{String, Variant}}. Will there be any difference between creating config files directly and using the D-Bus?
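    As a hedged sketch of the scripted route (the SSID and filename are illustrative, and the exact option set per section is what NetworkManager's keyfile documentation describes): NetworkManager only honors these files when they are owned by root with mode 600.

        # Hypothetical helper: create a keyfile for an open wifi network called "askUbuntu"
        SSID=askUbuntu
        UUID=$(uuidgen)                       # each connection needs its own UUID
        FILE=/etc/NetworkManager/system-connections/$SSID

        sudo tee "$FILE" >/dev/null <<EOF
        [connection]
        id=$SSID
        uuid=$UUID
        type=802-11-wireless

        [802-11-wireless]
        ssid=$SSID
        mode=infrastructure

        [ipv4]
        method=auto

        [ipv6]
        method=auto
        EOF

        sudo chmod 600 "$FILE"                # world-readable keyfiles are ignored
        sudo service network-manager restart  # or a config reload, depending on the NM version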

    Read the article

  • The Workshop Technique Handbook

    - by llowitz
    The #OUM method pack contains a plethora of information, but if you browse through the activities and tasks contained in OUM, you will see very few references to workshops. Consequently, I am often asked whether OUM supports a workshop-type approach. In general, OUM does not prescribe the manner in which tasks should be conducted, as many factors, such as culture, availability of resources, and the potential travel cost of attendees, can influence whether a workshop is appropriate in a given situation. Although workshops are not typically called out in OUM, OUM encourages the project manager to group the OUM tasks in a way that makes sense for the project. OUM considers a workshop to be a technique that can be applied to any OUM task or group of tasks. If a workshop is conducted, it is important to identify the OUM tasks that are executed during the workshop. For example, a "Requirements Gathering Workshop" is quite likely to include the tasks Gathering Business Requirements, Gathering Solution Requirements and perhaps Specifying Key Structure Definitions. Not only is a workshop approach to conducting the OUM tasks perfectly acceptable, OUM provides in-depth guidance on how to maximize the value of your workshops. I strongly encourage you to read the "Workshop Techniques Handbook" included in the OUM Manage Focus Area under Method Resources. The Workshop Techniques Handbook provides valuable information on a variety of workshop approaches and discusses the circumstances in which each type of workshop is most effective. Furthermore, it provides detailed information on how to structure a workshop and tips on facilitating it. You will find guidance on some popular workshop techniques such as brainstorming, setting objectives, prioritizing, and other more specialized techniques such as Value Chain Analysis, SWOT analysis, the Delphi Technique and much more. Workshops can and should be applied to any type of OUM project, whether that project falls within the Envision, Manage or Implement focus area. If you typically employ workshops to gather information, walk through a business process, develop a roadmap or validate your understanding with the client, by all means continue utilizing them to conduct the OUM tasks during your project, but first take the time to review the Workshop Techniques Handbook to refresh your knowledge and hone your skills.

    Read the article

  • What is up with the Joy of Clojure 2nd edition?

    - by kurofune
    Manning just released the second edition of the beloved Joy of Clojure book, and while I share that love, I get the feeling that many of the examples are already outdated. In particular, in the chapter on optimization the recommended type-hinting seems not to be accepted by the compiler; I don't know if this was allowed in older versions of Clojure. For example:

        (defn factorial-f [^long original-x]
          (loop [x original-x, acc 1]
            (if (>= 1 x)
              acc
              (recur (dec x) (*' x acc)))))

    returns:

        clojure.lang.Compiler$CompilerException: java.lang.UnsupportedOperationException:
        Can't type hint a primitive local, compiling:(null:3:1)

    Likewise, the chapter on core.logic seems to be using an old API, and I have to find workarounds for each example to accommodate the recent changes. For example, I had to turn this:

        (logic/defrel orbits orbital body)
        (logic/fact orbits :mercury :sun)
        (logic/fact orbits :venus :sun)
        (logic/fact orbits :earth :sun)
        (logic/fact orbits :mars :sun)
        (logic/fact orbits :jupiter :sun)
        (logic/fact orbits :saturn :sun)
        (logic/fact orbits :uranus :sun)
        (logic/fact orbits :neptune :sun)

        (logic/run* [q]
          (logic/fresh [orbital body]
            (orbits orbital body)
            (logic/== q orbital)))

    into this, leveraging the pldb lib:

        (pldb/db-rel orbits orbital body)

        (def facts
          (pldb/db
            [orbits :mercury :sun]
            [orbits :venus :sun]
            [orbits :earth :sun]
            [orbits :mars :sun]
            [orbits :jupiter :sun]
            [orbits :saturn :sun]
            [orbits :uranus :sun]
            [orbits :neptune :sun]))

        (pldb/with-db facts
          (logic/run* [q]
            (logic/fresh [orbital body]
              (orbits orbital body)
              (logic/== q orbital))))

    I am still pulling teeth to get the later examples to work. I am relatively new to programming myself, so I wonder: am I naively overlooking something here, or are these legitimate concerns? I really want to get good at this stuff, like type-hinting and core.logic, but I want to make sure I am studying up-to-date materials. Any illuminating facts to help clear up my confusion would be most welcome.
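    For the type-hint error specifically, one commonly suggested variant (an assumption on my part, and version-dependent; it is not from the book) is to cast the loop local with (long ...) instead of relying on a hint:

        ;; Hypothetical variant: cast the loop initializer so the local is a primitive long.
        (defn factorial-f [original-x]
          (loop [x (long original-x), acc 1]
            (if (>= 1 x)
              acc
              (recur (dec x) (*' x acc)))))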

    Read the article

  • Scheduled Deprecation of Legacy Obligation Features

    - by Wes Curtis
    The Obligation object in ETPM includes some functionality and tables that, to our knowledge, are not being used by customers and implementers at this time. Removing this logic and the related tables should benefit the performance of, and simplify the logic executed during, Obligation maintenance processing. The Release Notes included with ETPM v2.3.1 announced that the product plans to deprecate the functionality on Obligation for Contract Terms, Contract Quantities, Tax Exemptions, Terms & Conditions and Obligation Type Start Options. Our plan is to remove this functionality in the next release of ETPM. We have already confirmed with most project teams that these features are not being used, so the deprecation should have no impact on existing designs or processes. If you think your project may be impacted by this deprecation, please review any Business Object that has been created for the Obligation maintenance object to make sure that no elements are being defined for any of the following child tables:
    - CI_SA_CONTERM
    - CI_SA_CONT_QTY
    - CI_TOU_CONT_VAL
    - CI_SA_TC
    As part of this deprecation, the following administrative tables are being removed along with their related metadata:
    - Contract Quantity Type
    - Tax Exempt Type
    - Terms and Conditions
    Please contact me or the Oracle Tax Product Management team if your implementation has actually used these objects in its designs. We can discuss options to mitigate the impacts of this planned deprecation. We will continue to announce planned deprecations in the Release Notes for each release and will contact project teams ahead of time to confirm that these deprecations will have little to no impact on our customers.

    Read the article

  • How to build MVC Views that work with polymorphic domain model design?

    - by Johann de Swardt
    This is more of a "how would you do it" type of question. The application I'm working on is an ASP.NET MVC4 app using Razor syntax. I've got a nice domain model which has a few polymorphic classes, awesome to work with in the code, but I have a few questions regarding the MVC front-end. Views are easy to build for normal classes, but when it comes to the polymorphic ones I'm stuck on deciding how to implement them. One (ugly) option is to build a page which handles the base type (e.g. IContract) and has a bunch of if statements to check whether we passed in an IServiceContract or an ISupplyContract instance. Not pretty and very nasty to maintain. The other option is to build a view for each of the IContract child classes, breaking DRY principles completely. I don't like doing this, for obvious reasons. Another option (also not great) is to split the view into chunks with partials, build partial views for each of the child types that are loaded into the main view for the base type, and decide whether to show or hide each partial in a single if statement in the partial. Also messy. I've also been thinking about building a master page with sections for the fields that only occur in subclasses, and building views for each subclass that reference the master page. This looks like the least problematic solution? It would allow for fairly simple maintenance and it doesn't involve code duplication. What are your thoughts? Am I missing something obvious that will make our lives easier? Suggestions?
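    One possible dispatch sketch (an assumption on my part, not the poster's code: it presumes one partial view per concrete type, named after the type):

        @* Views/Contracts/Edit.cshtml: the single view for any IContract *@
        @model IContract

        <h2>Contract</h2>
        @* ...fields shared by every contract go here... *@

        @* Dispatch on the runtime type: renders _ServiceContract.cshtml,
           _SupplyContract.cshtml, etc. (the partial naming convention is illustrative) *@
        @Html.Partial("_" + Model.GetType().Name, Model)

    Each partial then binds to its concrete type, so subclass-specific fields live in exactly one place and the base view stays free of if chains.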

    Read the article

  • Why does void in C mean not void?

    - by Naftuli Tzvi Kay
    In strongly-typed languages like Java and C#, void (or Void) as a return type for a method seems to mean: This method doesn't return anything. Nothing. No return. You will not receive anything from this method. What's really strange is that in C, void as a return type or even as a method parameter type means: It could really be anything. You'd have to read the source code to find out. Good luck. If it's a pointer, you should really know what you're doing. Consider the following examples in C:

        void describe(void *thing) {
            Object *obj = thing;
            printf("%s.\n", obj->description);
        }

        void *move(void *location, Direction direction) {
            void *next = NULL;
            // logic!
            return next;
        }

    Obviously, the second method returns a pointer, which by definition could be anything. Since C is older than Java and C#, why did these languages adopt void as meaning "nothing" while C used it as "nothing or anything (when a pointer)"?

    Read the article

  • Semantic Form Markup for Yes or No Questions - Or Should I Tell my Designers to Bugger Off?

    - by sholsinger
    I frequently receive mock-ups of HTML forms with the following prototype:

        Some long winded yes or no question?   (o) Yes   ( ) No

    The (o) and ( ) in this prototype represent radio buttons. My personal view is that if the question has only a true or false value then it should be a check box. That said, I have seen this sort of "layout" from almost every designer I've ever worked with. If I were not to question their decision, or question the client's decision, I'd probably mark it up like this:

        <p class="pseudo_label">Some long winded yes or no question?</p>
        <input type="radio" name="the_question" id="the_question_yes" value="1">
        <label for="the_question_yes" class="after_radio">Yes</label>
        <input type="radio" name="the_question" id="the_question_no" value="0">
        <label for="the_question_no" class="after_radio">No</label>

    I really don't want to do that. I want to push back and convince them that this should really be a check box and not two radio buttons. But my question is: if I can't convince them (you're welcome to help me try), how should I code that original design requirement such that it is semantic and at least understandable for screen reader users? If I were able to convince my tormentors to change their minds, I would likely code it in the following fashion:

        <label for="the_question">Some long winded yes or no question?</label>
        <input type="checkbox" name="the_question" id="the_question" value="1">

    What do you think about this issue? Should I push back? Possibly more importantly, is either way semantically correct?

    Read the article

  • Connecting to freenx server: configuration error

    - by Sandeep
    I am not able to pinpoint what is missing. I have configured the freenx server (useradd, passwd, etc.). However, the server drops the connection after authentication. Please note my server is Ubuntu 10.04 and the client is a recent version. Below is the error log:

        NX> 203 NXSSH running with pid: 6016
        NX> 285 Enabling check on switch command
        NX> 285 Enabling skip of SSH config files
        NX> 285 Setting the preferred NX options
        NX> 200 Connected to address: 192.168.2.2 on port: 22
        NX> 202 Authenticating user: nx
        NX> 208 Using auth method: publickey
        HELLO NXSERVER - Version 3.2.0-74-SVN OS (GPL, using backend: 3.5.0)
        NX> 105 hello NXCLIENT - Version 3.2.0
        NX> 134 Accepted protocol: 3.2.0
        NX> 105 SET SHELL_MODE SHELL
        NX> 105 SET AUTH_MODE PASSWORD
        NX> 105 login
        NX> 101 User: sandeep
        NX> 102 Password:
        NX> 103 Welcome to: ubuntu user: sandeep
        NX> 105 listsession --user="sandeep" --status="suspended,running" --geometry="1366x768x24+render" --type="unix-gnome"
        NX> 127 Sessions list of user 'sandeep' for reconnect:

        Display Type             Session ID                       Options  Depth Screen         Status      Session Name
        ------- ---------------- -------------------------------- -------- ----- -------------- ----------- ------------------------------

        NX> 148 Server capacity: not reached for user: sandeep
        NX> 105 startsession --link="lan" --backingstore="1" --encryption="1" --cache="16M" --images="64M" --shmem="1" --shpix="1" --strict="0" --composite="1" --media="0" --session="t" --type="unix-gnome" --geometry="1366x768" --client="linux" --keyboard="pc105/us" --screeninfo="1366x768x24+render"

        ssh_exchange_identification: Connection closed by remote host
        NX> 280 Exiting on signal: 15

    Read the article

  • SQL Server Interview Questions

    - by Rodney Vinyard
    User-Defined Functions
    Scalar User-Defined Function
    A scalar user-defined function returns one of the scalar data types. Text, ntext, image and timestamp data types are not supported. These are the type of user-defined functions that most developers are used to in other programming languages.
    Inline Table-Valued User-Defined Function
    An inline table-valued user-defined function returns a table data type and is an exceptional alternative to a view, as the user-defined function can pass parameters into a T-SQL SELECT command and, in essence, provide us with a parameterized, non-updateable view of the underlying tables.
    Multi-Statement Table-Valued User-Defined Function
    A multi-statement table-valued user-defined function returns a table and is also an exceptional alternative to a view, as the function can support multiple T-SQL statements to build the final result, where the view is limited to a single SELECT statement. Also, the ability to pass parameters into a T-SQL SELECT command, or a group of them, gives us the capability to, in essence, create a parameterized, non-updateable view of the data in the underlying tables. Within the CREATE FUNCTION command you must define the table structure that is being returned. After creating this type of user-defined function, I can use it in the FROM clause of a T-SQL command, unlike the behavior found when using a stored procedure, which can also return record sets.
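    A minimal sketch of the inline variety (the table, column and function names are hypothetical):

        -- Inline table-valued function: a parameterized, non-updateable "view"
        CREATE FUNCTION dbo.fn_OrdersByCustomer (@CustomerId INT)
        RETURNS TABLE
        AS
        RETURN
        (
            SELECT OrderId, OrderDate, TotalDue
            FROM dbo.Orders
            WHERE CustomerId = @CustomerId
        );
        GO

        -- Usable directly in a FROM clause, unlike a stored procedure:
        SELECT * FROM dbo.fn_OrdersByCustomer(42);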

    Read the article

  • Ubuntu 12.04 problem with Huawei E160 - device not detected and system freezes [migrated]

    - by Matt
    I have just installed 12.04 and plugged in an E160, and nothing happened: the modem doesn't mount. I have found this solution:

    Ubuntu does not mount some Huawei devices due to bugs, problems etc. See if these work:

    1st option: Connect the USB modem. After 10 seconds, type this in a terminal window:

        lsusb

    The output will be like this:

        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 004: ID 12d1:140b Huawei Technologies Co., Ltd.
        Bus 004 Device 002: ID 413c:3016 Dell Computer Corp. Optical 5-Button Wheel Mouse
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 005: ID 0b97:7762 O2 Micro, Inc. Oz776 SmartCard Reader
        Bus 002 Device 004: ID 413c:8103 Dell Computer Corp. Wireless 350 Bluetooth
        Bus 002 Device 003: ID 0b97:7761 O2 Micro, Inc. Oz776 1.1 Hub
        Bus 002 Device 002: ID 413c:a005 Dell Computer Corp. Internal 2.0 Hub
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    The device is a Huawei modem, so let's look at the output. The relevant entry is:

        Bus 004 Device 004: ID 12d1:140b Huawei Technologies Co., Ltd

    Hence, you must type:

        sudo modprobe usbserial vendor=0x12d1 product=0x140b

    2nd option: Download the usb-modeswitch and usb-modeswitch-data packages from packages.ubuntu.com. Install them through the command:

        sudo dpkg -i usb-modeswitch*.deb

    3rd option: Try a combination of both.

    I tried these, but with no result: the modem is still not detected. I've tried to add a new connection, but the system can't see my device in the setup dialogue. I have also noticed that when I open e.g. a terminal and try to type something, the system freezes for a while. Thanks for help!

    Read the article

  • How do I connect to my running VM via virsh?

    - by Avery Chan
    My VM has already been started via virsh start chameleon.ootbdev. When I do a virsh console chameleon.ootbdev I get the following output:

        Connected to domain chameleon.ootbdev
        Escape character is ^]
        error: internal error cannot find character device (null)

    Doing a Google search on this led me to this "solution". Unfortunately, editing the domain via virsh edit chameleon.ootbdev doesn't seem to stick. I suspect the issue is that I'm inserting the XML incorrectly; the instructions from the link ask me to insert the following XML into the domain XML file:

        <serial type='pty'>
          <target port='0'/>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>

    I've posted my domain XML file to pastebin here. This is AFTER I've tried to insert the above XML; I inserted it after the </devices> block. My primary question is: how do I connect to the running VM? A secondary question would be: how do I edit the domain file with the above XML and get the changes to stick?
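    For what it's worth, libvirt expects <serial> and <console> elements inside the <devices> element rather than after it, which may be why the edit does not stick; a hedged sketch of the intended placement (surrounding elements elided):

        <domain type='kvm'>
          ...
          <devices>
            ...
            <serial type='pty'>
              <target port='0'/>
            </serial>
            <console type='pty'>
              <target type='serial' port='0'/>
            </console>
          </devices>
        </domain>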

    Read the article

  • Hidden web standards behind Google "custom searchEngines"?

    - by Hoàng Long
    Today while playing with the Google Chrome omnibox, I noticed a strange behavior. I guess there's some "hidden" web standard behind it, but I can't figure it out. Here's how to reproduce:
    1. Go to http://edition.cnn.com/
    2. Use the search function at the upper right corner; search for a random keyword, for example "abc".
    3. Close the tabs.
    4. Open a new tab, type until Chrome reminds you about http://edition.cnn.com/, then press Tab.
    The omnibox now shows "Search CNN.com"! And when you type "abc" and press Enter, it uses the CNN search function to do the job, not Google! I also tried this on several different sites. On some it won't work, but on some sites, like CNN or vnexpress.net, it works after I use the search function of that site once. I also learnt about chrome://settings/searchEngines (type it in your Chrome box and you will see), and learnt that you can add a custom search engine in Chrome. But the question is: why can Chrome work out the search URL automatically for some pages and not others? It's not because some sites subscribe to a Google service, because I can do the same with my site (http://ledohoanglong.wordpress.com), and I'm sure that there's no subscription. So I guess there's a method to "expose" the search function of a site, so that Google Chrome can catch it (after I call the search function of that site once, of course). Does anyone know how it works behind the scenes?
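    For reference, the standard most likely at work here is OpenSearch: a site can advertise its search endpoint with a description document that browsers discover via a <link> tag (Chrome also learns an engine simply by watching a plain GET search form being submitted, which would explain the "works after I search once" behavior). A hedged sketch with illustrative URLs:

        <!-- In the site's <head>: -->
        <link rel="search" type="application/opensearchdescription+xml"
              title="Example Search" href="/opensearch.xml">

        <!-- /opensearch.xml -->
        <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
          <ShortName>Example Search</ShortName>
          <Description>Search example.com</Description>
          <Url type="text/html" template="http://example.com/search?q={searchTerms}"/>
        </OpenSearchDescription>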

    Read the article

  • Using the OAM Mobile & Social SDK to secure native mobile apps - Part 2 : OAM Mobile & Social Server configuration

    - by kanishkmahajan
    Objective
    In the second part of this blog post I'll now cover the configuration of OAM to secure our sample native apps developed using the iOS SDK. First, here are some key server-side concepts:
    - Application Profiles: An application profile is a logical representation of your application within the OAM server. It could be a web (HTML/JavaScript) or native (iOS or Android) application. Applications may have different requirements for AuthN/AuthZ, and therefore each application that interacts with the OAM Mobile & Social REST services must be uniquely defined.
    - Service Providers: Service providers represent the back-end services that are accessed by applications. With OAM Mobile & Social these services are in the areas of authentication, authorization and user profile access. A service provider defines a type or class of service for authentication, authorization or user profiles. For example, the JWTAuthentication provider performs authentication and returns JWTs (JSON Web Tokens) to the application. In contrast, the OAMAuthentication provider also performs authentication but uses OAM SSO tokens.
    - Service Profiles: A service profile is a logical envelope that defines a service endpoint URL for a service provider for the OAM Mobile & Social service. You can create multiple service profiles for a service provider to define token capabilities and service endpoints. Each service provider instance requires at least one corresponding service profile. The OAM Mobile & Social service includes a pre-configured service profile for each pre-configured service provider.
    - Service Domains: Service domains bind together application profiles and service profiles with an optional security handler.
    So now let's configure the OAM server. Additional details are in the OAM documentation; this post simply provides an outline of the configuration tasks required to configure OAM for securing native apps.
    Configuration
    Create the Application Profile
    Log on to the Oracle Access Management console and, from System Configuration -> Mobile and Social -> Mobile Services, select "Create" under Application Profiles. You would do this step twice, once for each of the native apps (AvitekInventory and AvitekScheduler). Enter the parameters for the new application profile:
    - Name: The application name. In this example we use 'InventoryApp' for the AvitekInventory app and 'SchedulerApp' for the AvitekScheduler app. The application name configured here must match the application name in the settings of the deployed iOS application.
    - BaseSecret: Enter a password here. This does not need to match any existing password. It is used as an encryption key between the client and the OAM server.
    - Mobile Configuration: Enable this checkbox for any mobile application. This enables the SDK to collect and send mobile-specific attributes to the OAM server.
    - Webview: Controls the type of browser that the iOS application will use. The embedded browser (default) renders the browser within the application; External uses the system's standalone browser. External can sometimes be preferable for debugging.
    - URLScheme: The URL scheme associated with the iOS apps, which is also used as a custom URL scheme to register O/S handlers that will take control when OAM transfers control to the device. For the AvitekInventory and AvitekScheduler apps I used osa:// and client:// respectively. You set this scheme in Xcode, while developing your iOS apps, under Info -> URL Types.
    - Bundle Identifier: The fully qualified name of your iOS application.
    You typically set this when you create a new Xcode project, or under General -> Identity in Xcode. For the AvitekInventory and AvitekScheduler apps these were com.us.oracle.AvitekInventory and com.us.oracle.AvitekScheduler respectively.
    Create the Service Domain
    Select Create under Service Domains. Create a name for your domain (AvitekDomain is what I've used). The name configured must match the service domain set in the iOS application settings. Under "Application Profile Selection" click the Browse button and choose the application profiles that you created in the previous step, one by one. Set InventoryApp as the SSO agent (with an automatic priority of 1) and SchedulerApp as the SSO client. This associates these applications with this service domain and configures them in a 'circle of trust'. Advance to the next page of the wizard to configure the services for this domain. For this example we will use the following services:
    - Authentication: This will use the JWT (JSON Web Token) format authentication provider. Upon successful authentication, the iOS application will receive a signed JWT token from the OAM Mobile & Social service. This token will be used in subsequent calls to OAM. Use 'MobileOAMAuthentication' here.
    - Authorization: The authorization provider. The SDK makes calls to this provider endpoint to obtain authorization decisions on resource requests. Use 'OAMAuthorization' here.
    - User Profile Service: The service that provides user profile operations (attribute lookup, attribute modification). It can be any directory configured as a data source in OAM.
    And that's it! We're done configuring our native apps. In the next sections, let's look at some additional features mentioned in the earlier post that the SDK automates for the app developer; these are areas that require no additional coding when developing with the SDK, only server-side configuration.
    Additional Configuration
    Offline Authentication
    Select this option in the service domain configuration to allow users to log in and authenticate to the application locally. Clear the box to block users from authenticating locally.
    Strong Authentication
    By simply selecting the OAAMSecurityHandlerPlugin while configuring mobile-related service domains, the OAM Mobile & Social service allows sophisticated device and client application registration logic, as well as the advanced risk and fraud analysis logic found in OAAM, to be applied to mobile authentication. Let's look at some scenarios where the OAAMSecurityHandlerPlugin gets used. First, when we configure OAM and OAAM to integrate using the TAP scheme, that integration kicks off by selecting the OAAMSecurityHandlerPlugin in the mobile service domain. This is how the mobile device is prompted for KBA, OTP, etc., depending on the TAP scheme integration and the OAM users registered in the OAAM database. Second, when we configured the service domain, there were claim attributes there that are already pre-configured in the OAM Mobile & Social service, and we simply accepted the default values; these are the set of attributes that will be fetched from the device and passed to the server during registration/authentication as device profile attributes. When a mobile application requests a token through the Mobile Client SDK, the SDK logic sends the device profile attributes as part of an HTTP request.
    This set of device profile attributes enhances security by creating an audit trail for devices that assists device identification. When the OAAM security plug-in is used, a particular combination of device profile attribute values is treated as a device fingerprint, known as the Digital Fingerprint in the OAAM Administration Console. Each fingerprint is assigned a unique fingerprint number, and each OAAM session is associated with a fingerprint; the fingerprint makes it possible to log (and audit) the devices that are performing authentication and token acquisition. Finally, if the jailbroken option is selected while configuring an application profile, the SDK detects whether a device is jailbroken based on configured policy; if the OAAM handler is configured, the plug-in can allow or block access from the client device depending on OAAM policy, as well as detect blacklisted, lost or stolen devices and send a wipeout command that deletes all the Mobile & Social-relevant data and blocks the device from future access.
    Social Logins
    Finally, let's complete this post by adding the configuration for social logins for mobile applications. Although the Avitek sample apps do not demonstrate social logins, this would be an ideal exercise for you based on the sample code provided in the earlier post. I'll cover the server-side configuration here (with Facebook as an example); you can retrofit the code to accommodate social logins by following the steps outlined in "Invoking Authentication Services", adding code in LoginViewController, and maybe creating a new delegate, AvitekRPDelegate, based on the description in the previous post. So here all you will need to do is configure an application profile for social login, configure a new service domain that uses the social login application profile, register the app on Facebook, and finally configure the Facebook OAuth provider in OAM with those settings. Navigate to Mobile and Social, click on "Internet Identity Services" and create a new application profile. Here are the relevant parameters for the new application profile (note that we are not registering the social user in OAM with the configuration below, though that is a key feature as well):
    - Name: The application name. This must match the name of the mobile application profile created for your application under Mobile Services. We used InventoryApp for this example.
    - SharedSecret: Enter a password here. This does not need to match any existing password. It is used as an encryption key between the client and the OAM Mobile and Social service.
    - Mobile Application Return URL: After the relying-party (social) login, the OAM Mobile & Social service will redirect to the iOS application using this URI. This is defined under Info -> URL Types; we used 'osa', so we define this here as 'osa://'.
    - Login Type: Choose to allow only internet identity authentication for this exercise.
    - Authentication Service Endpoint: Make sure that /internetidentityauthentication is selected.
    Log in to http://developers.facebook.com using your Facebook account, click on Apps, and register the app as InventoryApp. Note that the consumer key and API secret get generated automatically by the Facebook OAuth server. Navigate back to OAM and, under Mobile and Social, click on "Internet Identity Services" and edit the Facebook OAuth provider. Add the consumer key and API secret from the Facebook developers site to the Facebook OAuth provider. Then navigate to Mobile Services and click on New to create a new service domain.
    In this example we call the domain "AvitekDomainRP". The type should be 'Mobile Application' and the application credential type 'User Token'. Add the application "InventoryApp" to the domain. Advance to the next page of the wizard. Select the default service profiles, but ensure that the Authentication Service is set to 'InternetIdentityAuthentication'. Finish the creation of the service domain.

    Read the article

  • NetworkManager broken after upgrade to Kubuntu Saucy

    - by queueoverflow
    I had Kubuntu 13.04 on my ThinkPad X220, I upgraded to 13.10, and now I am not able to connect to a wired or wireless network. The new network tray icon does not show any entries at all. In the menu of the tray icon there is an error saying:

        Require NetworkManager 0.9.8, found .

    I then tried the following:

        nmcli con

        ** (process:3695): WARNING **: Could not initialize NMClient /org/freedesktop/NetworkManager: Rejected send message, 3 matched rules; type="method_call", sender=":1.64" (uid=1000 pid=3695 comm="nmcli con ") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination="org.freedesktop.NetworkManager" (uid=0 pid=1116 comm="NetworkManager ")
        Error: nmcli (0.9.8.0) and NetworkManager (unknown) versions don't match. Force execution using --nocheck, but the results are unpredictable.

        nmcli dev

        ** (process:3700): WARNING **: Could not initialize NMClient /org/freedesktop/NetworkManager: Rejected send message, 3 matched rules; type="method_call", sender=":1.65" (uid=1000 pid=3700 comm="nmcli dev ") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination="org.freedesktop.NetworkManager" (uid=0 pid=1116 comm="NetworkManager ")
        Error: nmcli (0.9.8.0) and NetworkManager (unknown) versions don't match. Force execution using --nocheck, but the results are unpredictable.

        nm-tool

        ** (process:3705): WARNING **: Could not initialize NMClient /org/freedesktop/NetworkManager: Rejected send message, 3 matched rules; type="method_call", sender=":1.66" (uid=1000 pid=3705 comm="nm-tool ") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination="org.freedesktop.NetworkManager" (uid=0 pid=1116 comm="NetworkManager ")

        NetworkManager Tool

        State: unknown

        ** (process:3705): WARNING **: error: could not connect to NetworkManager

    Running those as root works, however. I was also able to run nmcli con up id DHCP, which got my DHCP connection working and gave me internet access. That did not work over Wifi, though, and I do need Wifi. How can I get networking back to work without a reinstall?

    Read the article

  • Pointer position way off in Java application menus when using gnome-shell

    - by Hailwood
    When using any Java application in gnome-shell, if the window is maximised the pointer position is way off, but only in the menus; in the editor and the side panel the pointer is fine. This only presents itself when the window is maximized, and it seems that the further away from 0x0 the window is when you maximise it, the bigger the pointer offset. From what I have gathered, it has to do with the window not updating its size when it gets maximised. The other issue is that when a gnome-shell notification appears and I click on it, I lose the ability to type in the editor: I can select text, etc., but can't give it focus to type. I must bring up some other text input (e.g. right-click on a file on the left and select Rename, which brings up a rename dialog); after that I can type in the editor again. So, how can I fix this? Below is as much information as I can think to provide.

        $ gnome-shell --version
        GNOME Shell 3.6.1
        $ java -version
        java version "1.7.0_09"
        Java(TM) SE Runtime Environment (build 1.7.0_09-b05)
        Java HotSpot(TM) 64-Bit Server VM (build 23.5-b02, mixed mode)
        $ file /etc/alternatives/java /etc/alternatives/javac
        /etc/alternatives/java: symbolic link to '/usr/lib/jvm/java-7-oracle/jre/bin/java'
        /etc/alternatives/javac: symbolic link to '/usr/lib/jvm/java-7-oracle/bin/javac'
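    A hedged suggestion (not from the post): AWT has a long history of misbehaving under window managers it does not recognize, and a commonly suggested workaround is to declare the WM non-reparenting before launching the application; whether it helps here is an assumption:

        # Commonly suggested AWT workaround for non-reparenting window managers
        export _JAVA_AWT_WM_NONREPARENTING=1
        your-java-app &    # "your-java-app" is a placeholder for the affected application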

    Read the article

  • Bluetooth adapter went from working fine to unrecognized

    - by easoncxz
    I had been using Bluetooth fine, with devices working, but today when I turned on my computer Bluetooth strangely failed. There is a bluetooth icon on the top bar showing "bluetooth on", but if I click on the "bluetooth settings" item, a System Settings window shows up and shows me a bluetooth on-off switch which is disabled (i.e. fixed to off). More information about my case: I am a new Linux user, coming from Windows, and do not know the supposedly obvious commands. I am using a laptop. It initially didn't have Bluetooth; I bought a built-in type (instead of USB type) bluetooth module and added it inside the laptop. Hence, I do not have a specific Fn+* key for Bluetooth. On Windows, I needed to install an additional driver that was intended for other machines in my laptop's series which have factory built-in bluetooth modules. The Fn+* key seemed to only affect wifi under Ubuntu. I have been successfully using a Magic Mouse with my later-added built-in bluetooth module/adapter on both Windows and Ubuntu. I had been trying to tweak the Magic Mouse scrolling speed with commands like rmmod hid_magicmouse and then modprobe hid_magicmouse scroll_speed=45 scroll_acceleration=30 or something similar, and then added a file /etc/modprobe.d/magicmouse.conf. The mouse seemed to be working fine with these changes. Now if I run commands like hcitool dev, the shell tells me that I do not have any "Devices" or "adapters". I seem to have bluez installed, because when I type "blue" and tab-autocomplete, a bunch of commands like bluez-test-device pop up.
    -- update --
    Some commands and their results:

        easoncxz@eason-Aspire-4741-ubuntu:/etc$ hcitool dev
        Devices:
        easoncxz@eason-Aspire-4741-ubuntu:/etc$ hcitool scan
        Device is not available: No such device
        easoncxz@eason-Aspire-4741-ubuntu:/etc$ rfkill list
        0: phy0: Wireless LAN
                Soft blocked: no
                Hard blocked: no
        1: acer-wireless: Wireless LAN
                Soft blocked: no
                Hard blocked: no
        2: acer-bluetooth: Bluetooth
                Soft blocked: no
                Hard blocked: no
        easoncxz@eason-Aspire-4741-ubuntu:/etc$ rfkill list
        0: phy0: Wireless LAN
                Soft blocked: yes
                Hard blocked: no
        1: acer-wireless: Wireless LAN
                Soft blocked: yes
                Hard blocked: no
        2: acer-bluetooth: Bluetooth
                Soft blocked: yes
                Hard blocked: no
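    One hedged observation (mine, not the poster's): the second rfkill listing above shows everything soft-blocked, so clearing that is worth trying before digging further:

        # The second `rfkill list` shows "Soft blocked: yes" on every device; try unblocking:
        sudo rfkill unblock all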

    Read the article

  • lower-case 'c' key not working in bash

    - by gavin
    This is a bit of a strange one. I'm running Ubuntu 12.04. It's been working well, but today I ran into a hell of a strange phenomenon: I can no longer type a lower-case 'c' in bash. At first I thought it was a misconfiguration of the gnome terminal, but I tried both a stock xterm and the console directly (Ctrl+Alt+F1), and the issue was the same. I can type an upper-case 'C' without any difficulty, and I can type a lower-case 'c' in any other terminal-based program (vim, bash, less, etc.). The lower-case 'c' also works if I jump into plain old sh. I have looked at all the configuration files I know of and haven't found anything incriminating in there. I suspect it's not going to be that simple anyway, because if I run bash with the --norc option from within sh, the problem remains. I don't know what else to check. In fact, if I wanted to cause this problem on a given machine, I have no idea how it could be done. Total mystery.
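    One hedged diagnostic (my suggestion, not from the post): bash reads keys through readline, so checking what plain 'c' is bound to can expose a stray binding or macro:

        # Show readline's current binding for the plain "c" key (run inside the affected bash)
        bind -p | grep '"c"'
        # A healthy binding looks like: "c": self-insert

        # Also check for macros and for inputrc files readline may have loaded:
        bind -s
        ls -l ~/.inputrc /etc/inputrc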

    Read the article
