Search Results

Search found 2523 results on 101 pages for 'communication'.


  • JSR 360 and JSR 361: A Big Leap for Java ME 8

    - by terrencebarr
    It might have gone unnoticed to some, but Java ME took a big leap forward a couple of weeks ago with the filing of two new JSRs: JSR 360: "Connected Limited Device Configuration 8" (aka CLDC 8) and JSR 361: "Java ME Embedded Profile" (aka ME EP). Together, these two JSRs will significantly update, enhance, and modernize the Java ME platform, and specifically small embedded Java, with a host of new features and functionality.
    JSR 360 – Connected Limited Device Configuration 8
    CLDC 8 is based on JSR 139 (CLDC 1.1) and updates the core Java ME VM, language support, libraries, and features to be aligned with Java SE 8. This will include:
    - VM updated to comply with the JVM language specification version 2
    - Support for SE 7/8 language features like Generics, Assertions, Annotations, Try-with-Resources, and more
    - New libraries such as Collections, NIO subset, Logging API subset
    - A consolidated and enhanced Generic Connection Framework for multi-protocol I/O
    With CLDC 8, Java ME and Java SE are entering their next phase of alignment – making Java the only technology today that truly scales application development, code re-use, and tooling across the whole range of IT platforms, from small embedded to large enterprise.
    JSR 361 – Java ME Embedded Profile
    ME EP is based on JSR 228 (IMP-NG) and updates the specification in key areas to provide a powerful and flexible application environment for small embedded Java platforms, building on the features of CLDC 8:
    - A new, lightweight component and services model
    - Shared libraries
    - Multi-application concurrency, inter-application communication, and event system
    - Application management
    - API optionality, to address low-footprint use cases
    With ME EP, application developers will have a modern application environment which allows development and deployment of modular, robust, sophisticated, and footprint-optimized solutions for a wide range of embedded use cases and devices.
    Summary
    While these JSRs are still under development, it's clear that there are exciting new times ahead for Java ME – turning into a serious application platform while maintaining the focus on resource-constrained devices to address the expected explosion of small, smart, and connected embedded platforms. To learn more, click on the above links for JSR 360 and JSR 361. Or review the JavaOne 2012 online presentations on the topic:
    CON11300: Expanding the reach of the Java ME Platform
    CON5943: Java ME 8 Service Platform
    And stay tuned for more in this space! Cheers, – Terrence
    Filed under: Mobile & Embedded Tagged: "jsr 360", "jsr 361", "me 8", embedded, Embedded Java, JCP
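    As a taste of what the CLDC 8 alignment described above could mean in practice, here is a small, purely hypothetical Java sketch (not taken from the JSRs – the APIs are still being defined) that combines the existing Generic Connection Framework's Connector with SE 7/8 language features such as generics and try-with-resources, assuming GCF connections become AutoCloseable in CLDC 8:

        import java.io.DataInputStream;
        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.List;
        import javax.microedition.io.Connector;
        import javax.microedition.io.SocketConnection;

        public class SensorPoller {
            // Hypothetical helper: reads one value from an embedded peer over the GCF.
            // Assumes CLDC 8 makes Connection AutoCloseable so try-with-resources works.
            public static List<String> poll(String host, int port) throws IOException {
                List<String> readings = new ArrayList<>();   // SE-style generics and diamond operator
                try (SocketConnection conn =
                         (SocketConnection) Connector.open("socket://" + host + ":" + port);
                     DataInputStream in = conn.openDataInputStream()) {
                    readings.add(in.readUTF());              // one reading per poll
                }                                            // both resources are closed here
                return readings;
            }
        }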

    Read the article

  • Oracle Tutor: Are Documented Policies and Procedures Necessary?

    - by emily.chorba(at)oracle.com
    People refer to policies and procedures with a variety of expressions including business process documentation, standard operating procedures (SOPs), department operating procedures (DOPs), work instructions, specifications, and so on. For our purpose here, policies and procedures mean a set of documents that describe an organization's policies (rules) for operation and the procedures (containing tasks performed by individuals) to fulfill the policies. When an organization documents policies and procedures properly, they can be the strategic link between an organization's vision and its daily operations. Policies and procedures are often necessary because of some external requirement, such as environmental compliance or other governmental regulations. One example of an external requirement would be the American Sarbanes-Oxley Act, requiring full openness in accounting practices. Here are a few other examples of business issues that necessitate writing policies and procedures:
    Operational needs -- policies and procedures ensure fundamental processes are performed in a consistent way that meets the organization's needs.
    Risk management -- policies and procedures are identified by the Committee of Sponsoring Organizations of the Treadway Commission (COSO) as a control activity needed to manage risk.
    Continuous improvement -- procedures can improve processes by building important internal communication practices.
    Compliance -- well-defined and documented processes (i.e. procedures, training materials) along with records that demonstrate process capability can demonstrate an effective internal control system compliant with regulations and standards.
    In addition to helping with the above business issues, policies and procedures can support the basic needs of employees and management. Well documented and easy to access policies and procedures:
    allow employees to understand their roles and responsibilities within predefined limits and to stay on the accepted path identified by the organization's management
    provide clarity to the reader when dealing with accountability issues or activities that are of critical importance
    allow management to guide operations without constant intervention
    allow managers to control events in advance and prevent employees from making costly mistakes
    Can you think of another way organizations can meet the above needs of management and their employees in place of documented policies and procedures? Probably not, but we would love your feedback on this question. And that, my friends, is why documented policies and procedures are very necessary.
    Learn more: For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum.
    Emily Chorba, Principal Product Manager, Oracle Tutor & BPM

    Read the article

  • Introduction to Lean Software Development and Kanban Systems

    - by Ben Griswold
    Last year I took myself through a crash course on Lean Software Development and Kanban Systems in preparation for an in-house presentation. I learned a bunch. In this series, I'll be sharing what I learned with you. If your career looks anything like mine, you have probably been affiliated with a company or two which pushed requirements gathering and documentation to the nth degree. To add insult to injury, they probably added planning process (documentation, requirements, policies, meetings, committees) to the extent that it possibly retarded any progress. In my opinion, the typical company resembles the quote from Tom DeMarco: It isn't enough just to do things right – we also had to say in advance exactly what we intended to do and then do exactly that. In the 1980s, Toyota turned the tables and revolutionized the automobile industry with their approach of "Lean Manufacturing." A massive paradigm shift hit factories throughout the US and Europe. Mass production and scientific management techniques from the early 1900s were questioned as Japanese manufacturing companies demonstrated that 'Just-in-Time' was a better paradigm. The widely adopted Japanese manufacturing concepts came to be known as 'lean production'. Lean Thinking capitalizes on the intelligence of frontline workers, believing that they are the ones who should determine and continually improve the way they do their jobs. Lean puts its main focus on people and communication – if the people who produce the software are respected and they communicate efficiently, it is more likely that they will deliver a good product and the final customer will be satisfied. In time, the abstractions behind lean production spread to logistics, and from there to the military, to construction, and to the service industry. As it turns out, principles of lean thinking are universal and have been applied successfully across many disciplines. Lean has been adopted by companies including Dell, FedEx, Lens Crafters, LLBean, SW Airlines, Digital River and eBay. Lean thinking got its name from a 1990s best seller called The Machine That Changed the World: The Story of Lean Production. This book chronicles the movement of automobile manufacturing from craft production to mass production to lean production. The names most associated with bringing lean thinking to software are the Poppendiecks – Tom and Mary Poppendieck, that is. Here's one of their books: Implementing Lean Software Development: From Concept to Cash. Our in-house presentations are supposed to run no more than 45 minutes. I really cranked and got through my 87 slides in just under an hour. Of course, I had to cheat a little – I only covered the 7 principles and a single practice. In the next part of the series, we'll dive into Principle #1: Eliminate Waste. And I am going to be a little obnoxious about listing my Lean and Kanban references with every series post. The references are great and they deserve this sort of attention.

    Read the article

  • How can I fix HDMI HDTV overscan when my TV has no aspect ratio setting?

    - by Colin Dean
    I have a 32" Vizio HDTV. It's a few years old, but running well. I just connected a new nettop to it using HDMI out. It's the Intel 3x00 graphics chipset. I'm seeing overscan, where the resolution in Ubuntu is set to 1280x720, but the TV itself is 1366x768. When I go into the Monitors control applet, I cannot change the resolution to anything other than the current or 640x480. A user had a similar overscan problem, but fixed the overscan by adjusting his TV's aspect ratio settings. I do not have that luxury. Is there a way I can do this without having to delve into xorg.conf or other command-line craziness? I'm more than comfortable doing so, but there must be a cleaner way. I'm running Ubuntu Natty, keeping up with updates and such. Here's the output of lspci:
    colin@bricktop:~$ lspci
    00:00.0 Host bridge: Intel Corporation Core Processor DRAM Controller (rev 12)
    00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 12)
    00:04.0 Signal processing controller: Intel Corporation Core Processor Thermal Management Controller (rev 12)
    00:16.0 Communication controller: Intel Corporation 5 Series/3400 Series Chipset HECI Controller (rev 06)
    00:1a.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06)
    00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 06)
    00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 06)
    00:1c.1 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 2 (rev 06)
    00:1c.2 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 3 (rev 06)
    00:1d.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06)
    00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev a6)
    00:1f.0 ISA bridge: Intel Corporation Mobile 5 Series Chipset LPC Interface Controller (rev 06)
    00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 4 port SATA AHCI Controller (rev 06)
    00:1f.3 SMBus: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller (rev 06)
    00:1f.6 Signal processing controller: Intel Corporation 5 Series/3400 Series Chipset Thermal Subsystem (rev 06)
    01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06)
    02:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03)
    03:00.0 Network controller: Atheros Communications Inc. AR9287 Wireless Network Adapter (PCI-Express) (rev 01)

    Read the article

  • Simple tips to design a Customer Journey Map

    - by Isabel F. Peñuelas
    "A model can abstract to a level that is comprehensible to humans, without getting lost in details." – The Unified Modeling Language Reference Manual.
    Inception using Post-it, Storyboard, Lego or Mindmapping Techniques
    The first step in a Customer Experience project is to describe customer interactions by creating a customer journey map. Modeling is never easy, so to succeed in this effort it is very convenient that your CX team has some "abstract thinking" skills. Besides, it is very helpful to consult a Business Service Design offered by an Interactive Agency to lead your inception process. Initially, you may start with a free discussion using post-it cards, storyboards, even Lego or any other brainstorming technique you like. This will help you to get your mind into the path followed by the customer to purchase your product or to consume any business service you actually offer to your customers, or plan to offer in the near future. (from www.servicedesigntools.org) Colorful mind maps are very useful to document and share meeting ideas. Some mind-mapping software providers such as ThinkBuzzan provide trial versions, and you will find more mindmapping options in this post by Mashable. Finally, to produce a quick one, I do recommend Wise, an entirely online mindmapping service. In my view the best results in terms of communication will always come from an artistic hand-made drawing.
    Customer Experience Mind Map Example
    Making your first Customer Journey Map
    To add some more formalization to your thoughts, there is a wide offering for designing Customer Journey Maps. A Customer Journey Map can be represented as an oriented graph in which each step is followed by another. The one below is the simplest Customer Journey you can draw: nothing more than a couple of pictures, numbers and lines to design the sequence of customer steps in the purchase process.
    Very simple Customer Journey for Social Mobile Shopping
    There are many more sophisticated Customer Journey templates available on the Web using a variety of styles, for example this one with a focus on underlining emotional experience, or this other worksheet template. Representing different interaction devices on the vertical axis, and touchpoints / requirements and existing gaps horizontally, is today's most common format for Customer Journeys.
    From Customer Journey Maps to CX Technology Adoption Plans
    Once you have your map ready, you can start to identify the IT infrastructure requirements for your CX project. By analyzing customer problems and improvement opportunities with maps, you will then identify the technology gaps and the new investment requirements in your IT infrastructure. Going step by step from the more abstract to the more concrete is the best guarantee of taking the right IT investment decisions. Remember to keep your initial customer journey map safe in your pocket in every one of your CX project meetings – that's your map to success!

    Read the article

  • ASP.Net or WPF(C#)?

    - by Rachel
    Our team is divided on this and I wanted to get some third-party opinions. We are building an application and cannot decide if we want to use a .NET WPF desktop application with a WCF server, or an ASP.NET web app using jQuery. I thought I'd ask the question here, with some specs, and see what the pros/cons of using either side would be. I have my own favorite and feel I am biased. Ideally we want to build the initial release of the software as fast as we can, then slow down and take time to build in the additional features/components we want later on. Above all we want the software to be fast. Users go through records all day long and delays in loading records or refreshing screens kill their productivity.
    Application Details:
    I'm estimating around 100 different screens for the initial version, with plans for a lot of additional screens being added on later after the initial release.
    We are looking to use two-way communication for reminder and event systems.
    Currently has to support around 100 users, although we've been told to allow for growth up to 500 users.
    We have multiple locations.
    Items to consider (maybe not initially in some cases but in future releases):
    Room for additional components to be added after the initial release (there are a lot of these... perhaps more work here than in the initial application)
    Keyboard navigation
    Performance is a must
    Production Speed to initial version
    Low maintenance overhead
    Future support
    Softphone/Scanner integration
    Our Developers:
    We have 1 programmer who has been learning WPF the past few months and was the one who suggested we use WPF for this. We have a 2nd programmer who is familiar with ASP.NET and who may help with the project in the future, although he will not be working on it much up until the initial release since his time is spent maintaining our current software. There is me, who has worked with both and am comfortable with either. We have an outside company doing the project management, and they are an ASP.NET company. We plan on hiring 1-2 others, however we need to know what direction we are going in first.
    Environment:
    General users are on a Windows 2003 server with Terminal Services. They connect using WYSE thin-clients over an RDP connection. Admin staff have their own PCs with XP or higher. Users are allowed to specify their own resolution although they are limited to using IE as the web browser. Other locations connect to our network over an MPLS connection.
    Based on that, what would you choose and why? I am asking here instead of SO because I am looking for opinions and not answers.

    Read the article

  • C# Domain-Driven Design Sample Released

    - by Artur Trosin
    In this post I want to announce that the NDDD Sample application(s) is released and share the work with you. You can access it here: http://code.google.com/p/ndddsample. NDDDSample, from a functionality perspective, matches DDDSample 1.1.0, which is based on Java and is a joint effort by Eric Evans' company Domain Language and the Swedish software consulting company Citerus. But because NDDDSample is based on .NET technologies, those two implementations could not be matched directly. However, concepts, practices, values, patterns – especially DDD – are cross-language and cross-platform :). Implementation of the .NET version of the application was an interesting journey because now, as a .NET developer, I better understand the differences, positive and negative, between these two platforms. Even though there are those differences, they can be overcome; in many cases it was not so hard to match a Java lib/framework with .NET during the implementation. Here is a list of the technology stack:
    1. .NET 3.5 - framework
    2. VS.NET 2008 - IDE
    3. ASP.NET MVC 2.0 - for administration and tracking UI
    4. WCF - communication mechanism
    5. NHibernate - ORM
    6. Rhino Commons - NHibernate session management, base classes for in-memory unit tests
    7. SQLite - database
    8. Windsor - inversion of control container
    9. Windsor WCF facility - for better integration with NHibernate
    10. MvcContrib - and in particular its Castle WindsorControllerFactory in order to enable IoC for controllers
    11. WPF - for incident logging application
    12. Moq - mocking lib used for unit tests
    13. NUnit - unit testing framework
    14. Log4net - logging framework
    15. Cloud based on Azure SDK
    These are not the latest technologies, tools and libs for the moment, but if someone thinks that it would be useful to migrate the sample to the latest current technologies and versions, please comment. The cloud version of the application is based on the Azure emulated environment provided by the SDK, so it hasn't been tested in a 'real' Azure scenario (we just do not have access to it). Thanks to the participants: Eugen Gorgan, who was involved directly in development; Ruslan Rusu and Victor Lungu, who spent their free time discussing .NET-specific decisions; and Eugen Navitaniuc, who helped with Java-related questions. Also, a big thank you to Cornel Cretu, who designed a nice logo and helped with some browser incompatibility issues. Any review and feedback are welcome! Thank you, Artur Trosin

    Read the article

  • Looking for best practice for version numbering of dependent software components

    - by bit-pirate
    We are trying to decide on a good way to do version numbering for software components which depend on each other. Let's be more specific: Software component A is a firmware running on an embedded device and component B is its respective driver for a normal PC (Linux/Windows machine). They are communicating with each other using a custom protocol. Since our product is also targeted at developers, we will offer stable and unstable (experimental) versions of both components (the firmware is closed-source, while the driver is open-source). Our biggest difficulty is how to handle API changes in the communication protocol. While we were implementing a compatibility check in the driver - it checks if the firmware version is compatible with the driver's version - we started to discuss multiple ways of version numbering. We came up with one solution, but we also felt like we were reinventing the wheel. That is why I'd like to get some feedback from the programmer/software developer community, since we think this is a common problem. So here is our solution: We plan to follow the widely used major.minor.patch version numbering and to use even/odd minor numbers for the stable/unstable versions. If we introduce changes in the API, we will increase the minor number. This convention will lead to the following example situation: Current stable branch is 1.2.1 and unstable is 1.3.7. Now a new patch for unstable changes the API, which will cause the new unstable version number to become 1.5.0. Once the unstable branch is considered stable, let's say in 1.5.3, we will release it as 1.4.0. I would be happy about an answer to any of the related questions below:
    Can you suggest a best practice for handling the issues described above?
    Do you think our "custom" convention is good?
    What changes would you apply to the described convention?
    Thanks a lot for your feedback! PS: Since I'm new here, I can't create new tags (e.g. best-practice). So, I'm wondering if best-pactice is just misspelled or I don't get its meaning.
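    To make the proposed convention concrete, here is a minimal illustrative sketch in Java (the class and method names are my own assumptions, not part of the question) of the driver-side compatibility check described above, under one possible reading of the scheme: stable minor 2k and unstable minor 2k+1 share an API, since an API change bumps the unstable minor by two (e.g. 1.3 to 1.5).

        public final class FirmwareVersion {
            final int major;
            final int minor;
            final int patch;

            FirmwareVersion(int major, int minor, int patch) {
                this.major = major;
                this.minor = minor;
                this.patch = patch;
            }

            // Even minor numbers mark stable releases, odd ones unstable/experimental ones.
            boolean isStable() {
                return minor % 2 == 0;
            }

            // Driver and firmware are compatible when they agree on major version and API
            // level; the patch number never affects the protocol.
            boolean isCompatibleWith(FirmwareVersion other) {
                return major == other.major && apiLevel() == other.apiLevel();
            }

            private int apiLevel() {
                return minor / 2;   // 1.2.x and 1.3.x share an API; 1.4.x/1.5.x the next one
            }

            public static void main(String[] args) {
                FirmwareVersion driver   = new FirmwareVersion(1, 2, 1); // stable branch
                FirmwareVersion unstable = new FirmwareVersion(1, 3, 7); // unstable, same API
                FirmwareVersion next     = new FirmwareVersion(1, 5, 0); // unstable after API change
                System.out.println(driver.isCompatibleWith(unstable));   // true
                System.out.println(driver.isCompatibleWith(next));       // false
            }
        }

    If the intent is instead that the driver only ever talks to firmware with exactly the same minor number, the apiLevel() helper simply becomes the minor number itself.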

    Read the article

  • How to wire finite state machine into component-based architecture?

    - by Pup
    State machines seem to cause harmful dependencies in component-based architectures. How, specifically, is communication handled between a state machine and the components that carry out state-related behavior?
    Where I'm at: I'm new to component-based architectures. I'm making a fighting game, although I don't think that should matter. I envision my state machine being used to toggle states like "crouching", "dashing", "blocking", etc. I've found this state-management technique to be the most natural system for a component-based architecture, but it conflicts with techniques I've read about:
    Dynamic Game Object Component System for Mutable Behavior Characters - It suggests that all components activate/deactivate themselves by continually checking a condition for activation.
    I think that actions like "running" or "walking" make sense as states, which is in disagreement with the accepted response here: finite state machine used in mario like platform game
    I've found this useful, but ambiguous: How to implement behavior in a component-based game architecture? It suggests having a separate component that contains nothing but a state machine. But, this necessitates some kind of coupling between the state machine component and nearly all the other components. I don't understand how this coupling should be handled. These are some guesses:
    A. Components depend on state machine: Components receive reference to state machine component's getState(), which returns an enumeration constant. Components update themselves regularly and check this as needed.
    B. State machine depends on components: The state machine component receives references to all the components it's monitoring. It queries their getState() methods to see where they're at.
    C. Some abstraction between them: Use an event hub? Command pattern?
    D. Separate state objects that reference components: State Pattern is used. Separate state objects are created, which activate/deactivate a set of components. State machine switches between state objects.
    I'm looking at components as implementations of aspects. They do everything that's needed internally to make that aspect happen. It seems like components should function on their own, without relying on other components. I know some dependencies are necessary, but state machines seem to want to control all of my components.
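    To illustrate option D (and it is only an illustration – the class names below are hypothetical, not taken from any of the linked articles), a minimal Java sketch could look like this: each state object owns the set of components it needs and enables or disables them on enter/exit, so the state machine never touches components directly and components never reference the state machine.

        import java.util.List;

        // Minimal, hypothetical sketch of option D: state objects toggle components,
        // so components never reference the state machine and vice versa.
        interface Component {
            void setActive(boolean active);
            void update(float dt);
        }

        interface State {
            void enter();   // activate the components this state needs
            void exit();    // deactivate them again
        }

        final class ComponentState implements State {
            private final List<Component> owned;

            ComponentState(List<Component> owned) {
                this.owned = owned;
            }

            @Override public void enter() { owned.forEach(c -> c.setActive(true)); }
            @Override public void exit()  { owned.forEach(c -> c.setActive(false)); }
        }

        final class StateMachine {
            private State current;

            void transitionTo(State next) {
                if (current != null) current.exit();
                current = next;
                current.enter();
            }
        }

    A "crouching" state, for example, would be constructed with only the crouch-related components; transitions stay in one place and the coupling is limited to the Component interface.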

    Read the article

  • Ubuntu turns off instead of suspend / sleeping

    - by Marcos
    I'm using Ubuntu 11.10 on my notebook as the main OS. However, I've been facing a serious problem: every time I try to suspend it, or it just stays idle for a while, the computer shuts down instead of suspending. It actually seems to be suspended, but when I press the button to wake it, it turns back on and all the open work is lost. How can I fix this? P.S.: On Windows 7 the suspend / sleep resumes just fine. Here is a complete list of hardware:
    00:00.0 Host bridge [0600]: Intel Corporation 2nd Generation Core Processor Family DRAM Controller [8086:0104] (rev 09)
    00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller [8086:0116] (rev 09)
    00:16.0 Communication controller [0780]: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 [8086:1c3a] (rev 04)
    00:1a.0 USB Controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 [8086:1c2d] (rev 04)
    00:1b.0 Audio device [0403]: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller [8086:1c20] (rev 04)
    00:1c.0 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 [8086:1c10] (rev b4)
    00:1c.1 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 [8086:1c12] (rev b4)
    00:1c.2 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 3 [8086:1c14] (rev b4)
    00:1d.0 USB Controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 [8086:1c26] (rev 04)
    00:1f.0 ISA bridge [0601]: Intel Corporation HM65 Express Chipset Family LPC Controller [8086:1c49] (rev 04)
    00:1f.2 SATA controller [0106]: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller [8086:1c03] (rev 04)
    00:1f.3 SMBus [0c05]: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller [8086:1c22] (rev 04)
    01:00.0 Network controller [0280]: Realtek Semiconductor Co., Ltd. RTL8188CE 802.11b/g/n WiFi Adapter [10ec:8176] (rev 01)
    02:00.0 System peripheral [0880]: JMicron Technology Corp. SD/MMC Host Controller [197b:2382] (rev 80)
    02:00.2 SD Host controller [0805]: JMicron Technology Corp. Standard SD Host Controller [197b:2381] (rev 80)
    02:00.3 System peripheral [0880]: JMicron Technology Corp. MS Host Controller [197b:2383] (rev 80)
    02:00.5 Ethernet controller [0200]: JMicron Technology Corp. JMC250 PCI Express Gigabit Ethernet Controller [197b:0250] (rev 03)
    I've updated the kernel to 3.2, but hibernate or suspend is still not working. SWAP is 1/4 of my RAM.

    Read the article

  • Social Targeting: Who Do You Think You’re Talking To?

    - by Mike Stiles
    Are you the kind of person that tries to sell Clay Aiken CDs outside Warped Tour concert venues? Then you don't think a lot about targeting your messages to the right audience. For your communication to pack the biggest punch it can, you need to know where to throw it. And a recent study on social demographics might help you see social targeting in a whole new light. Pingdom's annual survey of social network demographics shows us first of all that there is no gender difference between Facebook and Twitter. Both are 40% male, 60% female. If you're looking for locales that lean heavily male, that would be Slashdot, Hacker News and Stack Overflow. The women are dominating Pinterest, Goodreads and Blogger. So what about age? 55% of tweeters are 35 and up, compared with 63% at Pinterest, 65% at Facebook and 70% at LinkedIn. As you can tell, LinkedIn supports the oldest user base, with the average member being 44. The average age at Facebook is 51, and it's 37 at Twitter. If you want to aim younger, have you met Orkut yet? 83% of its users are under 35. The next sites in order as great candidates for the young market are deviantART, Hacker News, Hi5, Github, and Reddit. I know, other than Reddit, many of you might be saying "who?" But the list could offer an opportunity to look at the vast social world beyond Facebook, Twitter and Google+ (which Pingdom did not include in the survey at all due to a lack of accessible data). As for the average age of social users overall:
    26% are 25-34
    25% are 35-44
    19% are 45-54
    16% are 18-24
    6% are 55-64
    5% are 0-17
    and 2% are 65
    Now you know where you stand on the "cutting edge" scale for a person your age. You're welcome. Certainly such demographics are a moving target and need to be watched and reassessed on a regular basis to make sure you're moving in step with the people you want to talk to. For instance, since Pingdom's survey last year, the age of the average Facebook user has gone up 2 years, while the age of the average Twitter user has gone down 2 years. With the targeting and analytics tools available on today's social management platforms, there's little need to market in the dark. Otherwise, good luck with those Clay CDs.

    Read the article

  • How to set up secure cookie on weblogic server

    - by adejuanc
    WebLogic Server allows a user to securely access HTTPS resources in a session that was initiated using HTTP, without loss of session data. To enable this feature, add AuthCookieEnabled="true" to the WebServer element in config.xml:
    <WebServer Name="myserver" AuthCookieEnabled="true"/>
    Setting AuthCookieEnabled to true, which is the default setting, causes the WebLogic Server instance to send a new secure cookie, _WL_AUTHCOOKIE_JSESSIONID, to the browser when authenticating via an HTTPS connection. Once the secure cookie is set, the session is allowed to access other security-constrained HTTPS resources only if the cookie is sent from the browser. Thus, WebLogic Server uses two cookies: the JSESSIONID cookie and the _WL_AUTHCOOKIE_JSESSIONID cookie. By default, the JSESSIONID cookie is never secure, but the _WL_AUTHCOOKIE_JSESSIONID cookie is always secure. A secure cookie is only sent when an encrypted communication channel is in use. Assuming a standard HTTPS login (HTTPS is an encrypted HTTP connection), your browser gets both cookies. For subsequent HTTP access, you are considered authenticated if you have a valid JSESSIONID cookie, but for HTTPS access, you must have both cookies to be considered authenticated. If you only have the JSESSIONID cookie, you must re-authenticate.
    To configure this in the Admin Console:
    Log into the WebLogic Admin Console.
    Under Domain Structure, click on <domainname>.
    Select the "Web Applications" tab.
    Select "Lock and Edit" in the Change Center.
    Click on the "Auth Cookie Enabled" checkbox.
    Restart to confirm the changes.
    Test an application and view the cookie which got stored as "JSESSIONID".
    To configure the Web application's weblogic-application.xml file:
    Run the following to extract weblogic-application.xml from the web application: $PATH_JDK_HOME\bin\jar -xvf easy-web-examples.ear META-INF/weblogic-application.xml
    Add <cookie-secure>true</cookie-secure> between <session-descriptor> </session-descriptor> in the weblogic-application.xml.
    Run the following to repackage the file into the application: $PATH_JDK_HOME\bin\jar -uvf easy-web-examples.ear META-INF/weblogic-application.xml
    Deploy the application into WebLogic.
    For further information, please read the documentation on "Using Secure Cookies to Prevent Session Stealing": http://download.oracle.com/docs/cd/E12840_01/wls/docs103/security/thin_client.html#wp1053780

    Read the article

  • A brief introduction to BRM and architecture

    - by Yani Miguel
    Oracle Communications Billing and Revenue Management (Oracle BRM) is the telecom industry's leading solution for communications service providers. This post encourages you to get to know BRM, starting with the basics.
    History
    Portal was a billing and revenue management solution for the communications industry created by Portal Software. In 2006 Oracle acquired Portal Software and the solution was renamed BRM. Today Oracle BRM is the first end-to-end packaged enterprise software suite for the communications industry; however, BRM is just one more product in the catalog of OSS solutions that Oracle offers. BRM can bill and manage all communications services including wireline, wireless, broadband, cable, voice over IP, IPTV, music, and video.
    BRM Architecture
    BRM's architecture consists of 4 layers or tiers. Across these layers are the data, business logic and interfaces to connect graphical client tools.
    Application tier
    This layer provides GUI client tools enabling communication to other layers through open APIs. Some BRM client applications are: Customer Center, Pricing Center, Universal Event Loader, Web Server, BRM Billing Application, Collections Center, Permissioning Center. Furthermore, this layer is where real-time external events are provided.
    Business Process Tier
    Although all layers are equally important, I think this one deserves more attention, because in this tier BRM functionality is implemented. All the functions that give life to BRM are in this layer, coded in the C language and called opcodes (System Processes in the image). Any changes or additional functionality should be made here, so when we try to customize the product, we will most of the time be programming in this layer (Business Policies in the image).
    Business Process Tier features:
    Implements Portal system functionality
    Validates data from the application tier
    Modifies Portal behavior through business policies. Business policies can be customized.
    Triggers external systems using event notification.
    Object Tier
    This layer is responsible for translating BRM requests into database language and for translating BRM requests into external system requests. Without it, the business logic (data from the Business Process Tier) could not be understood by the relational database.
    Data tier
    The data tier is responsible for the storage of the BRM database and other external systems' databases. External systems include credit card, tax, and directory servers.
    Finally, it's important to note that BRM is designed to easily integrate with the following solutions: AIA 2.4, Siebel CRM, E-Business Suite (G/L only), Communications Services Gatekeeper, Oracle BI Publisher.
    Personally, I think that BRM could improve by migrating its client-server architecture to a fully web platform that works with Oracle Middleware, like any product of the Fusion Middleware family. Hopefully there are already initiatives in this area.

    Read the article

  • Installing Canon LBP6000 in Ubuntu 12.04

    - by MMA
    This is really frustrating. I am trying to install the LBP6000 in Ubuntu 12.04 without any success. (Well, I had success about a week back when I first bought the printer and finally printed pages after a struggle of several hours. Then today it suddenly stopped working and I uninstalled everything and started from scratch. Now, I seem to have lost the way.)
    My steps:
    Downloaded the latest Canon driver from the Canon site. File Linux_CAPT_PrinterDriver_V240_uk_EN.tar.gz
    Got the radu script (I am allowed only two hyperlinks, so can not put the link here. You can Google radu Canon)
    Changed the /etc/modprobe.d/blacklist-cups-usblp.conf file as instructed in the Official Documentation. (See the section Ubuntu 12.04 Install). Now this file looks like,
    # cups talks to the raw USB devices, so we need to blacklist usblp to avoid
    # grabbing them
    # blacklist usblp
    Rebooted my machine
    Changed the port in the radu script to 59787 as instructed in the link at step 3. (Again see the section Ubuntu 12.04 Install, or see the comment at How to Install Canon LBP Printers in Ubuntu.) Also put the latest deb files from step 1 in the appropriate directory of this script.
    Ran the radu script. A printer, LBP6000, got added. Not two printers, one to be disabled, as appeared in the message on the terminal after running the script.
    sudo /etc/init.d/ccpd status shows,
    Canon Printer Daemon for CUPS: ccpd: 3142 3139
    Results:
    The printer does not print. The printer state (from System Settings - Printing, or at the cups http interface localhost:631/printers/LBP6000) goes from Idle to Processing, a job appears in the print queue, and then the job disappears and the printer state goes back to Idle. The actual printer does not even blink.
    Diagnostics (got help from the link in step 3, Troubleshooting):
    captstatusui -P LBP6000 shows communication error
    lsmod | grep usblp did not show anything. After running sudo modprobe usblp, it shows
    usblp 17885 0
    However, ls -l /dev/usb/lp0 gives,
    ls: cannot access /dev/usb/lp0: No such file or directory
    /var/ccpd did not exist, so I created it:
    sudo mkdir /var/ccpd
    sudo mkfifo /var/ccpd/fifo0
    sudo chown -R lp:lp /var/ccpd
    Any suggestion will be appreciated. Do not know what to do.

    Read the article

  • PASS 13 Dispatches: moving to the cloud

    - by Tony Davis
    PASS Summit 13, Day 1 keynote by Quentin Clarke and we're hearing about "redefining mission critical in the cloud". With a move to the Windows Azure cloud comes the promise of capacity on demand, automatic HA, backups, patching and so on, as well as passing responsibility to MS for managing hardware, upgrades and so on. However, for many databases and applications the best route to the cloud is not necessarily obvious. For most, the path of least resistance is IaaS – SQL Server in an Azure VM. It removes the hardware burden but you still have to manage your databases, and implementing HA for SQL Server is your responsibility. Also, scaling up comes at quite a cost – the biggest VM (8 CPU cores, 56 GB RAM, 16 1TB drives with 500 IOPS each) weighs in at over $4500 per month. With PaaS, in the form of Windows SQL Database, you get a "3-copies replica set" so HA comes out of the box, and it removes the majority of the administration burden, but you are moving your database into a very different environment. For a start, it's a shared environment, with other customers using the same compute nodes in the cluster, and potentially even sharing the same database (multi-tenancy). Unless you pay for SQL DB Premium edition, the resources available for your workload will depend on how nicely others "play" in the shared environment. You'll potentially need to do a lot of tuning and application rewriting to avoid throttling issues, optimising application-database communication to deal with increased latency between the two, and so on. You'll need aggressive application caching. You'll also need retry logic and to deal with (expected) node failure and the need to reconnect. In Tuesday's PASS Summit pre-con from the SQLCAT team, they spent a lot of time covering some of the telemetric techniques (collect into Azure storage the necessary monitoring data) to perform capacity planning and work out the hotspots and bottlenecks in your cloud applications. Tools like WAD (Windows Azure Diagnostics), performance counters, SQL Database DMVs, and others, will be essential. Of course, to truly exploit the vast horizontal scaling that is available from the existence of thousands of compute nodes, you'll also need to consider how to "shard" your data so Azure can move it between nodes at will. Finding the right path to the Cloud isn't easy, but it's coming. I spoke to people one year ago who saw no real benefit in trying to move their infrastructure and databases to the cloud, but now at their company, it's the conversation that won't go away. Tony.
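    As a side note on the retry logic mentioned above, the usual shape is a retry with exponential backoff around any call that can fail transiently (throttling, dropped connections, node failover). The sketch below is a generic, hypothetical illustration written in Java purely for brevity – it is not tied to any Azure client library, and all the names are my own:

        import java.util.concurrent.Callable;

        // Generic sketch of retry-with-backoff for transient failures such as
        // throttling or dropped connections; not tied to any particular client library.
        public final class Retry {

            public static <T> T withBackoff(Callable<T> operation,
                                            int maxAttempts,
                                            long initialDelayMillis) throws Exception {
                long delay = initialDelayMillis;
                Exception last = null;
                for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                    try {
                        return operation.call();
                    } catch (Exception e) {          // in real code, catch only transient errors
                        last = e;
                        if (attempt == maxAttempts) break;
                        Thread.sleep(delay);         // back off before reconnecting/retrying
                        delay *= 2;                  // exponential backoff
                    }
                }
                throw last;                          // give up after maxAttempts tries
            }
        }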

    Read the article

  • OAM11gR2: Enabling SSL in the Data Store

    - by Ekta Malik
    Enabling SSL in the Data Store of OAM11gR2 comprises the steps mentioned below.
    Import the certificate/s required for establishing the trust with the Store (backend) into the keystore (cacerts) on the machine hosting OAM's WebLogic Admin server
    Restart the WebLogic Admin server
    Specify the <Hostname>:<SSL port> in the "Location" field of the Data Store and select the "Enable SSL" checkbox
    Pre-requisites:
    Certificate/s to be imported are available for import
    The Data Store has already been created using the OAM admin console and the connection to the store is successful on the non-SSL port (though one can always create a Data Store with SSL settings on the first go)
    Steps for importing the certificate/s:
    One can use the keytool utility that comes bundled with the JDK to import the certificate. The step for importing the certificate would be the same for self-signed and third-party certificates (like VeriSign).
    $JAVA_HOME/bin/keytool -import -v -noprompt -trustcacerts -alias <aliasname> -file <Path to the certificate file> -keystore $JAVA_HOME/jre/lib/security/cacerts
    Here $JAVA_HOME refers to the path of the JDK install directory.
    Note: In case multiple certificates are required for establishing the trust, import all those certificates using the same keytool command mentioned above.
    One can verify the import of the certificate/s by using the below mentioned command:
    $JAVA_HOME/bin/keytool -list -alias <aliasname> -v -keystore $JAVA_HOME/jre/lib/security/cacerts
    When the trust gets established for the SSL communication, specifying the SSL-specific settings in the Data Store (via the OAM admin console) won't result in the previously seen error (when certificates are yet to be imported) and the "Test Connection" will be successful.
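    If you'd rather verify the import programmatically than with keytool -list, a small illustrative Java sketch (my own example, not part of the OAM documentation; it assumes the default cacerts password "changeit" and a hypothetical alias) could load the keystore and look up the alias:

        import java.io.FileInputStream;
        import java.security.KeyStore;
        import java.security.cert.Certificate;

        public class VerifyTrustedCert {
            public static void main(String[] args) throws Exception {
                // java.home points at the JRE, so cacerts lives under lib/security
                String cacerts = System.getProperty("java.home") + "/lib/security/cacerts";
                String alias = args.length > 0 ? args[0] : "mystorecert";   // hypothetical alias

                KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
                try (FileInputStream in = new FileInputStream(cacerts)) {
                    ks.load(in, "changeit".toCharArray());   // assumes the default cacerts password
                }
                Certificate cert = ks.getCertificate(alias); // null if the alias is absent
                System.out.println(cert != null
                    ? "Alias '" + alias + "' is present: " + cert.getType()
                    : "Alias '" + alias + "' was not found in " + cacerts);
            }
        }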

    Read the article

  • WebCenter Customer Spotlight: Instituto Mexicano de la Propiedad Industrial

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter
    Solution Summary
    Instituto Mexicano de la Propiedad Industrial (IMPI) is a decentralized federal agency with the goals of protecting and ensuring awareness of industrial property rights in Mexico. IMPI's business objectives were to increase efficiency, improve client service, accelerate services to the public and reduce paper use by digitizing management of necessary documentation for patent and trademark submissions and approvals. IMPI implemented Oracle WebCenter Content to develop an electronic inquiry service by digitizing and managing documents, and a public Web site making patent-related information easily available online. With the implemented solution IMPI increased the number of monthly inquiries from 200 in-person consultations to 80,000 electronic consultations and the number of trademark record inquiries from 30,000 to 300,000.
    Company Overview
    Instituto Mexicano de la Propiedad Industrial (IMPI) is a decentralized federal agency with the goals of protecting and ensuring awareness of industrial property rights in Mexico. IMPI is responsible for registering and publicizing inventions, distinctive signs, trademarks, and patents. In addition to its Mexico City headquarters, IMPI has five regional offices.
    Business Challenges
    IMPI's business objectives were to increase efficiency by automating internal operations and patent and trademark-related procedures and services, improve client service by simplifying patent and trademark procedures, accelerate services to the public and reduce paper use by digitizing management of necessary documentation for patent and trademark submissions and approvals.
    Solution Deployed
    IMPI worked with Oracle Consulting to implement Oracle WebCenter Content to develop an electronic inquiry service - services that were previously provided in person only - by digitizing and managing documents. They use Oracle Database 11g, Enterprise Edition to manage data for all mission-critical systems, automating patent and trademark transactions, providing consistent, readily available, and accurate data. IMPI developed a Web site to support newly digitized information with simple and flexible interfaces, making patent-related information easily available online to the public.
    Business Results
    With the implemented solution IMPI increased the number of monthly inquiries from 200 in-person consultations to 80,000 electronic consultations and the number of trademark record inquiries from 30,000 to 300,000.
    "Oracle WebCenter Content structure is unique. It lets us separately manage communication with other applications and databases, and performs content management itself. It's a stable tool, at an appropriate cost, that lets us develop and provide reliable electronic services." Eugenio Ponce de León, Divisional Director of Systems and Technology, Instituto Mexicano de la Propiedad Industrial
    Additional Information
    Instituto Mexicano de la Propiedad Customer Snapshot
    Oracle WebCenter Content

    Read the article

  • I still think Twitter is dead … but

    - by Randy Walker
    Twitter finally hit the mainstream about 8 months ago, but I've been saying for a couple of years now: without a real way for the company to earn money, what's the future fate of Twitter? On the personal side, where is the real value for the users? For the most part, Twitter has replaced most people's IM (instant messaging), at least in the technology circles I run in. It still has value for users as a communication tool. But I see it more as a fad. My prediction is over the next 6 months we'll start seeing a usage drop (if we haven't already started to see it). On the business side, how does Twitter make money? It doesn't. If you use the text messaging capabilities, you see a few ads. But most smart phone and PC users won't ever see them. I still think Twitter has the best chance to make money by forcing the "collectors" to pay money. You know what I mean by "collector": those people that collect tons of followers or friends. If Twitter caps the number of followers and makes you pay to have more, would you? The normal Twitter user doesn't have that many followers, and this is where my title comes in … BUT the financial value for Twitter is really seen through businesses connecting with their customers. I've seen 3 effective ways this has been accomplished.
    1. Giving your customers a coupon or announcing a sale
    My favorite is @amazonmp3. Being a huge music lover, I get notified when they put music on sale. Various restaurants like @ruthschris_ARK will let their favorite customers know about certain specials. @BluefinMemphis - I was traveling through Memphis once looking for a sushi restaurant when they had 50% off if we mentioned we saw them on Twitter. It was their first attempt at trying to encourage customers in the door, and after talking with the management, it was a huge success.
    2. Giveaways
    @namecheap - Several companies have started huge marketing campaigns, but my favorite is watching companies post trivia questions, and the first person to respond wins a prize.
    3. Responding to Customer Complaints
    I once posted a complaint about American Express (a company that I have slowly come to really dislike) but they actually had someone contact me to try and resolve the issue. I give them credit for paying attention, but still dislike them for their horrible credit practices.

    Read the article

  • Say goodbye to System.Reflection.Emit (any dynamic proxy generation) in WinRT

    - by mbrit
    tl;dr - Forget any form of dynamic code emitting in Metro-style. It's not going to happen.
    Over the past week or so I've been trying to get Moq (the popular open source TDD mocking framework) to work on WinRT. Irritatingly, the day before Release Preview was released it was actually working on Consumer Preview. However, in Release Preview (RP) the System.Reflection.Emit namespace is gone. Forget any form of dynamic code generation and/or MSIL injection. This kills off any project based on the popular Castle Project Dynamic Proxy component, of which Moq is one example. At this point in time you cannot perform any form of mocking using dynamic injection in your Metro-style unit testing endeavours. So let me take you through my journey on this, so that others don't have to... The headline fact is that you cannot load any assembly that you create at runtime. WinRT supports one Assembly.Load method, and that takes the name of an assembly. That has to be placed within the deployment folder of your app. You cannot give it a filename or a stream. The methods are there, but private. Try to invoke them using Reflection and you'll be met with a caspol exception. You can, in theory, use Rotor to replace SRE. It's all there, but again, you can't load anything you create. You can't write to your deployment folder from within your Metro-style app. But can you use another service on the machine to move a file that you create into the deployment folder and load it? Not really. The networking stack in Metro-style is intentionally "damaged" to prevent socket communication from Metro-style to any end-point on the local machine. (It just times out.) This militates against an approach where your Metro-style app can signal a properly installed service on the machine to create proxies on its behalf. If you wanted to do this, you'd have to route the calls through a C&C server somewhere. The reason why Microsoft has done this is obvious - taking out SRE now means they don't have to do it in an emergency later. The collateral damage in removing SRE is that you can't do mocking in test mode, but you also can't do any form of injection in production mode. There are plenty of reasons why enterprise apps might want to do this last point particularly. At CP, the assumption was that their inspection tools would prevent SRE being used as a malware vector - it now seems they are less confident about that. (For clarity, the risk here is in allowing a nefarious program to download instructions from a C&C server and make up executable code on the fly to run, getting around the marketplace restrictions.) So, two things:
    - System.Reflection.Emit is gone in Metro-style/WinRT. Get over it - dynamic, on-the-fly code generation is not going to happen.
    - I've more or less got a version of Moq working in Metro-style. This is based on the idea of "baking" the dynamic proxies before you use them. You can find more information here: https://github.com/mbrit/moqrt

    Read the article

  • Wireless not working with a RaLink RT3090

    - by Promather
    I recently bought a new HP DV6-3118SA laptop, but I am having a very discouraging problem with wireless LAN. It simply doesn't work! Could you please help me with this? Output of lspci -k:
    00:00.0 Host bridge: Intel Corporation Core Processor DRAM Controller (rev 02) Subsystem: Hewlett-Packard Company Device 144a Kernel driver in use: agpgart-intel Kernel modules: intel-agp
    00:01.0 PCI bridge: Intel Corporation Core Processor PCI Express x16 Root Port (rev 02) Kernel driver in use: pcieport Kernel modules: shpchp
    00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 02) Subsystem: Hewlett-Packard Company Device 144a Kernel driver in use: i915 Kernel modules: i915
    00:16.0 Communication controller: Intel Corporation 5 Series/3400 Series Chipset HECI Controller (rev 06) Subsystem: Hewlett-Packard Company Device 144a
    00:1a.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05) Subsystem: Hewlett-Packard Company Device 144a Kernel driver in use: ehci_hcd
    00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 05) Subsystem: Hewlett-Packard Company Device 144a Kernel driver in use: HDA Intel Kernel modules: snd-hda-intel
    00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 05) Kernel driver in use: pcieport Kernel modules: shpchp
    00:1c.1 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 2 (rev 05) Kernel driver in use: pcieport Kernel modules: shpchp
    00:1d.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05) Subsystem: Hewlett-Packard Company Device 144a Kernel driver in use: ehci_hcd
    00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev a5)
    00:1f.0 ISA bridge: Intel Corporation Mobile 5 Series Chipset LPC Interface Controller (rev 05) Subsystem: Hewlett-Packard Company Device 144a Kernel modules: iTCO_wdt
    00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 4 port SATA AHCI Controller (rev 05) Subsystem: Hewlett-Packard Company Device 144a Kernel driver in use: ahci Kernel modules: ahci
    00:1f.3 SMBus: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller (rev 05) Subsystem: Hewlett-Packard Company Device 144a Kernel modules: i2c-i801
    00:1f.6 Signal processing controller: Intel Corporation 5 Series/3400 Series Chipset Thermal Subsystem (rev 05) Subsystem: Hewlett-Packard Company Device 144a Kernel driver in use: intel ips Kernel modules: intel_ips
    01:00.0 VGA compatible controller: ATI Technologies Inc Manhattan [Mobility Radeon HD 5000 Series] Subsystem: Hewlett-Packard Company Device 144a Kernel driver in use: radeon Kernel modules: radeon
    01:00.1 Audio device: ATI Technologies Inc Manhattan HDMI Audio [Mobility Radeon HD 5000 Series] Subsystem: Hewlett-Packard Company Device 144a Kernel driver in use: HDA Intel Kernel modules: snd-hda-intel
    02:00.0 Network controller: RaLink RT3090 Wireless 802.11n 1T/1R PCIe Subsystem: Hewlett-Packard Company Device 1453 Kernel driver in use: rt2800pci Kernel modules: rt2860sta, rt2800pci
    03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 03) Subsystem: Hewlett-Packard Company Device 144a Kernel driver in use: r8169 Kernel modules: r8169
    7f:00.0 Host bridge: Intel Corporation Core Processor QuickPath Architecture Generic Non-core Registers (rev 02) Subsystem: Hewlett-Packard Company Device 144a
    7f:00.1 Host bridge: Intel Corporation Core Processor QuickPath Architecture System Address Decoder (rev 02) Subsystem: Hewlett-Packard Company Device 144a
    7f:02.0 Host bridge: Intel Corporation Core Processor QPI Link 0 (rev 02) Subsystem: Hewlett-Packard Company Device 144a
    7f:02.1 Host bridge: Intel Corporation Core Processor QPI Physical 0 (rev 02) Subsystem: Hewlett-Packard Company Device 144a
    7f:02.2 Host bridge: Intel Corporation Core Processor Reserved (rev 02) Subsystem: Hewlett-Packard Company Device 144a
    7f:02.3 Host bridge: Intel Corporation Core Processor Reserved (rev 02) Subsystem: Hewlett-Packard Company Device 144a

    Read the article

  • wifi problems with lenovo g580 on kubuntu-13.04-desktop-amd64

    - by user203963
    i have a wifi connection problem in lenovo g580 on kubuntu-13.04-desktop-amd64. ethernet cable is working properly but wifi does'nt connect below are some hardware information sudo lshw -class network gives *-network description: Ethernet interface product: AR8162 Fast Ethernet vendor: Qualcomm Atheros physical id: 0 bus info: pci@0000:01:00.0 logical name: eth0 version: 10 serial: 20:89:84:3d:e9:10 size: 100Mbit/s capacity: 100Mbit/s width: 64 bits clock: 33MHz capabilities: pm pciexpress msi msix bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=alx driverversion=1.2.3 duplex=full firmware=N/A ip=192.168.0.106 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s resources: irq:16 memory:90500000-9053ffff ioport:2000(size=128) *-network description: Network controller product: BCM4313 802.11b/g/n Wireless LAN Controller vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:02:00.0 version: 01 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: driver=bcma-pci-bridge latency=0 resources: irq:17 memory:90400000-90403fff *-network description: Wireless interface physical id: 3 logical name: wlan0 serial: 68:94:23:fa:2c:d9 capabilities: ethernet physical wireless configuration: broadcast=yes driver=brcmsmac driverversion=3.8.0-19-generic firmware=N/A link=no multicast=yes wireless=IEEE 802.11bgn lsubs gives Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 001 Device 003: ID 0489:e032 Foxconn / Hon Hai Bus 002 Device 003: ID 04f2:b2e2 Chicony Electronics Co., Ltd lspci gives 00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09) 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 00:14.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller (rev 04) 00:16.0 Communication controller: Intel Corporation 7 Series/C210 Series Chipset Family MEI Controller #1 (rev 04) 00:1a.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #2 (rev 04) 00:1b.0 Audio device: Intel Corporation 7 Series/C210 Series Chipset Family High Definition Audio Controller (rev 04) 00:1c.0 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 1 (rev c4) 00:1c.1 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 2 (rev c4) 00:1d.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #1 (rev 04) 00:1f.0 ISA bridge: Intel Corporation HM76 Express Chipset LPC Controller (rev 04) 00:1f.2 SATA controller: Intel Corporation 7 Series Chipset Family 6-port SATA Controller [AHCI mode] (rev 04) 00:1f.3 SMBus: Intel Corporation 7 Series/C210 Series Chipset Family SMBus Controller (rev 04) 01:00.0 Ethernet controller: Qualcomm Atheros AR8162 Fast Ethernet (rev 10) 02:00.0 Network controller: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller (rev 01) Does anyone knows the solution? 
rfkill list all gives 0: hci0: Bluetooth Soft blocked: no Hard blocked: no 1: phy0: Wireless LAN Soft blocked: yes Hard blocked: no 2: ideapad_wlan: Wireless LAN Soft blocked: yes Hard blocked: no 3: ideapad_bluetooth: Bluetooth Soft blocked: no Hard blocked: no
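
    A note on the rfkill output above: both Wireless LAN entries report "Soft blocked: yes", which by itself is enough to keep the BCM4313 off the air. A minimal first check, assuming the wlan0 interface name reported by lshw and that NetworkManager's nmcli tool is available (both are assumptions, not something the question confirms), would be:
    sudo rfkill unblock wifi     # lift the software block on all Wireless LAN devices
    rfkill list all              # both wifi entries should now show "Soft blocked: no"
    sudo ip link set wlan0 up    # bring the wireless interface up
    nmcli dev wifi list          # rescan and list nearby access points
    If the soft block comes back after a reboot, the ideapad_wlan entry hints that the ideapad_laptop module (which handles the Fn wireless hotkey on many Lenovo models) may be re-applying it, but that is a guess rather than a confirmed diagnosis.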

    Read the article

  • Wireless Not Working on Ubuntu 10.10

    - by Promather
    I recently bought a new HP DV6-3118SA laptop, but I am having a very discouraging problem with the wireless LAN. It simply doesn't work! Could you please help me with this?
    EDIT: Following @Ronald's and @Oli's advice, I am dumping the output of lspci -k:
    00:00.0 Host bridge: Intel Corporation Core Processor DRAM Controller (rev 02) Subsystem: Hewlett-Packard Company Device 144a Kernel driver in use: agpgart-intel Kernel modules: intel-agp 00:01.0 PCI bridge: Intel Corporation Core Processor PCI Express x16 Root Port (rev 02) Kernel driver in use: pcieport Kernel modules: shpchp 00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 02) Subsystem: Hewlett-Packard Company Device 144a Kernel driver in use: i915 Kernel modules: i915 00:16.0 Communication controller: Intel Corporation 5 Series/3400 Series Chipset HECI Controller (rev 06) Subsystem: Hewlett-Packard Company Device 144a 00:1a.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05) Subsystem: Hewlett-Packard Company Device 144a Kernel driver in use: ehci_hcd 00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 05) Subsystem: Hewlett-Packard Company Device 144a Kernel driver in use: HDA Intel Kernel modules: snd-hda-intel 00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 05) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.1 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 2 (rev 05) Kernel driver in use: pcieport Kernel modules: shpchp 00:1d.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05) Subsystem: Hewlett-Packard Company Device 144a Kernel driver in use: ehci_hcd 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev a5) 00:1f.0 ISA bridge: Intel Corporation Mobile 5 Series Chipset LPC Interface Controller (rev 05) Subsystem: Hewlett-Packard Company Device 144a Kernel modules: iTCO_wdt 00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 4 port SATA AHCI Controller (rev 05) Subsystem: Hewlett-Packard Company Device 144a Kernel driver in use: ahci Kernel modules: ahci 00:1f.3 SMBus: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller (rev 05) Subsystem: Hewlett-Packard Company Device 144a Kernel modules: i2c-i801 00:1f.6 Signal processing controller: Intel Corporation 5 Series/3400 Series Chipset Thermal Subsystem (rev 05) Subsystem: Hewlett-Packard Company Device 144a Kernel driver in use: intel ips Kernel modules: intel_ips 01:00.0 VGA compatible controller: ATI Technologies Inc Manhattan [Mobility Radeon HD 5000 Series] Subsystem: Hewlett-Packard Company Device 144a Kernel driver in use: radeon Kernel modules: radeon 01:00.1 Audio device: ATI Technologies Inc Manhattan HDMI Audio [Mobility Radeon HD 5000 Series] Subsystem: Hewlett-Packard Company Device 144a Kernel driver in use: HDA Intel Kernel modules: snd-hda-intel 02:00.0 Network controller: RaLink RT3090 Wireless 802.11n 1T/1R PCIe Subsystem: Hewlett-Packard Company Device 1453 Kernel driver in use: rt2800pci Kernel modules: rt2860sta, rt2800pci 03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 03) Subsystem: Hewlett-Packard Company Device 144a Kernel driver in use: r8169 Kernel modules: r8169 7f:00.0 Host bridge: Intel Corporation Core Processor QuickPath Architecture Generic Non-core Registers (rev 02) Subsystem: Hewlett-Packard Company Device 144a 7f:00.1 Host bridge: Intel Corporation Core Processor QuickPath Architecture System Address Decoder (rev 02) Subsystem: Hewlett-Packard Company Device 144a 7f:02.0 Host bridge: Intel Corporation Core Processor QPI Link 0 (rev 02) Subsystem: Hewlett-Packard Company Device 144a 7f:02.1 Host bridge: Intel Corporation Core Processor QPI Physical 0 (rev 02) Subsystem: Hewlett-Packard Company Device 144a 7f:02.2 Host bridge: Intel Corporation Core Processor Reserved (rev 02) Subsystem: Hewlett-Packard Company Device 144a 7f:02.3 Host bridge: Intel Corporation Core Processor Reserved (rev 02) Subsystem: Hewlett-Packard Company Device 144a
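
    One detail that stands out in this dump: the RT3090 has two candidate drivers on the system (rt2800pci is in use, rt2860sta is also listed as a kernel module), and two drivers claiming the same chip is a classic source of wireless trouble. As a hedged triage sketch rather than a confirmed fix (the blacklist file path and the choice of which module to blacklist are illustrative assumptions), one could run:
    rfkill list all                      # rule out a soft or hard radio block first
    lsmod | grep rt2                     # see which RaLink modules are actually loaded
    dmesg | grep -i -E 'rt28|firmware'   # look for missing-firmware or driver errors
    echo "blacklist rt2860sta" | sudo tee -a /etc/modprobe.d/blacklist.conf   # keep only rt2800pci, then reboot
    If the card behaves with one driver and not the other, swapping which module is blacklisted is the obvious next experiment.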

    Read the article

  • SQL Server MVP Deep Dives 2. The Awesome Returns.

    - by Mladen Prajdic
    Two years ago 59 SQL Server MVPs came together and helped make one of the best books on SQL Server out there. Each chapter was written by an MVP about a part of SQL Server they loved working with. This resulted in superb-quality content and excellent ratings from the readers. To top it off, all earnings went to a good cause, the War Child International organization. That book was SQL Server MVP Deep Dives. This year 63 SQL Server MVPs, me included, decided it was time to repeat the success of the first book. Let me introduce you to: SQL Server MVP Deep Dives 2. The topics in its 60 chapters are organized into 5 groups: Architecture, Database Administration, Database Development, Performance Tuning and Optimization, and Business Intelligence. They represent over 1000 years of daily experience in various areas of SQL Server. I have contributed chapter 28 in the Database Development group, titled Getting asynchronous with Service Broker. In it I show you the Service Broker template you can use for secure communication between two or more SQL Server instances for whatever purpose you may have. If you haven't heard of Service Broker, it's a part of the database engine that enables you to do completely asynchronous operations in the database itself or between databases and instances.
    The official release of the book will be next week at PASS, where there will be two signing slots at which most of the authors will be available to sign the books you bring. This is also a great opportunity to meet everyone and ask about any problems you may have, so definitely come say hi.
    Again we decided on a charity that will be supported by this book: Operation Smile. They provide free surgeries to repair cleft lip, cleft palate, and other facial deformities for children around the globe. You can also help them by donating. You can preorder the book at the Manning Publications website or on Amazon. By having it you not only get to learn a lot, improve your skills, and have fun, but you also help a child have a normal life. If that's not a good cause, then I don't know what is.

    Read the article

  • How to create a wifi hotspot in Ubuntu 10.04 LTS

    - by aspdeepak
    I am using Ubuntu 10.04 LTS on my Lenovo laptop and have an Android ICS device. I want to create a wifi hotspot in Ubuntu that I can later use to connect the Android device. I need this setup for capturing the packets from the Android device and later analysing them with Wireshark on Ubuntu. I tried to create a new hotspot using the "Create a new wireless Network" wizard from the network manager applet, but for some reason the following happens: it breaks the existing internet connection (either the WLAN or ethernet), and it is not visible in the list of available wifi hotspots on the Android device.
    My chipset information:
    00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07) 00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) 00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) 00:03.0 Communication controller: Intel Corporation Mobile 4 Series Chipset MEI Controller (rev 07) 00:19.0 Ethernet controller: Intel Corporation 82567LF Gigabit Network Connection (rev 03) 00:1a.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03) 00:1a.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03) 00:1a.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03) 00:1a.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03) 00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03) 00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 03) 00:1c.1 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 2 (rev 03) 00:1c.3 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 4 (rev 03) 00:1c.4 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 (rev 03) 00:1d.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03) 00:1d.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03) 00:1d.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03) 00:1d.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03) 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 93) 00:1f.0 ISA bridge: Intel Corporation ICH9M LPC Interface Controller (rev 03) 00:1f.2 SATA controller: Intel Corporation ICH9M/M-E SATA AHCI Controller (rev 03) 00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 03) 03:00.0 Network controller: Intel Corporation PRO/Wireless 5100 AGN [Shiloh] Network Connection 15:00.0 CardBus bridge: Ricoh Co Ltd RL5c476 II (rev ba) 15:00.1 FireWire (IEEE 1394): Ricoh Co Ltd R5C832 IEEE 1394 Controller (rev 04) 15:00.2 SD Host controller: Ricoh Co Ltd R5C822 SD/SDIO/MMC/MS/MSPro Host Adapter (rev 21) 15:00.3 System peripheral: Ricoh Co Ltd R5C843 MMC Host Controller (rev ff) 15:00.4 System peripheral: Ricoh Co Ltd R5C592 Memory Stick Bus Host Adapter (rev 11) 15:00.5 System peripheral: Ricoh Co Ltd xD-Picture Card Controller (rev 11)
    Supported interface modes: * IBSS * managed * monitor
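
    Two things in the question point at a likely cause, though this is an educated guess rather than a confirmed answer: the "Create a new wireless Network" wizard in Ubuntu 10.04 creates an ad-hoc (IBSS) network, which stock Android builds of that era generally cannot join, and the interface-mode list above (IBSS, managed, monitor, with no AP mode) suggests the Intel 5100 AGN cannot host an infrastructure-mode hotspot via hostapd either. A quick way to verify both points from a terminal, assuming the iw package is installed and the wireless interface is called wlan0 (neither is stated in the question):
    iw list | grep -A 10 'Supported interface modes'   # hostapd-style hotspots need "AP" to appear in this list
    iwconfig wlan0 | grep -i mode                      # shows whether the wizard's network came up in Ad-Hoc mode
    If AP mode really is missing, capturing the phone's traffic through an adapter that does support AP mode, or through tethering on the phone itself, is probably the more practical route.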

    Read the article

  • Programming and Ubiquitous Language (DDD) in a non-English domain

    - by Sandor Drieënhuizen
    I know there are some questions here already that are closely related to this subject, but none of them take Ubiquitous Language as the starting point, so I think that justifies this question. For those who don't know: Ubiquitous Language is the concept of defining a language (both spoken and written) that is used equally by developers and domain experts, to avoid inconsistencies and miscommunication due to translation problems and misunderstanding. You will see the same terminology show up in code, conversations between team members, functional specs, and whatnot.
    So, what I was wondering about is how to deal with Ubiquitous Language in non-English domains. Personally, I strongly favor writing programming code entirely in English, including comments, but of course excluding constants and resources. However, in a non-English domain, I'm forced to make a decision either to:
    1. Write code reflecting the Ubiquitous Language in the natural language of the domain.
    2. Translate the Ubiquitous Language to English and stop communicating in the natural language of the domain.
    3. Define a table that defines how the Ubiquitous Language translates to English.
    Here are some of my thoughts based on these options:
    1) I have a strong aversion to mixed-language code, that is, code using non-English type/member/variable names and so on. Most programming languages 'breathe' English to a large extent, and most of the technical literature, design pattern names, etc. are in English as well. Therefore, in most cases there's just no way of writing code entirely in a non-English language, so you end up with mixed languages anyway.
    2) This will force the domain experts to start thinking and talking in the English equivalent of the UL, something that will probably not come naturally to them and therefore hinders communication significantly.
    3) In this case, the developers communicate with the domain experts in their native language, while the developers communicate with each other in English, and most importantly, they write code using the English translation of the UL.
    I'm sure I don't want to go for the first option, and I think option 3 is much better than option 2. What do you think? Am I missing other options?
    UPDATE: Today, about a year later, having dealt with this issue on a daily basis, I have to say that option 3 has worked out pretty well for me. It wasn't as tedious as I initially feared, and translating in real time while talking to the client wasn't a problem either. I also found the following advantages to be true, based on my experience. Translating the UL makes you pay more attention to defining the UL and even the domain itself, especially when you don't know how to translate a term and have to start looking through dictionaries, etc. This has even caused me to reconsider domain modeling decisions a few times. It also deepens your knowledge of the English language. Obviously, your code is much more pleasant to look at instead of being a mind-boggling obscenity.

    Read the article

< Previous Page | 67 68 69 70 71 72 73 74 75 76 77 78  | Next Page >