Search Results

Search found 3061 results on 123 pages for 'interfaces'.

Page 39/123

  • Notes on Oracle BPM PS6 Adaptive Case Management

    - by gcolman
    I have recently been looking at the latest release of the BPM Case Management feature in the Oracle BPM PS6 release. I put together some notes to help me gain a better understanding of the context of the PS6 BPM Case Management, and hopefully these, along with the other resources, will enable you to gain a clear picture of the flexibility of this feature.

    Oracle BPM PS6 includes Case Management capability. This initial release aims to provide a Case Management framework and integration of Case Management with the BPM & SOA suite. It is best to regard the current PS6 case management feature as a case management framework: it provides the building blocks for creating a case management system that is fully integrated into the Oracle BPM suite. As of the current PS6 release, no UI tooling exists to help manage cases or the case lifecycle. Mark Foster has written a good blog post which outlines Case Management within PS6; I wanted to provide more context on Case Management from my perspective in this post.

    PS6 Case Management - High Level View. BPM PS6 includes "Case" as a first-class component in a SOA Suite composite. The Case components (added to the SOA composite) are created when a BPM process is assigned to a case in JDeveloper. The SOA Case component is defined and configured within JDeveloper, which allows us to specify the case data structures and metadata such as stakeholders, outcomes, milestones, document stores, etc. "Activities" are associated with a case and become available to be executed via the case APIs; activities are BPM processes, Human Activities, or Java call-outs. The PS6 release includes some additional database tables to store the case metadata and case instance data (data objects, comments, etc.). These new tables are created within the SOA_INFRA schema, and the documents associated with a case are stored in a document repository that is configured with the case.

    One of the main features of Case Management is the control of the case logic through case events and case business rules. A PS6 Case has an associated business rules component, which can be configured to control the availability and execution of activities within the case. The business rules component is able to act upon events that the PS6 Case Management framework generates during the lifecycle of a case. Events are fired during the lifetime of the case (e.g. case created, activity started, activity ended, note added, document uploaded).

    Internal Case State. The internal state of a case is represented by the diagram below, which shows the internal states and the transition paths for a Case from one state to the next. Each transition in state will create an event that can be acted upon via the Case rules engine.

    Defining a Case. A Case is created and defined as a component of a JDeveloper BPM project. When you create a Case as part of a BPM project, JDeveloper creates the following components within the SCA composite: the Case component, the Case component interfaces (WSDL etc.), and the Case Rules component (Oracle Business Rules); it then adds the Case component and Case Rules component to the BPM SOA composite.

    Case Configuration. The following section gives a high-level overview of the items that can be configured for a BPM Case.
    Case Activities. A Case is associated with a set of activities that are to be performed as part of that Case. Case activities can be SOA human tasks, BPM processes, or custom tasks (Java classes). Case activities are created from pre-existing BPM processes or human tasks which, once defined, can additionally be configured as Case activities in JDeveloper and made available within the lifecycle of a case. I've described the following configurable components of a case (very!) briefly:

    Milestones: optional user-defined logical milestones that can be achieved within a case. No activities are associated with a milestone, but milestone attainment can be set programmatically and events raised when milestones are reached.

    Outcomes: user-defined statuses for a completed case. An event is fired when an outcome is attained.

    Case Data: defines the data that will be stored with a case; XML schemas define that data.

    Case Documents: defines the location of documents that are attached to a case (e.g. WebCenter Content).

    User Defined Events: optional user-defined events that can be fired or captured to drive case processing rules.

    Stakeholders: defines the actors who can participate in the case (roles, users, groups) and the permissions for individual case actions (read case, create document, etc.).

    Business Rules: the main component controlling the flow of a Case. Each case has an associated business ruleset, and rules are fired on receiving case events (or user-defined events): lifecycle events, milestone events, activity events, data events, document events, comment events, and user events.

    Managing the Case. Managing the lifecycle of a case is achieved in two ways: managing case logic with business rules, and managing the case lifecycle via the Case APIs. A BPM Case can be viewed as a set of case data and documents, along with the activities that can be performed within a case, and the case lifecycle state expressed as milestones and internal lifecycle state. The management of the case lifecycle is achieved through both the configuration of business rules and "manual" interaction with a case instance through the Case APIs.

    Business Rules and Case Events. A key component within the Case Management framework is the event model. The BPM Case Management solution internally utilizes Oracle EDN (Event Delivery Network) to publish and subscribe to events generated by the Case framework. Events are generated by the Case framework at each of the stages that a case instance passes through in its lifetime: lifecycle events, milestone events, activity events, data events, document events, comment events, and user events. The Case business rules are configured to listen for these events, and business logic can be coded into the Case rules component to act upon an event being received.

    Case API & Interaction. Along with the business rules component, Cases can be managed via the Case API interfaces. These interfaces allow for the building of custom applications that integrate into the Case Management framework. The APIs allow for updating case comments and documents, executing case activities, updating milestones, etc. As there are no built-in case management UI functions within the PS6 release, cases need to be managed via a custom-built UI: interacting with selected case instances, launching case activities, closing cases, and so on.
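    To make the API discussion concrete, here is a minimal sketch of how a custom client might drive a case. This is illustrative pseudo-Java: the CaseService/CaseInstance names and their methods are hypothetical stand-ins, not the actual PS6 classes.

    // Hypothetical client-side view of the PS6 Case APIs -- these interface
    // and method names are illustrative stand-ins, not Oracle's classes.
    interface CaseService {
        CaseInstance getCase(String caseId);
    }

    interface CaseInstance {
        void addComment(String text);
        void attachDocument(String path);
        void executeActivity(String activityName);  // e.g. a BPM process activity
        void attainMilestone(String milestoneName); // raises a milestone event
        void close(String outcome);                 // sets a user-defined outcome
    }

    // A custom case-work UI or service would then drive a case like this:
    class CaseClientSketch {
        static void progressCase(CaseService caseService) {
            CaseInstance myCase = caseService.getCase("CASE-1001");
            myCase.addComment("Customer documents received");
            myCase.attachDocument("/docs/claim-form.pdf");
            myCase.executeActivity("ReviewClaim");
            myCase.attainMilestone("InitialReview");
            // Business rules subscribed to the resulting events may enable
            // further activities; eventually the case is closed with an outcome.
            myCase.close("Approved");
        }
    }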
    (There is expected to be a UI component within subsequent releases.)

    Logical Case Flow. The diagram below is intended to depict a logical view of the case steps for a typical case: a UI or other service calls the Case interface to create a Case instance; the case instance is created and the database data inserted; a lifecycle (case created) event is raised; the case business rules capture the event and decide on an action to take (additionally, other parties can subscribe to Case events via EDN); the business rules may handle the event, e.g. they may be configured to execute a case activity on a case-creation event; the BPM/Human Workflow/custom activity is executed; and a case activity event is raised on executing the activity. A case-work UI or business service can inspect the case instance and call other actions to progress the case, such as: execute activity, add note, add document, add case data, update milestone, raise a user-defined event, suspend case, resume case, close case.

    Summary. Having had a little time to play around with the APIs and the case configuration, I really like the flexibility and power of combining Oracle Business Rules and the BPM Case Management event model. Creating something this flexible and powerful without BPM Case Management would take a lot of time and effort. This is hopefully going to save my customers a lot of time and effort! I may make amendments to this post as my understanding of Case Management increases. Take a look at the following links for the official documentation etc.:
    http://docs.oracle.com/cd/E28280_01/doc.1111/e15176/case_mgmt_bpmpd.htm
    https://blogs.oracle.com/bpm/entry/just_in_case

    Read the article

  • Organizations &amp; Architecture UNISA Studies &ndash; Chap 7

    - by MarkPearl
    Learning Outcomes: name different device categories; discuss the functions and structure of I/O modules; describe the principles of programmed I/O, interrupt-driven I/O, and DMA; discuss the evolution characteristics of I/O channels; describe different types of I/O interface; explain the principles of point-to-point and multipoint configurations; discuss the way in which a FireWire serial bus functions; discuss the principles of InfiniBand architecture.

    External Devices. An external device attaches to the computer by a link to an I/O module. The link is used to exchange control, status, and data between the I/O module and the external device. External devices can be classified into 3 categories: human readable (e.g. video display), machine readable (e.g. magnetic disk), and communications (e.g. wifi card).

    I/O Modules. An I/O module has two major functions: an interface to the processor and memory via the system bus or central switch, and an interface to one or more peripheral devices by tailored data links.

    Module Functions. The major functions or requirements for an I/O module fall into the following categories: control and timing, processor communication, device communication, data buffering, and error detection. The I/O function includes a control and timing requirement, to coordinate the flow of traffic between internal resources and external devices. Processor communication involves command decoding, data, status reporting, and address recognition. The I/O module must also be able to perform device communication; this communication involves commands, status information, and data. An essential task of an I/O module is data buffering, due to the relatively slow speeds of most external devices. An I/O module is often responsible for error detection and for subsequently reporting errors to the processor.

    I/O Module Structure. An I/O module functions to allow the processor to view a wide range of devices in a simple-minded way. The I/O module may hide the details of timing, formats, and the electromechanics of an external device so that the processor can function in terms of simple read and write commands. An I/O channel/processor is an I/O module that takes on most of the detailed processing burden, presenting a high-level interface to the processor. Three techniques are possible for I/O operations: programmed I/O, interrupt-driven I/O, and direct memory access (DMA).

    Programmed I/O. When a processor is executing a program and encounters an instruction relating to I/O, it executes that instruction by issuing a command to the appropriate I/O module. With programmed I/O, the I/O module will perform the requested action and then set the appropriate bits in the I/O status register; it takes no further action to alert the processor.

    I/O Commands. To execute an I/O-related instruction, the processor issues an address, specifying the particular I/O module and external device, and an I/O command.
    There are four types of I/O commands that an I/O module may receive when it is addressed by a processor: control (used to activate a peripheral and tell it what to do), test (used to test various status conditions associated with an I/O module and its peripherals), read (causes the I/O module to obtain an item of data from the peripheral and place it in an internal buffer), and write (causes the I/O module to take an item of data from the data bus and subsequently transmit that data item to the peripheral). The main disadvantage of this technique is that it is a time-consuming process that keeps the processor busy needlessly.

    I/O Instructions. With programmed I/O there is a close correspondence between the I/O-related instructions that the processor fetches from memory and the I/O commands that the processor issues to an I/O module to execute the instructions. Typically there will be many I/O devices connected through I/O modules to the system. Each device is given a unique identifier or address; when the processor issues an I/O command, the command contains the address of the desired device, so each I/O module must interpret the address lines to determine if the command is for itself. When the processor, main memory and I/O share a common bus, two modes of addressing are possible: memory-mapped I/O and isolated I/O (for a detailed explanation read page 245 of the book). The advantage of memory-mapped I/O over isolated I/O is that the large repertoire of memory-access instructions can be used, allowing more efficient programming. The disadvantage is that valuable memory address space is used up.

    Interrupt-Driven I/O. Interrupt-driven I/O works as follows: the processor issues an I/O command to a module and then goes on to do some other useful work; the I/O module then interrupts the processor to request service when it is ready to exchange data with the processor; the processor executes the data transfer and then resumes its former processing.

    Interrupt Processing. The occurrence of an interrupt triggers a number of events, both in the processor hardware and in software. When an I/O device completes an I/O operation, the following sequence of hardware events occurs: the device issues an interrupt signal to the processor; the processor finishes execution of the current instruction before responding to the interrupt; the processor tests for an interrupt, determines that there is one, and sends an acknowledgement signal to the device that issued the interrupt (the acknowledgement allows the device to remove its interrupt signal); the processor prepares to transfer control to the interrupt routine by saving the information needed to resume the current program at the point of interrupt, the minimum being the status of the processor and the location of the next instruction to be executed; the processor loads the program counter with the entry location of the interrupt-handling program that will respond to this interrupt, and also saves the values of the processor registers because the interrupt handler may modify them; the interrupt handler processes the interrupt, which includes examination of status information relating to the I/O operation or other event that caused the interrupt; when interrupt processing is complete, the saved register values are retrieved from the stack and restored to the registers; finally, the PSW and program counter values from the stack are restored.
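    To make the contrast concrete, here is a minimal C sketch of programmed (polled) I/O against a memory-mapped device. The register addresses and status bit are invented for illustration; real values come from a device's datasheet.

    #include <stdint.h>

    /* Hypothetical memory-mapped device registers; the addresses and bit
     * layout below are invented for illustration only. */
    #define DEV_STATUS   (*(volatile uint8_t *)0x4000F000u)
    #define DEV_DATA     (*(volatile uint8_t *)0x4000F004u)
    #define STATUS_READY 0x01u   /* device has a byte available */

    /* Programmed I/O: the processor busy-waits on the status register,
     * then reads the data register -- it is tied up for the whole transfer. */
    uint8_t read_byte_polled(void)
    {
        while ((DEV_STATUS & STATUS_READY) == 0) {
            /* spin: this is exactly the "processor busy needlessly" cost */
        }
        return DEV_DATA;
    }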
    Design Issues. Two design issues arise in implementing interrupt-driven I/O: because there will be multiple I/O modules, how does the processor determine which device issued the interrupt? And if multiple interrupts have occurred, how does the processor decide which one to process? Addressing device recognition, four general categories of techniques are in common use: multiple interrupt lines, software poll, daisy chain, and bus arbitration. For a detailed explanation of these approaches read page 250 of the textbook. Interrupt-driven I/O, while more efficient than simple programmed I/O, still requires the active intervention of the processor to transfer data between memory and an I/O module, and any data transfer must traverse a path through the processor. Thus it suffers from two inherent drawbacks: the I/O transfer rate is limited by the speed with which the processor can test and service a device, and the processor is tied up in managing an I/O transfer (a number of instructions must be executed for each I/O transfer).

    Direct Memory Access. When large volumes of data are to be moved, a more efficient technique is direct memory access (DMA).

    DMA Function. DMA involves an additional module on the system bus. The DMA module is capable of mimicking the processor and taking over control of the system from the processor; it needs to do this to transfer data to and from memory over the system bus. DMA must use the bus only when the processor does not need it, or it must force the processor to suspend operation temporarily (the most common approach, referred to as cycle stealing). When the processor wishes to read or write a block of data, it issues a command to the DMA module by sending the following information: whether a read or write is requested, using the read or write control line between the processor and the DMA module; the address of the I/O device involved, communicated on the data lines; the starting location in memory to read from or write to, communicated on the data lines and stored by the DMA module in its address register; and the number of words to be read or written, again communicated via the data lines and stored in the data count register. The processor then continues with other work; it has delegated the I/O operation to the DMA module, which transfers the entire block of data, one word at a time, directly to or from memory without going through the processor. When the transfer is complete, the DMA module sends an interrupt signal to the processor, so the processor is involved only at the beginning and end of the transfer.

    I/O Channels and Processors. Characteristics of I/O Channels: as one proceeds along the evolutionary path, more and more of the I/O function is performed without CPU involvement. The I/O channel represents an extension of the DMA concept. An I/O channel has the ability to execute I/O instructions, which gives it complete control over I/O operations. In a computer system with such devices, the CPU does not execute I/O instructions; such instructions are stored in main memory to be executed by a special-purpose processor in the I/O channel itself. Two types of I/O channels are common: a selector channel controls multiple high-speed devices, while a multiplexor channel can handle I/O with multiple devices at the same time, interleaving the transfers.
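    As a rough illustration of the command sequence just described, here is a C sketch of issuing a block read to a DMA controller. The register map is hypothetical, but the four writes correspond to the direction, device, memory address, and word count handed to the DMA module.

    #include <stdint.h>

    /* Hypothetical DMA controller registers -- invented addresses for
     * illustration; a real controller's register map comes from its manual. */
    #define DMA_CTRL   (*(volatile uint32_t *)0x40010000u)  /* direction, start */
    #define DMA_DEV    (*(volatile uint32_t *)0x40010004u)  /* I/O device id   */
    #define DMA_ADDR   (*(volatile uint32_t *)0x40010008u)  /* memory address  */
    #define DMA_COUNT  (*(volatile uint32_t *)0x4001000Cu)  /* word count      */
    #define CTRL_READ   0x1u
    #define CTRL_START  0x2u

    /* Issue a block-read command to the DMA module, then return immediately.
     * The processor is free until the DMA module raises its completion
     * interrupt -- it is involved only at the start and end of the transfer. */
    void dma_read_block(uint32_t device, void *dest, uint32_t words)
    {
        DMA_DEV   = device;                     /* which peripheral to read  */
        DMA_ADDR  = (uint32_t)(uintptr_t)dest;  /* where in memory to write  */
        DMA_COUNT = words;                      /* how many words to move    */
        DMA_CTRL  = CTRL_READ | CTRL_START;     /* kick off the transfer     */
        /* ... the processor continues with other work; a DMA-complete
         * interrupt handler (not shown) fires when the count reaches zero. */
    }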
    The External Interface: FireWire and InfiniBand. Types of Interfaces: one major characteristic of an interface is whether it is serial or parallel. With a parallel interface there are multiple lines connecting the I/O module and the peripheral, and multiple bits are transferred simultaneously; with a serial interface there is only one line used to transmit data, and bits must be transmitted one at a time. With the new generation of serial interfaces, parallel interfaces are becoming less common. In either case, the I/O module must engage in a dialogue with the peripheral. In general terms the dialogue looks as follows: the I/O module sends a control signal requesting permission to send data; the peripheral acknowledges the request; the I/O module transfers the data; and the peripheral acknowledges receipt of the data. For a detailed explanation of FireWire and InfiniBand technology read pages 264-270 of the textbook.

    Read the article

  • alert(line) alerts 'ac' and typeof(line) is 'string', but charAt is not a function

    - by Delirium tremens
    alert(line) alerts 'ac'; typeof(line) is 'string'. When I run line.charAt(0), charAt is not a function. When line is 'http://www.google.com/', it works. I think it's the UTF-8 encoding of the file that I opened... How do I make charAt work with UTF-8?

    UPDATED: http://mxr.mozilla.org/mozilla-central/source/netwerk/dns/src/effective_tld_names.dat?raw=1 is in my extension's chrome folder as effective_tld_names.dat.

    To run the code:

    authority = 'orkut.com.br';
    lines = sc_geteffectivetldnames();
    lines = sc_preparetouse(lines);
    domainname = sc_extractdomainname(authority, lines);

    The code:

    function sc_geteffectivetldnames () {
        var MY_ID = "[email protected]";
        var em = Components.classes["@mozilla.org/extensions/manager;1"].
            getService(Components.interfaces.nsIExtensionManager);
        var file = em.getInstallLocation(MY_ID).getItemFile(MY_ID, "chrome/effective_tld_names.dat");
        var istream = Components.classes["@mozilla.org/network/file-input-stream;1"].
            createInstance(Components.interfaces.nsIFileInputStream);
        istream.init(file, 0x01, 0444, 0);
        istream.QueryInterface(Components.interfaces.nsILineInputStream);
        var line = {}, lines = [], hasmore;
        do {
            hasmore = istream.readLine(line);
            lines.push(line.value);
        } while(hasmore);
        istream.close();
        return lines;
    }

    function sc_preparetouse(lines) {
        lines = sc_notcomment(lines);
        lines = sc_notempty(lines);
        return lines;
    }

    function sc_notcomment(lines) {
        var line;
        var commentre;
        var matchedcomment;
        var replacedlines;
        replacedlines = new Array();
        var i = 0;
        while (i < lines.length) {
            line = lines[i];
            commentre = new RegExp("^//", 'i');
            matchedcomment = line.match(commentre);
            if (matchedcomment) {
                lines.splice(i, 1);
            } else {
                i++;
            }
        }
        return lines;
    }

    function sc_notempty(lines) {
        var line;
        var emptyre;
        var matchedempty;
        var replacedlines;
        replacedlines = new Array();
        var i = 0;
        while (i < lines.length) {
            line = lines[i];
            emptyre = new RegExp("^$", 'i');
            matchedempty = line.match(emptyre);
            if (matchedempty) {
                lines.splice(i, 1);
            } else {
                i++;
            }
        }
        return lines;
    }

    function sc_extractdomainname(authority, lines) {
        for (var i = 0; i < lines.length; i++) {
            line = lines[i];
            alert(line);
            alert(typeof(line));
            if (line.chatAt(0) == '*') {
                alert('test1');
                continue;
            }
            if (line.chatAt(0) == '!') {
                alert('test2');
                line.chatAt(0) = '';
            }
            alert('test3');
            checkline = sc_checknotasteriskline(authority, line);
            if (checkline) {
                domainname = checkline;
            }
        }
        if (!domainname) {
            for (var i = 0; i < lines.length; i++) {
                line = lines[i];
                alert(line);
                if (line.chatAt(0) != '*') {
                    alert('test4');
                    continue;
                }
                if (line.chatAt(0) == '!') {
                    alert('test5');
                    line.chatAt(0) = '';
                }
                alert('test6');
                checkline = sc_checkasteriskline(authority, line);
                if (checkline) {
                    domainname = checkline;
                }
            }
        }
        return domainname;
    }

    It alerts 'ac', then 'string', then nothing.

    Read the article

  • Ubuntu 9.10 and Squid 2.7 Transparent Proxy TCP_DENIED

    - by user298814
    Hi, we've spent the last two days trying to get Squid 2.7 to work with Ubuntu 9.10. The computer running Ubuntu has two network interfaces, eth0 and eth1, with dhcp running on eth1. Both interfaces have static IPs; eth0 is connected to the Internet and eth1 is connected to our LAN. We have followed literally dozens of different tutorials with no success. The tutorial here was the last one we did that actually got us some sort of results: http://www.basicconfig.com/linuxnetwork/setup_ubuntu_squid_proxy_server_beginner_guide. When we try to access a site like seriouswheels.com from the LAN we get the following message on the client machine:

    ERROR: The requested URL could not be retrieved.
    An Invalid Request error was encountered while trying to process the request:
    GET / HTTP/1.1
    Host: www.seriouswheels.com
    Connection: keep-alive
    User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/532.9 (KHTML, like Gecko) Chrome/5.0.307.11 Safari/532.9
    Cache-Control: max-age=0
    Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,/;q=0.5
    Accept-Encoding: gzip,deflate,sdch
    Cookie: __utmz=88947353.1269218405.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none); __qca=P0-1052556952-1269218405250; __utma=88947353.1027590811.1269218405.1269218405.1269218405.1; __qseg=Q_D
    Accept-Language: en-US,en;q=0.8
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    Some possible problems are: missing or unknown request method; missing URL; missing HTTP identifier (HTTP/1.0); request is too large; Content-Length missing for POST or PUT requests; illegal character in hostname (underscores are not allowed). Your cache administrator is webmaster.

    Below are all the configuration files: /etc/squid/squid.conf, /etc/network/if-up.d/00-firewall, /etc/network/interfaces, /var/log/squid/access.log. Something somewhere is wrong but we cannot figure out where. Our end goal for all of this is to superimpose content onto every page that a client requests on the LAN. We've been told that Squid is the way to do this, but at this point in the game we are just trying to get Squid set up correctly as our proxy. Thanks in advance.

    squid.conf

    acl all src all
    acl manager proto cache_object
    acl localhost src 127.0.0.1/32
    acl to_localhost dst 127.0.0.0/8
    acl localnet src 192.168.0.0/24
    acl SSL_ports port 443          # https
    acl SSL_ports port 563          # snews
    acl SSL_ports port 873          # rsync
    acl Safe_ports port 80          # http
    acl Safe_ports port 21          # ftp
    acl Safe_ports port 443         # https
    acl Safe_ports port 70          # gopher
    acl Safe_ports port 210         # wais
    acl Safe_ports port 1025-65535  # unregistered ports
    acl Safe_ports port 280         # http-mgmt
    acl Safe_ports port 488         # gss-http
    acl Safe_ports port 591         # filemaker
    acl Safe_ports port 777         # multiling http
    acl Safe_ports port 631         # cups
    acl Safe_ports port 873         # rsync
    acl Safe_ports port 901         # SWAT
    acl purge method PURGE
    acl CONNECT method CONNECT
    http_access allow manager localhost
    http_access deny manager
    http_access allow purge localhost
    http_access deny purge
    http_access deny !Safe_ports
    http_access deny CONNECT !SSL_ports
    http_access allow localhost
    http_access allow localnet
    http_access deny all
    icp_access allow localnet
    icp_access deny all
    http_port 3128
    hierarchy_stoplist cgi-bin ?
    cache_dir ufs /var/spool/squid/cache1 1000 16 256
    access_log /var/log/squid/access.log squid
    refresh_pattern ^ftp: 1440 20% 10080
    refresh_pattern ^gopher: 1440 0% 1440
    refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
    refresh_pattern (Release|Package(.gz)*)$ 0 20% 2880
    refresh_pattern . 0 20% 4320
    acl shoutcast rep_header X-HTTP09-First-Line ^ICY.[0-9]
    upgrade_http0.9 deny shoutcast
    acl apache rep_header Server ^Apache
    broken_vary_encoding allow apache
    extension_methods REPORT MERGE MKACTIVITY CHECKOUT
    cache_mgr webmaster
    cache_effective_user proxy
    cache_effective_group proxy
    hosts_file /etc/hosts
    coredump_dir /var/spool/squid

    access.log

    1269243042.740      0 192.168.1.11 TCP_DENIED/400 2576 GET NONE:// - NONE/- text/html

    00-firewall

    iptables -F
    iptables -t nat -F
    iptables -t mangle -F
    iptables -X
    echo 1 | tee /proc/sys/net/ipv4/ip_forward
    iptables -t nat -A POSTROUTING -j MASQUERADE
    iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3128

    networking

    auto lo
    iface lo inet loopback

    auto eth0
    iface eth0 inet static
        address 142.104.109.179
        netmask 255.255.224.0
        gateway 142.104.127.254

    auto eth1
    iface eth1 inet static
        address 192.168.1.100
        netmask 255.255.255.0

    Read the article

  • If 'Architect' is a dirty word - what's the alternative; when not everyone can actually design a goo

    - by Andras Zoltan
    Now - I'm a developer first and foremost; but whenever I sit down to work on a big project with lots of interlinking components and areas, I will forward-plan my interfaces, base classes etc. as best I can - putting on my Architect hat. For a few weeks I've been doing this for a huge project - designing whole swathes of interfaces etc. for a business-wide platform that we're developing. The basic structure is a couple of big projects that consist of service and data interfaces, with some basic implementations of all of these. On their own, these assemblies are useless though, as they are simply intended as a scaffold on which to build a business-specific implementation (we have a lot of businesses). Therefore, the design of the core platform is absolutely crucial, since consumers of the system are not intended to know which implementation they are actually using. In the past it's not worked so well, but after a few proofs of concept and R&D projects this new platform is now growing nicely and is already proving itself.

    Then somebody else gets involved in the project - he's a TDD man who sees code-level architecture as an irrelevance and is definitely from the camp that 'architect' is a dirty word - I should add that our working relationship is very good despite this :) He's open about the fact that he can't architect in advance, and obviously TDD really helps him because it allows him to evolve his systems over time. That I get, and totally understand; but it means that his coding style, basically, doesn't seem to be able to honour the architecture that I've been putting in place.

    Now don't get me wrong - he's an awesome coder; but the other day he needed to extend one of his components (an implementation of a core interface) to bring in an extra implementation-specific dependency, and in doing so he extended the core interface as well as his implementation (he uses ReSharper), thus breaking the independence of the whole interface. When I pointed out his error to him, he was dismayed. Being test-first, all that mattered to him was that he'd made his tests pass, and he just said 'well, I need that dependency, so can't we put it in?'. Of course we could put it in, but I was frustrated that he couldn't see that refactoring the generic interface to incorporate an implementation-specific feature was just wrong! But it is all very Charlie Brown to him (you know the sound the adults make when they're talking to the children) - as far as he's concerned we don't need to worry about it because we can always refactor.

    The problem is, the culture of test-write-refactor is all very well and good - but not when you're dealing with a platform that is going to be shared out among so many projects that you could never get them all in one place to make the refactorings work. In my opinion, sometimes you actually have to think about what you're doing, and not just let nature take its course. Am I simply fulfilling the role of Architect as a dirty word here? I believe that architecture is important and should be thought about before code gets written, unless it's a particularly small project. But when you're working in a team of people who don't think that way, or even can't think that way, how can you actually get this across? Is it a case of simply making the architecture off-limits to changes by other people? I don't want to start having bloody committees just to be able to grow the system; but equally I don't want to be the only one responsible for it all.
    Do you think the architect role is a waste of time? Is it at odds with TDD and other practices? Can this mix of different practices be made to work, or should I just be a lot less precious (and in so doing allow a generic platform to become useless!)? Or do I just lay down the law? Any ideas/experiences/views gratefully received.

    Read the article

  • Craig Mundie's video

    - by GGBlogger
    Timothy recently posted "Microsoft Shows Off Radical New UI, Could Be Used In Windows 8" on Slashdot. I took such grave exception to his post that I found it necessary to write this blog. We need to go back many years to the days of hand-cranked calculators and early mainframe computers. These devices had singular purposes: they were "number crunchers" used to make accounting easier. The front-facing display in early mainframes was "blinken lights." The calculators did provide printing, in the form of paper tape, and the mainframes used line printers to generate reports as needed. We had other metaphors to work with. The typewriter was/is a mechanical device that substitutes for a typesetting machine; the originals go back to 1867, and the keyboard layout has remained much the same to this day. In earlier years the Morse code telegraphs gave way to Teletype machines. The old ASR33, seen on the left in this photo of one of the first computers I helped manufacture, used a keyboard very similar to the keyboards in use today. It also produced the punched paper tape that we used to program this computer in machine language. Everything considered, this computer, which dates back to the late 1960s, has a keyboard for input and a roll of paper as output. So in a very rudimentary fashion little has changed. Oh - we didn't have a mouse!

    The entire point of this exercise is to point out that we still use very similar methods to get data into and out of a computer regardless of the operating system involved. The Altair, IMSAI, Apple, Commodore and onward to our modern machines changed the hardware that we interfaced to, but changed little in the way we input, view and output the results of our computing effort. The mouse made some changes, and the advent of windowed interfaces such as Windows and Apple's made things somewhat easier for the user. My 4-year-old granddaughter plays her Dora games on our computer. She knows how to start programs, use the mouse and play the game, and is quite adept, so we have come some distance in making computers usable.

    One of my chief bitches is the constant harangues leveled at Microsoft. Yup - they are a money-making organization. You like Apple? No problem for me. I don't use Apple, mostly because I'm comfortable in the Windows environment, but probably more because I don't like Apple's "holier than thou" attitude. Some think they do superior things and that's also fine with me. Obviously the iPhone has not done badly and other Apple products have fared well. But they are expensive. I just built a new machine with 4 terabytes of storage, an Intel Core i7 950 processor and 12 GB of RAM. It cost me, with dual monitors, less than 2000 dollars.

    Now to the chief reason for this blog. I'm going to continue developing software for as long as I'm able. For that reason I don't see my keyboard, mouse and displays changing much for many years. I also don't think Microsoft is going to spoil that for me by making radical changes to my developer experience. What Craig Mundie does in his video here, http://www.ispyce.com/2011/02/microsoft-shows-off-radical-new-ui.html, is explore the potential future of computer interfaces for the masses of potential users. Using a computer today requires a person to have rudimentary capabilities with keyboards and the mouse. Wouldn't it be great if all they needed was hand gestures? Although not mentioned, it would also be nice if computers responded intelligently to a user's voice.
    There is absolutely no argument with the fact that user interaction with these machines is going to change over time. My personal prediction is that it will take years for much of what Craig discusses to come to a cost-effective reality, but it is certainly coming. I just don't believe that what Craig discusses will be the future look of Windows 8.

    Read the article

  • Silverlight Cream for January 16, 2011 -- #1029

    - by Dave Campbell
    In this Issue: Michael Washington, Jesse Liberty, Deborah Kurata(-2-, -3-, -4-), Sergey Barskiy(-2-), Miroslav Nedyalkov, Jeff Prosise, and Matthias Shapiro(-2-). Above the Fold: Silverlight: "Building a Multi-Page Silverlight LOB Application" Deborah Kurata WP7: "Windows Phone 7 [Controls] Project" Sergey Barskiy Sketchflow: "Sketchflow To Final" Michael Washington From SilverlightCream.com: Sketchflow To Final Check out this post by Michael Washington detailing the Sketchflow he did of his app, and how the final result tracks amazingly well. Windows Phone From Scratch #19 – MVVM Light Toolkit Soup To Nuts #4 Continuing to try to catch up to Jesse Liberty is this post, number 19 in the Windows Phone series and the 4th in that series about MVVMLight, and discussing binding a collection in the ViewModel to a ListBox in the view. Building a Multi-Page Silverlight LOB Application Deborah Kurata has the first 4 parts up (in 2 days) in a 6-part tutorial series she's doing on building a Silverlight LOB app. The first post was an intro and link to the rest as they become available. This 2nd post is getting the app newed up and making sure you've got your head wrapped around multiple pages. Theming a Silverlight Application using Existing Themes Deborah Kurata's next part is about getting started with themes in your app using the themes provided in the toolkit specifically. Theming a Silverlight Application using Custom Themes Deborah Kurata's next tutorial in the series is also about themes, but this time it's about custom themes... or rather customized from a 'standard' one in this case. Adding a New Page to a Multi-Page Silverlight Application Deborah Kurata's last available post in the tutorial series is this one on adding a new page to the app. Windows Phone 7 Project Sergey Barskiy has a pair of posts up about a calendar control that he is building and has out on CodePlex... nice-looking control too! Windows Phone 7 Controls Project Update Sergey Barskiy's second post is an update to the calendar... the biggest update being the ability to use the Toolkit context menu. How to Create Ad Rotator with Telerik TransitionControl and CoverFlow control for Silverlight Miroslav Nedyalkov uses the Telerik TransitionControl and CoverFlow controls to produce a great-looking ad rotator using any ContentControl or ListBox... very nice demo on the page.... Building Touch Interfaces for Windows Phones, Part 2 Jeff Prosise has part 2 of his tutorial series on WP7 Touch Interfaces up... and he's processing touch events directly in this one. Fixing the ListPicker / ScrollViewer Problem in Windows Phone 7 Matthias Shapiro has a couple of posts out that I've missed... this one is on an issue with ListPickers in a ScrollViewer where the listpicker gets hit rather than the scroll, and of course he has a work-around... but you'll need the source for the ListPicker to do it. Embedding a Sound File in Windows Phone 7 app (Silverlight) The next post by Matthias Shapiro is an explanation of embedding a sound file in a WP7 app with 2 conditions: 1) it downloads with your app, and 2) it plays no matter what. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • .Net Reflector 6.5 EAP now available

    - by CliveT
    With the release of CLR 4 being so close, we've been working hard on getting the new C# and VB language features implemented inside Reflector. The work isn't complete yet, but we have some of the features working. Most importantly, there are going to be changes to the Reflector object model, and we thought it would be useful for people to see the changes and have an opportunity to comment on them. Before going any further, we should tell you what the EAP contains that's different from the released version.

    A number of bugs have been fixed, mainly bugs that were raised via the forum. This is slightly offset by the fact that this EAP hasn't had a whole lot of testing and there may have been new bugs introduced during the development work we've been doing.

    The C# language writer has been changed to display in and out co- and contra-variance markers on interfaces and delegates, and to display default values for optional parameters in method definitions. We also concisely display values passed by reference into COM calls. However, we do not change call sites to display calls using named parameters; this looks like hard work to get right.

    The forthcoming version of C# introduces dynamic types and dynamic calls. The new version of Reflector should display a dynamic call rather than the generated C#:

    dynamic target = new MyTestObject();
    target.Hello("Mum");

    We have a few bugs in this area where we are not casting to dynamic when necessary. These have been fixed on a branch and should make their way into the next EAP. To support the dynamic features, we've added the types IDynamicMethodReferenceExpression, IDynamicPropertyIndexerExpression, and IDynamicPropertyReferenceExpression to the object model. These types, based on the versions without "Dynamic" in the name, reflect the fact that we don't have full information about the method that is going to be called, but only have its name (as a string). These interfaces are going to change: in an internal version, they have been extended to include information about which parameter positions use runtime types and which use compile-time types. There's also an interface, IDynamicVariableDeclaration, that can be used to determine whether a particular variable is used as a target at dynamic call sites.

    A couple of these language changes have also been added to the Visual Basic language writer. The new features are exposed only when the optimization level is set to .NET 4. When the level is set this high, the other standard language writers will simply display a message to say that they do not handle such an optimization level. Reflector Pro now has 4.0 as an optional compilation target, and we have done some work to get the pdb generation right for these new features.

    The EAP version of Reflector no longer installs the add-in on startup. The first time you run the EAP, it displays the integration options dialog. You can use the checkboxes to select the versions of Visual Studio into which you want to install the EAP version. Note that you can only have one version of Reflector Pro installed in Visual Studio; if you install into a Visual Studio that has another version installed, the previous version will be removed.

    Please try it out and send your feedback to the EAP forum.
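    For context on why dynamic calls need dedicated object-model support: the C# 4 compiler lowers a dynamic call into DLR call-site plumbing, and decompiling that raw output is far less readable than the original statement. The sketch below shows roughly the shape of the code generated for target.Hello("Mum"); the exact binder flags and argument details vary, so treat it as an approximation, not the precise compiler output.

    // Approximate shape of what the C# 4 compiler generates for the
    // dynamic call target.Hello("Mum") -- details vary by compiler version.
    using System;
    using System.Runtime.CompilerServices;
    using Microsoft.CSharp.RuntimeBinder;

    static class LoweredDynamicCall
    {
        static CallSite<Action<CallSite, object, string>> helloSite;

        static void CallHello(object target)
        {
            if (helloSite == null)
            {
                helloSite = CallSite<Action<CallSite, object, string>>.Create(
                    Binder.InvokeMember(
                        CSharpBinderFlags.ResultDiscarded,
                        "Hello",
                        null,
                        typeof(LoweredDynamicCall),
                        new[]
                        {
                            CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.None, null),
                            CSharpArgumentInfo.Create(
                                CSharpArgumentInfoFlags.UseCompileTimeType |
                                CSharpArgumentInfoFlags.Constant, null)
                        }));
            }
            // A decompiler that understands call sites can fold all of this
            // back into the single statement: target.Hello("Mum");
            helloSite.Target(helloSite, target, "Mum");
        }
    }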

    Read the article

  • Building ATLAS (and later Octave w/ ATLAS)

    - by David Parks
    I'm trying to set up ATLAS (so I can later compile Octave with ATLAS support). If I'm correct, I still need to build this manually due to the environment-specific optimizations. I do see a package for ATLAS, but it looks like it's using the cross-platform generic build options (e.g. "it'll be slow"). So, running the configure script as described in the docs seems to go poorly. As a Java developer I never do well at making heads or tails of errors in these build processes. Am I missing dependencies (if so, is there any documentation on what I need)?

    allusers@vbubuntu:~/Downloads/atlas3.10.1/build_vbubuntu$ ../configure -b 64 -D c -DPentiumCPS=3000 --with-netlib-lapack-tarfile=/home/allusers/Downloads/lapack-3.5.0.tgz
    make: `xconfig' is up to date.
    ./xconfig -d s /home/allusers/Downloads/atlas3.10.1/build_vbubuntu/../ -d b /home/allusers/Downloads/atlas3.10.1/build_vbubuntu -b 64 -D c -DPentiumCPS=3000 -Si lapackref 1
    OS configured as Linux (1)
    Assembly configured as GAS_x8664 (2)
    Vector ISA Extension configured as SSE3 (6,448)
    ERROR: enum fam=3, chip=2, mach=0
    make[3]: *** [atlas_run] Error 44
    make[2]: *** [IRunArchInfo_x86] Error 2
    Architecture configured as Corei1 (25)
    ERROR: enum fam=3, chip=2, mach=0
    make[3]: *** [atlas_run] Error 44
    make[2]: *** [IRunArchInfo_x86] Error 2
    Clock rate configured as 2350Mhz
    ERROR: enum fam=3, chip=2, mach=0
    make[3]: *** [atlas_run] Error 44
    make[2]: *** [IRunArchInfo_x86] Error 2
    Maximum number of threads configured as 4
    Parallel make command configured as '$(MAKE) -j 4'
    ERROR: enum fam=3, chip=2, mach=0
    make[3]: *** [atlas_run] Error 44
    make[2]: *** [IRunArchInfo_x86] Error 2
    Cannot detect CPU throttling.
    rm -f config1.out
    make atlas_run atldir=/home/allusers/Downloads/atlas3.10.1/build_vbubuntu exe=xprobe_comp redir=config1.out \
    args="-v 0 -o atlconf.txt -O 1 -A 25 -Si nof77 0 -V 448 -b 64 -d b /home/allusers/Downloads/atlas3.10.1/build_vbubuntu"
    make[1]: Entering directory `/home/allusers/Downloads/atlas3.10.1/build_vbubuntu'
    cd /home/allusers/Downloads/atlas3.10.1/build_vbubuntu ; ./xprobe_comp -v 0 -o atlconf.txt -O 1 -A 25 -Si nof77 0 -V 448 -b 64 -d b /home/allusers/Downloads/atlas3.10.1/build_vbubuntu > config1.out
    make[2]: gfortran: Command not found
    make[2]: *** [IRunF77Comp] Error 127
    make[2]: g77: Command not found
    make[2]: *** [IRunF77Comp] Error 127
    make[2]: f77: Command not found
    make[2]: *** [IRunF77Comp] Error 127
    Unable to find usable compiler for F77; aborting
    Make sure compilers are in your path, and specify good compilers to configure (see INSTALL.txt or 'configure --help' for details)
    make[1]: *** [atlas_run] Error 8
    make[1]: Leaving directory `/home/allusers/Downloads/atlas3.10.1/build_vbubuntu'
    make: *** [IRun_comp] Error 2
    ERROR 512 IN SYSCMND: 'make IRun_comp args="-v 0 -o atlconf.txt -O 1 -A 25 -Si nof77 0 -V 448 -b 64"'
    mkdir src bin tune interfaces
    mkdir: cannot create directory ‘src’: File exists
    mkdir: cannot create directory ‘bin’: File exists
    mkdir: cannot create directory ‘tune’: File exists
    mkdir: cannot create directory ‘interfaces’: File exists
    make: *** [make_subdirs] Error 1
    make -f Make.top startup
    make[1]: Entering directory `/home/allusers/Downloads/atlas3.10.1/build_vbubuntu'
    Make.top:1: Make.inc: No such file or directory
    Make.top:325: warning: overriding commands for target `/AtlasTest'
    Make.top:76: warning: ignoring old commands for target `/AtlasTest'
    make[1]: *** No rule to make target `Make.inc'.  Stop.
    make[1]: Leaving directory `/home/allusers/Downloads/atlas3.10.1/build_vbubuntu'
    make: *** [startup] Error 2
    mv: cannot move ‘lapack-3.5.0’ to ‘../reference/lapack-3.5.0’: Directory not empty
    mv: cannot stat ‘lib/Makefile’: No such file or directory
    ../configure: 450: ../configure: cannot create lib/Makefile: Directory nonexistent
    ../configure: 451: ../configure: cannot create lib/Makefile: Directory nonexistent
    ../configure: 452: ../configure: cannot create lib/Makefile: Directory nonexistent
    ../configure: 453: ../configure: cannot create lib/Makefile: Directory nonexistent
    ../configure: 509: ../configure: cannot create lib/Makefile: Directory nonexistent
    DONE configure

    Read the article

  • 45 minutes to talk about C# [closed]

    - by Philip
    I have the opportunity to give a 45-minute talk on C# in the theory of programming languages class I'm taking. The college teaches Java almost exclusively, so that's what all the students are most familiar with. (There's a little C, assembly, Prolog and LISP as well.) I get to decide what to talk about. It seems to me the best approach is to focus on a few of the big, obvious differences between C# and Java. I don't intend it to be a recommendation to use C#; there are reasons to use each, mostly because of their ecosystems. So I want to focus on C# as a language. I don't want to go too fast and end up listing a whole bunch of features without showing their usefulness. My current plan is this:

    Functions as first-class objects. This is, in my opinion, one of the biggest differences between C# and Java. The professor briefly mentioned this notion and showed a LISP example, but many of the students have probably never used it. I can show real-world examples where it's made my code more readable.

    Lambda expressions as concise syntax for anonymous functions. Obviously with examples to show how this is useful. The real hit-home examples will come at the end when it's combined with the rest. I don't see an advantage to first showing the old delegate syntax and then replacing it with lambdas: most of us won't have ever seen delegates anyway, so it would just be confusing.

    The yield keyword and how it's different from returning an array. I have the impression that a lot of C# developers aren't familiar with how to use this, and it will likely be very foreign to Java developers. I have some examples from my own work where it was really useful, such as iterating over a tree traversal, or iterating over neighbors in a graph where the neighbors aren't stored in memory. In both cases, doing it in Java would likely mean returning a complete list; with yield I can stop iterating if I find what I want early on, without using memory for superfluous lists or arrays.

    Extension methods as a way to write implementation on interfaces. We'll all be familiar with how interfaces don't allow method implementation, and how this leads to code duplication. I'll show a specific example of this and how extension methods can solve the problem.

    Demonstrate how the above can be combined by implementing some simple Linq methods and using them: Where, Select, First, maybe more depending on how much time is left. Ideas on which ones might 'hit home' the best?

    There are other things I could talk about, such as generics, value types, properties and more. I haven't yet thought of good ways to incorporate these. In the case of generics and value types, the advantages might not be obvious or as relevant. Properties are obviously useful, particularly since we're taught strict JavaBeans here, but I don't know if I could integrate them with the "path to Linq" discussion above without it feeling tacked on.

    So I'm looking for thoughts on how to talk about C#, and what to talk about. Even minor details. I'm sure there are more experienced C# developers than me here who have good insight about what's really important in the language, and what would miss the point.
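    As an illustration of how the plan's pieces could combine in a final demo, here is a minimal, self-contained sketch. The example data and the MyWhere/MyFirst names are invented for the example; they are toy stand-ins for the real Linq operators.

    // Toy re-implementation of two Linq operators, combining the talk's themes:
    // extension methods, yield, lambdas, and functions as first-class values.
    using System;
    using System.Collections.Generic;

    static class MyLinq
    {
        // Extension method: adds behavior usable on *any* IEnumerable<T>
        // without touching the interface itself.
        public static IEnumerable<T> MyWhere<T>(this IEnumerable<T> source,
                                                Func<T, bool> predicate)
        {
            foreach (var item in source)
                if (predicate(item))
                    yield return item;   // lazy: produced one element at a time
        }

        public static T MyFirst<T>(this IEnumerable<T> source)
        {
            foreach (var item in source)
                return item;   // stops the pipeline early; no full list built
            throw new InvalidOperationException("Sequence contains no elements");
        }
    }

    class Demo
    {
        static void Main()
        {
            var names = new[] { "Ada", "Alan", "Grace", "Anders" };

            // A lambda passed as a first-class function value:
            string firstA = names.MyWhere(n => n.StartsWith("A")).MyFirst();

            Console.WriteLine(firstA);   // prints "Ada"
        }
    }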

    Read the article

  • The Raspberry Pi JavaFX In-Car System (Part 3)

    - by speakjava
    Having established communication between a laptop and the ELM327, it's now time to bring in the Raspberry Pi. One of the nice things about the Raspberry Pi is the simplicity of its power supply. All we need is 5V at about 700mA, which in a car is as simple as using a USB cigarette lighter adapter (which is handily rated at 1A). My car has two cigarette lighter sockets (despite being specified with the non-smoking package and therefore no actual cigarette lighter): one in the centre console and one in the rear load area. This was convenient, as my idea is to mount the Raspberry Pi in the back to minimise the disruption to the very clean design of the Audi interior.

    The first task was to get the Raspberry Pi to communicate using Wi-Fi with the ELM327. Initially I tried a cheap Wi-Fi dongle from Amazon, but I could not get this working with my home Wi-Fi network since it just would not handle the WPA security no matter what I did. I upgraded to a Wi Pi from Farnell and this works very well.

    The ELM327 uses ad-hoc networking, which is point-to-point communication. Rather than using a wireless router, each connecting device has its own assigned IP address (which needs to be on the same subnet) and uses the same ESSID. The settings of the ELM327 are fixed: an IP address of 192.168.0.10 and the ESSID "Wifi327". To configure Raspbian Linux to use these settings we need to modify the /etc/network/interfaces file. After some searching of the web and a few false starts, here are the settings I came up with:

    auto lo eth0 wlan0
    iface lo inet loopback
    iface eth0 inet static
        address 10.0.0.13
        gateway 10.0.0.254
        netmask 255.255.255.0
    iface wlan0 inet static
        address 192.168.0.1
        netmask 255.255.255.0
        wireless-essid Wifi327
        wireless-mode ad-hoc

    After rebooting, iwconfig wlan0 reported that the Wi-Fi settings were correct. However, ifconfig showed no assigned IP address. If I configured the IP address manually using ifconfig wlan0 192.168.0.1 netmask 255.255.255.0 then everything was fine and I was able to happily ping the IP address of the ELM327. I tried numerous variations on the interfaces file, but nothing I did would get me an IP address on wlan0 when the machine booted. Eventually I decided that this was a pointless thing to spend more time on, so I put a script in /etc/init.d and registered it with update-rc.d (a sketch of such a script appears at the end of this entry). All the script does (currently) is execute the ifconfig line, and now, having installed the telnet package, I am able to telnet to the ELM327 via the Raspberry Pi. Not nice, but it works.

    Here's a picture of the Raspberry Pi in the car for testing. In the next part we'll look at running the Java code on the Raspberry Pi to collect data from the car systems.
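    As mentioned above, the /etc/init.d workaround might look something like the sketch below. This is an illustrative guess at the script the post describes, not the author's actual file; the script name and the init metadata are assumptions.

    #!/bin/sh
    # /etc/init.d/wlan0-static -- hypothetical name for the workaround script:
    # assign the ad-hoc IP address at boot, since the interfaces file did not.
    ### BEGIN INIT INFO
    # Provides:          wlan0-static
    # Required-Start:    $network
    # Required-Stop:
    # Default-Start:     2 3 4 5
    # Default-Stop:
    # Short-Description: Static IP for the ELM327 ad-hoc link
    ### END INIT INFO

    case "$1" in
      start)
        ifconfig wlan0 192.168.0.1 netmask 255.255.255.0
        ;;
      stop)
        ifconfig wlan0 down
        ;;
    esac
    exit 0

    It would then be registered with something like: sudo update-rc.d wlan0-static defaults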

    Read the article

  • Randomly losing wireless connection with Cubuntu 12.04

    - by statquant
    I am presently experiencing random disconnections from my wireless network. It looks like it is happening more and more frequently (however, I have not seen any clear pattern). This is killing me... Here is some information that should help (from the Ubuntu forums). Thanks for reading.

    Machine: Acer Aspire S3

    statquant@euclide:~$ lsb_release -d
    Description: Ubuntu 12.04.1 LTS

    statquant@euclide:~$ uname -mr
    3.2.0-33-generic x86_64

    statquant@euclide:~$ sudo /etc/init.d/networking restart
     * Running /etc/init.d/networking restart is deprecated because it may not enable again some interfaces
     * Reconfiguring network interfaces...

    statquant@euclide:~$ lspci
    02:00.0 Network controller: Atheros Communications Inc. AR9485 Wireless Network Adapter (rev 01)

    statquant@euclide:~$ lsusb
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
    Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
    Bus 001 Device 004: ID 064e:c321 Suyin Corp.
    Bus 002 Device 003: ID 0bda:0129 Realtek Semiconductor Corp.

    statquant@euclide:~$ ifconfig
    wlan0     Link encap:Ethernet  HWaddr 74:de:2b:dd:c4:78
              inet addr:192.168.1.3  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe80::76de:2bff:fedd:c478/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:913 errors:0 dropped:0 overruns:0 frame:0
              TX packets:802 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:873218 (873.2 KB)  TX bytes:125826 (125.8 KB)

    statquant@euclide:~$ iwconfig
    wlan0     IEEE 802.11bgn  ESSID:"Bbox-D646D1"
              Mode:Managed  Frequency:2.437 GHz  Access Point: 00:19:70:80:01:6C
              Bit Rate=65 Mb/s   Tx-Power=16 dBm
              Retry long limit:7   RTS thr:off   Fragment thr:off
              Power Management:on
              Link Quality=56/70  Signal level=-54 dBm
              Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
              Tx excessive retries:0  Invalid misc:71   Missed beacon:0

    statquant@euclide:~$ dmesg | grep "wlan"
    [   17.495866] ADDRCONF(NETDEV_UP): wlan0: link is not ready
    [   17.498950] ADDRCONF(NETDEV_UP): wlan0: link is not ready
    [   20.072015] wlan0: authenticate with 00:19:70:80:01:6c (try 1)
    [   20.269853] wlan0: authenticate with 00:19:70:80:01:6c (try 2)
    [   20.272386] wlan0: authenticated
    [   20.298682] wlan0: associate with 00:19:70:80:01:6c (try 1)
    [   20.302321] wlan0: RX AssocResp from 00:19:70:80:01:6c (capab=0x431 status=0 aid=1)
    [   20.302325] wlan0: associated
    [   20.307307] ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready
    [   30.402292] wlan0: no IPv6 routers present

    statquant@euclide:~$ sudo lshw -C network
    [sudo] password for statquant:
      *-network
           description: Wireless interface
           product: AR9485 Wireless Network Adapter
           vendor: Atheros Communications Inc.
           physical id: 0
           bus info: pci@0000:02:00.0
           logical name: wlan0
           version: 01
           serial: 74:de:2b:dd:c4:78
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress bus_master cap_list rom ethernet physical wireless
           configuration: broadcast=yes driver=ath9k driverversion=3.2.0-33-generic firmware=N/A ip=192.168.1.3 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn
           resources: irq:17 memory:c0400000-c047ffff memory:afb00000-afb0ffff

    statquant@euclide:~$ iwlist scan
    wlan0     Scan completed :
              Cell 01 - Address: 00:19:70:80:01:6C
                        Channel:6
                        Frequency:2.437 GHz (Channel 6)
                        Quality=56/70  Signal level=-54 dBm
                        Encryption key:on
                        ESSID:"Bbox-D646D1"
                        Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s
                        Bit Rates:24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s
                        Mode:Master
                        Extra:tsf=000000125fb152bb
                        Extra: Last beacon: 40020ms ago
                        IE: Unknown: 000B42626F782D443634364431
                        IE: Unknown: 010882848B960C121824
                        IE: Unknown: 030106
                        IE: IEEE 802.11i/WPA2 Version 1
                            Group Cipher : TKIP
                            Pairwise Ciphers (2) : CCMP TKIP
                            Authentication Suites (1) : PSK
                        IE: WPA Version 1
                            Group Cipher : TKIP
                            Pairwise Ciphers (2) : CCMP TKIP
                            Authentication Suites (1) : PSK
                        IE: Unknown: 2A0100
                        IE: Unknown: 32043048606C
                        IE: Unknown: DD180050F2020101820003A4000027A4000042435E0062322F00
                        IE: Unknown: 2D1A4C101BFF00000000000000000000000000000000000000000000
                        IE: Unknown: 3D1606080800000000000000000000000000000000000000
                        IE: Unknown: DD0900037F01010000FF7F
                        IE: Unknown: DD0A00037F04010000000000

    And finally, please note that I tried the following (after looking at fixes for similar problems), but unfortunately it did not work:

    sudo modprobe -r iwlwifi
    sudo modprobe iwlwifi 11n_disable=1

    Read the article

  • ADF Mobile - Update through Web Service (with ADF Business Components)

    - by Shay Shmeltzer
    In my previous blog entry I went over the basics of exposing ADF Business Components through service interfaces, and developing a simple ADF Mobile application that accesses and fetches data from those services. In this entry we'll dive a bit deeper and address an update scenario through these web service interfaces. You can see the full demo video at the end of the post.
    In the first steps I show how to add an explicit method execution to fetch a specific record we want to update on the second page of a flow. For an update you'll be invoking a service method and passing the record you want to update as a parameter. As in many other Web services scenarios, we need to provide a complete object of a specific type to the method. The ADF Web service data control helps you here by offering an object of this type that you can drag and drop into your page. The next step is to make sure to fill that object with the values you want to update. In the demo we do this through code in a backing bean that shows how to use the AdfmfJavaUtilities utility. The code gets the value from one field, gets a pointer to the parallel update field, and then copies from one to the other. At the end of the bean we manually execute the call to the update method on the Web service.
    Here is the demo: [demo video embedded in the original post]
    Here is the code used in the backing bean in the demo above.

        package a.mobile;

        import javax.el.MethodExpression;
        import javax.el.ValueExpression;

        import oracle.adfmf.amx.event.ActionEvent;
        import oracle.adfmf.framework.api.AdfmfJavaUtilities;
        import oracle.adfmf.framework.model.AdfELContext;

        public class backing {

            public backing() {
            }

            public void copyAndUpdate(ActionEvent actionEvent) {
                AdfELContext adfELContext = AdfmfJavaUtilities.getAdfELContext();
                ValueExpression ve = AdfmfJavaUtilities.getValueExpression("#{bindings.DepartmentName.inputValue}", String.class);
                ValueExpression ve3 = AdfmfJavaUtilities.getValueExpression("#{bindings.DepartmentName1.inputValue}", String.class);
                ve3.setValue(adfELContext, ve.getValue(adfELContext));
                ve = AdfmfJavaUtilities.getValueExpression("#{bindings.DepartmentId.inputValue}", int.class);
                ve3 = AdfmfJavaUtilities.getValueExpression("#{bindings.DepartmentId1.inputValue}", int.class);
                ve3.setValue(adfELContext, ve.getValue(adfELContext));
                ve = AdfmfJavaUtilities.getValueExpression("#{bindings.ManagerId.inputValue}", int.class);
                ve3 = AdfmfJavaUtilities.getValueExpression("#{bindings.ManagerId1.inputValue}", int.class);
                ve3.setValue(adfELContext, ve.getValue(adfELContext));
                ve = AdfmfJavaUtilities.getValueExpression("#{bindings.LocationId.inputValue}", int.class);
                ve3 = AdfmfJavaUtilities.getValueExpression("#{bindings.LocationId1.inputValue}", int.class);
                ve3.setValue(adfELContext, ve.getValue(adfELContext));
                MethodExpression me = AdfmfJavaUtilities.getMethodExpression("#{bindings.updateDepartmentsView1.execute}", Object.class, new Class[] {});
                me.invoke(adfELContext, new Object[] {});
            }
        }

    Read the article

  • Ops Center 12c - Update - Provisioning Solaris on x86 Using a Card-Based NIC

    - by scottdickson
    Last week, I posted a blog describing how to use Ops Center to provision Solaris over the network via a NIC on a card rather than the built-in NIC. Really, that was all about how to install Solaris on a SPARC system. This week, we'll look at how to do the same thing for an x86-based server. The overall process is exactly the same, at least for Solaris 11, with only minor updates. We will focus on Solaris 11 for this blog; once I verify that the same approach works for Solaris 10, I will provide another update.

    Booting Solaris 11 on x86
    Just as before, in order to configure the server for network boot across a card-based NIC, it is necessary to declare the asset to associate the additional MACs with the server. You will likely need to access the server console via the ILOM to figure out the MAC and to get a good idea of the network instance number. The simplest way to find both of these is to start a network boot using the desired NIC and see where it appears in the list of network interfaces and what MAC is used when it tries to boot. Go to the ILOM for the server. Reset the server and start the console. When the BIOS loads, select the boot menu, usually with Ctrl-P. This will give you a menu of devices to boot from, including all of the NICs. Select the NIC you want to boot from. Its position in the list is a good indication of what network number Solaris will give the device. In this case, we want to boot from the 5th interface (GB_4, net4). Pick it and start the boot process. When it starts to boot, you will see the MAC address for the interface. Once you have the network instance and the MAC, go through the same process of declaring the asset as in the SPARC case. This associates the additional network interface with the server.

    Creating an OS Provisioning Plan
    The simplest way to do the boot via an alternate interface on an x86 system is to do a manual boot. Update the OS provisioning profile as in the SPARC case to reflect the fact that we are booting from a different interface. In this case, update the network boot device to be GB_4/net4, or the device corresponding to your network instance number. Configure the profile to support manual network boot by checking the box for manual boot in the OS Provisioning profile.

    Booting the System
    Once you have created a profile and plan to support booting from the additional NIC, we are ready to install the server. Again, from the ILOM, reset the system and start the console. When the BIOS loads, select boot from the Boot Menu as above. Select the network interface from the list as before and start the boot process. When the grub bootloader loads, the default boot image is the Solaris Text Installer. On the grub menu, select Automated Installer and Ops Center takes over from there.

    Lessons
    The key lesson from all of this is that Ops Center is a valuable tool for provisioning servers whether they are connected via built-in network interfaces or via high-speed NICs on cards. This is great news for modern datacenters using converged network infrastructures. The process works for both SPARC and x86 Solaris installations. And it's easy and repeatable.

    Read the article

  • Custom Text and Binary Payloads using WebSocket (TOTD #186)

    - by arungupta
    TOTD #185 explained how to process text and binary payloads in a WebSocket endpoint. In summary, a text payload may be received as:

        public void receiveTextMessage(String message) {
            . . .
        }

    And a binary payload may be received as:

        public void receiveBinaryMessage(ByteBuffer message) {
            . . .
        }

    As you realize, both of these methods receive the text and binary data in raw format. However, you may like to receive and send the data using a POJO. This marshaling and unmarshaling can be done in the method implementation, but the JSR 356 API provides a cleaner way. For encoding and decoding a text payload into a POJO, the Decoder.Text (for inbound payload) and Encoder.Text (for outbound payload) interfaces need to be implemented. The sample implementation below shows how a text payload consisting of JSON structures can be encoded and decoded.

        public class MyMessage implements Decoder.Text<MyMessage>, Encoder.Text<MyMessage> {

            private JsonObject jsonObject;

            @Override
            public MyMessage decode(String string) throws DecodeException {
                this.jsonObject = new JsonReader(new StringReader(string)).readObject();
                return this;
            }

            @Override
            public boolean willDecode(String string) {
                return true;
            }

            @Override
            public String encode(MyMessage myMessage) throws EncodeException {
                return myMessage.jsonObject.toString();
            }

            public JsonObject getObject() {
                return jsonObject;
            }
        }

    In this implementation, the decode method decodes the incoming text payload to MyMessage, the encode method encodes MyMessage for the outgoing text payload, and the willDecode method returns true or false depending on whether the message can be decoded. The encoder and decoder implementation classes need to be specified in the WebSocket endpoint as:

        @WebSocketEndpoint(value="/endpoint",
                           encoders={MyMessage.class},
                           decoders={MyMessage.class})
        public class MyEndpoint {
            public MyMessage receiveMessage(MyMessage message) {
                . . .
            }
        }

    Notice the updated method signature, where the application is working with MyMessage instead of the raw string. Note that the encoder and decoder implementations just illustrate the point and provide no validation or exception handling. Similarly, the Encoder.Binary and Decoder.Binary interfaces need to be implemented for encoding and decoding binary payloads.
    Here are some references for you:
    JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds)
    TOTD #183 - Getting Started with WebSocket in GlassFish
    TOTD #184 - Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark
    TOTD #185: Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket
    Subsequent blogs will discuss the following topics (not necessarily in that order) ...
    Error handling
    Interface-driven WebSocket endpoint
    Java client API
    Client and Server configuration
    Security
    Subprotocols
    Extensions
    Other topics from the API
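    For the binary case, a minimal sketch might look as follows. This assumes the early-draft binary interfaces mirror the text variants shown above (decode and willDecode taking a ByteBuffer, encode returning one); the class and field names are illustrative only, and imports are omitted as in the samples above.

        public class MyBinaryMessage implements Decoder.Binary<MyBinaryMessage>, Encoder.Binary<MyBinaryMessage> {

            private byte[] payload;

            @Override
            public MyBinaryMessage decode(ByteBuffer bytes) throws DecodeException {
                // copy the raw frame into the POJO
                payload = new byte[bytes.remaining()];
                bytes.get(payload);
                return this;
            }

            @Override
            public boolean willDecode(ByteBuffer bytes) {
                return true; // accept any binary frame in this sketch
            }

            @Override
            public ByteBuffer encode(MyBinaryMessage message) throws EncodeException {
                return ByteBuffer.wrap(message.payload);
            }

            public byte[] getPayload() {
                return payload;
            }
        }

    As with the text variant, real code would add validation and exception handling.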

    Read the article

  • Spring 3 DI using generic DAO interface

    - by Peders
    I'm trying to use the @Autowired annotation with my generic DAO interface like this:

        public interface DaoContainer<E extends DomainObject> {
            public int numberOfItems();
            // Other methods omitted for brevity
        }

    I use this interface in my controller in the following fashion:

        @Configurable
        public class HelloWorld {
            @Autowired
            private DaoContainer<Notification> notificationContainer;

            @Autowired
            private DaoContainer<User> userContainer;

            // Implementation omitted for brevity
        }

    I've configured my application context with the following configuration:

        <context:spring-configured />
        <context:component-scan base-package="com.organization.sample">
            <context:exclude-filter expression="org.springframework.stereotype.Controller" type="annotation" />
        </context:component-scan>
        <tx:annotation-driven />

    This works only partially, since Spring creates and injects only one instance of my DaoContainer and uses it for both fields. In other words, if I call userContainer.numberOfItems(), I get the result of notificationContainer.numberOfItems(). I've tried to use strongly typed interfaces to mark the correct implementation, like this:

        public interface NotificationContainer extends DaoContainer<Notification> { }

        public interface UserContainer extends DaoContainer<User> { }

    And then used these interfaces like this:

        @Configurable
        public class HelloWorld {
            @Autowired
            private NotificationContainer notificationContainer;

            @Autowired
            private UserContainer userContainer;

            // Implementation omitted...
        }

    Sadly this fails with a BeanCreationException:

        org.springframework.beans.factory.BeanCreationException: Could not autowire field: private com.organization.sample.dao.NotificationContainer com.organization.sample.HelloWorld.notificationContainer; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No matching bean of type [com.organization.sample.NotificationContainer] found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {@org.springframework.beans.factory.annotation.Autowired(required=true)}

    Now, I'm a little confused about how I should proceed, or whether using multiple DAOs like this is even possible. Any help would be greatly appreciated :)
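    For reference, the NoSuchBeanDefinitionException indicates that component scanning found no concrete bean implementing NotificationContainer; an interface on its own is not a bean. A sketch of an implementation class that would satisfy the injection point (the class name and body are hypothetical, and the elided methods mirror those omitted in the question):

        @Repository
        public class JpaNotificationContainer implements NotificationContainer {
            @Override
            public int numberOfItems() {
                return 0; // real query omitted
            }
            // Other DaoContainer methods omitted for brevity
        }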

    Read the article

  • EJB3 JNDI Lookup Failure in JEE application client

    - by Hank
    I'm trying to access an EJB3 from a JEE client-application, but keep getting nothing but lookup failures. My JEE application 'CoreServer' is exposing a number of beans with remote interfaces. I have no problem accessing them from a Web Application deployed on the same Glassfish v3.0.1. Now I'm trying to access it from a client-application:

        public class Main {
            public static void main(String[] args) {
                CampaignControllerRemote bean = null;
                try {
                    InitialContext ctx = new InitialContext();
                    bean = (CampaignControllerRemote) ctx.lookup("java:global/CoreServer/CampaignController");
                } catch (Exception e) {
                    System.out.println(e.getMessage());
                }
                if (bean != null) {
                    Campaign campaign = bean.get(361);
                    if (campaign != null) {
                        System.out.println("Got " + campaign);
                    }
                }
            }
        }

    When I deploy it to Glassfish and run it from the appclient, I get this error:

        Lookup failed for 'java:global/CoreServer/CampaignController' in SerialContext targetHost=localhost,targetPort=3700,orb'sInitialHost=localhost,orb'sInitialPort=3700

    However, that's exactly the same JNDI name I use when I look up the bean from the Web Application (via SessionContext, not InitialContext; does that matter?). Also, when I deploy 'CoreServer', Glassfish reports:

        Portable JNDI names for EJB CampaignController: [java:global/CoreServer/CampaignController!mvs.api.CampaignControllerRemote, java:global/CoreServer/CampaignController]
        Glassfish-specific (Non-portable) JNDI names for EJB CampaignController: [mvs.api.CampaignControllerRemote, mvs.api.CampaignControllerRemote#mvs.api.CampaignControllerRemote]

    I tried all four names; none worked. Is the appclient unable to access beans with (only) Remote interfaces?

    Read the article

  • Fluent interface design and code smell

    - by Jiho Han
    public class StepClause {
        public NamedStepClause Action1() {}
        public NamedStepClause Action2() {}
    }

    public class NamedStepClause : StepClause {
        public StepClause Step(string name) {}
    }

    Basically, I want to be able to do something like this:

        var workflow = new Workflow().Configure()
            .Action1()
            .Step("abc").Action2()
            .Action2()
            .Step("def").Action1();

    So, some "steps" are named and some are not. The thing I do not like is that StepClause has knowledge of its derived class, NamedStepClause. I tried a couple of things to make this sit better with me. I tried to move things out to interfaces, but then the problem just moved from the concrete classes to the interfaces: INamedStepClause still needs to derive from IStepClause, and IStepClause needs to return INamedStepClause to be able to call Step(). I could also make Step() part of a completely separate type. Then we do not have this problem, and we'd have:

        var workflow = new Workflow().Configure()
            .Step().Action1()
            .Step("abc").Action2()
            .Step().Action2()
            .Step("def").Action1();

    That is OK, but I'd like to make the step-naming optional if possible. I found this other post on SO here which looks interesting and promising. What are your opinions? Is the original solution completely unacceptable, or is it workable? By the way, those action methods will take predicates and functors, and I don't think I want to take an additional parameter for naming the step there. The point of it all, for me, is to define these action methods in one place and one place only. So the solutions from the referenced link using generics and extension methods seem to be the best approaches so far.

    Read the article

  • Finding JNP port in JBoss from Servlet

    - by Steve Jackson
    I have a servlet running in JBoss (4.2.2.GA and 4.3-eap) that needs to connect to an EJB to do work. In general, this code works fine to get the Context to connect and make RMI calls (all in the same server):

        public class ContextFactory {
            public static final int DEFAULT_JNDI_PORT = 1099;
            public static final String DEFAULT_CONTEXT_FACTORY_CLASS = "org.jnp.interfaces.NamingContextFactory";
            public static final String DEFAULT_URL_PREFIXES = "org.jboss.naming:org.jnp.interfaces";

            public Context createContext(String serverAddress) throws NamingException {
                // combine provider name and port
                String providerUrl = serverAddress + ":" + DEFAULT_JNDI_PORT;
                // set the properties needed for the Context: factory, provider, and package prefixes
                Hashtable<String, String> env = new Hashtable<String, String>(3);
                env.put(Context.INITIAL_CONTEXT_FACTORY, DEFAULT_CONTEXT_FACTORY_CLASS);
                env.put(Context.PROVIDER_URL, providerUrl);
                env.put(Context.URL_PKG_PREFIXES, DEFAULT_URL_PREFIXES);
                return new InitialContext(env);
            }
        }

    Now, when I change the JNDI bind port from 1099 in server/conf/jboss-service.xml, I can't figure out how to programmatically find the correct port for the providerUrl above. I've dumped System.getProperties() and System.getenv() and it doesn't appear in either. I'm pretty sure I can set it in server/conf/jndi.properties as well, but I was hoping to avoid another magic config file. I've tried the HttpNamingContextFactory, but that fails with "java.net.ProtocolException: Server redirected too many times (20)":

        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.HttpNamingContextFactory");
        env.put(Context.PROVIDER_URL, "http://" + serverAddress + ":8080/invoker/JNDIFactory");

    Any ideas?
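    Since the servlet runs inside the same JBoss instance as the EJB, one option worth noting (a sketch, not from the original question): a no-argument InitialContext performs an in-VM lookup using the server's own naming configuration, because the server/conf/jndi.properties that ships with JBoss is already on the server classpath, so no host or port needs to be hardcoded at all.

        import javax.naming.Context;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;

        public class InVmContextFactory {
            // For in-server lookups the defaults from the server's jndi.properties apply,
            // so a bind port changed in jboss-service.xml is picked up automatically.
            public Context createContext() throws NamingException {
                return new InitialContext();
            }
        }

    This only helps for lookups from within the server; a remote client still needs to obtain the host and port from somewhere.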

    Read the article

  • Is MEF an all-or-nothing affair?

    - by Dave
    I've had a few questions about MEF recently, but here's the big one: is it really all-or-nothing, as it appears to be? My basic application structure is simply an app, several shared libraries that are intended to be singletons, and several different plugins (which may implement different interfaces). The app loads the plugins, and both the app and all plugins need to access the shared libraries. My first go at MEF was fairly successful, although I made some stupid mistakes along the way because I was trying so many different things; I just got confused at times. But in the end, last night I got my smallish test app running with MEF, some shared libraries, and one plugin. Now I'm moving on to the target app, which I already described. And it's the multiple-plugins part that has me a bit worried. My existing application already supports multiple plugins with different interfaces by using Reflection. I need to be able to uniquely identify each plugin so that the user can select one and get the expected behavior exposed by that plugin. The problem is that I don't know how to do this yet... but that's the topic of a different question. Ideally, I'd be able to take my existing plugin loader and use it as-is, while relying on MEF to do the shared library resolution. The problem is, I can't seem to get MEF to load the shared libraries (i.e. I get a CompositionException when calling ComposeParts()) unless I also use MEF to load the plugin. And if I do this, well... then I need to know how to keep track of the plugins as they get loaded so the user can select one from a list. What have your experiences been with trying to mix and match these approaches?

    Read the article

  • validating wsdl/schema using cxf

    - by SGB
    I am having a hard time getting CXF to validate an XML request that my service creates for a 3rd party. My project uses Maven. Here is my project structure:

        Main Module
        + Sub-Module1 = Application
        + Sub-Module2 = Interfaces

    In Interfaces, inside src/main/resources, I have my WSDL and XSD:

        src/main/resources
        + mywsdl.wsdl
        + myschema.xsd

    The Interfaces submodule is listed as a dependency in the Application sub-module. Inside the Application sub-module there is a CXF configuration file in src/main/resources:

        <jaxws:client name="{myTargetNameSpaceName}port" createdFromAPI="true">
            <jaxws:properties>
                <entry key="schema-validation-enabled" value="true" />
            </jaxws:properties>
        </jaxws:client>

    And:

        <jaxws:endpoint name="{myTargetNameSpaceName}port" wsdlLocation="/mywsdl.wsdl" createdFromAPI="true">
            <jaxws:properties>
                <entry key="schema-validation-enabled" value="true" />
            </jaxws:properties>
        </jaxws:endpoint>

    I tried changing the name="{myTargetNameSpaceName}port" to name="{myEndPointName}port", but to no avail. My application works, but it just does not validate the XML I am producing that has to be consumed by a 3rd-party application. I would like to get the validation working, so that any request I send is a valid one. Any suggestions?

    Read the article

  • How to maintain a job history using Quartz scheduler

    - by rwwilden
    I'd like to maintain a history of jobs that were scheduled by a Quartz scheduler, containing the following properties: 'start time', 'end time', 'success', 'error'. There are two interfaces available for this: ITriggerListener and IJobListener (I'm using the C# naming convention for interfaces because I'm using Quartz.NET, but the same question could be asked for the Java version). IJobListener has a JobToBeExecuted and a JobWasExecuted method. The latter provides a JobExecutionException so that you know when something went wrong. However, there is no way to correlate JobToBeExecuted and JobWasExecuted. Suppose my job runs for ten minutes. I start it at t0 and t0+2 (so the two runs overlap). I get two calls to JobToBeExecuted and insert two start times into my history table. When both jobs finish at t1 and t1+2, I get two calls to JobWasExecuted. How do I know which database record to update in each call (to store an end time with its corresponding start time)? ITriggerListener has another problem: there is no way to get any errors inside the TriggerComplete method when a job fails. How do I get the desired behavior?
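    One way to correlate the two callbacks, sketched here against the Java API since the question notes it applies there too (the Quartz.NET equivalent is analogous): the same JobExecutionContext instance is handed to both listener methods for a given execution, so the history record's key can be stashed in the context on the way in and read back on the way out. The DAO calls below are hypothetical placeholders.

        import java.util.Date;

        import org.quartz.JobExecutionContext;
        import org.quartz.JobExecutionException;
        import org.quartz.JobListener;

        public class HistoryJobListener implements JobListener {

            public String getName() {
                return "historyListener";
            }

            public void jobToBeExecuted(JobExecutionContext context) {
                // Insert the history row and remember its key in this execution's context.
                long historyId = insertHistoryRow(new Date()); // hypothetical DAO call
                context.put("historyId", historyId);
            }

            public void jobExecutionVetoed(JobExecutionContext context) {
            }

            public void jobWasExecuted(JobExecutionContext context, JobExecutionException jobException) {
                // The same context instance carries the key over from jobToBeExecuted,
                // even when executions of the same job overlap.
                long historyId = (Long) context.get("historyId");
                boolean success = (jobException == null);
                String error = success ? null : jobException.getMessage();
                updateHistoryRow(historyId, new Date(), success, error); // hypothetical DAO call
            }

            private long insertHistoryRow(Date startTime) {
                return 0L; // persistence omitted
            }

            private void updateHistoryRow(long id, Date endTime, boolean success, String error) {
                // persistence omitted
            }
        }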

    Read the article

  • N-Tier Architecture - Structure with multiple projects in VB.NET

    - by focus.nz
    I would like some advice on the best approach to use in the following situation... I will have a Windows application and a web application (presentation layers); these will both access a common business layer. The business layer will look at a configuration file to find the name of the DLL (data layer) which it will create a reference to at runtime (is this the best approach?). The reason for creating the reference to the data access layer at runtime is that the application will interface with a different 3rd-party accounting system depending on what the client is using. So I would have a separate data access layer to support each accounting system. These could be separate setup projects; each client would use one or the other, and they wouldn't need to switch between the two.

    Projects:
    MyCompany.Common.dll - contains the interfaces; all other projects have a reference to this one
    MyCompany.Windows.dll - Windows Forms project, references MyCompany.Business.dll
    MyCompany.Web.dll - website project, references MyCompany.Business.dll
    MyCompany.Business.dll - business layer, references MyCompany.Data.* (at runtime)
    MyCompany.Data.AccountingSys1.dll - data layer for accounting system 1
    MyCompany.Data.AccountingSys2.dll - data layer for accounting system 2

    The project MyCompany.Common.dll would contain all the interfaces; each other project would have a reference to this one:

        Public Interface ICompany
            ReadOnly Property Id() As Integer
            Property Name() As String
            Sub Save()
        End Interface

        Public Interface ICompanyFactory
            Function CreateCompany() As ICompany
        End Interface

    The projects MyCompany.Data.AccountingSys1.dll and MyCompany.Data.AccountingSys2.dll would contain classes like the following:

        Public Class Company
            Implements ICompany

            Protected _id As Integer
            Protected _name As String

            Public ReadOnly Property Id As Integer Implements MyCompany.Common.ICompany.Id
                Get
                    Return _id
                End Get
            End Property

            Public Property Name As String Implements MyCompany.Common.ICompany.Name
                Get
                    Return _name
                End Get
                Set(ByVal value As String)
                    _name = value
                End Set
            End Property

            Public Sub Save() Implements MyCompany.Common.ICompany.Save
                Throw New NotImplementedException()
            End Sub
        End Class

        Public Class CompanyFactory
            Implements ICompanyFactory

            Public Function CreateCompany() As ICompany Implements MyCompany.Common.ICompanyFactory.CreateCompany
                Return New Company()
            End Function
        End Class

    The project MyCompany.Business.dll would provide the business rules and retrieve data from the data layer:

        Public Class Companies
            Public Shared Function CreateCompany() As ICompany
                Dim factory As New MyCompany.Data.CompanyFactory
                Return factory.CreateCompany()
            End Function
        End Class

    Any opinions/suggestions would be greatly appreciated.

    Read the article

  • Is there any special tool for interactive GUI development

    - by niko
    Hi, I am currently preparing exercises about networks and mobile communications for students. I was thinking about creating an interactive user interface which enables the user to drag and drop predefined elements and then implements logic based upon element distances etc. An example would be to place two base stations (a predefined element with several properties), set the scale in the interface, and then check the signal interference in the environment (user interface). The first part might be too abstract, whereas the example might be too specific, but I was wondering whether there already exists any friendly framework or language which enables developers to create interactive interfaces (for teaching/learning purposes) in a short amount of time. Usually I write applications for the PC environment in .NET, but in this case it would take too much time to create a specific interface for every exercise. I would appreciate it if anyone could suggest any way to create interactive user interfaces in a short amount of time. Are there any special programming languages or development tools for this kind of application, or are there any useful frameworks for .NET, Java or any other language to speed up the development of user interfaces? Thank you!

    Read the article

< Previous Page | 35 36 37 38 39 40 41 42 43 44 45 46  | Next Page >