Search Results

Search found 27870 results on 1115 pages for 'standard output'.


  • Printing All Changes to MediaWiki Series of Articles

    - by Jason
    I have a MediaWiki site that I am responsible for. My management has recently asked to see the changes to a specific series of documents within MediaWiki (in other words, they basically want to see the output of the "changes" log). I was wondering two things: Is there a way to "nicely" print out this log so it clearly shows the various changes that were made to a document? The information I need to print out is spread across multiple pages. Using whatever approach comes out of question 1, is it possible to print out only a subset of pages? (I'm talking about a lot of pages - around 135 of them.) Please let me know if you need clarification. Thanks!
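
    One way to pull the revision history programmatically is MediaWiki's api.php endpoint (action=query, prop=revisions), which can then be printed or paged however you like. The sketch below is a minimal example and assumes the wiki exposes api.php at the usual location; the base URL and page title are placeholders, not values from the question.

      import requests  # plain HTTP client; assumes the wiki's api.php is reachable

      API = "https://wiki.example.com/w/api.php"   # hypothetical base URL

      def revisions(title):
          """Yield (timestamp, user, comment) for every revision of a page."""
          params = {
              "action": "query",
              "prop": "revisions",
              "titles": title,
              "rvprop": "timestamp|user|comment",
              "rvlimit": "max",
              "format": "json",
          }
          while True:
              data = requests.get(API, params=params).json()
              for page in data["query"]["pages"].values():
                  for rev in page.get("revisions", []):
                      yield rev["timestamp"], rev["user"], rev.get("comment", "")
              if "continue" not in data:        # older wikis use "query-continue" instead
                  break
              params.update(data["continue"])   # follow the continuation token

      for ts, user, comment in revisions("Some Document"):
          print(f"{ts}  {user:20}  {comment}")

    Redirecting that output to a file gives a single printable change log, regardless of how many wiki pages the history spans.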

    Read the article

  • Oracle GoldenGate 11gR2 Event Marker System

    - by Doug Reid
    Oracle GoldenGate 11gR2 includes a number of refinements to the Event Marker system. Using event markers enables GoldenGate processes to take a defined action based on an event in the data stream. This feature within Oracle GoldenGate simplifies methods to embed specific custom processing in the areas of error handling, alerts, and notification. The event marker system effectively allows DML-driven workflows to be created within GoldenGate and enables customers to craft non-standard processing based on special events. There are a number of supported event actions, including: trace, log, checkpoint before, suspend, abort, and several others. With 11gR2, events can now be triggered by DDL operations, and variables can be passed in and out of the system to shell scripts. Some good use cases for this feature are:
    - Automatic switchover to the secondary system during planned outages
    - Better monitoring over source systems' performance and automated switchover to the standby system in case of an outage with the primary system
    - Automatic switchover from initial load to changed data movement
    - Automatic synchronization of any type of batch processing taking place on both the source and target databases for database consistency
    - Automatic stoppage of the Delivery module to allow end-of-day reporting
    - Finding, tracking, and reporting on transactions that are of interest, including the ones that do not have primary keys or transaction record numbers
    If you would like to see a demo, please visit our YouTube channel (http://youtube.com/oraclegoldengate). To learn more about the new features of Oracle GoldenGate 11gR2 and to ask questions of the PM team, please join us on September 12th at 8am or 10am PST for our live webcast. Click here to register.
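
    As a rough illustration of the mechanism (not taken from the post), an event action is attached to a TABLE or MAP entry in an Extract or Replicat parameter file. The table name, filter column, and chosen actions below are made up for the sketch.

      -- Illustrative Extract/Replicat parameter fragment:
      -- stop the process gracefully when an end-of-day marker row is inserted
      -- into a hypothetical control table, so end-of-day reporting can run.
      TABLE ops.event_marker,
          FILTER (@STREQ(event_type, 'EOD')),
          EVENTACTIONS (LOG INFO, STOP);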

    Read the article

  • Home PBX to answer/take external calls via PSTN

    - by ageis23
    I have a Thomson 585V6 router which has built-in VoIP support. I want to be able to use a softphone to make calls, for example to phone my dad's mobile. Any incoming calls to my normal BT number should be taken via my PC as well. What I have done so far: I've wired the PSTN port on the router to the telephone jack. The router is connected to my PC. I have installed Asterisk on the PC I want to take calls on. The SIP client authenticates to the SIP server. Output from Twinkle: Sun 22:46:45 home, registration succeeded (expires = 3600 seconds). How do I take external calls / answer incoming calls from the PSTN?
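
    For reference, the usual Asterisk pattern is to give the incoming PSTN line its own context that dials the registered softphone. This is only a sketch with made-up peer and context names; the real inbound channel depends on how the line reaches Asterisk (for example a SIP trunk from the router or an FXO card).

      ; extensions.conf (illustrative, Asterisk 1.8+ 'same =>' shorthand)
      [from-pstn]                       ; context assigned to the incoming line/trunk
      exten => s,1,Answer()
       same => n,Dial(SIP/softphone,25) ; ring the registered softphone for 25 seconds
       same => n,Voicemail(100@default) ; fall back to voicemail if unanswered
       same => n,Hangup()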

    Read the article

  • SQL Server Migration Assistant for Oracle problem

    - by Paul
    I've recently installed SSMA on my computer, and after connecting to both the Oracle instance (which holds the database to be converted) and the SQL Server, I've mapped the needed schemas from Oracle to MSSQL. The problem is that when I click on the report button for the assessment report, there's an error popping up: Assessment Error: Nothing to Process. The output window states: Starting conversion... Analyzing metadata... Conversion finished with 0 errors, 0 warnings, and 0 informational messages. There is nothing to process. Has anyone got experience with SSMA? I can't figure out what I am doing wrong. Thank you.

    Read the article

  • 3 Monitors, 1 graphics card

    - by mikelbring
    I have an Nvidia GT 120 graphics card. It has VGA, DVI and HDMI outputs. Can I have 3 monitors with this, one on each port? If not, can I use DVI and HDMI and have 2 monitors? I know I can do 2 with VGA and DVI, but I am wondering about DVI and HDMI. My last question is: if I can have 2 monitors with DVI and HDMI, can I use an HDMI-to-DVI cable and still have the same success?

    Read the article

  • New CAM Editor v2.3 with Open-XDX for Open Data APIs

    - by drrwebber
    Creating actual working XML exchanges, loading data from data stores, generating XML, testing, integrating with web services and then deploying and delivering it all takes a lot of coding and effort. Add to that writing the documentation, models and schema, doing naming and design rule (NDR) checks, and packaging all this together (such as for NIEM IEPD use). What if there was a tool that helped you do all that easily and simply? Welcome to the new Open-XDX and the CAM Editor! Open-XDX uses code-free techniques in combination with CAM templates and visual drag and drop to rapidly design your XML exchange. Then Open-XDX will automatically generate all the SQL for you, read the database data, generate and populate the valid output XML, and filter with parameters. To complete the processing solution, Open-XDX works with web services and JDBC database connections as a callable module that can be deployed plug and play with your middleware stack, all with just a few lines of Java code (about 5 actually). You can build either Query/Response or Publish/Subscribe services from existing data stores to XML literally in minutes. To see a demonstration of using Open-XDX, a MySQL data store and integration with Oracle WebLogic Server, please see this short video - http://youtube.com/user/TheCameditor There is also a Quick Guide available that provides more technical insights, along with a sample pack download of templates and SQL that you can try for yourself. Head on over to our project resource site to learn more, download the latest CAM Editor and see links to all the resources and materials. We look forward to seeing how the developer community is able to jump-start information sharing initiatives using this new innovative approach.

    Read the article

  • How can I get ssh-agent working over ssh and in tmux (on OS X)?

    - by Rich
    I have a private key set up for my github account, the passphrase to which is, I believe, stored in OS X's keychain. I certainly don't have to type it in when I open a terminal window and enter ssh [email protected]. However, when I'm running bash over an ssh session, or locally inside a tmux session, I have to type in the passphrase every single time I attempt to ssh to github. This question suggests that a similar problem exists with screen, but I don't really understand the issue well enough to fix it in tmux. There's also this page, which includes a fairly complicated solution, but for zsh. EDIT: In response to @Mikel's answer, from a local terminal I get the following output:
      [~] $ echo $SSH_AUTH_SOCK
      /tmp/launch-S4HBD6/Listeners
      [~] $ ssh-add -l
      2048 [my key fingerprint] /Users/richie/.ssh/id_rsa (RSA)
      [~] $ typeset -p SSH_AUTH_SOCK
      declare -x SSH_AUTH_SOCK="/tmp/launch-S4HBD6/Listeners"
    Whereas over ssh or in tmux I get:
      [~] $ echo $SSH_AUTH_SOCK
      [~] $ ssh-add -l
      Could not open a connection to your authentication agent.
      [~] $ typeset -p SSH_AUTH_SOCK
      bash: typeset: SSH_AUTH_SOCK: not found
    echo $SSH_AGENT_PID returns nothing, whatever shell I run it from.
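
    A common workaround, offered here only as a sketch rather than anything from the post, is to point tmux and remote shells at a stable symlink that always tracks the newest agent socket; the link name below is arbitrary.

      # ~/.bashrc (or ~/.zshrc) -- keep a stable path to whatever agent socket exists
      if [ -S "$SSH_AUTH_SOCK" ] && [ ! -h "$SSH_AUTH_SOCK" ]; then
          ln -sf "$SSH_AUTH_SOCK" ~/.ssh/ssh_auth_sock   # refresh the link on each new login
      fi
      export SSH_AUTH_SOCK=~/.ssh/ssh_auth_sock          # every shell, including tmux, uses the link

    For the remote ssh case there also has to be a socket to link to, which means enabling agent forwarding (ForwardAgent yes in ~/.ssh/config) for that host.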

    Read the article

  • Windows: "net use" equivalent that shows the username used to mount a share?

    - by Jeroen Wiert Pluimers
    Sometimes you need to use different credentials to map different shares, for instance:
      net use z: \\myserver\myshare /user:mydomain\myusername mypassword
    You can use net use to show you the shares that are mapped, but it does not show the mydomain\myusername part. What can I use to show that information? I'm looking for output like this from the imaginary command NetUseX:
      H:\>NetUseX
      New connections will be remembered.
      Status        Local  Remote                      Username               Network
      ---------------------------------------------------------------------------------------------
      OK            D:     \\MyServer1234\SomeData     MyServer1234\MyUserA   Microsoft Windows Network
      Unavailable   E:     \\MyServer4321\SomeApps     MyDomain\MyUserB       Microsoft Windows Network
      OK            H:     \\MyServer4321\HomeDAta     MyDomain\MyUserB       Microsoft Windows Network
      Disconnected  W:     \\MyServer6789\WorkData     MyDomain\MyUserB       Microsoft Windows Network
      OK                   \\MyServer9876\Shortcuts    MyServer9876\MyUserC   Microsoft Windows Network
      The command completed successfully.
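
    One possible route (a sketch, not something from the question itself) is WMI: the Win32_NetworkConnection class exposes a UserName property for each mapped connection, which gets close to the imaginary NetUseX output.

      C:\> wmic netuse get LocalName,RemoteName,UserName

      C:\> powershell -Command "Get-WmiObject Win32_NetworkConnection | Format-Table LocalName,RemoteName,UserName,ConnectionState -AutoSize"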

    Read the article

  • RewriteRule applying pattern even though 1 of the RewriteCond's failed

    - by BHare
    #www. domain . tld
      RewriteCond %{HTTP_HOST} (?:.*\.)?([^.]+)\.(?:[^.]+)$
      RewriteCond /home/%1/ -d
      RewriteRule ^(.+) %{HTTP_HOST}$1
      RewriteRule (?:.*\.)?([^.]+)\.(?:[^.]+)/media/(.*)$ /home/$1/client/media/$2 [L]
      RewriteRule (?:.*\.)?([^.]+)\.(?:[^.]+)/(.*)$ /home/$1/www/$2 [L]
    Here is the rewritelog output:
      #(4) RewriteCond: input='tfnoo.mydomain.org' pattern='(?:.*\.)?([^.]+)\.(?:[^.]+)$' [NC] => matched
      #(4) RewriteCond: input='/home/mydomain/' pattern='-d' => not-matched
      #(3) applying pattern '(?:.*\.)?([^.]+)\.(?:[^.]+)/media/(.*)$' to uri 'http://www.mydomain.org/files/images/logo.png'
      #(3) applying pattern '(?:.*\.)?([^.]+)\.(?:[^.]+)/(.*)$' to uri 'http://www.mydomain.org/files/images/logo.png'
      #(2) rewrite 'http://www.mydomain.org/files/images/logo.png' -> '/home/mydomain/www/logo.png'
    Note that the second (4) line shows the -d (directory exists) condition failing, which is correct: there is no /home/mydomain/ directory. Therefore it should never rewrite, at least according to my understanding that all RewriteRules are subject to the RewriteConds as logical ANDs.
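
    For context (an editorial note, not part of the original question): mod_rewrite applies a RewriteCond block only to the single RewriteRule immediately following it, so the last two rules above run unconditionally. One way to gate every rule is simply to repeat the conditions; the sketch below keeps the made-up paths from the question and uses the %1 backreference from the host condition.

      # Conditions guard only the next RewriteRule, so repeat them per rule
      RewriteCond %{HTTP_HOST} (?:.*\.)?([^.]+)\.(?:[^.]+)$
      RewriteCond /home/%1/ -d
      RewriteRule ^/media/(.*)$ /home/%1/client/media/$1 [L]

      RewriteCond %{HTTP_HOST} (?:.*\.)?([^.]+)\.(?:[^.]+)$
      RewriteCond /home/%1/ -d
      RewriteRule ^/(.*)$ /home/%1/www/$1 [L]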

    Read the article

  • What is logical cohesion, and why is it bad or undesirable?

    - by Matt Fenwick
    From the c2wiki page on coupling & cohesion, the cohesion (interdependency within a module) strength/level names, from worst to best (high cohesion is good), are:
    - Coincidental Cohesion (worst): module elements are unrelated.
    - Logical Cohesion: elements perform similar activities as selected from outside the module, i.e. by a flag that selects the operation to perform (see also CommandObject); i.e. the body of the function is one huge if-else/switch on an operation flag.
    - Temporal Cohesion: operations related only by the general time they are performed (i.e. initialization() or FatalErrorShutdown()).
    - Procedural Cohesion: elements involved in different but sequential activities, each on different data (usually could be trivially split into multiple modules along linear sequence boundaries).
    - Communicational Cohesion: unrelated operations except that they need the same data or input.
    - Sequential Cohesion: operations on the same data in significant order; output from one function is input to the next (pipeline).
    - Informational Cohesion: a module performs a number of actions, each with its own entry point, with independent code for each action, all performed on the same data structure. Essentially an implementation of an abstract data type, i.e. define the structure of sales_region_table and its operators: init_table(), update_table(), print_table().
    - Functional Cohesion: all elements contribute to a single, well-defined task, i.e. a function that performs exactly one operation: get_engine_temperature(), add_sales_tax().
    (emphasis mine) I don't fully understand the definition of logical cohesion. My questions are: what is logical cohesion? Why does it get such a bad rap (2nd worst kind of cohesion)?
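
    To make the distinction concrete, here is a small illustrative sketch (not from the original post): a logically cohesive module bundles several operations behind a flag, while the functionally cohesive version splits them into single-purpose functions.

      # Logical cohesion: one entry point, behaviour selected by a flag from outside.
      # Callers are coupled to the flag values, and every new operation grows this switch.
      def process(data, operation):
          if operation == "validate":
              return all(row.get("id") is not None for row in data)
          elif operation == "summarize":
              return {"count": len(data)}
          elif operation == "export":
              return "\n".join(str(row) for row in data)
          else:
              raise ValueError(f"unknown operation: {operation}")

      # Functional cohesion: each function performs exactly one well-defined task.
      def validate(data):
          return all(row.get("id") is not None for row in data)

      def summarize(data):
          return {"count": len(data)}

      def export(data):
          return "\n".join(str(row) for row in data)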

    Read the article

  • How to split audio into multiple channels from optical S/PDIF or 1/8"?

    - by Josh M.
    I have a motherboard which has an optical S/PDIF output or 1/8". I'd like to "split" that signal into the appropriate channels so that I can then connect that to the wires behind my car's headunit which, in turn, run to the amp. The factory Bose amp just takes a single connector with a million wires running out of it, so that's why I would need to separate the signal into separate channels. On the other end there are four RCA connectors: front left, front right, rear left, rear right. The sub-woofer signal does not require an additional connection. Edit: Revised to include S/PDIF or 1/8".

    Read the article

  • Solaris 11

    - by user9154181
    Oracle has a strict policy about not discussing product features until they appear in a shipping product. Now that Solaris 11 is publicly available, it is time to catch up. I will shortly be posting articles on a variety of new developments in the Solaris linkers and related bits:
    - 64-bit Archives: After 40+ years of Unix, the archive file format has run out of room. The ar and link-editor (ld) commands have been enhanced to allow archives to grow past their previous 32-bit limits.
    - Guidance: The link-editor is now willing and able to tell you how to alter your link lines in order to build better objects.
    - Stub Objects: This is one of the bigger projects I've undertaken since joining the Solaris group. Stub objects are shared objects, built entirely from mapfiles, that supply the same linking interface as the real object while containing no code or data. You can link to them, but cannot use them at runtime. It was pretty simple to add this ability to the link-editor, but the changes to the OSnet in order to apply them to building Solaris were massive. I discuss how we came to invent stub objects, how we apply them to build the OSnet in a more parallel and scalable manner, and the follow-on opportunities that have emerged from the new stub proto area we created to hold them.
    - The elffile Utility: A new standard Solaris utility, elffile is a variant of the file utility, focused exclusively on linker-related files. elffile is of particular value for examining archives, as it allows you to find out what is inside them without having to first extract the archive members into temporary files.
    This release has been a long time coming. I joined the Solaris group in late 2005, and this will be my first FCS. From a user perspective, Solaris 11 is probably the biggest change to Solaris since Solaris 2.0. Solaris 11 polishes the groundbreaking features from Solaris 10 (DTrace, FMA, ZFS, Zones), and uses them to add a powerful new packaging system, numerous other enhancements and features, along with a huge modernization effort. I'm excited to see it go out into the world. I hope you enjoy using it as much as we did creating it. Software is never done. On to the next one...

    Read the article

  • PHP include_path doesn't work

    - by 50ndr33
    I have the documents for http://www.example.com/ in /home/www/example.com/www, running on Debian Squeeze:
      /home/www/example.com/
        www/
          index.php
        php/
          include_me.php
    In php.ini I've uncommented and changed the include path to:
      include_path = ".:/home/www/example.com"
    In the script index.php in www, I have require_once("/php/include_me.php"). The output I am getting from PHP is:
      Warning: require_once(/php/include_me.php) [function.require-once]: failed to open stream: No such file or directory in /home/www/example.com/www/index.php on line 2
      Fatal error: require_once() [function.require]: Failed opening required '/php/include_me.php' (include_path='.:/home/www/example.com') in /home/www/example.com/www/index.php on line 2
    As you can see, the include_path is set correctly according to the error. But if I do require_once("../php/include_me.php");, it works. Therefore, something has to be wrong with the include_path. Does anyone know what I can do to fix it?
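
    One likely explanation, offered as a note rather than as part of the original question: PHP only consults include_path when the requested path does not begin with /, ./ or ../; a leading slash is treated as an absolute filesystem path. A minimal sketch of the distinction:

      <?php
      // "/php/include_me.php" is absolute: PHP looks only at the filesystem root,
      // so include_path is never consulted and the require fails.
      // A path with no leading slash is resolved against each include_path entry, so
      // ".:/home/www/example.com" lets this find /home/www/example.com/php/include_me.php
      require_once("php/include_me.php");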

    Read the article

  • How to bulk mail-enable contacts from AD in Exchange 2007?

    - by George Hewitt
    Hello, we have several thousand 'contacts' set up in AD already for a faxing system. We're migrating to an online fax provider that uses e-mail rather than the plain old telephone network. So, we've bulk-edited all the AD records so that the 'mail' attribute is populated with the right e-mail address in the right format. Now, how do we enable these contacts within Exchange 2007? I've looked through http://technet.microsoft.com/en-us/library/bb684891.aspx but that only seems to talk about manually editing the CSV output to specify the external addresses. AD already knows the external e-mail addresses - I just need the info in Exchange! Any thoughts?
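
    One possible route (a sketch only; the OU path below is made up) is the Exchange Management Shell, since Enable-MailContact can reuse the address already stored on each AD contact.

      # Illustrative only: mail-enable every AD contact in a hypothetical OU,
      # reusing the e-mail address already stored in the AD 'mail' attribute.
      Get-Contact -OrganizationalUnit "example.com/Fax Contacts" -ResultSize Unlimited |
          Where-Object { $_.WindowsEmailAddress -ne $null } |
          ForEach-Object {
              Enable-MailContact -Identity $_.Identity `
                  -ExternalEmailAddress $_.WindowsEmailAddress
          }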

    Read the article

  • Sortable & Filterable PrimeFaces DataTable

    - by Geertjan
      <h:form>
          <p:dataTable value="#{resultManagedBean.customers}" var="customer">
              <p:column id="nameHeader" filterBy="#{customer.name}" sortBy="#{customer.name}">
                  <f:facet name="header">
                      <h:outputText value="Name" />
                  </f:facet>
                  <h:outputText value="#{customer.name}" />
              </p:column>
              <p:column id="cityHeader" filterBy="#{customer.city}" sortBy="#{customer.city}">
                  <f:facet name="header">
                      <h:outputText value="City" />
                  </f:facet>
                  <h:outputText value="#{customer.city}" />
              </p:column>
          </p:dataTable>
      </h:form>
    That gives me a table with a working filter. Behind this, I have:
      import com.mycompany.mavenproject3.entities.Customer;
      import java.io.Serializable;
      import java.util.List;
      import javax.annotation.PostConstruct;
      import javax.ejb.EJB;
      import javax.faces.bean.RequestScoped;
      import javax.inject.Named;

      @Named(value = "resultManagedBean")
      @RequestScoped
      public class ResultManagedBean implements Serializable {

          @EJB
          private CustomerSessionBean customerSessionBean;

          public ResultManagedBean() {
          }

          private List<Customer> customers;

          @PostConstruct
          public void init() {
              customers = customerSessionBean.getCustomers();
          }

          public List<Customer> getCustomers() {
              return customers;
          }

          public void setCustomers(List<Customer> customers) {
              this.customers = customers;
          }
      }
    And the above refers to the EJB below, which is a standard EJB that I create in all my Java EE 6 demos:
      import com.mycompany.mavenproject3.entities.Customer;
      import java.io.Serializable;
      import java.util.List;
      import javax.ejb.Stateless;
      import javax.persistence.EntityManager;
      import javax.persistence.PersistenceContext;

      @Stateless
      public class CustomerSessionBean implements Serializable {

          @PersistenceContext
          EntityManager em;

          public List getCustomers() {
              return em.createNamedQuery("Customer.findAll").getResultList();
          }
      }
    The only problem is that the columns are only sortable after the first time I use the filter.

    Read the article

  • E: Sub-process /usr/bin/dpkg returned an error code (1)

    - by kss
    When I run sudo apt-get install acroread, I get the following output:
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Suggested packages:
        libldap2 libgnome-speech7
      The following NEW packages will be installed:
        acroread
      0 upgraded, 1 newly installed, 0 to remove and 6 not upgraded.
      1 not fully installed or removed.
      Need to get 0 B/60.1 MB of archives.
      After this operation, 142 MB of additional disk space will be used.
      (Reading database ... 237901 files and directories currently installed.)
      Unpacking acroread (from .../acroread_9.5.1-1precise1_i386.deb) ...
      dpkg: error processing /var/cache/apt/archives/acroread_9.5.1-1precise1_i386.deb (--unpack):
       failed in write on buffer copy for backend dpkg-deb during `./opt/Adobe/Reader9/Browser/intellinux/nppdf.so': No space left on device
      No apport report written because MaxReports is reached already
      dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
      Processing triggers for bamfdaemon ...
      Rebuilding /usr/share/applications/bamf.index...
      Processing triggers for desktop-file-utils ...
      Processing triggers for gnome-menus ...
      Processing triggers for man-db ...
      /usr/bin/mandb: can't write to /var/cache/man/1645: No space left on device
      Errors were encountered while processing:
       /var/cache/apt/archives/acroread_9.5.1-1precise1_i386.deb
      E: Sub-process /usr/bin/dpkg returned an error code (1)
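
    The error itself points at "No space left on device". As a hedged starting point (commands shown only as a sketch of the usual first steps), checking free space and clearing the apt cache would come first:

      df -h /                  # confirm the root filesystem really is full
      sudo apt-get clean       # remove downloaded .deb files from /var/cache/apt/archives
      sudo apt-get autoremove  # drop packages that are no longer needed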

    Read the article

  • how to properly implement alpha blending in a complex 3d scene

    - by Gajet
    I know this question might sound a bit easy to answer, but it's driving me crazy. There are too many possible situations that a good alpha blending mechanism should handle, and for each algorithm I can think of there is something missing. These are the methods I've thought about so far. First I thought about sorting objects by depth. This simply fails because objects are not simple shapes; they might have curves and might loop inside each other, so I can't always tell which one is closer to the camera. Then I thought about sorting triangles, but this might also fail. Though I'm not sure how to implement it, there is a rare case that might again cause a problem, in which two triangles pass through each other; again, no one can tell which one is nearer. The next thing was using the depth buffer. At least the main reason we have a depth buffer is because of the sorting problems I mentioned, but now we get another problem: since objects might be transparent, in a single pixel there might be more than one object visible. So for which object should I store the pixel depth? I then thought maybe I could store only the depth of the front-most object, and use that to determine how to blend later draw calls at that pixel. But again there was a problem: think about two semi-transparent planes with a solid plane in the middle of them. Since I was going to render the solid plane at the end, one can still see the most distant plane. Note that I was going to merge every two planes until there is only one color left for that pixel. Obviously I could use sorting methods here too, but they run into the same problems I've explained above. Finally, the only thing I can imagine working is to render all objects into different render targets, then sort those layers and display the final output. But this time I don't know how to implement that algorithm.
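
    For reference, the most common baseline (sketched below in pseudocode against a hypothetical renderer API, and subject to exactly the per-object sorting limitations described above) is: draw opaque geometry normally, then draw transparent geometry back to front with depth testing on but depth writes off.

      # Minimal sketch of the classic two-pass approach; 'renderer', 'camera' and the
      # object fields are placeholders, not a real API.
      def render_scene(renderer, camera, objects):
          opaque = [o for o in objects if not o.transparent]
          transparent = [o for o in objects if o.transparent]

          # Pass 1: opaque geometry fills the depth buffer as usual.
          renderer.set_depth_test(True)
          renderer.set_depth_write(True)
          for obj in opaque:
              renderer.draw(obj)

          # Pass 2: transparent geometry, farthest first, so nearer surfaces blend over
          # farther ones. Depth writes stay off so transparent surfaces don't occlude
          # each other, while the depth test still rejects fragments behind opaque geometry.
          transparent.sort(key=lambda o: camera.distance_to(o.center), reverse=True)
          renderer.set_depth_write(False)
          renderer.set_blend(src="SRC_ALPHA", dst="ONE_MINUS_SRC_ALPHA")
          for obj in transparent:
              renderer.draw(obj)
          renderer.set_depth_write(True)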

    Read the article

  • How do I understand the partition table? (I want to start over.)

    - by Sammy Black
    I have Ubuntu 10.04 Lucid installed through Wubi on my laptop (it came with Windows 7 preinstalled). This was my first foray into Linux, and I'm here to stay. I have no use for Windows, and yet I must manually choose not to boot into it! Should I shrink the Windows partition to something negligible and grow the Linux one using something like gparted or fdisk, and just be content that everything runs? In that case, I need to understand the filesystems. Which is which? Here's the output of $ df -h:
      Filesystem    Size  Used Avail Use% Mounted on
      /dev/loop0     17G   11G  4.5G  71% /
      none          1.8G  300K  1.8G   1% /dev
      none          1.8G  376K  1.8G   1% /dev/shm
      none          1.8G  316K  1.8G   1% /var/run
      none          1.8G     0  1.8G   0% /var/lock
      none          1.8G     0  1.8G   0% /lib/init/rw
      /dev/sda3     290G   50G  240G  18% /host
    I would prefer to start over with a clean install of 10.10 Maverick, but I fear what I may lose. Certainly, I will back up my home directory tree (gzip?), but what about the various pieces of software that I've acquired from the repositories? Can I keep a record of them? By the way, I asked a similar question over on Ubuntu Forums.
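
    On the "record of installed packages" question, one commonly suggested approach (shown as a sketch; file names and the backup path are arbitrary) is to save the package selection list and replay it after the clean install:

      # Before the reinstall: save the list of installed packages and your home directory
      dpkg --get-selections > ~/packages.list
      tar czf /media/backup/home-backup.tar.gz ~/

      # After the clean install: restore the selections and let apt install them
      sudo dpkg --set-selections < ~/packages.list
      sudo apt-get dselect-upgrade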

    Read the article

  • Solaris x86: Non working keys on keyboard (<, >, #, |)

    - by Thomas
    Hello everyone, I have a new installation of Solaris 10 10/09 on x86 hardware. The attached keyboard has a normal German layout, and the system is configured accordingly via 'kbd -s'. The generic keys (letters, numbers, umlauts) work fine. Unfortunately, some keys like <, >, | or # do not; they produce no output on the text console at all. I tried PS/2 and USB keyboards. I cannot test it under X11 as it is currently not working. Thanks.

    Read the article

  • Making my computer an iPhone music dock?

    - by deddebme
    Is there any application which lets me play the music on my iPhone through the USB cable? It would be very convenient if I could just dock the phone (while the music is playing) and have the music come out of the computer's speakers instead of the speaker in the iPhone. Edit 1: The stock iPhone dock does have a line-out jack, thanks SidneySM for reminding me. Now the problem is, even though I have selected Line In as the Sound Input in Mac OS X, there is no sound coming out of the speaker. How do I make the speaker output the Line In audio?

    Read the article

  • Event Driven Behavior Tree: deterministic traversal order with parallel

    - by Heisenbug
    I've studied several articles and listened to some talks about behavior trees (mostly the resources available on AIGameDev by Alex J. Champandard). I'm particularly interested in event-driven behavior trees, but I still have some doubts about how to implement them correctly using a scheduler. Just a quick recap.
    Standard behavior tree:
    - Each execution tick the tree is traversed from the root in depth-first order.
    - The execution order is implicitly expressed by the tree structure, so in the case of behaviors parented to a parallel node, even if both children are executed during the same traversal, the first leaf is always evaluated first.
    Event-driven BT:
    - During the first traversal the nodes (tasks) are enqueued using a scheduler, which is responsible for updating only the running ones every update.
    - The first traversal implicitly produces a depth-first ordered queue in the scheduler.
    - Non-leaf nodes stay suspended most of the time. When a leaf node terminates (either with success or fail status) the parent (observer) is woken up, allowing the tree traversal to continue, and new tasks are enqueued in the scheduler.
    - Without parallel nodes in the tree there will be at most 1 task running in the scheduler.
    - Without parallel nodes, the tasks in the queue (excluding dynamic-priority implementations) will always be ordered in a depth-first order (is this right?).
    Now, from my understanding of a possible implementation, there are two requirements I think must be guaranteed by a correct implementation (I'm not sure though):
    - The result of the traversal should be independent of which implementation strategy is used.
    - The traversal result must be deterministic.
    I'm struggling to guarantee both in the case of parallel nodes. Here's an example:
      Parallel_1
      -->Sequence_1
      ---->leaf_A
      ---->leaf_B
      -->leaf_C
    Considering a FIFO policy in the scheduler, before leaf_A terminates the tasks in the scheduler are:
      P1(suspended), S1(suspended), leaf_A(running), leaf_C(running)
    When leaf_A terminates, leaf_B will be scheduled (at the end of the queue), so the queue becomes:
      P1(suspended), S1(suspended), leaf_C(running), leaf_B(running)
    In this case leaf_B will be executed after leaf_C at every update, whereas with a non-event-driven traversal from the root node, leaf_B would always be evaluated before leaf_C. So I have a couple of questions: do I understand correctly how event-driven BTs work? How can I guarantee that the depth-first order is respected with such an implementation? Is this a common issue, or am I missing something?
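
    One way to tackle this, offered as a sketch of an approach rather than the canonical answer, is to stop treating the scheduler as a plain FIFO and instead order tasks by a precomputed depth-first index of their node, so a child resumed later still ticks in tree order. The class and field names below are made up.

      class Scheduler:
          """Ticks running tasks in the tree's depth-first order, not insertion order."""
          def __init__(self):
              self._tasks = []                      # currently scheduled leaf tasks

          def schedule(self, node):
              # node.df_index is assigned once when the tree is built, by numbering
              # nodes in a depth-first walk; it fixes the node's position forever.
              self._tasks.append(node)

          def tick(self):
              # Take a snapshot so tasks scheduled during this tick (e.g. leaf_B after
              # leaf_A finishes) start running from the next tick on, but always at
              # their depth-first position rather than behind leaf_C.
              snapshot = sorted(self._tasks, key=lambda n: n.df_index)
              for node in snapshot:
                  status = node.update()
                  if status != "RUNNING":
                      self._tasks.remove(node)
                      node.parent.on_child_complete(status)   # wake the observer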

    Read the article

  • How to determine main movie DVD track before ripping via mencoder

    - by Ampp3
    Maybe there's a simple answer for this, but when looking at the files on a DVD (IFOs, VOBs, etc.), is there a way to easily determine the longest/main track? I'm trying to automate the process of finding the main movie track on a DVD and am running into issues. I thought this could be done by finding the BIGGEST track (look through the VTS_XX_N.VOB files, where XX is the track number, and find the track with the largest file size, i.e. the sum of the VOB file sizes for that track), but apparently that isn't correct. One DVD had track 7 as the largest track (by my method), but mencoder didn't produce the correct output with this track; it worked with track 9 instead. Am I missing something? EDIT: I've heard of the utility 'lsdvd' for getting track information, but I was hoping to avoid compiling it and use a basic method instead (i.e. what I tried above). Does anyone have any idea WHY my idea didn't work?
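
    Without lsdvd, one scriptable alternative (a sketch; the device path is an assumption) is mplayer's -identify output, which reports the length of each title so the longest one can be picked programmatically:

      # Print the length (in seconds) of every title on the disc, longest last
      mplayer -dvd-device /dev/dvd dvd:// -identify -frames 0 2>/dev/null \
          | grep 'ID_DVD_TITLE_.*_LENGTH' \
          | sort -t= -k2 -n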

    Read the article

  • foreign-architecture

    - by speedy-MACHO
    Whenever I install something, I get the following error multiple times:
      Unknown configuration key 'foreign-architecture' found in your 'dpkg' configuration files. This warning will become a hard error at a later date, so please remove the offending configuration options and replace them with 'dpkg --add-architecture' invocations at the command line.
    When I try dpkg --add-architecture I get:
      Unknown configuration key `foreign-architecture' found in your `dpkg' configuration files. This warning will become a hard error at a later date, so please remove the offending configuration options and replace them with `dpkg --add-architecture' invocations at the command line.
      dpkg: error: --add-architecture takes one argument
      Type dpkg --help for help about installing and deinstalling packages [*];
      Use `dselect' or `aptitude' for user-friendly package management;
      Type dpkg -Dhelp for a list of dpkg debug flag values;
      Type dpkg --force-help for a list of forcing options;
      Type dpkg-deb --help for help about manipulating *.deb files;
      Options marked [*] produce a lot of output - pipe it through `less' or `more' !
    I have no problems yet, but since it says this warning will become a hard error at a later date, I'd better do something about it. When I search for 'foreign-architecture', I find an empty file, containing not a single byte, and I somehow can't delete that file. Please help, it's kind of creepy...
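
    A hedged sketch of the usual cleanup (the file path and the i386 architecture below are assumptions; substitute whatever the grep reports and whatever foreign architecture the old file declared):

      # Find the dpkg config file that still uses the obsolete key
      grep -rl foreign-architecture /etc/dpkg/dpkg.cfg.d/

      # Remove (or empty) that file, then re-declare the architecture the new way
      sudo rm /etc/dpkg/dpkg.cfg.d/multiarch        # path reported by the grep above
      sudo dpkg --add-architecture i386             # note: takes exactly one argument
      sudo apt-get update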

    Read the article

  • What is SOA?

    - by llaszews
    First, let's mention what SOA is not:
    • SOA is not the same thing as web services. Web services imply the use of standards such as Java/JAX-RPC, .NET or REST. Web services also imply the use of WSDL, SOAP, and/or J2EE Connector Architecture (J2EE CA) and HTTP. SOA architectures can be implemented using J2EE CA, XML file transfer or Remote Procedure Call (RPC) over File Transfer Protocol (FTP), TCP/IP, Remote Method Invocation (RMI) or other protocols. In other words, web services are a very specific set of technologies. SOA is a concept and can be implemented in many different ways, some very rudimentary, such as transferring flat files between applications.
    • SOA will not solve all of your problems. It will make your business more agile, increase business visibility, reduce integration costs and provide better reuse. However, if you don't need help in these areas or expect SOA to cure all of your IT problems, you are looking in the wrong place.
    • The concepts behind SOA are not new, but SOA is also not mature. SOA as it stands today has really only been around for 5 years. The concepts of standards-based protocol handlers, predefined communication schemas and remote method invocation have been around for decades.
    So, what is SOA? SOA is an architectural blueprint, a way of developing applications, and a set of best practices. SOA is not an 'out of the box' solution you buy, install and then have up and running in a matter of months. SOA is a journey to a better way of doing business and the technology architecture to support this better way of doing business. SOA is also a broader set of technologies including more than just web services. Technologies like an Enterprise Service Bus (ESB), Business Process Execution Language (BPEL), message queues and Business Activity Monitoring (BAM) are all part of a SOA architecture. Read more here: Oracle Modernization Solutions

    Read the article

  • Amazon EC2 - Unable to connect to MySQL

    - by alexus
    I'm having an issue connecting from one VM to another:
      # nmap -p3306 ip-XX-XX-XX-XX.ec2.internal
      Starting Nmap 6.40 ( http://nmap.org ) at 2014-06-10 17:50 EDT
      Nmap scan report for ip-XX-XX-XX-XX.ec2.internal (XX.XX.XX.XX)
      Host is up (0.000033s latency).
      PORT     STATE  SERVICE
      3306/tcp closed mysql
      Nmap done: 1 IP address (1 host up) scanned in 1.05 seconds
    In my Security Group I allowed inbound connectivity via TCP, port range 3306, source 0.0.0.0/0, so theoretically it should work, but in reality it doesn't. I'm running Red Hat Enterprise Linux 7 on both VMs. mariadb.service is running fine on the other VM and I am able to connect to it locally. On the DB VM:
      # netstat -anp | grep 3306
      tcp   0   0 0.0.0.0:3306   0.0.0.0:*   LISTEN   2324/mysqld
      # iptables -L
      Chain INPUT (policy ACCEPT)
      target     prot opt source               destination
      Chain FORWARD (policy ACCEPT)
      target     prot opt source               destination
      Chain OUTPUT (policy ACCEPT)
      target     prot opt source               destination
      #
    Any ideas what else I missed?
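
    One thing worth ruling out on RHEL 7 (a sketch only; whether firewalld is actually active here is an assumption, and the security group attached to the database instance still has to allow the client) is the host firewall on the database VM:

      # On the database VM: if firewalld is running, open 3306 explicitly
      firewall-cmd --state
      sudo firewall-cmd --permanent --add-port=3306/tcp
      sudo firewall-cmd --reload

      # From the client VM: retest the port
      nmap -p3306 ip-XX-XX-XX-XX.ec2.internal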

    Read the article
