Search Results

Search found 31994 results on 1280 pages for 'input output'.


  • how to allow certain packets with certain destination ports to be forwarded using iptables?

    - by moataz metwally
    I have a server that I virtualized into multiple Windows VPSs using KVM. I would like to put all the VPSs behind the host server's firewall, so that all ports on all VPSs are controlled from the host. I tried to do this with the iptables file below, but it still blocks all forwarded packets. When I remove -A FORWARD -j DROP from the file, the VPSs fall outside the firewall's control entirely:

        # Generated by iptables-save v1.4.7 on Mon Oct 21 04:30:35 2013
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [49:7546]
        -A OUTPUT -j ACCEPT
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p icmp -j DROP
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
        -A INPUT -p tcp -m state --state NEW -m tcp -m multiport --dports 5901:6010,4080:4085 -j ACCEPT
        -A FORWARD -p tcp -s 0/0 -d 0/0 --destination-port 3389 -j ACCEPT
        -A INPUT -j DROP
        -A FORWARD -j DROP
        COMMIT
        # Completed on Mon Oct 21 04:30:35 2013

    And my ifconfig output:

        eth0      Link encap:Ethernet  HWaddr 6C:62:6D:EF:B8:77
                  inet6 addr: fe80::6e62:XXX:feef:b877/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:4460000 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1825697 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:5461498823 (5.0 GiB)  TX bytes:547852516 (522.4 MiB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:6380 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:6380 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:6481652 (6.1 MiB)  TX bytes:6481652 (6.1 MiB)

        natbr2    Link encap:Ethernet  HWaddr 52:54:00:48:72:53
                  inet addr:88.XXX.XXX.X53  Bcast:88.198.242.159  Mask:255.255.255.248
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:1338720 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:3570844 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:434791198 (414.6 MiB)  TX bytes:4321751647 (4.0 GiB)

        viif1001  Link encap:Ethernet  HWaddr FE:16:3E:0F:41:D8
                  inet6 addr: fe80::fc16:XXX:fe0f:41d8/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:358229 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:479289 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:500
                  RX bytes:50127351 (47.8 MiB)  TX bytes:261223068 (249.1 MiB)

        viif1002  Link encap:Ethernet  HWaddr FE:16:3E:EA:65:FA
                  inet6 addr: fe80::fc16:XXX:feea:65fa/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:575590 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1489296 errors:0 dropped:0 overruns:5412 carrier:0
                  collisions:0 txqueuelen:500
                  RX bytes:243629668 (232.3 MiB)  TX bytes:1724640936 (1.6 GiB)

        viif1003  Link encap:Ethernet  HWaddr FE:16:3E:2B:85:0E
                  inet6 addr: fe80::fc16:XXX:fe2b:850e/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:413052 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1741801 errors:0 dropped:0 overruns:299 carrier:0
                  collisions:0 txqueuelen:500
                  RX bytes:147931054 (141.0 MiB)  TX bytes:2338132498 (2.1 GiB)

        viifbr0   Link encap:Ethernet  HWaddr 6C:62:6D:EF:B8:77
                  inet addr:176.XX.XX.X9  Bcast:176.9.0.95  Mask:255.255.255.224
                  inet6 addr: fe80::6e62:XXX:feef:b877/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:2685666 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1472089 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:4244043694 (3.9 GiB)  TX bytes:523110523 (498.8 MiB)
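    A likely cause, offered here as an assumption rather than anything confirmed in the post: the FORWARD chain accepts only the initial packet towards port 3389 and then drops every reply, because unlike the INPUT chain it has no stateful RELATED,ESTABLISHED rule before the final DROP. A minimal sketch of a FORWARD chain that lets return traffic through:

        # allow replies for connections the firewall has already accepted
        -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
        # allow new RDP connections towards the guests
        -A FORWARD -p tcp --dport 3389 -j ACCEPT
        # everything else is dropped
        -A FORWARD -j DROP

    Note also that if the guests hang off a Linux bridge, bridged frames only traverse the FORWARD chain when net.bridge.bridge-nf-call-iptables is set to 1.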

    Read the article

  • jquery - clone nth row of a table?

    - by John
    I'm trying to use jQuery to clone a table row every time someone presses the add-row button. Can anyone tell me what's wrong with my code? I'm using HTML plus the Smarty templating language in my view. Here's what my template file looks like:

        <table>
          <tr>
            <td>Description</td>
            <td>Unit</td>
            <td>Qty</td>
            <td>Total</td>
            <td></td>
          </tr>
          <tbody id="entries">
          {foreach from=$arrItem item=i name=inv}
            <tr>
              <td>
                <input type="hidden" name="invoice_item_id[]" value="{$i.invoice_item_id}"/>
                <input type="hidden" name="assignment_id[]" value="{$i.assignment_id}" />
                <input type="text" name="description[]" value="{$i.description}"/>
              </td>
              <td><input type="text" class="unit_cost" name="unit_cost[]" value="{$i.unit_cost}"/></td>
              <td><input type="text" class="qty" name="qty[]" value="{$i.qty}"/></td>
              <td><input type="text" class="cost" name="cost[]" value="{$i.cost}"/></td>
              <td><a href="javascript:void(0);" class="delete-invoice-item">delete</a></td>
            </tr>
          {/foreach}
          </tbody>
          <tfoot>
            <tr><td colspan="5"><input type="button" id="add-row" value="add row" /></td></tr>
          </tfoot>
        </table>

    Here's my jQuery call, which I know gets fired because an alert() I put inside it shows. So the problem is with me not knowing how jQuery works:

        $('#add-row').live('click', function() {
            $('#entries tr:nth-child(0)').clone().appendTo('#entries');
        });

    So what am I doing wrong?
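    The bug is in the selector: CSS (and therefore jQuery) :nth-child indices start at 1, so tr:nth-child(0) matches nothing and .clone() runs on an empty set. A sketch of a working handler (an illustration, not the poster's final code; note also that .live() was deprecated in jQuery 1.7 in favour of .on()):

        $('#add-row').live('click', function() {
            // :nth-child is 1-based; :first grabs the first row inside the tbody
            $('#entries tr:first').clone().appendTo('#entries');
        });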

    Read the article

  • can't read from stream until child exits?

    - by BobTurbo
    OK, I have a program that creates two pipes and forks. The child's stdin and stdout are redirected to one end of each pipe, and the parent is connected to the other ends; it tries to read the stream associated with the child's output and print it to the screen (it will eventually also write to the child's input). The problem is that when the parent tries to fgets the child's output stream, it just stalls until the child dies, and only then does fgets return and print the output. If the child doesn't exit, it waits forever. What is going on? I thought fgets would block until SOMETHING was in the stream, but not all the way until the child gives up its file descriptors. Here is the code:

        #include <stdlib.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/types.h>
        #include <unistd.h>

        int main(int argc, char *argv[])
        {
            FILE* fpin;
            FILE* fpout;
            int input_fd[2];
            int output_fd[2];
            pid_t pid;
            int status;
            char input[100];
            char output[100];
            char *args[] = {"/somepath/someprogram", NULL};

            fgets(input, 100, stdin); // the user inputs the program name to exec
            pipe(input_fd);
            pipe(output_fd);
            pid = fork();
            if (pid == 0) {
                close(input_fd[1]);
                close(output_fd[0]);
                dup2(input_fd[0], 0);
                dup2(output_fd[1], 1);
                input[strlen(input)-1] = '\0';
                execvp(input, args);
            } else {
                close(input_fd[0]);
                close(output_fd[1]);
                fpin = fdopen(input_fd[1], "w");
                fpout = fdopen(output_fd[0], "r");
                while (!feof(fpout)) {
                    fgets(output, 100, fpout);
                    printf("output: %s\n", output);
                }
            }
            return 0;
        }
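    The usual culprit for this symptom, offered as an inference since the post itself contains no answer: stdio in the child is line-buffered when stdout is a terminal but fully buffered when stdout is a pipe, so the child's output sits in its stdio buffer (typically a few KiB) until the buffer fills or the child exits and flushes. If the child program is yours to modify, disabling buffering makes output reach the parent immediately; a minimal sketch:

        #include <stdio.h>

        int main(void)
        {
            /* make stdout unbuffered so each write hits the pipe at once */
            setvbuf(stdout, NULL, _IONBF, 0);
            printf("visible to the parent immediately\n");
            /* alternatively, keep buffering and fflush(stdout) after each record */
            return 0;
        }

    For a child you cannot modify, running it under a pseudo-terminal (so its stdio stays line-buffered) is the usual workaround.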

    Read the article

  • SOA Suite 11g Native Format Builder Complex Format Example

    - by bob.webster
    This rather long posting details the steps required to process a grouping of fixed-length records using Format Builder. If it's 10 pm and you're feeling beat, you might want to leave this until tomorrow. But if it's 10 pm and you need to get a Format Builder complex template done, read on...

    The goal is to process individual orders from a file using the 11g File Adapter and Format Builder.

    Sample Data
    ===========
        001Square Widget            0245.98
        102Triagular Widget         1120.00
        403Circular Widget           0099.45
        ORD8898302/01/2011
        301Hexagon Widget         1150.98
        ORD6735502/01/2011

    The records are fixed-length records representing a number of logical Order records. Each Order record consists of a number of item records starting with a 3-digit number, followed by a single summary record which starts with the constant ORD.

    How can this file be processed so that the first poll returns the first order?

        001Square Widget            0245.98
        102Triagular Widget         1120.00
        403Circular Widget           0099.45
        ORD8898302/01/2011

    And the second poll returns the second order?

        301Hexagon Widget           1150.98
        ORD6735502/01/2011

    Note: if you need more than one order per poll, that's also possible; see the "Multiple Messages" field in the "File Adapter Step 6 of 9" snapshot further down.

    To follow along with this example you will need:
    - Studio Edition Version 11.1.1.4.0, with the
    - SOA Extension for JDeveloper 11.1.1.4.0 installed.
    Both can be downloaded from here: http://www.oracle.com/technetwork/middleware/soasuite/downloads/index.html
    You will not need a running WebLogic Server domain to complete the steps and Format Builder tests in this article.

    Start with a SOA Composite containing a File Adapter

    The Format Builder is part of the File Adapter, so start by creating a new SOA project and composite. Here is a quick summary for those not familiar with these steps:
    - Start JDeveloper.
    - From the main menu choose File->New.
    - In the New Gallery window that opens, expand the "General" category and select the Applications node. Then choose SOA Application from the Items section on the right and press the OK button.
    - In Step 1 of the "Create SOA Application" wizard that appears, enter an application name and a directory of your choice, then press the Next button.
    - In Step 2 of the wizard, press the Next button, leaving all entries as defaulted.
    - In Step 3 of the wizard, enter a composite name of your choice and press the Finish button.

    These steps result in a new application and SOA project. The SOA project contains a composite.xml file, which is opened and shown below. For our example we have not defined a Mediator or a BPEL process, to minimize the steps, but one or the other would eventually be needed to use the File Adapter we are about to create.

    Drag and drop the File Adapter icon from the Component Palette onto either the LEFT side of the diagram under "Exposed Services" or the right side under "External References" (see the green circle in the image below). Placing the adapter on the left side indicates that the file being processed is inbound to the composite; if the adapter is placed on the right side, then the data is outbound to a file.

    Note that the same Format Builder definition can be used in both directions. For example, we could use the format with a File Adapter on the left side of the composite to parse fixed data into XML, modify the data in our composite or BPEL process, and then use the same Format Builder definition with a File Adapter on the right side of the composite to write the data back out in the same fixed format.

    When the File Adapter is dropped on the composite, the File Adapter wizard appears. Skip past the first page, Step 1 of 9, by pressing the Next button. In Step 2 enter a service name of your choice as shown below, then press Next.

    When the Native Format Builder appears, skip the welcome page by pressing Next. Also press the Next button to accept the settings on Step 3 of 9. On Step 4, select Read File and press the Next button as shown below. On Step 5 enter a directory that will contain a file with the input data, then press the Next button as shown below.

    In Step 6, enter *.txt or another file pattern to select input files from the input directory mentioned in Step 5. ALSO check the "Files contain Multiple Messages" checkbox and set the "Publish Messages in Batches of" field to 1. The value can be set higher to increase the number of logical order group records returned on each poll of the File Adapter; in other words, it determines the number of orders that will be sent to each instance of a Mediator or composite processing using the File Adapter.

    Skip Step 7 by pressing the Next button. In Step 8 press the gear icon on the right side to load the Native Format Builder.

    Native Format Builder appears

    Before diving into the format, here is an overview of the process. The approach is bottom-up: assuming an order is a grouping of item records and a summary record,
    - define a separate complex type for each record type found in the group (one for itemRecord and one for summaryRecord),
    - define a complex type to contain the group of record types defined above (logicalOrderRecord),
    - define a top-level element to represent an order (order). The order element will be of type logicalOrderRecord.

    Defining the Format

    In Step 1 select "Create new" and "Complex Type", then press Next. In Step 2 browse to and select a file containing the test data shown at the start of this article (a link is provided at the end of this article to download a file containing the test data), then press the Next button.

    In Step 3, complex types must be defined for each type of input record. Select the Root-Element and click on the Add Complex Type icon. This creates a new, empty complex type definition, shown below. The fastest way to create the definition is to highlight the first line of the sample file data and drag the line onto the <new_complex_type>. Format Builder introspects the data and provides a grid to define additional fields.

    Change the "Complex Type Name" to "itemRecord". Then click on the ruler to indicate the position of fixed columns. Drag the red triangle icons to the exact columns if necessary; double-click on an existing red triangle to remove an unwanted entry. In the case below, fields are defined in columns 0-3, 4-28 and 29-eol. When the field definitions are correct, press the "Generate Fields" button. Field entries named C1, C2 and C3 will be created as shown below. Click on the field names and rename them from C1 to itemNum, C2 to itemDesc and C3 to itemCost. When all the fields are correctly defined, press OK to save the complex type.

    Next, the process is repeated to define a complex type for the summaryRecord. Select the Root-Element in the schema tree and press the New Complex Type icon, then highlight and drag the summary record from the sample data onto the <new_complex_type>. Change the complex type name to "summaryRecord", mark the fixed fields for order number and order date, press the Generate Fields button, and rename C1 and C2 to itemNum and orderDate respectively.

    The last complex type to be defined is a type to hold the group of items and the summary record. Select the Root-Element in the schema tree and click the New Complex Type icon. Select the "<new_complex_type>" entry and click the pencil icon. On the Complex Type Details page change the name and type of each input field: change line 1 to be named item and set its Type to "itemRecord"; change line 2 to be named summary and set its Type to "summaryRecord".

    We also need to indicate that itemRecords repeat in the input file. Click the pencil icon at the right side of the item line, and on the Edit Details page change the "Max Occurs" entry from 1 to UNBOUNDED.

    We also need to indicate how to identify an itemRecord. Since each item record has "." in column 32, we can use this fact to differentiate an item record from a summary record. Change the "Look Ahead" field to the value 32 and enter a period in the "Look For" field. Press the OK button to save the entry.

    Finally, it's time to create a top-level element to represent an order. Select the "Root-Element" in the schema tree and press the New Element icon. Click on the <new_element> and press the pencil icon. Set the Element Name to "order" and change the Data Type to "logicalOrderRecord". Press the OK button to save the element definition. The final definition should match the screenshot below.

    Press the Next button to view the definition source, then press the Test button to test the definition. Press the green triangle icon to run the test.

    And we are presented with an unwelcome error. The error states that the processor ran out of data while working through the definition. The processor was unable to differentiate between itemRecords and summaryRecords and therefore treated the entire file as a list of itemRecords; at end of file, the "summary" portion of the logicalOrderRecord remained unprocessed but mandatory.

    The root cause of this error is the loss of our "lookAhead" definition used to identify itemRecords. This appears to be a bug in the Native Format Builder 11.1.1.4.0. Luckily, a simple workaround exists: press the Cancel button and return to the "Step 4 of 4" window, then manually add nxsd:lookAhead="32" nxsd:lookFor="." attributes after the maxOccurs attribute of the item element, as shown in the highlighted text below.

    When the lookAhead and lookFor attributes have been added, press the Test button and, on the test page, press the green triangle. The test is now successful: the first order in the file is returned by the File Adapter. Below is a complete listing of the result XML from the right column of the screen above.

    Try running it

    The downloaded input test file and completed schema file can be used for testing without following all the Native Format Builder steps in this example. Use the following link to download a file containing the sample data: Download Sample Input Data. This is a better approach than cutting and pasting the input data at the top of the article: since the data is fixed-length, it's very important to watch out for trailing spaces in the data and to ensure an eol character at the end of every line. The download file is correctly formatted. The final schema definition can be downloaded at the following link: Download Completed Schema Definition.

    - Save the inputData.txt file to a known location, like the xsd folder in your project.
    - Save the inputData_6.xsd file to the xsd folder in your project.
    - At Step 1 in the Native Format Builder wizard (as shown above), check the "Edit existing" radio button, then browse to and select the inputData_6.xsd file.
    - At Step 2 of the Format Builder configuration wizard (as shown above), supply the path and filename for the inputData.txt file.
    - You can then proceed to the test page and run a test.
    - Remember that the wizard bug will drop the lookAhead and lookFor attributes; you will need to manually add nxsd:lookAhead="32" nxsd:lookFor="." after the maxOccurs attribute of the item element in the logicalOrderRecord complex type (as shown above).

    Good luck with your Format project.
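    For reference, the hand-patched item element inside the logicalOrderRecord complex type would look roughly like the sketch below. The element and type names come from the walkthrough above; the exact surrounding schema is generated by the wizard, so treat this as illustrative rather than a verbatim excerpt:

        <xsd:element name="item" type="tns:itemRecord"
                     minOccurs="1" maxOccurs="unbounded"
                     nxsd:lookAhead="32" nxsd:lookFor="." />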

    Read the article

  • Configuring UCM cache to check for external Content Server changes

    - by Martin Deh
    Recently, I was involved in a customer scenario where they were modifying the Content Server's contributor data files directly through Content Server. This operation, of course, is completely supported. However, since the contributor data file was modified through the "backdoor", a running WebCenter Spaces page, which also used the same data file, would not get the updates immediately. This was due to two reasons: first, the Spaces page was using Content Presenter to display the contents of the data file; second, the Spaces application was using the "cached" version of the data file. Fortunately, there is a way to configure the cache so backdoor changes can be picked up more quickly and automatically.

    First, a brief overview of Content Presenter. The Content Presenter task flow enables WebCenter Spaces users with Page-Edit permissions to precisely customize the selection and presentation of content in a WebCenter Spaces application. With Content Presenter, you can select a single item of content, contents under a folder, a list of items, or a query for content, and then select a Content Presenter based template to render the content on a page in a Spaces application. In addition to displaying the folders and files in a Content Server, Content Presenter integrates with Oracle Site Studio to allow you to create, access, edit, and display Site Studio contributor data files (Content Server documents) in either a Site Studio region template or a custom Content Presenter display template. More information about creating Content Presenter display templates can be found in the OFM Developers Guide for WebCenter Portal.

    The easiest way to configure the cache is to modify the WebCenter Spaces Content Server service connection setting through Enterprise Manager. From here, under the Cache Details, there is a section to set the Cache Invalidation Interval. Basically, this enables the cache to be monitored by the cache "sweeper" utility. The cache sweeper queries for changes in the Content Server and then "marks" the object in cache as "dirty". This causes the application in turn to fetch a new copy of the document from the Content Server, replacing the cached version. By default the Cache Invalidation Interval is set to 0 (minutes), which means the sweeper is OFF. To turn the sweeper ON, just set a value in minutes; the minimal value that can be set is 2.

    Just a note: in some instances, once the value of the Cache Invalidation Interval has been set (and saved) in the Enterprise Manager UI, it becomes "sticky" and the interval value cannot be set back to 0. The good news is that this value can also be updated through a WLST command. The WLST command to run is as follows:

        setJCRContentServerConnection(appName, name, [socketType, url, serverHost, serverPort,
            keystoreLocation, keystorePassword, privateKeyAlias, privateKeyPassword,
            webContextRoot, clientSecurityPolicy, cacheInvalidationInterval,
            binaryCacheMaxEntrySize, adminUsername, adminPassword, extAppId, timeout,
            isPrimary, server, applicationVersion])

    One way to get the required information for executing the command is to use the listJCRContentServerConnections('webcenter', verbose=true) command. For example, this is the sample output from the execution:

        ------------------ UCM ------------------
        Connection Name: UCM
        Connection Type: JCR
        External Appliction ID:
        Timeout: (not set)
        CIS Socket Type: socket
        CIS Server Hostname: webcenter.oracle.local
        CIS Server Port: 4444
        CIS Keystore Location:
        CIS Private Key Alias:
        CIS Web URL:
        Web Server Context Root: /cs
        Client Security Policy:
        Admin User Name: sysadmin
        Cache Invalidation Interval: 2
        Binary Cache Maximum Entry Size: 1024
        The Documents primary connection is "UCM"

    From this information, the completed setJCRContentServerConnection would be:

        setJCRContentServerConnection(appName='webcenter', name='UCM', socketType='socket',
            serverHost='webcenter.oracle.local', serverPort='4444', webContextRoot='/cs',
            cacheInvalidationInterval='0', binaryCacheMaxEntrySize='1024',
            adminUsername='sysadmin', isPrimary=1)

    Note: the Spaces managed server must be restarted for the change to take effect. More information about using WLST for WebCenter can be found here.
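    These commands run inside a WLST session connected to the Admin Server; a minimal session sketch (the wlst.sh path, URL and credentials below are placeholders, not values from the article):

        $MW_HOME/oracle_common/common/bin/wlst.sh
        connect('weblogic', '<password>', 't3://adminhost:7001')
        listJCRContentServerConnections('webcenter', verbose=true)
        # ...then issue the setJCRContentServerConnection(...) call shown above
        exit()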
    Once the sweeper is turned ON, only cache objects that have been changed will be invalidated. To test this out, I will go through a simple scenario. The first thing to do is configure the Content Server so it can monitor and report on events. Log in to the Content Server console application and, under the Administration menu item, select System Audit Information. (Note: if your console is using the left-menu display option, the Administration link will be located there.) Under the Tracing Sections Information, add only "system" and "requestaudit" to the Active Sections, check Full Verbose Tracing, check Save, then click the Update button. Once this is done, select the View Server Output menu option. This changes the browser view to display the log, and is all that is needed to configure the Content Server.

    For example, the following is the View Server Output with the cache invalidation interval set to 2 (minutes). Note the timestamps:

        requestaudit/6 08.30 09:52:26.001 IdcServer-68 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.016933999955654144(secs)
        requestaudit/6 08.30 09:52:26.010 IdcServer-69 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.006134999915957451(secs)
        requestaudit/6 08.30 09:52:26.014 IdcServer-70 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.004271999932825565(secs)
        ... other trace info ...
        requestaudit/6 08.30 09:54:26.002 IdcServer-71 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.020323999226093292(secs)
        requestaudit/6 08.30 09:54:26.011 IdcServer-72 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.017928000539541245(secs)
        requestaudit/6 08.30 09:54:26.017 IdcServer-73 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.010185999795794487(secs)

    Now that the tracing logs are reporting correctly, the next step is to set up the Spaces app to test the sweeper. I will use two different pages containing Content Presenter task flows. Each task flow will use a different custom Content Presenter display template and will be assigned a different contributor data file (the documents that will be in the cache). The pages at run time appear as follows.

    Initially, when the Spaces pages containing the content are loaded in the browser for the first time, you can see the tracing information in the Content Server output viewer:

        requestaudit/6 08.30 11:51:12.030 IdcServer-129 CLEAR_SERVER_OUTPUT [dUser=weblogic] 0.029171999543905258(secs)
        requestaudit/6 08.30 11:51:12.101 IdcServer-130 GET_SERVER_OUTPUT [dUser=weblogic] 0.025721000507473946(secs)
        requestaudit/6 08.30 11:51:26.592 IdcServer-131 VCR_GET_DOCUMENT_BY_NAME [dID=919][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.21525299549102783(secs)
        requestaudit/6 08.30 11:51:27.117 IdcServer-132 VCR_GET_CONTENT_TYPES [dUser=sysadmin][IsJava=1] 0.5059549808502197(secs)
        requestaudit/6 08.30 11:51:27.146 IdcServer-133 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.03360399976372719(secs)
        requestaudit/6 08.30 11:51:27.169 IdcServer-134 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.008806000463664532(secs)
        requestaudit/6 08.30 11:51:27.204 IdcServer-135 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.013265999965369701(secs)
        requestaudit/6 08.30 11:51:27.384 IdcServer-136 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.18119299411773682(secs)
        requestaudit/6 08.30 11:51:27.533 IdcServer-137 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.1519480049610138(secs)
        requestaudit/6 08.30 11:51:27.634 IdcServer-138 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.10827399790287018(secs)
        requestaudit/6 08.30 11:51:27.687 IdcServer-139 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.059702999889850616(secs)
        requestaudit/6 08.30 11:51:28.271 IdcServer-140 GET_USER_PERMISSIONS [dUser=weblogic][IsJava=1] 0.006703000050038099(secs)
        requestaudit/6 08.30 11:51:28.285 IdcServer-141 GET_ENVIRONMENT [dUser=sysadmin][IsJava=1] 0.010893999598920345(secs)
        requestaudit/6 08.30 11:51:30.433 IdcServer-142 GET_SERVER_OUTPUT [dUser=weblogic] 0.017318999394774437(secs)
        requestaudit/6 08.30 11:51:41.837 IdcServer-143 VCR_GET_DOCUMENT_BY_NAME [dID=508][dDocName=113_ES][dDocTitle=Landing Home][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.15937699377536774(secs)
        requestaudit/6 08.30 11:51:42.781 IdcServer-144 GET_FILE [dID=326][dDocName=WEBCENTERORACL000315][dDocTitle=Duke][dUser=anonymous][RevisionSelectionMethod=LatestReleased][dSecurityGroup=Public][xCollectionID=0] 0.16288499534130096(secs)

    The highlighted sections show where the two data files, DF_UCMCACHETESTER (the P1 page) and 113_ES (the P2 page), were called by the (Spaces) VCR connection to the Content Server. The most important line to notice is the VCR_GET_DOCUMENT_BY_NAME invocation. On subsequent refreshes of these two pages you will notice (after you refresh the Content Server's View Server Output) that there are no further traces of the same VCR_GET_DOCUMENT_BY_NAME invocations: the pages are getting the documents from the cache.

    The next step is to go through the "backdoor" and change one of the documents through the Content Server console. This can be done by first locating the data file document and, from the Content Information page, selecting the Edit Data File menu option. This invokes the Site Studio Contributor, where the modifications can be made.

    Refreshing the Content Server View Server Output, the tracing displays the operations performed on the document:

        requestaudit/6 08.30 11:56:59.972 IdcServer-255 SS_CHECKOUT_BY_NAME [dID=922][dDocName=DF_UCMCACHETESTER][dUser=weblogic][dSecurityGroup=Public] 0.05558200180530548(secs)
        requestaudit/6 08.30 11:57:00.065 IdcServer-256 SS_GET_CONTRIBUTOR_CONFIG [dID=922][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.08632399886846542(secs)
        requestaudit/6 08.30 11:57:00.470 IdcServer-259 DOC_INFO_BY_NAME [dID=922][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.02268899977207184(secs)
        requestaudit/6 08.30 11:57:10.177 IdcServer-264 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.007652000058442354(secs)
        requestaudit/6 08.30 11:57:10.181 IdcServer-263 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.01868399977684021(secs)
        requestaudit/6 08.30 11:57:10.187 IdcServer-265 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.009367000311613083(secs)
        (internal)/6 08.30 11:57:26.118 IdcServer-266 File to be removed: /oracle/app/admin/domains/webcenter/ucm/cs/vault/~temp/703253295.xml
        (internal)/6 08.30 11:57:26.121 IdcServer-266 File to be removed: /oracle/app/admin/domains/webcenter/ucm/cs/vault/~temp/703253295.xml
        requestaudit/6 08.30 11:57:26.122 IdcServer-266 SS_SET_ELEMENT_DATA [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0][StatusCode=0][StatusMessage=Successfully checked in content item 'DF_UCMCACHETESTER'.] 0.3765290081501007(secs)
        requestaudit/6 08.30 11:57:30.710 IdcServer-267 DOC_INFO_BY_NAME [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.07942699640989304(secs)
        requestaudit/6 08.30 11:57:30.733 IdcServer-268 SS_GET_CONTRIBUTOR_STRINGS [dUser=weblogic] 0.0044570001773536205(secs)

    After a few moments, refreshing the P1 page shows that the update has been applied. (Note: the refresh time may vary, since the Cache Invalidation Interval, set here to 2 minutes, is not tied to when changes happened; the sweeper just runs every 2 minutes.)

    Refreshing the Content Server View Server Output again, the tracing displays the important information:

        requestaudit/6 08.30 11:59:10.171 IdcServer-270 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.00952600035816431(secs)
        requestaudit/6 08.30 11:59:10.179 IdcServer-271 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.011118999682366848(secs)
        requestaudit/6 08.30 11:59:10.182 IdcServer-272 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.007447000127285719(secs)
        requestaudit/6 08.30 11:59:16.885 IdcServer-273 VCR_GET_DOCUMENT_BY_NAME [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.0786449983716011(secs)

    After the specified interval, the sweeper is invoked, as noted by the GET_... calls. Since the history has recorded the change, the next call is VCR_GET_DOCUMENT_BY_NAME, retrieving the new version of the (modified) data file. Navigating back to the P2 page and viewing the server output, there are no further VCR_GET_DOCUMENT_BY_NAME calls for its data file; that file was simply served from the cache.

    Upon further review of the server output, we can see that there was only one request for VCR_GET_DOCUMENT_BY_NAME:

        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor Request Audit Report over the last 120 Seconds for server webcenteroraclelocal16200****
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor -Num Requests 8 Errors 0 Reqs/sec. 0.06666944175958633 Avg. Latency (secs) 0.02762500010430813 Max Thread Count 2
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 1 Service VCR_GET_DOCUMENT_BY_NAME Total Elapsed Time (secs) 0.09200000017881393 Num requests 1 Num errors 0 Avg. Latency (secs) 0.09200000017881393
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 2 Service GET_PERSONALIZED_JAVASCRIPT Total Elapsed Time (secs) 0.054999999701976776 Num requests 1 Num errors 0 Avg. Latency (secs) 0.054999999701976776
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 3 Service GET_FOLDER_HISTORY_REPORT Total Elapsed Time (secs) 0.028999999165534973 Num requests 2 Num errors 0 Avg. Latency (secs) 0.014499999582767487
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 4 Service GET_SERVER_OUTPUT Total Elapsed Time (secs) 0.017999999225139618 Num requests 1 Num errors 0 Avg. Latency (secs) 0.017999999225139618
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 5 Service GET_FILE Total Elapsed Time (secs) 0.013000000268220901 Num requests 1 Num errors 0 Avg. Latency (secs) 0.013000000268220901
        requestaudit/6 08.30 12:08:00.021 Audit Request Monitor ****End Audit Report*****

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate's tools, covered the first two parts of a simple Database Continuous Delivery process:
    - putting your database into a source control system, and
    - running a continuous integration process each time changes are checked in.
    There is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above:
    - putting some actual integration tests into the CI process (otherwise it doesn't really do much, does it!?),
    - deploying the database changes with a managed, automated approach,
    - monitoring what you've just put live, to make sure you haven't broken anything.
    This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: a lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through Part I first. There'll then be a third post on automated database deployment, followed by a final post dealing with the last item: monitoring changes on the live system.

    In the previous post, I used a mixture of Red Gate products and other third-party software, GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian's BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. In this post, however, I'll be using mostly Red Gate products (other than tSQLt). I would do this firstly because I work for Red Gate, but also because I think that, in the area of Database Delivery processes, nobody else has the offerings to implement this process fully, so I didn't have any choice!

    Background on Continuous Delivery

    For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery - Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it's that much harder for databases!). However, a lot of the principles they describe can be equally applied to database development and, I would argue, should be. As I say, though, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references:
    - Refactoring Databases - Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage
    - Versioning Databases - Branching and Merging, by Scott Allen
    In particular, I don't deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler, "Continuous Delivery is about keeping your application in a state where it is always able to deploy into production." I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure.
    That is possible (and is what Martin calls Continuous Deployment), but again, it's more than I describe in this article. And I doubt I need to remind DBAs or developers to proceed with caution!

    Integration Testing

    Back to something practical. The next stage, building on our setup from the previous article, is to add some integration tests to the process. As I say, the CI process, though interesting, isn't enormously useful without some sort of test process running. For this we'll use the tSQLt framework, an open-source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate's SQL Test, found at http://www.red-gate.com/products/sql-development/sql-test/, or it can be downloaded separately from www.tsqlt.org - though I'll provide a step-by-step guide below for setting this up.

    Getting tSQLt set up via SQL Test

    - Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed.
    - Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on your machine, if not already installed.
    - Open SSMS. You should now see SQL Test under the Tools menu; clicking this link will give you the basic SQL Test dialogue.
    - Although we've installed the SQL Test product, we haven't yet installed the tSQLt test framework on any particular database. To do this, we add our RedGateApp database using this dialogue, by clicking on the "+ Add Database to SQL Test..." link, selecting the RedGateApp database, and clicking the Add Database link.
    - In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the "Add SQL Cop tests" option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database, but we won't be using them in this particular simple example.
    - Once you've clicked the OK button, the changes described in the dialogue will be made to your database. Some of these are shown on the left-hand side below.

    We've now installed the framework. However, we haven't actually created any tests, so that will be the next step. But before we proceed, we've made an update to our database, so we should again check this in to source control, adding comments as required. It's also worth a quick check that your build still runs with the new additions! (And a quick check of the RedGateAppCI database shows that the changes have been made.)

    Creating and Testing a Unit Test

    There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I'm going to use here is pretty Mickey Mouse: our database table is going to include some email addresses as reference data, and I want to check whether these are all in a correct email format. Nothing clever, but it illustrates the process and hopefully shows the method by which more interesting tests could be set up.

    Adding Reference Data to our Database

    To start, I want to add some reference data to my database, and have this source-controlled (as well as the schema).
    First of all I need to add some data to my solitary table. This can be done a number of ways, but I'll do it in SSMS for simplicity: I simply add some reference data rows to the table. Currently this reference data just exists in the database. For proper integration testing, it needs to form part of the source-controlled version of the database, and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a primary key needs to be added to the table: right-click the table, select Design, then right-click on the first "id" row and click "Set Primary Key". NB: once this change is made, click Save to save the change to the table.

    Then, to source-control this reference data, right-click on the table (dbo.Email) and select the link-data option. In the next screen, link the data in the Email table by selecting it from the list and clicking "save and close". We should at this point re-commit the changes (both the addition of the primary key and the data) to the Git repo. NB: from here on, I won't show screenshots for the GitHub side of things - it's the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo, is taking it from).

    An interesting point to note here: when these changes are committed in SQL Source Control (right-click the database and select "Commit Changes to Source Control.."), the display gives a warning about possibly needing a migration script for the "Add Primary Key" step of the changes. This isn't actually necessary in this case, but the mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database.

    Creating and Running the Test

    As I mention, the test I'm going to use here is a very simple one: are the email addresses in my reference table valid? This isn't, of course, a full test of email validation (I expect the email addresses I've chosen here aren't really those of the Fab Four), but just a very basic check of the format used. I've taken the relevant SQL from this Stack Overflow article.

    In SSMS select "SQL Test" from the Tools menu, then click on + New Test. In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct - RedGateApp in this example. Click "Create Test". After closing a couple of subsequent dialogues, you'll see a dummy script for the test, which needs filling in.

    We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I'm going to use here is as below.
    This needs to be copied and pasted into the query window, to replace the default given by tSQLt:

        -- Basic email check test
        ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
        AS
        BEGIN
            SET NOCOUNT ON

            DECLARE @Output VARCHAR(MAX)
            SET @Output = ''

            SELECT @Output = @Output + Email + CHAR(13) + CHAR(10)
            FROM dbo.Email
            WHERE Email NOT LIKE '%_@__%.__%'

            IF @Output > ''
            BEGIN
                SET @Output = CHAR(13) + CHAR(10) + @Output
                EXEC tSQLt.Fail @Output
            END
        END;

    Once this script is entered, hit Execute to add the stored procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see a green tick to indicate success! But of course, we also need to confirm that the test is actually doing something, by showing a failed test. Edit one of the email addresses in your table to an incorrect format, re-run the same SQL Test as before, and this time it fails. Great - we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS. (Leave the email address as invalid for now, for the next steps.) The next stage is to check this new test into source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).

    Checking that the Tests are Running as Integration Tests

    After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the stored procedures for RedGateAppCI shows that the SPROC for the new test has been moved over to that database. However, this is not exactly what we were after. We didn't want to just copy objects from one database to another, but to actually run the tests as part of the build/integration test process. I.e. we're continuously checking any changes we make (in this case, to the reference data emails) to ensure we're not breaking a test that we've set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did above when we made one of the email addresses invalid) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows, sadly, a successful build!

    To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document to make the following change, save the document, then commit and sync this change in the GitHub client:

        <!-- tSQLt tests -->
        <!-- Optional -->
        <!-- To run tSQLt tests in source control for the database, enter true. -->
        <enableTsqlt>true</enableTsqlt>

    Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence the different address and build number), we get a broken build - superb! The error message isn't great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log is towards the bottom. Pulling out this part:

        21-Jun-2013 11:35:19    Build FAILED.
        21-Jun-2013 11:35:19
        21-Jun-2013 11:35:19    "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) ->
        21-Jun-2013 11:35:19    (sqlCI target) ->
        21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]

    As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I correct the invalid email address, check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build. This should have fixed the build: it worked!

    Summary

    This has been a very quick run through the implementation of CI for databases, including tSQLt tests to check whether your database updates are working. The next post in this series will focus on automated deployment: we've tested our database changes, so how can we now deploy them to target sites?
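    Incidentally, once the framework is installed, the same tests can also be run straight from a query window, without the SQL Test UI; this is handy for scripting a quick local check (MyChecks is the test class created above):

        EXEC tSQLt.Run 'MyChecks';   -- run every test in the MyChecks test class
        -- or run the entire suite:
        EXEC tSQLt.RunAll;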

    Read the article

  • How to simulate inner join on very large files in java (without running out of memory)

    - by Constantin
    I am trying to simulate SQL joins using Java and very large text files (INNER, RIGHT OUTER and LEFT OUTER). The files have already been sorted using an external sort routine. The issue is finding the most efficient way to deal with the INNER join part of the algorithm. Right now I am using two Lists to store the lines that have the same key, and I iterate through the set of lines in the right file once for every line in the left file (provided the keys still match). In other words, the join key is not unique in each file, so I need to account for the Cartesian product situations:

        left_01, 1
        left_02, 1
        right_01, 1
        right_02, 1
        right_03, 1

        left_01 joins to right_01 using key 1
        left_01 joins to right_02 using key 1
        left_01 joins to right_03 using key 1
        left_02 joins to right_01 using key 1
        left_02 joins to right_02 using key 1
        left_02 joins to right_03 using key 1

    My concern is memory. I will run out of it with the approach below, but I still want the inner-join part to work fairly quickly. What is the best approach for the INNER join part, keeping in mind that these files may potentially be huge?

        public class Joiner {

            private void join(BufferedReader left, BufferedReader right, BufferedWriter output) throws Throwable {
                BufferedReader _left = left;
                BufferedReader _right = right;
                BufferedWriter _output = output;
                Record _leftRecord;
                Record _rightRecord;

                _leftRecord = read(_left);
                _rightRecord = read(_right);
                while (_leftRecord != null && _rightRecord != null) {
                    if (_leftRecord.getKey() < _rightRecord.getKey()) {
                        write(_output, _leftRecord, null);
                        _leftRecord = read(_left);
                    } else if (_leftRecord.getKey() > _rightRecord.getKey()) {
                        write(_output, null, _rightRecord);
                        _rightRecord = read(_right);
                    } else {
                        List<Record> leftList = new ArrayList<Record>();
                        List<Record> rightList = new ArrayList<Record>();
                        _leftRecord = readRecords(leftList, _leftRecord, _left);
                        _rightRecord = readRecords(rightList, _rightRecord, _right);
                        for (Record equalKeyLeftRecord : leftList) {
                            for (Record equalKeyRightRecord : rightList) {
                                write(_output, equalKeyLeftRecord, equalKeyRightRecord);
                            }
                        }
                    }
                }
                if (_leftRecord != null) {
                    write(_output, _leftRecord, null);
                    _leftRecord = read(_left);
                    while (_leftRecord != null) {
                        write(_output, _leftRecord, null);
                        _leftRecord = read(_left);
                    }
                } else {
                    if (_rightRecord != null) {
                        write(_output, null, _rightRecord);
                        _rightRecord = read(_right);
                        while (_rightRecord != null) {
                            write(_output, null, _rightRecord);
                            _rightRecord = read(_right);
                        }
                    }
                }
                _left.close();
                _right.close();
                _output.flush();
                _output.close();
            }

            private Record read(BufferedReader reader) throws Throwable {
                Record record = null;
                String data = reader.readLine();
                if (data != null) {
                    record = new Record(data.split("\t"));
                }
                return record;
            }

            private Record readRecords(List<Record> list, Record record, BufferedReader reader) throws Throwable {
                int key = record.getKey();
                list.add(record);
                record = read(reader);
                while (record != null && record.getKey() == key) {
                    list.add(record);
                    record = read(reader);
                }
                return record;
            }

            private void write(BufferedWriter writer, Record left, Record right) throws Throwable {
                String leftKey = (left == null ? "null" : Integer.toString(left.getKey()));
                String leftData = (left == null ? "null" : left.getData());
                String rightKey = (right == null ? "null" : Integer.toString(right.getKey()));
                String rightData = (right == null ? "null" : right.getData());
                writer.write("[" + leftKey + "][" + leftData + "][" + rightKey + "][" + rightData + "]\n");
            }

            public static void main(String[] args) {
                try {
                    BufferedReader leftReader = new BufferedReader(new FileReader("LEFT.DAT"));
                    BufferedReader rightReader = new BufferedReader(new FileReader("RIGHT.DAT"));
                    BufferedWriter output = new BufferedWriter(new FileWriter("OUTPUT.DAT"));
                    Joiner joiner = new Joiner();
                    joiner.join(leftReader, rightReader, output);
                } catch (Throwable e) {
                    e.printStackTrace();
                }
            }
        }

    After applying the ideas from the proposed answer, I changed the loop to this:

        private void join(RandomAccessFile left, RandomAccessFile right, BufferedWriter output) throws Throwable {
            long _pointer = 0;
            RandomAccessFile _left = left;
            RandomAccessFile _right = right;
            BufferedWriter _output = output;
            Record _leftRecord;
            Record _rightRecord;

            _leftRecord = read(_left);
            _rightRecord = read(_right);
            while (_leftRecord != null && _rightRecord != null) {
                if (_leftRecord.getKey() < _rightRecord.getKey()) {
                    write(_output, _leftRecord, null);
                    _leftRecord = read(_left);
                } else if (_leftRecord.getKey() > _rightRecord.getKey()) {
                    write(_output, null, _rightRecord);
                    _pointer = _right.getFilePointer();
                    _rightRecord = read(_right);
                } else {
                    long _tempPointer = 0;
                    int key = _leftRecord.getKey();
                    while (_leftRecord != null && _leftRecord.getKey() == key) {
                        _right.seek(_pointer);
                        _rightRecord = read(_right);
                        while (_rightRecord != null && _rightRecord.getKey() == key) {
                            write(_output, _leftRecord, _rightRecord);
                            _tempPointer = _right.getFilePointer();
                            _rightRecord = read(_right);
                        }
                        _leftRecord = read(_left);
                    }
                    _pointer = _tempPointer;
                }
            }
            if (_leftRecord != null) {
                write(_output, _leftRecord, null);
                _leftRecord = read(_left);
                while (_leftRecord != null) {
                    write(_output, _leftRecord, null);
                    _leftRecord = read(_left);
                }
            } else {
                if (_rightRecord != null) {
                    write(_output, null, _rightRecord);
                    _rightRecord = read(_right);
                    while (_rightRecord != null) {
                        write(_output, null, _rightRecord);
                        _rightRecord = read(_right);
                    }
                }
            }
            _left.close();
            _right.close();
            _output.flush();
            _output.close();
        }

    UPDATE: While this approach worked, it was terribly slow, so I have modified it to create files as buffers, and this works very well. Here is the update...
private long getMaxBufferedLines(File file) throws Throwable { long freeBytes = Runtime.getRuntime().freeMemory() / 2; return (freeBytes / (file.length() / getLineCount(file))); } private void join(File left, File right, File output, JoinType joinType) throws Throwable { BufferedReader leftFile = new BufferedReader(new FileReader(left)); BufferedReader rightFile = new BufferedReader(new FileReader(right)); BufferedWriter outputFile = new BufferedWriter(new FileWriter(output)); long maxBufferedLines = getMaxBufferedLines(right); Record leftRecord; Record rightRecord; leftRecord = read(leftFile); rightRecord = read(rightFile); while( leftRecord != null && rightRecord != null ) { if( leftRecord.getKey().compareTo(rightRecord.getKey()) < 0) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); } else if( leftRecord.getKey().compareTo(rightRecord.getKey()) > 0 ) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); } else if( leftRecord.getKey().compareTo(rightRecord.getKey()) == 0 ) { String key = leftRecord.getKey(); List<File> rightRecordFileList = new ArrayList<File>(); List<Record> rightRecordList = new ArrayList<Record>(); rightRecordList.add(rightRecord); rightRecord = consume(key, rightFile, rightRecordList, rightRecordFileList, maxBufferedLines); while( leftRecord != null && leftRecord.getKey().compareTo(key) == 0 ) { processRightRecords(outputFile, leftRecord, rightRecordFileList, rightRecordList, joinType); leftRecord = read(leftFile); } // need a dispose for deleting files in list } else { throw new Exception("DATA IS NOT SORTED"); } } if( leftRecord != null ) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); while(leftRecord != null) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); } } else { if( rightRecord != null ) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); while(rightRecord != null) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); } } } leftFile.close(); rightFile.close(); outputFile.flush(); outputFile.close(); } public void processRightRecords(BufferedWriter outputFile, Record leftRecord, List<File> rightFiles, List<Record> rightRecords, JoinType joinType) throws Throwable { for(File rightFile : rightFiles) { BufferedReader rightReader = new BufferedReader(new FileReader(rightFile)); Record rightRecord = read(rightReader); while(rightRecord != null){ if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.RightOuterJoin || joinType == JoinType.FullOuterJoin || 
joinType == JoinType.InnerJoin ) { write(outputFile, leftRecord, rightRecord); } rightRecord = read(rightReader); } rightReader.close(); } for(Record rightRecord : rightRecords) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.RightOuterJoin || joinType == JoinType.FullOuterJoin || joinType == JoinType.InnerJoin ) { write(outputFile, leftRecord, rightRecord); } } } /** * Consume all records having this key (either into a single list or into multiple files); each file will * store a buffer full of data. The right record returned represents the outside flow (its key is * already positioned at the next one, or it is null), so we can't use this record in the while loop below or * within this block in general when comparing the current key. The trick is to keep consuming * from a List. When it becomes empty, re-fill it from the next file until all files have * been consumed (and the last node in the list is read). The next outside iteration will then be * ready to be processed (either it will be null or it will point to the next biggest key). * @throws Throwable * */ private Record consume(String key, BufferedReader reader, List<Record> records, List<File> files, long bufferMaxRecordLines ) throws Throwable { boolean processComplete = false; Record record = records.get(records.size() - 1); while(!processComplete){ long recordCount = records.size(); if( record.getKey().compareTo(key) == 0 ){ record = read(reader); while( record != null && record.getKey().compareTo(key) == 0 && recordCount < bufferMaxRecordLines ) { records.add(record); recordCount++; record = read(reader); } } processComplete = true; // if record is null, we are done if( record != null ) { // if the key has changed, we are done if( record.getKey().compareTo(key) == 0 ) { // Same key means we have exhausted the buffer. // Dump the entire buffer into a file. The list of file // pointers will keep track of the files ... processComplete = false; dumpBufferToFile(records, files); records.clear(); records.add(record); } } } return record; } /** * Dump all records in the List of Record objects to a file, then add that * file to the List of File objects. * * NEED TO PLACE A LIMIT ON NUMBER OF FILE POINTERS (check size of file list) * * @param records * @param files * @throws Throwable */ private void dumpBufferToFile(List<Record> records, List<File> files) throws Throwable { String prefix = "joiner_" + (files.size() + 1); String suffix = ".dat"; File file = File.createTempFile(prefix, suffix, new File("cache")); BufferedWriter writer = new BufferedWriter(new FileWriter(file)); for( Record record : records ) { writer.write( record.dump() ); } files.add(file); writer.flush(); writer.close(); }

    Read the article

  • Adventures in MVVM &ndash; My ViewModel Base

    - by Brian Genisio's House Of Bilz
More Adventures in MVVM First, I'd like to say: THIS IS NOT A NEW MVVM FRAMEWORK. I tend to believe that MVVM support code should be specific to the system you are building and the developers working on it. I have yet to find an MVVM framework that does everything I want it to without doing too much. Don't get me wrong… there are some good frameworks out there. I just like to pick and choose things that make sense for me. I'd also like to add that some of these features only work in WPF. As of Silverlight 4, it doesn't support binding to dynamic properties, so some of the capabilities are lost. That being said, I want to share my ViewModel base class with the world. I have had several conversations with people about the problems I have solved using this ViewModel base. A while back, I posted an article about some experiments with a "Rails Inspired ViewModel". What followed from those ideas was a ViewModel base class that I take with me and use in my projects. It has a lot of features, all designed to reduce the friction in writing view models. I have put the code out on CodePlex under the project: ViewModelSupport. Finally, this article focuses on the ViewModel and only glosses over the View and the Model. Without all three, you don't have MVVM. But this base class is for the ViewModel, so that is what I am focusing on. Features: Automatic Command Plumbing, Property Change Notification, Strongly Typed Property Getters/Setters, Dynamic Properties, Default Property Values, Derived Properties, Automatic Method Execution, Command CanExecute Change Notification, Design-Time Detection, What about Silverlight? Automatic Command Plumbing This feature takes the plumbing out of creating commands. The common pattern for commands in a ViewModel is to have an Execute method as well as an optional CanExecute method. To plumb that together, you create an ICommand property and set it in the constructor like so: Before public class AutomaticCommandViewModel { public AutomaticCommandViewModel() { MyCommand = new DelegateCommand(Execute_MyCommand, CanExecute_MyCommand); } public void Execute_MyCommand() { // Do something } public bool CanExecute_MyCommand() { // Are we in a state to do something? return true; } public DelegateCommand MyCommand { get; private set; } } With the base class, this plumbing is automatic and the property (MyCommand of type ICommand) is created for you. The base class uses the convention that methods be prefixed with Execute_ and CanExecute_ in order to be plumbed into commands with the property name after the prefix. You are left to be expressive with your behavior without the plumbing. If you are wondering how CanExecuteChanged is raised, see the later section "Command CanExecute Change Notification". After public class AutomaticCommandViewModel : ViewModelBase { public void Execute_MyCommand() { // Do something } public bool CanExecute_MyCommand() { // Are we in a state to do something? return true; } } Property Change Notification One thing that always kills me when implementing ViewModels is how to make properties that notify when they change (via the INotifyPropertyChanged interface). There have been many attempts to make this more automatic. My base class includes one option. There are others, but I feel like this works best for me. The common pattern (without my base class) is to create a private backing store for the variable and specify a getter that returns the private field.
The setter will set the private field and fire an event that notifies the change, but only if the value has changed. Before public class PropertyHelpersViewModel : INotifyPropertyChanged { private string text; public string Text { get { return text; } set { if(text != value) { text = value; RaisePropertyChanged("Text"); } } } protected void RaisePropertyChanged(string propertyName) { var handlers = PropertyChanged; if(handlers != null) handlers(this, new PropertyChangedEventArgs(propertyName)); } public event PropertyChangedEventHandler PropertyChanged; } This way of defining properties is error-prone and tedious. Too much plumbing. My base class eliminates much of that plumbing with the same functionality: After public class PropertyHelpersViewModel : ViewModelBase { public string Text { get { return Get<string>("Text"); } set { Set("Text", value);} } } Strongly Typed Property Getters/Setters It turns out that we can do better than that. We are using a strongly typed language where the use of "Magic Strings" is often frowned upon. Let's make the names in the getters and setters strongly typed: A refinement public class PropertyHelpersViewModel : ViewModelBase { public string Text { get { return Get(() => Text); } set { Set(() => Text, value); } } } Dynamic Properties In C# 4.0, we have the ability to program statically OR dynamically. This base class lets us leverage the powerful dynamic capabilities in our ecosystem. (This is how the automatic commands are implemented, BTW.) By calling Set("Foo", 1), you have now created a dynamic property called Foo. It can be bound against like any static property. The opportunities are endless. One great way to exploit this behavior is if you have a customizable view engine with templates that bind to properties defined by the user. The base class just needs to create the dynamic properties at runtime from information in the model, and the custom template can bind even though the static properties do not exist. All dynamic properties still benefit from the notification capabilities that static properties have. For any nay-sayers out there who don't like using the dynamic features of C#, just remember this: the act of binding the View to a ViewModel is dynamic already. Why not exploit it? Get over it :) Just declare the property dynamically public class DynamicPropertyViewModel : ViewModelBase { public DynamicPropertyViewModel() { Set("Foo", "Bar"); } } Then reference it normally <TextBlock Text="{Binding Foo}" /> Default Property Values The Get() method also allows a default value to be supplied. Don't set defaults in the constructor; set them in the property and keep the related code together: public string Text { get { return Get(() => Text, "This is the default value"); } set { Set(() => Text, value);} } Derived Properties This is something I blogged about a while back in more detail. This feature came from the chaining of property notifications when one property affects the results of another, like this: Before public class DependantPropertiesViewModel : ViewModelBase { public double Score { get { return Get(() => Score); } set { Set(() => Score, value); RaisePropertyChanged("Percentage"); RaisePropertyChanged("Output"); } } public int Percentage { get { return (int)(100 * Score); } } public string Output { get { return "You scored " + Percentage + "%."; } } } The problem is: the setter for Score has to be responsible for notifying the world that Percentage and Output have also changed. This, to me, is backwards.
It certainly violates the “Single Responsibility Principle.” I have been bitten in the rear more than once by problems created from code like this.  What we really want to do is invert the dependency.  Let the Percentage property declare that it changes when the Score Property changes. After public class DependantPropertiesViewModel : ViewModelBase { public double Score { get { return Get(() => Score); } set { Set(() => Score, value); } } [DependsUpon("Score")] public int Percentage { get { return (int)(100 * Score); } } [DependsUpon("Percentage")] public string Output { get { return "You scored " + Percentage + "%."; } } }   Automatic Method Execution This one is extremely similar to the previous, but it deals with method execution as opposed to property.  When you want to execute a method triggered by property changes, let the method declare the dependency instead of the other way around. Before public class DependantMethodsViewModel : ViewModelBase { public double Score { get { return Get(() => Score); } set { Set(() => Score, value); WhenScoreChanges(); } } public void WhenScoreChanges() { // Handle this case } } After public class DependantMethodsViewModel : ViewModelBase { public double Score { get { return Get(() => Score); } set { Set(() => Score, value); } } [DependsUpon("Score")] public void WhenScoreChanges() { // Handle this case } }   Command CanExecute Change Notification Back to Commands.  One of the responsibilities of commands that implement ICommand – it must fire an event declaring that CanExecute() needs to be re-evaluated.  I wanted to wait until we got past a few concepts before explaining this behavior.  You can use the same mechanism here to fire off the change.  In the CanExecute_ method, declare the property that it depends upon.  When that property changes, the command will fire a CanExecuteChanged event, telling the View to re-evaluate the state of the command.  The View will make appropriate adjustments, like disabling the button. DependsUpon works on CanExecute methods as well public class CanExecuteViewModel : ViewModelBase { public void Execute_MakeLower() { Output = Input.ToLower(); } [DependsUpon("Input")] public bool CanExecute_MakeLower() { return !string.IsNullOrWhiteSpace(Input); } public string Input { get { return Get(() => Input); } set { Set(() => Input, value);} } public string Output { get { return Get(() => Output); } set { Set(() => Output, value); } } }   Design-Time Detection If you want to add design-time data to your ViewModel, the base class has a property that lets you ask if you are in the designer.  You can then set some default values that let your designer see what things might look like in runtime. Use the IsInDesignMode property public DependantPropertiesViewModel() { if(IsInDesignMode) { Score = .5; } }   What About Silverlight? Some of the features in this base class only work in WPF.  As of version 4, Silverlight does not support binding to dynamic properties.  This, in my opinion, is a HUGE limitation.  Not only does it keep you from using many of the features in this ViewModel, it also keeps you from binding to ViewModels designed in IronRuby.  Does this mean that the base class will not work in Silverlight?  No.  Many of the features outlined in this article WILL work.  All of the property abstractions are functional, as long as you refer to them statically in the View.  This, of course, means that the automatic command hook-up doesn’t work in Silverlight.  
You need to plumb it to a static property in order for the Silverlight View to bind to it.  Can I has a dynamic property in SL5?     Good to go? So, that concludes the feature explanation of my ViewModel base class.  Feel free to take it, fork it, whatever.  It is hosted on CodePlex.  When I find other useful additions, I will add them to the public repository.  I use this base class every day.  It is mature, and well tested.  If, however, you find any problems with it, please let me know!  Also, feel free to suggest patches to me via the CodePlex site.  :)

    Read the article

  • Can't connect to website after altering IPTables

    - by user2833135
I attempted to open up a port on my VPS, but I can't connect to my website after opening that port. Below are the commands I issued to open the port; they completed without errors, but now I can't reach my website at all. [root@vps ~]# iptables -F [root@vps ~]# iptables -A INPUT -i lo -j ACCEPT [root@vps ~]# iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT [root@vps ~]# iptables -A INPUT -p tcp --dport 25765 -j ACCEPT [root@vps ~]# iptables -P INPUT DROP [root@vps ~]# iptables -P FORWARD DROP [root@vps ~]# iptables -P OUTPUT ACCEPT [root@vps ~]# iptables -L -v Just a side note, but I am running CentOS 6 (64 bit). Thank you in advance.
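A likely culprit (an assumption, since the web server configuration isn't shown): the INPUT policy is set to DROP, but no rule ever accepts web traffic, so only port 25765 and already-established connections get through. If the site listens on the standard ports, rules along these lines should restore access:
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT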

    Read the article

  • grep on Windows XP vs. Windows 7

    - by cschol
I am using grep from GnuWin32 on Windows. On Windows XP, running grep -e "foo" NUL produces the output grep: NUL: invalid argument. On Windows 7, the same arguments produce no output at all. Why is the output different between Windows XP and Windows 7?

    Read the article

  • fail2ban Error Gentoo

    - by Mark Davidson
Hi all, I've recently set up a new VPS running Gentoo (my first time using the distro, so please forgive me if this is a really easy one) and, as I've done with other servers, installed fail2ban, setting it up to block the host via iptables on too many unsuccessful ssh logins. However, I'm getting a strange error that I can't quite solve. When I start fail2ban I get these lines in the error log 2009-11-13 18:02:01,290 fail2ban.jail : INFO Jail 'ssh-iptables' started 2009-11-13 18:02:01,480 fail2ban.actions.action: ERROR iptables -N fail2ban-SSH iptables -A fail2ban-SSH -j RETURN iptables -I INPUT -p tcp --dport ssh -j fail2ban-SSH returned 100 If I try to force a ban, these errors show up in the log and the host is not banned 2009-11-13 11:23:26,905 fail2ban.actions: WARNING [ssh-iptables] Ban XXX.XXX.XXX.XXX 2009-11-13 11:23:26,929 fail2ban.actions.action: ERROR iptables -n -L INPUT | grep -q fail2ban-SSH returned 100 2009-11-13 11:23:26,930 fail2ban.actions.action: ERROR Invariant check failed. Trying to restore a sane environment 2009-11-13 11:23:27,007 fail2ban.actions.action: ERROR iptables -N fail2ban-SSH iptables -A fail2ban-SSH -j RETURN iptables -I INPUT -p tcp --dport ssh -j fail2ban-SSH returned 100 2009-11-13 11:23:27,016 fail2ban.actions.action: ERROR iptables -n -L INPUT | grep -q fail2ban-SSH returned 100 2009-11-13 11:23:27,016 fail2ban.actions.action: CRITICAL Unable to restore environment My versions are as follows Linux masked 2.6.18-xen-r12 #2 SMP Wed Mar 4 11:45:03 GMT 2009 x86_64 Intel(R) Xeon(R) CPU E5504 @ 2.00GHz GenuineIntel GNU/Linux net-analyzer/fail2ban-0.8.4 net-firewall/iptables-1.4.3.2 If anyone could shed some light on these errors that would be great. I did wonder if it was a problem with iptables or some kernel modules, but I can block an IP manually with iptables -I INPUT -s 25.55.55.55 -j DROP, which makes me think it's something a bit more unusual. Thanks a lot in advance
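A sensible first step (a suggestion, not a confirmed fix) is to run the failing action commands by hand and read the actual error message instead of fail2ban's bare "returned 100":
iptables -N fail2ban-SSH
iptables -A fail2ban-SSH -j RETURN
iptables -I INPUT -p tcp --dport ssh -j fail2ban-SSH
On a hand-rolled Gentoo kernel, one plausible cause is a missing netfilter module: the manual rule above that does work uses only -s and -j DROP, whereas the fail2ban action needs TCP port matching (-p tcp --dport), so a kernel or Xen guest without that match support would fail in exactly this way.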

    Read the article

  • IIS serving static content gives 503 at random

    - by Steffen
We're having a few issues with our image server. It's a Win 2008 box running IIS 7.5 and it only serves static content: images. It had run without issues for quite a while, until recently, when we disabled Output Caching because we noticed that having it enabled meant it sent no-cache headers to the clients (forcing them to fetch the images from the server every time). We've read quite a bit about it, and it seems IIS just works that way - either you use Output Caching or you get control of the cache headers, not both. Anyway, having disabled the Output Cache, we now experience random 5-minute intervals where all requests just get a 503 Service Unavailable. During this period the "Files cached" performance counter stalls (it neither increases nor decreases), and after the period all caches are flushed. You might find it weird that I talk about caching, since we disabled Output Caching. The thing is, we changed the ObjectTTL parameter in the registry, so we cache files for 3 minutes (which has worked very well; our Disk I/O dropped significantly). So even with Output Caching disabled, we're still caching plenty of files - if we could just get rid of the random 503s it'd be perfect :-D We don't get any messages in the Windows event log during these 503 intervals, so we're pretty stumped as to what to do. Any ideas are very welcome :-)

    Read the article

  • How can I use Windows Workflow for validation of a Silverlight application?

    - by Josh C.
I want to use Windows Workflow to provide a validation service. The validation may have multiple tiers, with chaining and redirecting to other stages of validation. The application that will generate the data for validation is a Silverlight app. I imagine the validation will take longer than the blink of an eye, so I don't want to tie the user up. Instead, I would like the user to submit the current data for validation. If the validation happens quickly, the service will perform an asynchronous callback to the app; the viewmodel that made the call would receive the validation output and post it into the view. If the validation takes a long time, the user can move forward in the Silverlight app, disregarding the potential output of the validation. The viewmodel that made the call would be gone. I expect there would be another viewmodel that would hold the current validation output in its model; the validation value would change, causing the user to get a notification in a smaller notification area. I can see how the current view's viewmodel would call the validation through the viewmodel that contains the validation output, but I am concerned that the service call will time out. Also, I think the user may have already changed the values since the original validation, invalidating the feedback. I am sure asynchronous validation is a problem solved many times over, so I am looking to glean from your experience in solving this kind of problem. Is this the right approach, or is there a better way to approach this?

    Read the article

  • ODI 11g – Expert Accelerator for Model Creation

    - by David Allan
Following on from my post earlier this morning on scripting model and topology creation, tonight I thought I'd add a little UI to make those groovy functions a little more palatable. In OWB we have experts for capturing user input; with the groovy console we open up opportunities to build UI around the scripts in a very easy way - even I can do it ;-) After a little googling around I found some useful posts on SwingBuilder; the most useful one, which I used for the dialog below, was this one here. This dialog captures user input for the technology and context for the model, logical schema etc. to be created. You can see there are a variety of interesting controls, and it's really easy to do. The dialog captures the user's input; when OK is pressed, I call the functions from the earlier post to create the logical schema (plus all the other objects) and the model. The image below shows what was created: you can see the model (with a typo in its name); the model is Oracle technology and references the logical schema ORACLE_SCOTT (that I named in the dialog above); the logical schema is mapped via the GLOBAL context to the data server ORACLE_SCOTT_DEV (that I named in the dialog above); and the physical schema used was just the user name that I connected with - so if you wanted a different user, the schema name could be added to the dialog. In a nutshell, one dialog that encapsulates a simpler mechanism for creating a model. You can create your own scripts that use dialogs like this to capture input and process it. You can find the groovy script for this here: odi_create_model.groovy. Again, I wrapped the user-capture code in a groovy function, returned the result in a variable, and then simply called the createLogicalSchema and createModel functions from the previous posting. The script I supplied above has everything you will need. To execute, use Tools->Groovy->Open Script and then press the green play button on the toolbar. Have fun.

    Read the article

  • Secure iptables config for Samba

    - by Eric
    I'm trying to setup an iptables config such that outbound connections from my CentOS 6.2 server are allowed ONLY if they are of state ESTABLISHED. Currently, the following setup is working great for sshd, but all the Samba rules get totally ignored for a reason I cannot figure out. iptables Bash script to setup ALL rules: # Remove all existing rules iptables -F # Set default chain policies iptables -P INPUT DROP iptables -P FORWARD DROP iptables -P OUTPUT DROP # Allow incoming SSH iptables -A INPUT -i eth0 -p tcp --dport 22222 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -o eth0 -p tcp --sport 22222 -m state --state ESTABLISHED -j ACCEPT # Allow incoming Samba iptables -A INPUT -i eth0 -s 10.1.1.0/24 -p udp --dport 137:138 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -o eth0 -d 10.1.1.0/24 -p udp --sport 137:138 -m state --state ESTABLISHED -j ACCEPT iptables -A INPUT -i eth0 -s 10.1.1.0/24 -p tcp --dport 139 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -o eth0 -d 10.1.1.0/24 -p tcp --sport 139 -m state --state ESTABLISHED -j ACCEPT # Enable these rules service iptables restart iptables rule list after running the above script: [root@repoman ~]# iptables -L Chain INPUT (policy DROP) target prot opt source destination ACCEPT tcp -- anywhere anywhere tcp dpt:22222 state NEW,ESTABLISHED Chain FORWARD (policy DROP) target prot opt source destination Chain OUTPUT (policy DROP) target prot opt source destination ACCEPT tcp -- anywhere anywhere tcp spt:22222 state ESTABLISHED Ultimately, I'm trying to restrict Samba the same way I have done for sshd. In addition, I'm trying to restrict connections to the following IP address range: 10.1.1.12 - 10.1.1.19 Can you guys offer some pointers or possibly even a full-blown solution? I've read man iptables quite extensively, so I'm not sure why the Samba rules are getting thrown out. Additionally, removing the -s 10.1.1.0/24 flags don't change the fact the rules get ignored.
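One thing worth ruling out before dissecting the rules themselves (an assumption, since the saved ruleset isn't shown): on CentOS, service iptables restart reloads the ruleset saved in /etc/sysconfig/iptables, which throws away everything the script just added - and the surviving SSH-only rules look exactly like what an older saved file would contain. Persisting the live rules before (or instead of) restarting should make the Samba rules stick:
service iptables save
For the 10.1.1.12 - 10.1.1.19 restriction, the iprange match is one option, e.g. -m iprange --src-range 10.1.1.12-10.1.1.19 in place of -s 10.1.1.0/24.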

    Read the article

  • Different routing rules for a particular user using firewall mark and ip rule

    - by Paul Crowley
    Running Ubuntu 12.10 on amd64. I'm trying to set up different routing rules for a particular user. I understand that the right way to do this is to create a firewall rule that marks the packets for that user, and add a routing rule for that mark. Just to get testing going, I've added a rule that discards all packets as unreachable: # ip rule 0: from all lookup local 32765: from all fwmark 0x1 unreachable 32766: from all lookup main 32767: from all lookup default With this rule in place and all firewall chains in all tables empty and policy ACCEPT, I can still ping remote hosts just fine as any user. If I then add a rule to mark all packets and try to ping Google, it fails as expected # iptables -t mangle -F OUTPUT # iptables -t mangle -A OUTPUT -j MARK --set-mark 0x01 # ping www.google.com ping: unknown host www.google.com If I restrict this rule to the VPN user, it seems to have no effect. # iptables -t mangle -F OUTPUT # iptables -t mangle -A OUTPUT -j MARK --set-mark 0x01 -m owner --uid-owner vpn # sudo -u vpn ping www.google.com PING www.google.com (173.194.78.103) 56(84) bytes of data. 64 bytes from wg-in-f103.1e100.net (173.194.78.103): icmp_req=1 ttl=50 time=36.6 ms But it appears that the mark is being set, because if I add a rule to drop these packets in the firewall, it works: # iptables -t mangle -A OUTPUT -j DROP -m mark --mark 0x01 # sudo -u vpn ping www.google.com ping: unknown host www.google.com What am I missing? Thanks!
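A hedged guess at the ping discrepancy: ping is normally installed setuid root, so the owner match may be attributing its sockets (and the name lookups made from the same process) to root rather than vpn, letting the packets sail past the fwmark rule. Repeating the test with a non-setuid program - assuming wget is available - would confirm or rule that out:
sudo -u vpn wget -qO- http://www.google.com/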

    Read the article

  • How to rip AVI from VOB using ffmpeg

    - by Linux Jedi
    I am trying to convert a VOB to an AVI. I have ripped an AVI from this VOB before using ffmpeg, but for some reason it's not working this time. This is what I tried: ffmpeg -sameq -acodec copy -i VTS_01_2.VOB output.avi This is the output I get: FFmpeg version 0.6.1, Copyright (c) 2000-2010 the FFmpeg developers built on Dec 29 2010 18:02:10 with gcc 4.2.1 (Apple Inc. build 5664) configuration: libavutil 50.15. 1 / 50.15. 1 libavcodec 52.72. 2 / 52.72. 2 libavformat 52.64. 2 / 52.64. 2 libavdevice 52. 2. 0 / 52. 2. 0 libswscale 0.11. 0 / 0.11. 0 [mpeg2video @ 0x101014200]mpeg_decode_postinit() failure Last message repeated 6 times Input #0, mpeg, from 'VTS_01_2.VOB': Duration: 26:30:29.20, start: 140.171311, bitrate: 90 kb/s Stream #0.0[0x1e0]: Video: mpeg2video, yuv420p, 720x480 [PAR 32:27 DAR 16:9], 9800 kb/s, 31.44 fps, 29.97 tbr, 90k tbn, 59.94 tbc Stream #0.1[0xa0]: Audio: pcm_s16be, 48000 Hz, 2 channels, s16, 1536 kb/s Output #0, avi, to 'output.avi': Metadata: ISFT : Lavf52.64.2 Stream #0.0: Video: mpeg4, yuv420p, 720x480 [PAR 32:27 DAR 16:9], q=2-31, 200 kb/s, 29.97 tbn, 29.97 tbc Stream #0.1: Audio: pcm_s16be, 48000 Hz, 2 channels, 1536 kb/s Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1 Could not write header for output file #0 (incorrect codec parameters ?)
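Two details stand out here (reasoning from how this ffmpeg build parses its command line, so treat it as a hypothesis): options placed before -i apply to the input file, meaning -sameq and -acodec copy are not acting on the output at all; and AVI cannot carry big-endian PCM, so copying the VOB's pcm_s16be track into an AVI is what trips the "incorrect codec parameters" header error. A variant worth trying, with the options after the input and the audio converted to little-endian PCM:
ffmpeg -i VTS_01_2.VOB -sameq -acodec pcm_s16le output.avi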

    Read the article

  • LibGDX Boid Seek Behaviour

    - by childonline
I'm trying to make a swarm of boids which seek out the mouse position and move towards it, but I'm having a bit of a problem. The boids just seem to want to go to the upper-right corner of the game window. The mouse position seems to influence the behavior a bit, but not enough to make the boids turn towards it. I suspect there is a problem with the way LibGDX handles its coordinate system, but I'm not sure how to fix it. I've uploaded the eclipse project here! Also here are the relevant bits of my code, in case you see something obviously wrong: public Agent(){ _texture = GdxGame.TEX_AGENT; TextureRegion region = new TextureRegion(_texture, 0, 0, 32, 32); TextureRegion region2 = new TextureRegion(GdxGame.TEX_TARGET, 0, 0, 32, 32); _sprite = new Sprite(region); _sprite.setSize(.05f, .05f); _sprite_target = new Sprite(region2); _sprite_target.setSize(.1f, .1f); _max_velocity = 0.05f; _max_speed = 0.005f; _velocity = new Vector2(0, 0); _desired_velocity = new Vector2(0, 0); _steering = new Vector2(0, 0); _position = new Vector2(-_sprite.getWidth()/2, -_sprite.getHeight()/2); _mass = 10f; } public void Update(float deltaTime){ _target = new Vector2(Gdx.input.getX(), Gdx.input.getY()); _desired_velocity = ((_target.sub(_position)).nor()).scl(_max_velocity,_max_velocity); _steering = ((_desired_velocity.sub(_velocity)).limit(_max_speed)).div(_mass); _velocity = (_velocity.add(_steering)).limit(_max_speed); _position = _position.add(_velocity); _sprite.setPosition(_position.x, _position.y); _sprite_target.setPosition(Gdx.input.getX(), Gdx.input.getY()); } I've used this tutorial here. Thanks!
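One hedged observation on the coordinate suspicion: Gdx.input.getX()/getY() report window coordinates with the origin at the top-left and Y growing downward, while LibGDX's default world/camera space puts the origin at the bottom-left - so the target's Y is mirrored, which would bias the boids toward the top of the screen much as described. Flipping it with Gdx.graphics.getHeight() - Gdx.input.getY() (or, since the sprites use world units like 0.05f, converting properly via camera.unproject) is the usual remedy, and the same conversion should be applied where _sprite_target is positioned.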

    Read the article

  • connect 2.1 stereo speakers to LG LCD-TV (5500 series)

    - by rMaero
I bought a pair of speakers for my dad's TV, an LG 32LE5500. When I installed them, they sounded worse than the integrated ones; that's when I realized the subwoofer didn't work at all and both speakers produce lower volume than the internal ones. The audio output jack is labelled "H/P" (standing for headphones, with a matching symbol). Before buying, I checked this output with my phone's headphones and it worked, so I figured it would work with a set of speakers, since it's a standard audio output. I guess it's literally for headphones and not any other kind of sound player. There is only one other audio output, the optical-digital one, so I can't use that - at least not with these speakers. Am I screwed, or is there any workaround?

    Read the article

  • What is the unit of size we get from using wmic command on windows

    - by Abhishek Simon
I use a couple of wmic commands, and I was wondering how a user can find out the unit of any size-related value in the command output. For instance, I use the below 2 commands wmic /node:Abhishek-PC cpu get maxclockspeed,l2cachesize,loadpercentage output: L2CacheSize LoadPercentage MaxClockSpeed 8192 1 1595 8192 1 1595 wmic /node:Abhishek-PC LogicalDisk Where DriveType="3" Get DeviceID,Size,FreeSpace output: DeviceID FreeSpace Size C: 13933780992 73300701184 E: 23688204288 73405558784
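For these particular properties the units are fixed by the underlying WMI classes rather than by wmic itself: Win32_LogicalDisk reports Size and FreeSpace in bytes (so C:'s 73300701184 is about 68 GB), while Win32_Processor reports L2CacheSize in kilobytes and MaxClockSpeed in MHz. Checking the documentation for the WMI class behind each wmic alias is the reliable way to confirm a unit.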

    Read the article

  • iptables configuration to work with apache2 mod_proxy

    - by swdalex
Hello! I have an iptables config like this: iptables -F INPUT iptables -F OUTPUT iptables -F FORWARD iptables -P INPUT DROP iptables -P OUTPUT DROP iptables -P FORWARD DROP iptables -A INPUT -p tcp --dport 22 -j ACCEPT iptables -A OUTPUT -p tcp --sport 22 -j ACCEPT iptables -A INPUT -p tcp --dport 80 -j ACCEPT iptables -A OUTPUT -p tcp --sport 80 -j ACCEPT iptables -A INPUT -p tcp --dport 443 -j ACCEPT iptables -A OUTPUT -p tcp --sport 443 -j ACCEPT Also, I have an apache virtual host: <VirtualHost *:80> ServerName wiki.myite.com <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPass / http://localhost:8901/ ProxyPassReverse / http://localhost:8901/ <Location /> Order allow,deny Allow from all </Location> </VirtualHost> My primary domain www.mysite.com is working well with this configuration (I don't use a proxy redirect on it), but my virtual host wiki.mysite.com is not responding. Please help me set up an iptables config that allows wiki.mysite.com to work too. I think I need to set up iptables FORWARD options, but I don't know how. Update: I have 1 server with 1 IP. On the server I have apache2.2 on port 80. I also have tomcat6 on port 8901. In apache I set up forwarding of the domain wiki.mysite.com to tomcat (mysite.com:8901). I want to secure my server by disabling all ports except 80, 22 and 443.
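Two things stand out (assumptions, since only the config is shown). First, the vhost declares ServerName wiki.myite.com while the tests go to wiki.mysite.com - if that is not just a transcription slip, the vhost never matches and Apache serves the default site instead. Second, with both INPUT and OUTPUT policies set to DROP, Apache's own proxy connection to localhost:8901 is blocked; mod_proxy opens a fresh local connection rather than routing packets through, so no FORWARD rules are needed - just allow loopback traffic:
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT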

    Read the article

  • Custom Content Pipeline with Automatic Serialization Load Error

    - by Direweasel
    I'm running into this error: Error loading "desert". Cannot find type TiledLib.MapContent, TiledLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null. at Microsoft.Xna.Framework.Content.ContentTypeReaderManager.InstantiateTypeReader(String readerTypeName, ContentReader contentReader, ContentTypeReader& reader) at Microsoft.Xna.Framework.Content.ContentTypeReaderManager.GetTypeReader(String readerTypeName, ContentReader contentReader, List1& newTypeReaders) at Microsoft.Xna.Framework.Content.ContentTypeReaderManager.ReadTypeManifest(Int32 typeCount, ContentReader contentReader) at Microsoft.Xna.Framework.Content.ContentReader.ReadHeader() at Microsoft.Xna.Framework.Content.ContentReader.ReadAsset[T]() at Microsoft.Xna.Framework.Content.ContentManager.ReadAsset[T](String assetName, Action1 recordDisposableObject) at Microsoft.Xna.Framework.Content.ContentManager.Load[T](String assetName) at TiledTest.Game1.LoadContent() in C:\My Documents\Dropbox\Visual Studio Projects\TiledTest\TiledTest\TiledTest\Game1.cs:line 51 at Microsoft.Xna.Framework.Game.Initialize() at TiledTest.Game1.Initialize() in C:\My Documents\Dropbox\Visual Studio Projects\TiledTest\TiledTest\TiledTest\Game1.cs:line 39 at Microsoft.Xna.Framework.Game.RunGame(Boolean useBlockingRun) at Microsoft.Xna.Framework.Game.Run() at TiledTest.Program.Main(String[] args) in C:\My Documents\Dropbox\Visual Studio Projects\TiledTest\TiledTest\TiledTest\Program.cs:line 15 When trying to run the game. This is a basic demo to try and utilize a separate project library called TiledLib. I have four projects overall: TiledLib (C# Class Library) TiledTest (Windows Game) TiledTestContent (Content) TMX CP Ext (Content Pipeline Extension Library) TiledLib contains MapContent which is throwing the error, however I believe this may just be a generic error with a deeper root problem. EMX CP Ext contains one file: MapProcessor.cs using System; using System.Collections.Generic; using System.Linq; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Content.Pipeline; using Microsoft.Xna.Framework.Content.Pipeline.Graphics; using Microsoft.Xna.Framework.Content.Pipeline.Processors; using Microsoft.Xna.Framework.Content; using TiledLib; namespace TMX_CP_Ext { // Each tile has a texture, source rect, and sprite effects. [ContentSerializerRuntimeType("TiledTest.Tile, TiledTest")] public class DemoMapTileContent { public ExternalReference<Texture2DContent> Texture; public Rectangle SourceRectangle; public SpriteEffects SpriteEffects; } // For each layer, we store the size of the layer and the tiles. [ContentSerializerRuntimeType("TiledTest.Layer, TiledTest")] public class DemoMapLayerContent { public int Width; public int Height; public DemoMapTileContent[] Tiles; } // For the map itself, we just store the size, tile size, and a list of layers. 
[ContentSerializerRuntimeType("TiledTest.Map, TiledTest")] public class DemoMapContent { public int TileWidth; public int TileHeight; public List<DemoMapLayerContent> Layers = new List<DemoMapLayerContent>(); } [ContentProcessor(DisplayName = "TMX Processor - TiledLib")] public class MapProcessor : ContentProcessor<MapContent, DemoMapContent> { public override DemoMapContent Process(MapContent input, ContentProcessorContext context) { // build the textures TiledHelpers.BuildTileSetTextures(input, context); // generate source rectangles TiledHelpers.GenerateTileSourceRectangles(input); // now build our output, first by just copying over some data DemoMapContent output = new DemoMapContent { TileWidth = input.TileWidth, TileHeight = input.TileHeight }; // iterate all the layers of the input foreach (LayerContent layer in input.Layers) { // we only care about tile layers in our demo TileLayerContent tlc = layer as TileLayerContent; if (tlc != null) { // create the new layer DemoMapLayerContent outLayer = new DemoMapLayerContent { Width = tlc.Width, Height = tlc.Height, }; // we need to build up our tile list now outLayer.Tiles = new DemoMapTileContent[tlc.Data.Length]; for (int i = 0; i < tlc.Data.Length; i++) { // get the ID of the tile uint tileID = tlc.Data[i]; // use that to get the actual index as well as the SpriteEffects int tileIndex; SpriteEffects spriteEffects; TiledHelpers.DecodeTileID(tileID, out tileIndex, out spriteEffects); // figure out which tile set has this tile index in it and grab // the texture reference and source rectangle. ExternalReference<Texture2DContent> textureContent = null; Rectangle sourceRect = new Rectangle(); // iterate all the tile sets foreach (var tileSet in input.TileSets) { // if our tile index is in this set if (tileIndex - tileSet.FirstId < tileSet.Tiles.Count) { // store the texture content and source rectangle textureContent = tileSet.Texture; sourceRect = tileSet.Tiles[(int)(tileIndex - tileSet.FirstId)].Source; // and break out of the foreach loop break; } } // now insert the tile into our output outLayer.Tiles[i] = new DemoMapTileContent { Texture = textureContent, SourceRectangle = sourceRect, SpriteEffects = spriteEffects }; } // add the layer to our output output.Layers.Add(outLayer); } } // return the output object. because we have ContentSerializerRuntimeType attributes on our // objects, we don't need a ContentTypeWriter and can just use the automatic serialization. 
return output; } } } TiledLib contains a large amount of files including MapContent.cs using System; using System.Collections.Generic; using System.Globalization; using System.Xml; using Microsoft.Xna.Framework.Content.Pipeline; namespace TiledLib { public enum Orientation : byte { Orthogonal, Isometric, } public class MapContent { public string Filename; public string Directory; public string Version = string.Empty; public Orientation Orientation; public int Width; public int Height; public int TileWidth; public int TileHeight; public PropertyCollection Properties = new PropertyCollection(); public List<TileSetContent> TileSets = new List<TileSetContent>(); public List<LayerContent> Layers = new List<LayerContent>(); public MapContent(XmlDocument document, ContentImporterContext context) { XmlNode mapNode = document["map"]; Version = mapNode.Attributes["version"].Value; Orientation = (Orientation)Enum.Parse(typeof(Orientation), mapNode.Attributes["orientation"].Value, true); Width = int.Parse(mapNode.Attributes["width"].Value, CultureInfo.InvariantCulture); Height = int.Parse(mapNode.Attributes["height"].Value, CultureInfo.InvariantCulture); TileWidth = int.Parse(mapNode.Attributes["tilewidth"].Value, CultureInfo.InvariantCulture); TileHeight = int.Parse(mapNode.Attributes["tileheight"].Value, CultureInfo.InvariantCulture); XmlNode propertiesNode = document.SelectSingleNode("map/properties"); if (propertiesNode != null) { Properties = new PropertyCollection(propertiesNode, context); } foreach (XmlNode tileSet in document.SelectNodes("map/tileset")) { if (tileSet.Attributes["source"] != null) { TileSets.Add(new ExternalTileSetContent(tileSet, context)); } else { TileSets.Add(new TileSetContent(tileSet, context)); } } foreach (XmlNode layerNode in document.SelectNodes("map/layer|map/objectgroup")) { LayerContent layerContent; if (layerNode.Name == "layer") { layerContent = new TileLayerContent(layerNode, context); } else if (layerNode.Name == "objectgroup") { layerContent = new MapObjectLayerContent(layerNode, context); } else { throw new Exception("Unknown layer name: " + layerNode.Name); } // Layer names need to be unique for our lookup system, but Tiled // doesn't require unique names. string layerName = layerContent.Name; int duplicateCount = 2; // if a layer already has the same name... if (Layers.Find(l => l.Name == layerName) != null) { // figure out a layer name that does work do { layerName = string.Format("{0}{1}", layerContent.Name, duplicateCount); duplicateCount++; } while (Layers.Find(l => l.Name == layerName) != null); // log a warning for the user to see context.Logger.LogWarning(string.Empty, new ContentIdentity(), "Renaming layer \"{1}\" to \"{2}\" to make a unique name.", layerContent.Type, layerContent.Name, layerName); // save that name layerContent.Name = layerName; } Layers.Add(layerContent); } } } } I'm lost as to why this is failing. Thoughts? -- EDIT -- After playing with it a bit, I would think it has something to do with referencing the projects. I'm already referencing the TiledLib within my main windows project (TiledTest). However, this doesn't seem to make a difference. I can place the dll generated from the TiledLib project into the debug folder of TiledTest, and this causes it to generate a different error: Error loading "desert". Cannot find ContentTypeReader for Microsoft.Xna.Framework.Content.Pipeline.ExternalReference`1[Microsoft.Xna.Framework.Content.Pipeline.Graphics.Texture2DContent]. 
at Microsoft.Xna.Framework.Content.ContentTypeReaderManager.GetTypeReader(Type targetType, ContentReader contentReader) at Microsoft.Xna.Framework.Content.ContentTypeReaderManager.GetTypeReader(Type targetType) at Microsoft.Xna.Framework.Content.ReflectiveReaderMemberHelper..ctor(ContentTypeReaderManager manager, FieldInfo fieldInfo, PropertyInfo propertyInfo, Type memberType, Boolean canWrite) at Microsoft.Xna.Framework.Content.ReflectiveReaderMemberHelper.TryCreate(ContentTypeReaderManager manager, Type declaringType, FieldInfo fieldInfo) at Microsoft.Xna.Framework.Content.ReflectiveReader1.Initialize(ContentTypeReaderManager manager) at Microsoft.Xna.Framework.Content.ContentTypeReaderManager.ReadTypeManifest(Int32 typeCount, ContentReader contentReader) at Microsoft.Xna.Framework.Content.ContentReader.ReadHeader() at Microsoft.Xna.Framework.Content.ContentReader.ReadAsset[T]() at Microsoft.Xna.Framework.Content.ContentManager.ReadAsset[T](String assetName, Action1 recordDisposableObject) at Microsoft.Xna.Framework.Content.ContentManager.Load[T](String assetName) at TiledTest.Game1.LoadContent() in C:\My Documents\Dropbox\Visual Studio Projects\TiledTest\TiledTest\TiledTest\Game1.cs:line 51 at Microsoft.Xna.Framework.Game.Initialize() at TiledTest.Game1.Initialize() in C:\My Documents\Dropbox\Visual Studio Projects\TiledTest\TiledTest\TiledTest\Game1.cs:line 39 at Microsoft.Xna.Framework.Game.RunGame(Boolean useBlockingRun) at Microsoft.Xna.Framework.Game.Run() at TiledTest.Program.Main(String[] args) in C:\My Documents\Dropbox\Visual Studio Projects\TiledTest\TiledTest\TiledTest\Program.cs:line 15 This is all incredibly frustrating as the demo doesn't appear to have any special linking properties. The TiledLib I am utilizing is from Nick Gravelyn, and can be found here: https://bitbucket.org/nickgravelyn/tiledlib. The demo it comes with works fine, and yet in recreating I always run into this error.

    Read the article

  • How to solve "Broken Pipe" error when using awk with head

    - by Jon
I'm getting broken pipe errors from a command that does something like: ls -tr1 /a/path | awk -F '\n' -vpath=/prepend/path/ '{print path$1}' | head -n 50 Essentially I want to list (with absolute paths) the oldest X files in a directory. What seems to happen is that the output is correct (I get 50 file paths), but when head has printed the 50 lines it exits and closes the pipe, causing awk to throw a broken pipe error because it is still trying to output more rows.
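Assuming the goal is simply the oldest 50 paths, one way to sidestep the error is to truncate before prepending, so head never has to cut awk off mid-stream:
ls -tr1 /a/path | head -n 50 | awk -F '\n' -vpath=/prepend/path/ '{print path$1}'
The message itself is harmless - head exiting after 50 lines is exactly what closes the pipe on awk.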

    Read the article

  • iptables port forwarding works only for localhost

    - by Venki
Below is my iptables config. I use this to access a Node.js website, running on port 9000, through port 80. This works fine only if I access the website through localhost/loopback. It does not work when I try to use the IP of eth0, which is assigned by my router through DHCP - e.g. when I use an IP like 192.168.0.103 to access the website. I am not able to figure out what is wrong here; I've already burnt a day on this and still can't figure it out :( Edit: (more information) Earlier, I was using this configuration to develop the website; I had configured the domain name to point to 127.0.0.1 in the /etc/hosts file. It was working fine, but now I am trying to deploy the website on a VPS with a static IP. This configuration does not work with the static IP either. # redirect port 80 to port 9000 *nat :PREROUTING ACCEPT [57:3896] :INPUT ACCEPT [0:0] :OUTPUT ACCEPT [4229:289686] :POSTROUTING ACCEPT [4239:290286] -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9000 -A OUTPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9000 COMMIT # Allow HTTP and HTTPS connections from anywhere (the normal ports for websites and SSL). -A INPUT -p tcp --dport 80 -j ACCEPT -A INPUT -p tcp --dport 443 -j ACCEPT -A INPUT -p tcp --dport 9000 -j ACCEPT -A INPUT -j REJECT
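A first check worth doing (a guess, since the NAT rules themselves look sound - PREROUTING catches external port 80 and INPUT accepts 9000): the Node.js app may be bound to the loopback interface only. Something like
netstat -tlnp | grep 9000
shows the listening address; if it reports 127.0.0.1:9000 instead of 0.0.0.0:9000, redirected LAN traffic is refused no matter what iptables does, and the fix is to pass 0.0.0.0 (or the interface address) to the app's listen() call.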

    Read the article

  • How to make gpg2 to flush the stream?

    - by Vi
I want to get some slowly flowing data saved in encrypted form on a device which can be turned off abruptly. But gpg2 seems not to flush its output frequently, and I get broken files when I try to read such a truncated file. vi@vi-notebook:~$ cat asdkfgmafl asdkfgmafl ggggg ggggg 2342 2342 cat behaves normally: I see the output right after the input. vi@vi-notebook:~$ gpg2 -er _Vi --batch ?pE??x...(more binary data here)....???-??.... asdfsadf 22223 sdfsdfasf Still no data... Still no output... ^C gpg: signal Interrupt caught ... exiting vi@vi-notebook:~$ gpg2 -er _Vi --batch > /tmp/qqq skdmfasldf gkvmdfwwerwer zfzdfdsfl ^\ gpg: signal Quit caught ... exiting Quit vi@vi-notebook:~$ gpg2 < /tmp/qqq You need a passphrase to unlock the secret key for user: "Vitaly Shukela (_Vi) " 2048-bit ELG key, ID 78F446CA, created 2008-01-06 (main key ID 1735A052) gpg: [don't know]: 1st length byte missing vi@vi-notebook:~$ # Where is my "skdmfasldf"? How do I make gpg2 handle such a case? I want it to emit enough output to reconstruct each incoming chunk of input. (Also, fsyncing after each output chunk could be beneficial as an additional option.) Should I use another tool? (I need pubkey encryption.)
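As far as I know gpg2 has no flush-per-record switch, so one workaround (a sketch, under the assumption that per-record overhead is acceptable) is to make every chunk its own complete OpenPGP message and append the messages to one file; gpg should decrypt such a concatenation in sequence:
while IFS= read -r line; do printf '%s\n' "$line" | gpg2 -er _Vi --batch >> encrypted.log; sync; done
Each iteration emits a self-contained message, so an abrupt power-off can at worst truncate the final chunk - at the cost of a full session-key packet per record.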

    Read the article

< Previous Page | 205 206 207 208 209 210 211 212 213 214 215 216  | Next Page >