Search Results

Search found 18363 results on 735 pages for 'external ip'.

  • External config file to be used by multiple DLLs.

    - by vikp
    Hi, I have multiple DLLs that read/write data in my database: a presentation-layer DLL and a data-access-layer DLL. I want these DLLs to share a set of connection strings. My idea is to store the connection strings in a separate DLL that reads them from an external configuration file. I'm not sure whether that's a good idea, or whether I can reference that external DLL from both the presentation and data access layers. The other question is whether I should write a helper class to read the data from the external config file, or whether I should use the built-in .NET methods? Thank you

  • Is the IP from the source or target in this System.Net.Sockets.SocketException?

    - by Jeremy Mullin
    I'm making an outbound connection using a DNS name to a server other than localhost, and I get this exception: System.Net.WebException: Unable to connect to the remote server --- System.Net.Sockets.SocketException: No connection could be made because the target machine actively refused it 127.0.0.1:5555 The text implies that the TARGET machine refused the connection, but the IP address and port belong to localhost, which is confusing. So is that IP address really the outgoing IP and port, even though the exception was caused by the target refusing the connection? Or is the exception from the local firewall blocking the outgoing connection?
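    A quick way to sanity-check this is to reproduce the error on purpose. A minimal Python sketch (port 5555 taken from the exception above) that connects to a local port nothing is listening on; the address in the resulting "connection refused" is the endpoint the socket tried to reach, i.e. the target, which suggests the DNS name resolved to 127.0.0.1:

    ```python
    import socket

    # Minimal sketch: connect to a port with no listener (5555 is the port from
    # the question). .NET's SocketException appends the remote endpoint it was
    # connecting to, so the 127.0.0.1:5555 in the message is the *target*.
    target = ("127.0.0.1", 5555)
    try:
        socket.create_connection(target, timeout=2)
    except ConnectionRefusedError as exc:
        print(f"refused connecting to {target[0]}:{target[1]}: {exc}")
    ```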

  • Adding a relative path for an external graphic in an XSL document?

    - by Pran
    Hi, first off, I don't know much about XSL. I am using an app called DITA to generate PDFs. One of the things it requires is overriding an XSL file to add custom styling. I am trying to add an external graphic using a relative path. It doesn't work unless I supply the full path. Does not work: <fo:block text-align="center" width="100%"> <fo:external-graphic src="../../images/logo.png"/> </fo:block> Does work: <fo:block text-align="center" width="100%"> <fo:external-graphic src="/absolute/path/to/images/logo.png"/> </fo:block> I looked on the web; one site said to use "file:image.png" and another said to use "url(image.png)", but neither worked. What am I doing wrong?

  • Best option for Google App Engine Datastore and external database?

    - by Alex
    I need to get an App Engine app talking to and sharing data with an external database. The best option I can come up with is exporting the external database's data to an XML file, then processing it in my App Engine app and storing it in the datastore. But the data being shared is sensitive, such as login details, so dumping it to an XML file is not exactly a great idea. Is it possible for the App Engine app to query the database directly? Or is there a secure way to use XML files? Oh, and I'm using Python/Django, and the external database will be hosted on another domain.
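    One hedged alternative to XML dumps, assuming the external host can expose an authenticated HTTPS endpoint (the URL and token below are hypothetical placeholders), is to pull JSON over TLS from the App Engine side. A minimal sketch:

    ```python
    import json
    import urllib.request

    # Hedged sketch: instead of shipping sensitive XML dumps around, expose the
    # data as an authenticated HTTPS endpoint on the external host and pull JSON
    # from the App Engine app. The URL and bearer token are hypothetical.
    req = urllib.request.Request(
        "https://external.example.com/api/accounts",
        headers={"Authorization": "Bearer MY_SHARED_SECRET"},
    )
    with urllib.request.urlopen(req) as resp:
        records = json.load(resp)
    for rec in records:
        print(rec)  # here is where you would write each record to the datastore
    ```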

  • How to get HTML from external content with jQuery?

    - by André
    Hi, I have successfully managed to load an internal webpage into my HTML, as shown below. click here to see teste1.html $('#target').click(function() { $.get('teste1.html', function(data) { $('#result').html(data); alert('Load was performed.'); }); }); The main goal is to load an external webpage, but this does not work: $.get('http://www.google.com/index.html', function(data) { $('#result').html(data); alert('Load was performed.'); }); I need to get the HTML of an external webpage, read its tags, and then output them as JSON. Any clues on how to achieve this (load external webpages with jQuery, or should I do it with PHP)? Best regards.
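    The usual blocker here is the browser's same-origin policy: $.get can only fetch pages from the site that served the script, so the external fetch has to happen server-side (PHP works fine; the sketch below uses Python purely for illustration):

    ```python
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    # Minimal server-side proxy sketch: the browser requests *this* server
    # (same origin), and the server fetches the external page on its behalf.
    class ProxyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            html = urlopen("http://www.google.com/index.html").read()
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(html)

    HTTPServer(("localhost", 8080), ProxyHandler).serve_forever()
    ```

    The jQuery code would then $.get a URL on the same origin (here, http://localhost:8080/) and parse the returned HTML.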

  • Which database engines support IP addresses as a native type?

    - by Matt McClellan
    I'm trying to find databases with support for IP addresses as a native type (as opposed to storing them as a string, or as an unsigned integer, which at least one commenter has already pointed out won't work for IPv6). The primary reason I'm looking for this is ease of development. For example, sorting on a "native" IP address column would be correct (as opposed to when it's stored as a string). I would assume support for such a type would also include useful operations such as determining whether an IP address is inside a specified network, for use in WHERE clauses. The only one I'm aware of so far is PostgreSQL with its inet type. Does anyone know of any others?
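    As an illustration of what a native type buys you, here is a small Python sketch using only the standard library: string sort puts '10.0.0.10' before '10.0.0.9', while numeric IP ordering and subnet-membership tests behave the way the question wants.

    ```python
    import ipaddress

    # Sketch of the semantics a native IP column provides: correct numeric
    # ordering and subnet-membership tests, neither of which a string column
    # gives you.
    addrs = ["10.0.0.9", "10.0.0.10", "192.168.1.1"]
    print(sorted(addrs))                              # string sort: '10.0.0.10' before '10.0.0.9'
    print(sorted(map(ipaddress.ip_address, addrs)))   # numeric sort: .9 before .10
    net = ipaddress.ip_network("10.0.0.0/24")
    print(ipaddress.ip_address("10.0.0.9") in net)    # True: the WHERE-clause style test
    print(ipaddress.ip_address("2001:db8::1").version)  # IPv6 handled by the same module
    ```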

  • Combining a symlink to an external folder with a Rewrite Rule?

    - by Tristan
    I've created a symlink in an account to a folder external to that user account (although with the same ownership). The symlink works, but I'd like to combine it with a RewriteRule, and I'm having problems with that. For instance, I create the symlink with: ln -s /home/target shortcut And I add the following RewriteRule to .htaccess: RewriteRule ^shortcut/([a-zA-Z0-9_-]+) shortcut/index.php?var=$1 This, however, fails. Yet if instead of being located in an external folder, the target folder is in the same folder as the shortcut address, then the RewriteRule works. i.e. it works if the symlink is: ln -s ./target shortcut How might I get the RewriteRule working for the case where the target is an external folder?

  • Interview Questions that can be asked on TCP/IP, UDP, Socket Programming?

    - by Shantanu Gupta
    I am going for an interview the day after tomorrow where I will be asked various questions related to TCP/IP and UDP. As of now I have prepared theoretical knowledge about them. But now I am looking to gain some practical knowledge of how they work in a network and what goes on in the various .NET classes. I want to create a very small application, like a chat, that can make all these concepts clear to me. Could you please suggest some questions related to TCP/IP that you generally ask or that you might have faced, and explain how communication goes from server to client? Right now I am studying the TcpClient, TcpListener and UdpClient classes, but I want to implement all of them so as to become aware of how they work. Is a chat application a TCP/IP application? I would appreciate your help.
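    For a minimal hands-on exercise: yes, a chat program is a TCP/IP application, and the bind/listen/accept/receive/send loop below is the same pattern that TcpListener and TcpClient express in .NET. A tiny Python echo sketch to experiment with:

    ```python
    import socket
    import threading

    # Tiny TCP echo sketch (Python for brevity; .NET's TcpListener/TcpClient
    # follow the same bind/listen/accept/receive/send pattern).
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 9000))
    srv.listen(1)                          # listener is ready before the client connects

    def serve():
        conn, addr = srv.accept()          # blocks until a client connects
        with conn:
            while data := conn.recv(1024): # empty bytes means the peer closed
                conn.sendall(data)         # echo back: the core of any chat server

    threading.Thread(target=serve, daemon=True).start()

    cli = socket.create_connection(("127.0.0.1", 9000))
    cli.sendall(b"hello")
    print(cli.recv(1024))                  # b'hello'
    cli.close()
    ```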

  • How to use External Procedures in Triggers on Oracle 11g..

    - by RBA
    Hi, I want to fire a trigger whenever an INSERT command is issued. The trigger will call logic in an external pl/sql file that can change at any time. So the question is: if we design the trigger this way, how can we make sure this dynamic behavior works? It does not work with an ordinary stored procedure. I think it should work with 1) external procedures or 2) an EXECUTE statement. Please correct me if I am wrong. I was working on external procedures, but I am not able to find a way to execute the external procedure from here on: SQL> CREATE OR REPLACE FUNCTION Plstojavafac_func (N NUMBER) RETURN NUMBER AS 2 LANGUAGE JAVA 3 NAME 'Factorial.J_calcFactorial(int) return int'; 4 / SQL> CREATE OR REPLACE TRIGGER student_after_insert 2 AFTER INSERT 3 ON student 4 FOR EACH ROW How do I call the function from here... and are my interpretations right? Please suggest. Thanks.

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World In the previous post we discussed a “Hello World” example for OSCH focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle’s DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post and have some end-to-end OSCH external tables working, and now you want to ramp up the size of the loads. Using OSCH External Tables for Access and Loading OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL: SELECT * FROM my_hdfs_external_table; or use the same SQL access to load a table in Oracle: INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table; To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints. ALTER SESSION FORCE PARALLEL DML PARALLEL 8; ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8; INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table SELECT * FROM my_hdfs_external_table; There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively you could embed additional parallel hints directly into the INSERT and SELECT clauses respectively: /*+ parallel(my_oracle_table,8) *//*+ parallel(my_hdfs_external_table,8) */ Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table. Determine Your DOP It might feel natural to build your datasets in Hadoop first and only afterwards figure out how to tune the OSCH external table definition, but you should work backwards: focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user.
The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using a system's maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously on a production system where resources need to be shared 24x7, this can’t be allowed to happen. The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be in situations when you are first seeding tables in a new Oracle database, or when normal activity in the production database can be safely taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high-end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files, and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible. Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system Determining the Number of Location Files Let’s assume that the DBA told you that your maximum DOP was 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember, location files in OSCH are metadata lists of HDFS files and are created using OSCH’s External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of the location file.) Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH’s External Table tool. Rule 3: The number of location files chosen should be a small multiple of the DOP Each location file represents one workload for one PQ slave. So the goal is to keep all slaves busy and try to give them equivalent workloads. Obviously if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do; the other three will have nothing to do and will quietly exit. If you run with 9 location files, then the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish at about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP of 8, using 8, 16, or 32 location files would be a good idea.
Determining the Number of HDFS Files Let’s start with the next rule and then explain it: Rule 4: The number of HDFS files should try to be a multiple of the number of location files and try to be relatively the same size In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool's ability to balance is reduced if HDFS file sizes are grossly out of balance or are too few. For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load it will be forced to put the singleton 900GB file into one location file, and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If however the total payload (1600 GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where each workload for each location file is relatively the same. Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that were approximately 10 GB in size. For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file.) As a rule, when the OSCH External Table tool has to deal with more and smaller files it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files where each HDFS file was about 8GB. I then did the same load with 12800 files where each HDFS file was about 80MB in size. The end-to-end load time was virtually the same. However when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time. What happens if you break rules 3 or 4 above? Nothing draconian; everything will still function. You just won’t be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files of roughly the same size, derived from the DOP that you expect to use for loading.
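As a footnote, the greedy divvying-up described above is easy to picture in code. Here is a toy Python sketch of the algorithm (an illustration only, not the tool's actual implementation): repeatedly hand the largest remaining HDFS file to the location file with the lightest workload.

```python
# Toy sketch of greedy workload balancing: the biggest remaining file goes to
# the least-loaded location file. Sizes are in GB.
def balance(hdfs_file_sizes, num_location_files):
    loads = [0] * num_location_files
    for size in sorted(hdfs_file_sizes, reverse=True):
        i = loads.index(min(loads))   # lightest-loaded location file so far
        loads[i] += size
    return loads

# The skewed example from the post: one 900GB file and seven 100GB files
# across 8 location files. The 9-to-1 imbalance is unavoidable.
print(balance([900] + [100] * 7, 8))  # [900, 100, 100, 100, 100, 100, 100, 100]

# The same 1600GB payload as 160 files of ~10GB balances cleanly:
print(balance([10] * 160, 8))         # [200, 200, 200, 200, 200, 200, 200, 200]
```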
Next Steps So far we have talked about OLH and OSCH as alternative models for loading. That’s not quite the whole story. They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling load operations on a Hadoop cluster and an Oracle database. The next lesson will talk about Oracle Data Pump files generated by OLH and loaded using OSCH. It will also outline the pros and cons of using various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.

  • External usb 3.0 hard drive is not recognised when plugged into usb 3 port (ubuntu natty 64 bit).

    - by kimangroo
    I have an Iomega Prestige Portable External Hard Drive 1TB USB 3.0. It works fine on windows 7 as a usb 3.0 drive. It isn't detected on ubuntu natty 64bit, 2.6.38-8-generic. fdisk -l cannot see it at all: Disk /dev/sda: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x1bed746b Device Boot Start End Blocks Id System /dev/sda1 1 1689 13560832 27 Unknown /dev/sda2 * 1689 1702 102400 7 HPFS/NTFS /dev/sda3 1702 19978 146805760 7 HPFS/NTFS /dev/sda4 19978 60802 327914497 5 Extended /dev/sda5 25555 60802 283120640 7 HPFS/NTFS /dev/sda6 19978 23909 31571968 83 Linux /dev/sda7 23909 25555 13218816 82 Linux swap / Solaris Partition table entries are not in disk order lsusb can see it: Bus 003 Device 003: ID 059b:0070 Iomega Corp. Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 002 Device 004: ID 05fe:0011 Chic Technology Corp. Browser Mouse Bus 002 Device 003: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode) Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 005: ID 0489:e00f Foxconn / Hon Hai Bus 001 Device 004: ID 0c45:64b5 Microdia Bus 001 Device 003: ID 08ff:168f AuthenTec, Inc. Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub And dmesg | grep -i xhci (I may have unplugged the drive and plugged it back in again after booting): [ 1.659060] pci 0000:04:00.0: xHCI HW did not halt within 2000 usec status = 0x0 [ 11.484971] xhci_hcd 0000:04:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18 [ 11.484997] xhci_hcd 0000:04:00.0: setting latency timer to 64 [ 11.485002] xhci_hcd 0000:04:00.0: xHCI Host Controller [ 11.485064] xhci_hcd 0000:04:00.0: new USB bus registered, assigned bus number 3 [ 11.636149] xhci_hcd 0000:04:00.0: irq 18, io mem 0xc5400000 [ 11.636241] xhci_hcd 0000:04:00.0: irq 43 for MSI/MSI-X [ 11.636246] xhci_hcd 0000:04:00.0: irq 44 for MSI/MSI-X [ 11.636251] xhci_hcd 0000:04:00.0: irq 45 for MSI/MSI-X [ 11.636256] xhci_hcd 0000:04:00.0: irq 46 for MSI/MSI-X [ 11.636261] xhci_hcd 0000:04:00.0: irq 47 for MSI/MSI-X [ 11.639654] xHCI xhci_add_endpoint called for root hub [ 11.639655] xHCI xhci_check_bandwidth called for root hub [ 11.956366] usb 3-1: new SuperSpeed USB device using xhci_hcd and address 2 [ 12.001073] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 12.007059] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 12.012932] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 12.018922] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 12.049139] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 12.056754] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 12.131607] xhci_hcd 0000:04:00.0: WARN no SS endpoint bMaxBurst [ 12.179717] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 12.686876] xhci_hcd 0000:04:00.0: WARN: babble error on endpoint [ 12.687058] xhci_hcd 0000:04:00.0: WARN Set TR Deq Ptr cmd invalid because of stream ID configuration [ 12.687152] xhci_hcd 0000:04:00.0: ERROR Transfer event for disabled endpoint or incorrect stream ring [ 43.330737] usb 3-1: reset SuperSpeed USB device using xhci_hcd and address 2 [ 43.422579] xhci_hcd 0000:04:00.0: WARN: short transfer on 
control ep [ 43.422658] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff88014669af00 [ 43.422665] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff88014669af40 [ 43.422671] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff88014669af80 [ 43.422677] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff88014669afc0 [ 43.531159] xhci_hcd 0000:04:00.0: WARN no SS endpoint bMaxBurst [ 125.160248] xhci_hcd 0000:04:00.0: WARN no SS endpoint bMaxBurst [ 903.766466] usb 3-1: new SuperSpeed USB device using xhci_hcd and address 3 [ 903.807789] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 903.813530] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 903.819400] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 903.825104] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 903.855067] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 903.862314] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 903.862597] xhci_hcd 0000:04:00.0: WARN no SS endpoint bMaxBurst [ 903.913211] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 904.424416] xhci_hcd 0000:04:00.0: WARN: babble error on endpoint [ 904.424599] xhci_hcd 0000:04:00.0: WARN Set TR Deq Ptr cmd invalid because of stream ID configuration [ 904.424700] xhci_hcd 0000:04:00.0: ERROR Transfer event for disabled endpoint or incorrect stream ring [ 935.139021] usb 3-1: reset SuperSpeed USB device using xhci_hcd and address 3 [ 935.226075] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 935.226140] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff880148186b00 [ 935.226148] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff880148186b40 [ 935.226153] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff880148186b80 [ 935.226159] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff880148186bc0 [ 935.343339] xhci_hcd 0000:04:00.0: WARN no SS endpoint bMaxBurst I thought it might be that the firmware wasn't compatible with linux or something, but when booting a live image of partedmagic, (2.6.38.4-pmagic), the drive was detected fine, I could mount it and got usb 3.0 speeds (at least they double the speeds I got from plugging same drive in usb 2 ports). 
dmesg in partedmagic did say something about no SuperSpeed endpoint which was an error I saw in a previous dmesg of ubuntu: Jun 27 15:49:02 (none) user.info kernel: [ 2.978743] xhci_hcd 0000:04:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18 Jun 27 15:49:02 (none) user.debug kernel: [ 2.978771] xhci_hcd 0000:04:00.0: setting latency timer to 64 Jun 27 15:49:02 (none) user.info kernel: [ 2.978781] xhci_hcd 0000:04:00.0: xHCI Host Controller Jun 27 15:49:02 (none) user.info kernel: [ 2.978856] xhci_hcd 0000:04:00.0: new USB bus registered, assigned bus number 3 Jun 27 15:49:02 (none) user.info kernel: [ 3.089458] xhci_hcd 0000:04:00.0: irq 18, io mem 0xc5400000 Jun 27 15:49:02 (none) user.debug kernel: [ 3.089541] xhci_hcd 0000:04:00.0: irq 42 for MSI/MSI-X Jun 27 15:49:02 (none) user.debug kernel: [ 3.089544] xhci_hcd 0000:04:00.0: irq 43 for MSI/MSI-X Jun 27 15:49:02 (none) user.debug kernel: [ 3.089546] xhci_hcd 0000:04:00.0: irq 44 for MSI/MSI-X Jun 27 15:49:02 (none) user.debug kernel: [ 3.089548] xhci_hcd 0000:04:00.0: irq 45 for MSI/MSI-X Jun 27 15:49:02 (none) user.debug kernel: [ 3.089550] xhci_hcd 0000:04:00.0: irq 46 for MSI/MSI-X Jun 27 15:49:02 (none) user.warn kernel: [ 3.092857] usb usb3: No SuperSpeed endpoint companion for config 1 interface 0 altsetting 0 ep 129: using minimum values Jun 27 15:49:02 (none) user.info kernel: [ 3.092864] usb usb3: New USB device found, idVendor=1d6b, idProduct=0003 Jun 27 15:49:02 (none) user.info kernel: [ 3.092866] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1 Jun 27 15:49:02 (none) user.info kernel: [ 3.092867] usb usb3: Product: xHCI Host Controller Jun 27 15:49:02 (none) user.info kernel: [ 3.092869] usb usb3: Manufacturer: Linux 2.6.38.4-pmagic xhci_hcd Jun 27 15:49:02 (none) user.info kernel: [ 3.092870] usb usb3: SerialNumber: 0000:04:00.0 Jun 27 15:49:02 (none) user.debug kernel: [ 3.092961] xHCI xhci_add_endpoint called for root hub Jun 27 15:49:02 (none) user.debug kernel: [ 3.092963] xHCI xhci_check_bandwidth called for root hub Well I have no idea what's going wrong, and I haven't had much luck from google and the forums so far. A number of unanswered threads with people with similar error messages and problems only. Hopefully someone here can help or point me in the right direction?!

  • nginx problem accessing virtual hosts

    - by Sc0rian
    I am setting up nginx as a reverse proxy. The server runs on directadmin and lamp stack. I have nginx running on port 81. I can access all my sites (including virtual ips) on the port 81. However when I forward the traffic from port 80 to 81, the virtual ips have a message saying "Apache is running normally". Server IPs are fine, and I can still access virtual IP's on 81. [root@~]# netstat -an | grep LISTEN | egrep ":80|:81" tcp 0 0 <virtual ip>:81 0.0.0.0:* LISTEN tcp 0 0 <virtual ip>:81 0.0.0.0:* LISTEN tcp 0 0 <serverip>:81 0.0.0.0:* LISTEN tcp 0 0 :::80 :::* LISTEN apache 24090 0.6 1.3 29252 13612 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL apache 24092 0.9 2.1 39584 22056 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL apache 24096 0.2 1.9 35892 20256 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL apache 24120 0.3 1.7 35752 17840 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL apache 24495 0.0 1.4 30892 14756 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24496 1.0 2.1 39892 22164 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24516 1.5 3.6 55496 38040 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24519 0.1 1.2 28996 13224 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24521 2.7 4.0 58244 41984 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24522 0.0 1.2 29124 12672 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24524 0.0 1.1 28740 12364 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24535 1.1 1.7 36008 17876 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24536 0.0 1.1 28592 12084 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24537 0.0 1.1 28592 12112 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24539 0.0 0.0 0 0 ? Z 18:35 0:00 [httpd] <defunct> apache 24540 0.0 1.1 28592 11540 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24541 0.0 1.1 28592 11548 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL root 24548 0.0 0.0 4132 752 pts/0 R+ 18:35 0:00 egrep apache|nginx root 28238 0.0 0.0 19576 284 ? Ss May29 0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf apache 28239 0.0 0.0 19888 804 ? S May29 0:00 nginx: worker process apache 28240 0.0 0.0 19888 548 ? S May29 0:00 nginx: worker process apache 28241 0.0 0.0 19736 484 ? S May29 0:00 nginx: cache manager process here is my nginx conf: cat /usr/local/nginx/conf/nginx.conf user apache apache; worker_processes 2; # Set it according to what your CPU have. 
4 Cores = 4 worker_rlimit_nofile 8192; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; server_tokens off; access_log /var/log/nginx_access.log main; error_log /var/log/nginx_error.log debug; server_names_hash_bucket_size 64; sendfile on; tcp_nopush on; tcp_nodelay off; keepalive_timeout 30; gzip on; gzip_comp_level 9; gzip_proxied any; proxy_buffering on; proxy_cache_path /usr/local/nginx/proxy_temp levels=1:2 keys_zone=one:15m inactive=7d max_size=1000m; proxy_buffer_size 16k; proxy_buffers 100 8k; proxy_connect_timeout 60; proxy_send_timeout 60; proxy_read_timeout 60; server { listen <server ip>:81 default rcvbuf=8192 sndbuf=16384 backlog=32000; # Real IP here server_name <server host name> _; # "_" is for handle all hosts that are not described by server_name charset off; access_log /var/log/nginx_host_general.access.log main; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://<server ip>; # Real IP here client_max_body_size 16m; client_body_buffer_size 128k; proxy_buffering on; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 120; proxy_buffer_size 16k; proxy_buffers 32 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; } location /nginx_status { stub_status on; access_log off; allow 127.0.0.1; deny all; } } include /usr/local/nginx/vhosts/*.conf; } here is my vhost conf: # cat /usr/local/nginx/vhosts/1.conf server { listen <virt ip>:81 default rcvbuf=8192 sndbuf=16384 backlog=32000; # Real IP here server_name <virt domain name>.com ; # "_" is for handle all hosts that are not described by server_name charset off; access_log /var/log/nginx_host_general.access.log main; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://<virt ip>; # Real IP here client_max_body_size 16m; client_body_buffer_size 128k; proxy_buffering on; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 120; proxy_buffer_size 16k; proxy_buffers 32 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; } }

  • Modify registry for Internet Connection Sharing?

    - by Tim
    My OS is Windows XP. Quoted from How to Change the IP Range for the Internet Connection Sharing DHCP service: Use Registry Editor to modify the data value of the IntranetInfo value in the following registry key: Hkey_Local_Machine\System\CurrentControlSet\Services\ICSharing\Settings\General The first number listed is the internal IP address of the Connection Sharing host. The second number is the subnet IP address, separated by a comma. Enter the first IP address of the new range followed by the subnet mask, separated by a comma (for example, 169.254.0.1,255.255.0.0). Modify the data value of the Start value in the following registry key: Hkey_Local_Machine\System\CurrentControlSet\Services\ICSharing\Addressing\Settings Change the value to the second address of the selected IP range. This address cannot be the same as or lower than the IP address used for the IntranetInfo key. Modify the data value for the Stop value in the same registry key. Enter the last IP address of the selected IP range. My registry does not have Hkey_Local_Machine\System\CurrentControlSet\Services\ICSharing, and I don't know how to follow the three steps above. Can someone guide me through it step by step?
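    For illustration only, here is what the three documented edits look like when scripted with Python's winreg module, run as Administrator (the values are the article's examples; note the caveat from the question itself that the ICSharing key may be absent, and a key created by hand may simply be ignored by ICS):

    ```python
    import winreg

    # Hedged sketch of the three registry edits described above. CreateKeyEx
    # opens the key, creating it if missing; whether XP's ICS honors a key it
    # did not create itself is uncertain, so treat this as an illustration.
    GENERAL = r"System\CurrentControlSet\Services\ICSharing\Settings\General"
    ADDRESSING = r"System\CurrentControlSet\Services\ICSharing\Addressing\Settings"

    def set_value(path, name, value):
        with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, path, 0,
                                winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, name, 0, winreg.REG_SZ, value)

    set_value(GENERAL, "IntranetInfo", "169.254.0.1,255.255.0.0")  # step 1: host IP,mask
    set_value(ADDRESSING, "Start", "169.254.0.2")                  # step 2: first address
    set_value(ADDRESSING, "Stop", "169.254.0.254")                 # step 3: last address
    ```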

  • Openconnect for Cisco VPN doesn't recognize private key file - asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag

    - by Alexander Skwar
    I'm trying to make my Synology DS212 NAS box also act as a VPN gateway to my company's VPN. Sadly, they only use Cisco ASA, and to complicate stuff even further, we've got to use personal certificates (which is of course more secure, but more complicated to get going…). So I compiled OpenConnect v4.06 from http://www.infradead.org/openconnect/. As a very basic test, I tried to build a connection by manually invoking openconnect, passing along the key and cert files, like so: /lib/ld-linux.so.3 --library-path /opt/lib \ /opt/openconnect/sbin/openconnect \ --certificate=$VPN_CFG/alexander.crt \ --sslkey=$VPN_CFG/alexander.key \ --cafile=$VPN_CFG/Company_VPN_CA.crt \ --user=alexander --verbose <ip>:443 It fails :( Attempting to connect to <ip>:443 Using certificate file $VPN_CFG/alexander.crt Using client certificate '/[email protected]/OU=Company VPN' 5919:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1315: Loading private key failed (see above errors) Loading certificate failed. Aborting. Failed to open HTTPS connection to <ip> Failed to obtain WebVPN cookie When I run the same command with the same cert/key files on an Ubuntu 12.04 box, it works: openconnect \ --certificate=$VPN_CFG/alexander.crt \ --sslkey=$VPN_CFG/alexander.key \ --cafile=$VPN_CFG/Company_VPN_CA.crt \ --user=alexander --verbose <ip>:443 Attempting to connect to <ip>:443 Using certificate file $VPN_CFG/alexander.crt Extra cert from cafile: '/CN=Company AG VPN CA/O=Company AG/L=Zurich/ST=ZH/C=CH' SSL negotiation with <ip> Server certificate verify failed: self signed certificate Certificate from VPN server "<ip>" failed verification. Reason: self signed certificate Enter 'yes' to accept, 'no' to abort; anything else to view: yes Connected to HTTPS on <ip> GET https://<ip>/ […] Well… The error on the NAS is this: 5919:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1315: Any ideas what's causing this? On the Syno I use OpenConnect 4.06; on Ubuntu I also compiled and installed OpenConnect 4.06 to a custom location. Thanks, Alexander
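    One hedged observation: an OpenSSL "asn1 ... wrong tag" while loading a key often means the bytes are not in the encoding the parser expects (PEM vs DER, or a different key container). A small diagnostic sketch, assuming a recent version of the third-party cryptography package is available, to see what alexander.key actually parses as:

    ```python
    # Diagnostic sketch: check whether the key file is PEM- or DER-encoded,
    # since an ASN.1 "wrong tag" during key loading often points at an
    # encoding mismatch between the two builds' OpenSSL expectations.
    from cryptography.hazmat.primitives.serialization import (
        load_der_private_key,
        load_pem_private_key,
    )

    data = open("alexander.key", "rb").read()
    for name, loader in (("PEM", load_pem_private_key), ("DER", load_der_private_key)):
        try:
            loader(data, password=None)
            print(f"key parses as {name}")
        except Exception as exc:
            print(f"not {name}: {exc}")
    ```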

  • missing network usage through iptables

    - by Purres
    I inserted a rule into iptables to track the input usage to a certain IP address. The VPS server's IP is 192.168.1.5 and the guest OS's IP is 192.168.1.115. I ran 'yum update' inside the guest OS to generate some network traffic. Then I ran iptables -vnL from the hypervisor. However, it only showed network usage to the host, not to the guest. Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target source destination 0 0 0.0.0.0/0 0.0.0.0/0 destination IP range 192.168.1.115-192.168.1.115 1853 114K 0.0.0.0/0 0.0.0.0/0 destination IP range 192.168.1.5-192.168.1.5 I ran tcpdump, and the log showed that there are data packets to the guest OS. 16:17:43.932514 IP mirrordenver.fdcservers.net.http > 192.168.1.115.34471: Flags [.], seq 17694667:17696115, ack 1345, win 113, options [nop,nop,TS val 1060308643 ecr 1958781], length 1448 16:17:43.932559 IP 192.168.1.115.34471 > mirrordenver.fdcservers.net.http: Flags [.], ack 17696115, win 15287, options [nop,nop,TS val 1958869 ecr 1060308643], length 0 Why can't the guest OS's network usage be tracked? iptables -L returns the INPUT chain as follows: Chain INPUT (policy ACCEPT) target prot opt source destination all -- anywhere anywhere destination IP range 192.168.1.115-192.168.1.115 all -- anywhere anywhere destination IP range 192.168.1.5-192.168.1.5 all -- anywhere anywhere

  • Bash Script - Traffic Shaping

    - by Craig-Aaron
    Hey all, I was wondering if you could have a look at my script and help me add a few things to it. How do I get it to find how many active Ethernet ports I have, and how do I filter more than one Ethernet port? How do I get this to handle a range of IP addresses? Once I have a few Ethernet ports, I need to add traffic control to each one (one way to enumerate the ports is sketched after the script). #!/bin/bash # Name of the traffic control command. TC=/sbin/tc # The network interface we're planning on limiting bandwidth. IF=eth0 # Network card interface # Download limit (in mega bits) DNLD=10mbit # DOWNLOAD Limit # Upload limit (in mega bits) UPLD=1mbit # UPLOAD Limit # IP address range of the machine we are controlling IP=192.168.0.1 # Host IP # Filter options for limiting the intended interface. U32="$TC filter add dev $IF protocol ip parent 1:0 prio 1 u32" start() { # Hierarchical Token Bucket (HTB) to shape bandwidth $TC qdisc add dev $IF root handle 1: htb default 30 # Creates the root scheduler $TC class add dev $IF parent 1: classid 1:1 htb rate $DNLD # Creates a child scheduler to shape download $TC class add dev $IF parent 1: classid 1:2 htb rate $UPLD # Creates a child scheduler to shape upload $U32 match ip dst $IP/24 flowid 1:1 # Filter to match the interface, limit download speed $U32 match ip src $IP/24 flowid 1:2 # Filter to match the interface, limit upload speed } stop() { # Stop the bandwidth shaping. $TC qdisc del dev $IF root } restart() { # Self-explanatory. stop sleep 1 start } show() { # Display status of traffic control. $TC -s qdisc ls dev $IF } case "$1" in start) echo -n "Starting bandwidth shaping: " start echo "done" ;; stop) echo -n "Stopping bandwidth shaping: " stop echo "done" ;; restart) echo -n "Restarting bandwidth shaping: " restart echo "done" ;; show) echo "Bandwidth shaping status for $IF:" show echo "" ;; *) pwd=$(pwd) echo "Usage: tc.bash {start|stop|restart|show}" ;; esac exit 0 Thanks
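    On the first question, one hedged way to discover active Ethernet ports on Linux is to read /sys/class/net, as in this Python sketch (the script above hard-codes IF=eth0; the eth* naming convention is an assumption and varies by distro):

    ```python
    import os

    # Hedged sketch: list Ethernet-style interfaces whose operstate is "up".
    # Assumes interfaces are named eth*; adjust the prefix for your distro.
    def active_ethernet_interfaces():
        for name in sorted(os.listdir("/sys/class/net")):
            if not name.startswith("eth"):
                continue
            with open(f"/sys/class/net/{name}/operstate") as f:
                if f.read().strip() == "up":
                    yield name

    print(list(active_ethernet_interfaces()))  # e.g. ['eth0', 'eth1']
    ```

    The shaping commands themselves could then be run once per interface found.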

  • How to force certain traffic through GRE tunnel?

    - by wew
    Here's what I do. Server (public internet IP is 222.x.x.x): echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf sysctl -p iptunnel add gre1 mode gre local 222.x.x.x remote 115.x.x.x ttl 255 ip add add 192.168.168.1/30 dev gre1 ip link set gre1 up iptables -t nat -A POSTROUTING -s 192.168.168.0/30 -j SNAT --to-source 222.x.x.x iptables -t nat -A PREROUTING -d 222.x.x.x -j DNAT --to-destination 192.168.168.2 Client (public internet IP is 115.x.x.x): iptunnel add gre1 mode gre local 115.x.x.x remote 222.x.x.x ttl 255 ip add add 192.168.168.2/30 dev gre1 ip link set gre1 up echo '100 tunnel' >> /etc/iproute2/rt_tables ip rule add from 192.168.168.0/30 table tunnel ip route add default via 192.168.168.1 table tunnel Up to here, everything seems to be going right. But then, 1st question: how do I use the GRE tunnel as the default route? The client computer is still using the 115.x.x.x interface as the default. 2nd question: how do I force only ICMP traffic through the tunnel and everything else through the default interface? I tried this on the client computer: ip rule add fwmark 200 table tunnel iptables -t mangle -A OUTPUT -p udp -j MARK --set-mark 200 But after doing this, my ping program times out (if I skip the two commands above and use ping -I gre1 ip instead, it works). Later I want to do something else as well, like sending only UDP port 53 through the tunnel, etc. 3rd question: on the client computer, I force one mysql program to listen on the gre1 interface 192.168.168.2. The client computer also has one more public interface (IP 114.x.x.x)... How do I set up iptables and routing properly so mysql also responds to requests coming from this 114.x.x.x public interface?

  • Configuring DNS & MX records for exchange 2010

    - by Mahmoud Saleh
    I am trying to configure Exchange Server 2010 on Windows Server 2008 R2 to receive email from the internet, following the danscourses tutorials. I followed this video for the DNS & MX records: http://www.youtube.com/watch?v=jdf_3DRssks I don't have any Windows administration skills, and I am stuck with the DNS configuration. The following is my domain configuration from the hosting provider, and these are the steps I made: 1- Added a new name server: ns1.centors.com with the Exchange server's public IP: 41.233.26.131 2- Changed the A record to point to the Exchange server's public IP: 41.233.26.131 3- New CNAME record for www, resolving to centors.com 4- New MX record for mail.centors.com 5- New A record for mail.centors.com: name: mail, IP: the Exchange server's public IP: 41.233.26.131 6- New A record for ns1: IP: the Exchange server's public IP: 41.233.26.131 7- I made port forwards on the router for SMTP and POP3 to the Exchange server's local IP address. ISSUE: I have a user account in Active Directory, and the user is a member of the domain. The user is [email protected], and when trying to log in with this account in Outlook 2010 on another machine using the following data: account type: POP3 incoming mail server: mail.centors.com outgoing mail server: mail.centors.com I always get the error: Authorization failed, check your server settings. Please advise what's wrong with the configuration. Thanks in advance.
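    Before chasing the Outlook error further, it may be worth confirming from outside the network that the records resolve as intended. A small sketch using the third-party dnspython package (domain names taken from the question):

    ```python
    import dns.resolver

    # Hedged sanity check with dnspython 2.x: confirm the MX and A records
    # (names from the question) resolve as configured before debugging
    # client authentication.
    for rr in dns.resolver.resolve("centors.com", "MX"):
        print("MX:", rr.preference, rr.exchange)
    for rr in dns.resolver.resolve("mail.centors.com", "A"):
        print("A:", rr.address)
    ```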

  • How can I get my routers to forward ports correctly?

    - by Giffyguy
    My network currently looks like this (simplified): Note that Router #2 is connected to the LAN interface of Router #1. This should be familiar to anyone who has seen a standard static-IP setup with an additional firewall for a residence or other small building. Router #1 is actually my cable gateway, but since it is a fully functional router/firewall, I am going to refer to it as a router. Now, I need to open various ports in both firewalls for incoming communication to my server - port 80 is a good example. So I've opened up port 80 in Router #2, and so far all incoming traffic at the public IP X.X.X.129 is being routed correctly. The problem is that I also need my server to respond to incoming traffic at the public IP X.X.X.130 on the WAN interface of Router #1. Naturally, I can't just tell Router #1 to forward port 80 to another public IP. Port forwarding is only supported when the traffic is being directed to the LAN subnet. I am willing to restructure my network topology if required, with the following conditions: Router #1 cannot have its WAN IP reassigned - X.X.X.130 is mandatory. Router #1 cannot be moved or disconnected from the cloud. The server cannot be given a second IP address. I would prefer the server to have a private IP address - e.g. 10.0.0.10 I'd like to keep Router #2, but it can have a private IP - e.g. 10.0.1.10 Following these rules, I need to get my server to receive incoming traffic on port 80 from both public IP addresses. Does anyone on SU know if this is possible? So far my only theories have been to set up a static route on either router, or to somehow combine my two subnets into a single subnet.

  • Connection established to google DNS, can't resolve any hosts

    - by Tar
    My /etc/resolv.conf points at Google DNS, but I am unable to resolve any hostnames. When I try to ping sites like google.com, yahoo.com, etc., I get 'ping: unknown host'. Yes, I am able to ping localhost, and I am able to ping hostname.domain.com, but not domain.com. I can't ping my nameservers. I can ping all hosts by IP address, and that works. The output of my /etc/resolv.conf: nameserver 8.8.8.8 nameserver 8.8.4.4 Anyone know what the problem could be? 23:30:04.304955 IP my_server.44457 > 8.8.8.8.domain: 28349+ A? google.com. (28) 23:30:06.137985 IP 112.100.0.78.19781 > my_server.domain: 18717 [1au] A? www.my_domain.com. (46) 23:30:06.138286 IP my_server.domain > 112.100.0.78.19781: 18717*- 2/0/1 CNAME my_domain.com., A my_server (76) 23:30:06.686582 IP 112.100.0.74.19181 > my_server.domain: 65046 [1au] A? my_domain.com. (42) 23:30:06.686811 IP my_server.domain > 112.100.0.74.19181: 65046*- 1/0/1 A my_server (58) 23:30:07.043764 IP my_server.50465 > 4.2.2.1.domain: 13865+ PTR? 142.254.22.67.in-addr.arpa. (44) 23:30:09.065904 IP my_server.45242 > 8.8.4.4.domain: 29011+ PTR? 123.72.117.130.in-addr.arpa. (45) 23:30:09.310021 IP my_server.45440 > 8.8.4.4.domain: 28349+ A? google.com. (28)
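    Given that the tcpdump shows queries leaving for 8.8.8.8 with no answers coming back, one way to separate a filtered UDP/53 path from resolver misconfiguration is to hand-deliver a single DNS query and wait for the reply. A Python sketch (the query bytes are a hand-built minimal A question for google.com):

    ```python
    import socket

    # Send one raw DNS query for google.com straight to 8.8.8.8 and wait for
    # a reply: header (ID 0xaabb, RD set, 1 question) + QNAME + QTYPE A + IN.
    query = (b"\xaa\xbb\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00"
             b"\x06google\x03com\x00\x00\x01\x00\x01")
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(3)
    s.sendto(query, ("8.8.8.8", 53))
    try:
        data, _ = s.recvfrom(512)
        print(f"got a {len(data)}-byte DNS reply")   # the resolution path works
    except socket.timeout:
        print("no reply: UDP/53 responses are likely being dropped")
    ```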

  • DL380 G7: Not able to access ILO on DL380 via ssh from a client

    - by user117140
    I have a problem where I can't access my iLO (ssh to the iLO IP) from a client that is in a different network. I am able to ping the iLO IP from this client, but ssh access is not possible. Is it possible to ssh to the iLO IP from a client in a different network? FYI, from the same client I can ssh to the server's application IP, but ssh to this server's iLO IP is not possible. Kindly help? Some more info added: the iLO IP address is 10.247.172.70 and its VLAN is different from the client's VLAN. The client IP address is 10.247.167.80. Ping to the iLO IP from this client is possible, but not ssh. I can ssh to the iLO IP if I try it from the server that has the iLO port (hostname: node1) or from the other node of this cluster, so ssh login is enabled. [root@node1 ~]$ssh -v 10.247.173.70 OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug1: Connecting to 10.247.173.70 [10.247.173.70] port 22. [root@node1 ~]$ping 10.247.173.70 PING 10.247.173.70 (10.247.173.70) 56(84) bytes of data. 64 bytes from 10.247.173.70: icmp_seq=1 ttl=254 time=0.283 ms 64 bytes from 10.247.173.70: icmp_seq=2 ttl=254 time=0.344 ms 64 bytes from 10.247.173.70: icmp_seq=3 ttl=254 time=0.324 ms 64 bytes from 10.247.173.70: icmp_seq=4 ttl=254 time=0.367 ms

  • ospfd over an OpenVPN link - strange error in logs

    - by Alex
    I am trying to set up Quagga ospfd on two hosts connected by an OpenVPN link. These hosts have VPN IPs 10.31.0.1 and 10.31.0.13. The ospfd config is pretty simple: hostname bizon password xxxxxxxxx enable password xxxxxxxxx ! log file /var/log/quagga/ospfd.log ! interface lo ! interface tun0 ip ospf network point-to-point ip ospf mtu-ignore ip ospf cost 10 interface tun1 ip ospf network point-to-point ip ospf mtu-ignore ip ospf cost 10 interface tun2 ip ospf network point-to-point ip ospf mtu-ignore ip ospf cost 10 ! router ospf ospf router-id 10.31.0.1 network 10.31.0.0/16 area 0.0.0.0 network 10.119.2.0/24 area 0.0.0.0 redistribute connected area 0.0.0.0 range 10.0.0.0/8 ! line vty ! debug ospf event debug ospf packet all I am getting the following error in ospfd.log (the log is from 10.31.0.13): 2012/10/05 01:25:28 OSPF: ip_v 4 2012/10/05 01:25:28 OSPF: ip_hl 5 2012/10/05 01:25:28 OSPF: ip_tos 192 2012/10/05 01:25:28 OSPF: ip_len 64 2012/10/05 01:25:28 OSPF: ip_id 64666 2012/10/05 01:25:28 OSPF: ip_off 0 2012/10/05 01:25:28 OSPF: ip_ttl 1 2012/10/05 01:25:28 OSPF: ip_p 89 2012/10/05 01:25:28 OSPF: ip_sum 0xe5d1 2012/10/05 01:25:28 OSPF: ip_src 10.31.0.1 2012/10/05 01:25:28 OSPF: ip_dst 224.0.0.5 2012/10/05 01:25:28 OSPF: Packet from [10.31.0.1] received on link tun1 but no ospf_interface I'm not sure what to do next. I have set up ospfd over OpenVPN several times before, but I used Debian, and I am on CentOS 6 now. Quagga version is 0.99.15. Should I try a more recent version?

  • [tcpdump] Proxy delegate refusing connection?

    - by simtris
    Hi guys, I'm a little disappointed! My aim was to build a VERY simple SMTP proxy under Debian to handle mail on one port (51234) and forward it to the standard port 25. I compiled and installed "delegate", which can handle that easily. It's working very well like this: delegated SERVER="smtp://anotherSmtpServer:25" -P51234 The strange thing is, it's working on my virtual test machine and on the dedicated server locally, but I can't manage to use it through the internet. I test it like this: telnet [mySrv] 51234 Of course, no firewall, no hosts.deny, no inetd/xinetd; the delegated service is listening on the right port... 2 clues: The port answers through the internet with nmap as "51234/tcp open tcpwrapped". Have a look at the following tcpdump: 22:50:54.864398 IP [myIp].1699 [mySrv].51234: S 2486749330:2486749330(0) win 65535 22:50:54.864449 IP [mySrv].51234 [myIp].1699: S 2486963525:2486963525(0) ack 2486749331 win 5840 22:50:54.948169 IP [myIp].1699 [mySrv].51234: . ack 1 win 64240 22:50:54.965134 IP [mySrv].43554 [myIp].auth: S 2485396968:2485396968(0) win 5840 22:50:55.243128 IP [myIp] [mySrv]: ICMP [myIp] tcp port auth unreachable, length 68 22:50:55.249646 IP [mySrv].51234 [myIp].1699: F 1:1(0) ack 1 win 46 22:50:55.309853 IP [myIp].1699 [mySrv].51234: . ack 2 win 64240 22:50:55.310126 IP [myIp].1699 [mySrv].51234: F 1:1(0) ack 2 win 64240 22:50:55.310137 IP [mySrv].51234 [myIp].1699: . ack 2 win 46 The "auth" part seems suspect to me, but it doesn't ring a bell. I could certainly do with some help. Thx a lot!

  • TC hashing filters - single rule deletion

    - by exa
    For traffic shaping I'm currently using a setup that looks exactly like the setup from LARTC, on this page: http://lartc.org/howto/lartc.adv-filter.hashing.html I have a simple problem with that: every time I want to modify something in the hash table (like assigning an IP to a different flowid), I need to delete the whole filter table and add it again filter by filter. (I actually don't do it by hand; I have a nice program that does it for me... but still...) There is a problem: I have roughly 10k filters allocated this way, and deleting and refilling the whole filter table can get pretty lengthy, which is not exactly good for traffic shaping. My program could easily manage to delete only the rules that need to be deleted (thus reducing the whole problem to several commands and milliseconds), but I simply don't know the command that deletes only one hashing rule. My tc filter show: filter parent 1: protocol ip pref 1 u32 filter parent 1: protocol ip pref 1 u32 fh 2: ht divisor 256 filter parent 1: protocol ip pref 1 u32 fh 2:a:800 order 2048 key ht 2 bkt a flowid 1:101 match 0a0a0a0a/ffffffff at 16 filter parent 1: protocol ip pref 1 u32 fh 2:c:800 order 2048 key ht 2 bkt c flowid 1:102 match 0a0a0a0c/ffffffff at 16 filter parent 1: protocol ip pref 1 u32 fh 800: ht divisor 1 filter parent 1: protocol ip pref 1 u32 fh 800::800 order 2048 key ht 800 bkt 0 link 2: match 00000000/00000000 at 16 hash mask 000000ff at 16 The wish: a 'tc filter del ...' command that removes only one specific filter (for example the 0a0a0a0a IP match (IP address 10.10.10.10)). Removal of some small subgroup would also be good; for example, I could still recreate a bucket (bkt a) pretty fast. My attempts: I tried to number all the filters using prio, but with no help -- they just create something unusable (but deletable) below, and the bucketed filters remain there after that gets deleted. Any ideas? edit - I'm adding a simplified tl;dr description of the problem: I created a hash filter on some interface just like in http://lartc.org/howto/lartc.adv-filter.hashing.html and I want to find a command that deletes one rule (e.g. 1.2.1.123) from the table, leaving the rest untouched and working.

  • Setting up virtualbox for outside access

    - by Morgan Green
    I have a computer running a server that my subdomain on my shared hosting account points to, i.e. subdomain.mydomain.org goes to my home server. What I'm wanting to do is be able to access my VirtualBox servers through that subdomain and a different port. E.g. Ubuntu VirtualBox Server 1 Username: Ubuntuhost1 Password: MyUbuntuHost1 Port: 4000 Internal IP: 192.168.1.60 External IP: 24.29.138.45 Ubuntu VirtualBox Server 2 Username: UbuntuHost2 Password: MyUbuntuHost2 Port: 4001 Internal IP: 192.168.1.61 External IP: 24.29.138.45 Now I want to be able to access RDP number 1 through port 4000, but if I access port 4001 it should connect to the server on port 4001, both using the same subdomain. The next issue is that even though I know the VirtualBox hosts' internal IP addresses from ifconfig, they don't show up on the router. If anyone knows how to configure this to work, please help me out, because I've been racking my brain over it. Alright, here's an edit to clarify more; sorry. My ports on the router are set to forward port 4000 to internal IP 192.168.1.63 (my Ubuntu internal IP address). Now, when I go to my router home page, my VirtualBox internal IP address doesn't show in the attached device listings, so I set up port forwarding to the VirtualBox internal IP anyway. My end goal is that when I connect to mydomain.org through port 3389 it takes me to my host computer's server, but if I connect to mydomain.org through port 4000 it redirects to my VirtualBox server. Is this even possible? Sorry, I'm trying to clarify as much as I can; I just don't know how else to explain my issue.
