Search Results

Search found 388 results on 16 pages for 'firewalls'.

Page 13/16 | < Previous Page | 9 10 11 12 13 14 15 16  | Next Page >

  • Windows 7 image in VMware will allow network connections out but not HTTP

    - by Ormis
    I am currently trying to create a set of images to deploy on my network, but I've run into a snag. When I create my own Windows 7 image I can successfully use NAT for connecting to the network, but whenever I try to access a webpage I get nothing. To be more specific, all firewalls/iptables are disabled on my host machine, my virtual machine, and my network. I can do lookups and all addresses respond correctly (I'm even using Google's DNS). On the host OS I have full connectivity. On the virtual machine I can ping any device I want and all addresses resolve correctly. Within a browser I cannot reach any page via hostname or IP. It feels almost like port 80 is being blocked, but I can't find any reason this would be the case. If anyone has had this occur before, I would love some insight into the problem. I initially asked this on Stack Overflow, and my eyes have now been opened to Super User. Thank you for any help you can provide.
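    One way to separate "DNS and ICMP work" from "TCP to port 80 works" is a bare TCP connection from inside the VM, independent of any browser or proxy settings; a minimal sketch, assuming Python is available on the guest (the target host is a placeholder):

        # Hedged sketch: test whether a plain TCP connection to ports 80/443 works
        # from inside the VM. "example.com" is a placeholder for any reachable site.
        import socket

        for port in (80, 443):
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(5)
            try:
                s.connect(("example.com", port))
                print("TCP connect to port %d succeeded" % port)
            except OSError as e:
                print("TCP connect to port %d failed: %s" % (port, e))
            finally:
                s.close()

    If the connect succeeds but the browser still fails, suspect a browser/proxy setting rather than the NAT; if it fails, something below HTTP is dropping port 80.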

    Read the article

  • Local Network - Windows 7 and Vista can't see each other

    - by ca8msm
    I've got a strange issue at home that has been bugging me for weeks, but I really need to get it sorted now, so I'll detail as much as I can and hopefully someone can spot what might be wrong. I have a wireless router connected to the internet and 3 devices connected to it:
        Name     OS         Network    IPv4
        PC1      Windows 7  WORKGROUP  192.168.2.2
        LAPTOP1  Vista      WORKGROUP  192.168.2.3
        PS3      -          -          192.168.2.4
    They all get their IP addresses dynamically. Both PC1 and LAPTOP1 can ping PS3 and get a response. PC1 and LAPTOP1 are unable to ping each other by IP address; they can only ping each other by name (which bizarrely shows that it is pinging via the IPv6 address). To confirm this, both PC1 and LAPTOP1 can ping each other via the long IPv6 address that they both have, so they can obviously see each other - just not via IPv4. I've disabled the firewalls on both machines as well to rule that out. I don't really know what IPv6 is used for, and I've tried disabling it on both machines, but then neither machine can see the other at all. Does anyone have any idea of what may be stopping them from seeing each other, any ways I can look at fixing this, or any network tools that may help identify where it is failing? Thanks, Mark
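    One way to confirm that only the IPv4 path is broken is to attempt a TCP connection to a port known to be open (445, since file sharing is enabled) over both address families; a hedged sketch, where the IPv6 address is a placeholder for the long address that already pings:

        # Hedged sketch: try TCP connects to PC1 over IPv4 and then over its IPv6
        # address. Port 445 (SMB) is assumed open because file sharing is enabled.
        import socket

        for host in ("192.168.2.2", "fe80::1234:5678:9abc:def0%12"):   # IPv6 is a placeholder
            try:
                s = socket.create_connection((host, 445), timeout=3)
                print(host, "- TCP 445 reachable")
                s.close()
            except OSError as e:
                print(host, "- TCP 445 failed:", e)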

    Read the article

  • 403 Forbidden on Apache (CentOS) Server

    - by pouya
    This is my VM setup: HOST: Windows 7 Ultimate 32-bit. GUEST: CentOS 6.3 i386. Virtualization software: Oracle VirtualBox 4.1.22. Networking: NAT (port forward: HOST:8080 => GUEST:80). Shared folder: centos. All the project files go into the shared folder, and for each project a virtual host conf file is created in /etc/httpd/conf.d/, like /etc/httpd/conf.d/$domain. I wasn't able to see anything in my browser before disabling both the Windows firewall and iptables in CentOS. After that, if I type for example http://www.$domain:8080/, all I see is: "Forbidden. You don't have permission to access / on this server. Apache/2.2.15 (CentOS) Server at www.$domain.com Port 8080". A sample virtual host conf file:
        <VirtualHost *:80>
            #General
            DocumentRoot /media/sf_centos/path/to/public_html
            ServerAdmin webmaster@$domain
            ServerName www.$domain
            ServerAlias $domain *.$domain
            #Logging
            ErrorLog /var/log/httpd/$domain-error.log
            CustomLog /var/log/httpd/$domain-access.log combined
            #mod rewrite
            RewriteEngine On
            RewriteLog /var/log/httpd/$domain-rewrite.log
            RewriteLogLevel 0
        </VirtualHost>
    The centos shared folder is available to the guest at /media/sf_centos. These are the file permissions for sf_centos: drwxrwx--- root vboxsf (the vboxsf group includes apache and root). So these are my questions: 1. How do I solve the Forbidden problem? 2. How should I set up both host and guest firewalls? 3. How can I improve this development environment to simulate the production environment as much as possible, especially with respect to security?
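    A 403 with that DocumentRoot is usually either a directory-traversal permission problem on /media/sf_centos or an SELinux denial; a small sketch (assuming Python on the guest) that walks the DocumentRoot path and prints the owner, group and mode of each directory, so you can see where the apache user loses execute (traverse) permission:

        # Hedged sketch: print owner, group and permission bits for every directory
        # leading to the DocumentRoot from the vhost above.
        import os, stat, pwd, grp

        path = "/media/sf_centos/path/to/public_html"
        current = "/"
        for part in path.strip("/").split("/"):
            current = os.path.join(current, part)
            st = os.stat(current)
            print("%-45s %s %s:%s" % (
                current,
                oct(stat.S_IMODE(st.st_mode)),
                pwd.getpwuid(st.st_uid).pw_name,
                grp.getgrgid(st.st_gid).gr_name,
            ))

    If the group bits already let apache in (as the vboxsf membership suggests), the usual next suspect is SELinux, since a vboxsf mount normally carries a context httpd is not allowed to read; /var/log/audit/audit.log will show the denial if that is the case.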

    Read the article

  • Slower/cached Linux file system required

    - by Chopper3
    I know it sounds odd, but I need a slower or cached filesystem. I have a lot of firewalls that are syslog'ing their data to a pair of Linux VMs, which write these files to their 'local' (actually FC SAN attached) ext3-formatted disks and also forward the messages to our Splunk servers. The problem is that the syslog server is writing these syslog messages as hundreds, sometimes thousands, of tiny ~4k writes per second back to our FC SAN - which can handle this workload right now, but our FW traffic is going to grow by at least a factor of 5000% (really) in the coming months, and that'll be a pain for the SAN, so I want to fix the root cause before it's a problem. So I need some help figuring out a way of getting these writes cached or held off in some way from the 'physical' disks, so that the VMs fire off larger, but less frequent, writes - there's no way of avoiding these writes, but there's no need to do so many tiny ones. I've looked at the various ext3 options, setting noatime and nodiratime, but that hasn't made much of a dent in the problem. Obviously I'm investigating other filesystems, but thought I'd throw this out in case others have the same problem in the future. Oh, and I can't just forward these messages to Splunk; our firewall team insists they be kept in their original format for diag purposes.
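    The "larger but less frequent writes" idea can also be done above the filesystem, in whatever receives the syslog stream; a hedged sketch of a small batching writer that coalesces tiny messages and flushes on a size or time threshold (the thresholds are illustrative, and rsyslog/syslog-ng queue and flush-interval settings can achieve the same effect without custom code):

        # Hedged sketch: coalesce many tiny syslog writes into fewer large ones.
        import time

        class BatchingWriter:
            def __init__(self, path, max_bytes=512 * 1024, max_seconds=5):
                self.f = open(path, "ab")
                self.buf = bytearray()
                self.max_bytes = max_bytes
                self.max_seconds = max_seconds
                self.last_flush = time.monotonic()

            def write(self, line):
                self.buf += line
                if (len(self.buf) >= self.max_bytes or
                        time.monotonic() - self.last_flush >= self.max_seconds):
                    self.flush()

            def flush(self):
                if self.buf:
                    self.f.write(self.buf)   # one large write instead of thousands of ~4k ones
                    self.f.flush()
                    self.buf.clear()
                self.last_flush = time.monotonic()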

    Read the article

  • Windows Server 2008 IIS Random disconnect

    - by d123
    I am having a bit of a quirk with my IIS server. I'm running IIS with 2 sets of IPs configured, one in the 192 range and the other in the 172 range. I then have multiple apps which talk to this server for information. The server has no AV or firewalls configured. I noticed that when my apps talk to the server on the 172 range, at random intervals the server just doesn't respond; my apps then disconnect, try again, and everything is fine. This doesn't happen on the 192 range. So, on a Linux box, I used watch to wget a file every half second from both the 172 and 192 IPs. I noticed the same issue: every once in a while the wget on the 172 range would not get through, but there are no issues at all on 192. So I turned to Wireshark and did a dump. These are the last 3 packets; no other packets were received:
        7010 100.871877 200.100.30.7 172.0.0.1 TCP 59619 http [ACK] Seq=140 Ack=85242 Win=64128 Len=0 TSV=1072818795 TSER=1660246133
        7011 100.872238 200.100.30.7 172.0.0.1 TCP 59619 http [FIN, ACK] Seq=140 Ack=85242 Win=64128 Len=0 TSV=1072818796 TSER=1660246133
        7013 100.873081 200.100.30.7 172.0.0.1 TCP 59619 http [ACK] Seq=141 Ack=85243 Win=64128 Len=0 TSV=1072818796 TSER=1660246133
    So this is my issue: there is a random disconnect every once in a while, and the server doesn't receive the next SYN packet. Help?
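    To put numbers on how often the 172 address drops requests compared to the 192 one, a simple poller that timestamps every failure can run alongside the Wireshark capture; a hedged sketch (the URLs are placeholders for the same file on each address):

        # Hedged sketch: poll the same file over both server addresses and log
        # failures with timestamps, to see whether drops correlate in time.
        import time, urllib.request

        urls = ["http://172.0.0.1/test.txt", "http://192.168.0.1/test.txt"]   # placeholders
        while True:
            for url in urls:
                try:
                    urllib.request.urlopen(url, timeout=5).read()
                except Exception as e:
                    print(time.strftime("%H:%M:%S"), url, "FAILED:", e)
            time.sleep(0.5)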

    Read the article

  • Windows 2008 R2 DNS can't resolve its own SOA

    - by user46742
    We have two domain controllers for our network. They both run DHCP, DNS, and AD DS. They are both VMs sitting on MS Hyper-V Server 2008 on separate physical hosts. Our primary DC went down a week ago. I upgraded an already existing VM to primary DC and built a new VM for the secondary. Both DNS servers are running and the SOA is configured correctly for primary DC 1. However, when I run the Best Practices Analyzer it states that the server cannot resolve its own SOA and to check the configuration of the adapter. I checked, and they are configured properly. I also went through the DNS entries thoroughly and made sure there were no records left over from the previous DC that went down. NSLOOKUP resolves the domain and the primary DC fine. I also checked the firewalls on the machines and our physical firewall for any denied packets. Any suggestions? I appreciate any help!
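    To see what the Best Practices Analyzer check sees, each DC's own DNS service can be asked for the zone's SOA directly; a hedged sketch using the third-party dnspython package (an assumption - nslookup -type=soa against each server's own address shows the same thing; the zone name and DC addresses are placeholders):

        # Hedged sketch: query each DC's own DNS service for the zone SOA record.
        import dns.resolver

        zone = "example.local"                              # placeholder zone
        for server in ("192.168.1.10", "192.168.1.11"):     # placeholder DC addresses
            r = dns.resolver.Resolver(configure=False)
            r.nameservers = [server]
            try:
                answer = r.resolve(zone, "SOA")             # dnspython 2.x; 1.x uses query()
                print(server, "->", answer[0].mname)
            except Exception as e:
                print(server, "-> SOA lookup failed:", e)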

    Read the article

  • MSMQ on Win2008 R2 won't receive messages from older clients

    - by Graffen
    I'm battling a really weird problem here. I have a Windows 2008 R2 server with Message Queueing installed. On another machine, running Windows 2003 is a service that is set up to send messages to a public queue on the 2008 server. However, messages never show up on the server. I've written a small console app that just sends a "Hello World" message to a test queue on the 2008 machine. Running this app on XP or 2003 results in absolutely nothing. However, when I try running the app on my Windows 7 machine, a message is delivered just fine. I've been through all sorts of security settings, disabled firewalls on all machines etc. The event log shows nothing of interest, and no exceptions are being thrown on the clients. Running a packet sniffer (WireShark) on the server reveals only a little. When trying to send a message from XP or 2003 I only see an ICMP error "Port Unreachable" on port 3527 (which I gather is an MQPing packet?). After that, silence. Wireshark shows a nice little stream of packets when I try from my Win7 client (as expected - messages get delivered just fine from Win7). I've enabled MSMQ End2End logging on the server, but only entries from the messages sent from my Win7 machine are appearing in the log. So somehow it seems that messages are being dropped silently somewhere along the route from XP or 2003 to my 2008 server. Does anyone have any clues as to what might be causing this mysterious behaviour? -- Jesper
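    Since the capture only shows an ICMP "port unreachable" for the UDP ping, it may be worth confirming from the 2003 box that the TCP side of MSMQ on the 2008 R2 server is reachable at all; a hedged sketch (1801 is the standard MSMQ TCP port and 135 the RPC endpoint mapper; the server address is a placeholder):

        # Hedged sketch: from the older client, check that the MSMQ-related TCP
        # ports on the 2008 R2 server accept connections.
        import socket

        server = "192.168.0.10"   # placeholder for the 2008 R2 server
        for port in (135, 1801):
            try:
                socket.create_connection((server, port), timeout=5).close()
                print("TCP %d open" % port)
            except OSError as e:
                print("TCP %d blocked/closed: %s" % (port, e))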

    Read the article

  • Join Domain from VM

    - by Adis
    I have two VMs running on VMware Player. I use NAT adapter settings. The host machine for the VMs is on a corporate network. The first VM has a domain controller running, and I can log in on that machine using domain credentials. I named the domain wm.local. When I run ipconfig on this machine: IP: 192.168.87.132, Default Gateway: 192.168.87.2, DNS server: 192.168.87.2, DHCP server: 192.168.87.254. The second VM cannot join the domain. When I try it with domain WM, I'm prompted for credentials; I enter the Administrator credentials, it waits for some time, and I get the response: "The specified domain either does not exist or could not be contacted". If I type wm.local as the domain when trying to join, it does not prompt me to log in but just shows "An Active Directory Domain Controller (AD DC) for the domain wm.local could not be contacted", and here it takes no time to get this error message. ipconfig on this machine: IP: 192.168.87.134, Default Gateway: 192.168.87.2, DNS server: 192.168.87.2, DHCP server: 192.168.87.254. I can ping the second VM from the first one, and I disabled the firewalls on both machines. Any ideas? Is there any manual for this?
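    Domain join locates a DC through DNS SRV records, so a quick check is to query _ldap._tcp.dc._msdcs.wm.local against the DNS server the second VM is actually using (192.168.87.2) and against the DC itself; a hedged sketch using the third-party dnspython package (an assumption):

        # Hedged sketch: the joining machine must be able to resolve the domain's
        # SRV records. Compare the NAT DNS (192.168.87.2) with the DC (192.168.87.132).
        import dns.resolver

        name = "_ldap._tcp.dc._msdcs.wm.local"
        for server in ("192.168.87.2", "192.168.87.132"):
            r = dns.resolver.Resolver(configure=False)
            r.nameservers = [server]
            try:
                for rec in r.resolve(name, "SRV"):
                    print(server, "->", rec.target, rec.port)
            except Exception as e:
                print(server, "-> lookup failed:", e)

    If only the DC answers, pointing the second VM's DNS at 192.168.87.132 instead of the NAT gateway is the usual next step.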

    Read the article

  • Need a helpful/managed VPS to help transition from shared hosting

    - by Xeoncross
    I am looking for a VPS that can help me transition out of a shared hosting environment. My main OS is Ubuntu, although I am still new to the Linux world. I spend most of my day programming PHP applications using a git-over-SSH workflow. I want PHP, SSH, git, MySQL/PostgreSQL and Apache to work well. Someday, after I figure out server management, I'll move on to http://nginx.org/ or something. I don't really understand 1) Linux firewalls, 2) mail servers, or 3) a proper daily package/lib update flow. I need a host that can help with these so I don't get hit with a security hole. (I monitor Apache access logs, so I think I can take it from there.) I want to know if there is a sub-$50/month VPS that can help me learn (or do for me) these three main things I need to run a server. I can't leave my shared hosts (the plural shows my need!) until I am sure my sites will be safe despite my incompetence. To clarify again, I need the most helpful, supportive, walk-me-through, check-up-on-me, be-there-when-I-need-you VPS I can get. Learning isn't a problem when there is someone to turn to. ;)

    Read the article

  • pfSense VPN Routing

    - by SvrGuy
    We use pfSense firewalls at three installations, with the following LAN networks: 1) Datacenter #1: 10.0.0.0/16, 2) Datacenter #2: 10.1.0.0/16, 3) HQ: 10.2.0.0/16. All of these locations are linked via an IPsec tunnel that works properly: hosts in any of the above networks can communicate with hosts in any other of the above networks. Now, for our laptops etc. we established a road-warrior network, 10.3.0.0/16, and have implemented OpenVPN to link the laptops to Datacenter #1. This works great too, so our laptops can connect and communicate with any host in Datacenter #1 (anything on 10.0.0.0/16). The problem is that the laptops can't communicate with any hosts that Datacenter #1 can reach over its IPsec tunnel to Datacenter #2 (and/or the HQ, for that matter). Does anyone know what to configure on the pfSense box in Datacenter #1 so that packets received on the OpenVPN tunnel are routed to Datacenter #2 over the IPsec tunnel? It could be a setting on the OpenVPN side, some sort of static route, or some such. Any ideas?

    Read the article

  • Thunderbird 15.0.1 cannot use Exchange 2003 SMTP

    - by speedreeder
    I'm having the strangest time getting a Thunderbird email client to connect to my Exchange 2003 server. I got the incoming IMAP account set up with no problem, and I can receive mail. However, sending mail will not work no matter what SMTP settings I enter. After checking the server, the proper settings should be port 25 with no authentication or connection security, which is what I have entered. I can ping the hostname of the server from the client machine in question. The Thunderbird error message I get is: "Sending of message failed. The message could not be sent because the connection to SMTP server -hostname omitted- was lost in the middle of the transaction." So I went to the server and double-checked the settings for Exchange's SMTP stuff, and I have them correct. I tried to telnet (on the server) to localhost 25. It appears to connect and then disconnect immediately - no message, no nothing. When I telnet to other ports (POP3 on 110, for example) I get proper connection messages and a stable connection. There are no firewalls on either the client or the server. There's a firewall on the network, but LAN-to-LAN traffic is unrestricted. I can reproduce the Thunderbird error on a second client, and I can't get any client to be able to telnet in. Anyone have any ideas?
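    To take Thunderbird out of the picture entirely, something that speaks SMTP directly to the server and prints the banner can show whether the session is dropped before or after the greeting; a hedged sketch (the hostname is a placeholder):

        # Hedged sketch: talk to the Exchange SMTP service directly and print the
        # banner and EHLO response.
        import smtplib

        s = smtplib.SMTP(timeout=15)
        s.set_debuglevel(1)                              # print the full SMTP conversation
        code, banner = s.connect("exchange-host", 25)    # hostname is a placeholder
        print("banner:", code, banner)
        print("ehlo:", s.ehlo())
        s.quit()

    If this also disconnects immediately, the problem is on the server side - commonly an antivirus SMTP proxy or the SMTP virtual server's connection settings - rather than anything in Thunderbird.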

    Read the article

  • Can't ping - home wireless network

    - by Naunidh
    Hello. This may seem like any other ping problem, but I have tried a lot before posting it here. I have a Linksys WRT54G, firmware v8.00.8. I have two laptops, one Windows Vista (192.168.1.99) and one Windows XP (192.168.1.13), connected over WiFi. The router's IP address is 192.168.1.4, and the default gateway is the ADSL modem (192.168.1.1), connected by wire. The problem is that the laptops cannot ping each other; they can ping the gateway and the Linksys router, and both can access the internet. The following has been tried (I am pinging from the XP machine to the Vista machine): I saw that ARP entries for the Vista machine were not being populated, so I added a static ARP entry: 192.168.1.99 00-19-7e-70-d0-4e static. I checked in Ethereal that an ICMP packet for the MAC address of the Vista machine does go out from the XP machine towards the Vista machine, but it never reaches the Vista machine. So it gets eaten by the router? I added the Vista machine to the DMZ in my Linksys router so that all the ports are open (in case that was an issue). Firewalls, antivirus, etc. were turned off; echo was enabled explicitly on Vista; file sharing and network discovery were turned on; the network type was set to private. I unchecked everything in the router's firewall, even though those settings are only meant for WAN requests. Is there anything else that I should try? Thanks.

    Read the article

  • Daylight saving time not changing on some Active Directory domain clients

    - by Nick Gorbikoff
    We just had the daylight saving time change in the US, and PCs on my network are behaving strangely - some of them changed time and some didn't. My network: two locations, both in the Midwest, same time zone. Location 1: 120 PCs (Windows XP & Windows 2000), with 1 Active Directory domain controller on Windows 2003 Standard and a couple of Windows 2000 servers (up to date); the rest of the servers are Xen or Debian machines (all up to date). The second location is connected through an OpenVPN link; all its PCs are running fine, but they all connect to our AD domain controller. Location 2: 10 PCs and a shared LAN NAS. The routers/firewalls in both locations are pfSense boxes with the NTP service running, and they are up to date. Tried all the usual suspects: I have all the latest updates installed, I restarted the machines, the domain controller is running fine, and most computers are running fine. I have only one domain controller on my network, and my firewall (pfSense) also serves as the NTP server, but it's up to date. All of the Linux machines are fine since they query the firewall/router for the time. About 1/3 of my PCs are 1 hour behind. If I change them manually they just change back (the way domain PCs are supposed to). I've tried everything, but I can't think of anything else to try.
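    A quick per-machine check can tell whether a PC has a time-sync problem or a DST-rule problem: if UTC is right but local time is an hour off, the machine is missing the updated time-zone rules rather than drifting. A minimal sketch:

        # Hedged sketch: compare UTC against local time and show the DST flag.
        # Correct UTC + wrong local time points at time zone / DST rules, not sync.
        import time
        from datetime import datetime, timezone

        print("UTC now:   ", datetime.now(timezone.utc))
        print("Local now: ", datetime.now())
        print("tm_isdst:  ", time.localtime().tm_isdst)   # 1 = DST in effect, 0 = not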

    Read the article

  • How to securely control access to a backend key server?

    - by andy
    I need to securely encrypt data in my database so that if the database is dumped, hackers are unable to decrypt the data. I'm planning on creating a simple key server on a different machine, and allowing the DB server access to it (restricted by IP address on the key server to permit the DB server). The key server would contain the key required to encrypt/decrypt data. However, if a hacker were able to get a shell on the DB server, they could request the key from the key server and therefore decrypt the data in the database. How could I prevent this (assuming all firewalls are in place, DB is not connected directly to the internet, etc)? i.e. is there some method I could use that could secure a request from the DB server to the key server so that even if a hacker had a shell on the DB server they'd be unable to make those same requests? Signed requests from the DB server could make issuing these requests less trivial - I suppose that'd help increase the amount of time it'd take to compromise the key server, something a hacker probably wouldn't have much of. As far as I can see, if someone can get a shell on the DB server everything's lost anyway. This could be mitigated by using one key per data item in the DB so at least there's not a single "master" key, but multiple keys that the hacker would need to access. What would be a secure method of ensuring requests from the DB server to the key server were authentic and could be trusted?
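    The "signed requests" idea can be made concrete with a shared-secret HMAC over the payload plus a timestamp, so intercepted requests cannot be replayed later; a hedged sketch of the client side (field names and the secret are illustrative, and, as noted above, it does not defeat an attacker who already has a shell on the DB server, since they can sign requests with the same secret):

        # Hedged sketch: sign a key-request with an HMAC over the payload and a
        # timestamp; the key server verifies the signature and rejects stale requests.
        import hmac, hashlib, json, time

        SHARED_SECRET = b"placeholder-secret"        # provisioned out of band

        def signed_request(key_id):
            body = {"key_id": key_id, "ts": int(time.time())}
            payload = json.dumps(body, sort_keys=True).encode()
            sig = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
            return {"payload": payload.decode(), "signature": sig}

        print(signed_request("customer-data-key-42"))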

    Read the article

  • IIS not starting: The process cannot access the file because it is being used by another process

    - by Rick Strahl
    Ok, apparently a few people knew about this issue, but it was new to me and took me nearly an hour to track down today. What happened is that I've been working all day doing some final pre-deployment testing of several tools on my local dev machine. In the process I've been starting and stopping several IIS 7 Web sites. At some point I was done and just wanted to start my Default Web Site again and found this little gem of an error message popping up: "The process cannot access the file because it is being used by another process. (Exception from HRESULT: 0x80070020)". A lot of headless running around ensued after this, trying to figure out why IIS wouldn't start. Oddly, some sites started right up, others didn't. I killed InetInfo, all worker processes, tried IISReset a million times and even rebooted - all to no avail. What gives? Skype, you evil bastard! As it turns out the culprit is - drum roll please - Skype! What, you may ask, does Skype have to do with IIS and Web requests? It looks like recent versions of Skype have an option to run over ports 80 and 443 to allow running over corporate firewalls, which is actually a nice feature that lets Skype work just about anywhere. What's not so cool is that IIS fails to start up when another application is already using the same port that a Web site is mapped to. In the case of my dev site that'd be port 80, and Skype was hogging it. To fix this issue you can stop Skype from using ports 80 and 443, which quickly fixes the problem. Or stop Skype. Duh! To permanently fix the problem in Skype, find the option on the Options | Connection tab and uncheck the "Use port 80/443" option. Oddly, I hadn't run into this problem even though my setup hasn't changed in quite some time. It appears that bad startup timing causes this problem to occur: whatever the circumstance was, Skype somehow ended up starting before IIS. If Skype is started after IIS has started, it automatically opts for other ports and doesn't use port 80, so there's no problem. It's easy to demonstrate this behavior if you're looking for it: stop IIS, stop Skype, start Skype and make a test call, then start IIS - and voila, your error is ready for you! This really shouldn't be a problem, except that it would be really nice if IIS could give a more helpful error message when it can't fire up a site because a port is blocked. "The process cannot access a file" is really not a very helpful error message in this scenario… I/O, port, file - ah, what the heck, it's all the same to Windows. Right! I've run into this situation quite a bit with other, albeit more obvious, applications, like running Apache on the local machine for testing and then trying to run an IIS application. Same situation, although it's been a while - pre IIS 7 - and I think previous versions of IIS actually gave more useful error messages for port blockages, which would be helpful. On the way to figuring this out I ran into some pretty humorous forum posts with people ragging on why the hell you would be running IIS. Or Skype. The misinformed paranoia police out in full force, so to say :-). It'll be nice to start running IIS Express once Visual Studio 2010 SP1 gets released. Anyway, no surprise that Skype didn't jump out at me as the culprit right away, and I was left fumbling for a while until the Internet came to the rescue. I'm not the first to have found this for sure - I posted a message on Twitter and dozens of people replied they'd run into this before as well.
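    For what it's worth, a quick way to see which process is squatting on port 80 without guessing is to map listening sockets to PIDs; a hedged sketch using the third-party psutil package (an assumption - netstat -ano plus Task Manager gives the same answer on Windows):

        # Hedged sketch: list processes listening on ports 80/443.
        import psutil

        for conn in psutil.net_connections(kind="inet"):
            if conn.status == psutil.CONN_LISTEN and conn.laddr.port in (80, 443):
                name = psutil.Process(conn.pid).name() if conn.pid else "?"
                print("port %d: pid %s (%s)" % (conn.laddr.port, conn.pid, name))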
Seems worth mentioning again though - since I'm sure to forget that this happened a year from now when I hit that same error. Maybe I'll even find this blog post to remind me… © Rick Strahl, West Wind Technologies, 2005-2011. Posted in IIS7, Windows

    Read the article

  • Protecting Consolidated Data on Engineered Systems

    - by Steve Enevold
    In this time of reduced budgets and cost-cutting measures in Federal, State and Local governments, the requirement to provide services continues to grow. Many agencies are looking at consolidating their infrastructure to reduce cost and meet budget goals. Oracle's engineered systems are ideal platforms for accomplishing these goals. These systems provide unparalleled performance that is ideal for running applications and databases that traditionally run on separate dedicated environments. However, putting multiple critical applications and databases in a single architecture makes security more critical. You are putting a concentrated set of sensitive data on a single system, making it a more tempting target. The environments were previously separated by iron, so now you need to provide assurance that one group, department, or application's information is not visible to other personnel or applications resident in the Exadata system. Administration of the environments requires formal separation of duties, so an administrator of one application environment cannot view or negatively impact others. Also, these systems need to be in protected environments just like other critical production servers: they should be in a data center protected by physical controls, network firewalls, intrusion detection and prevention, etc. Exadata also provides unique security benefits, including a reduced attack surface achieved by minimizing packages and services to only those required. In addition to reducing the possible system areas someone may attempt to infiltrate, Exadata has the following features:
        1. InfiniBand, which functions as a secure private backplane
        2. IPTables, to perform stateful packet inspection for all nodes (Cellwall implements firewall services on each cell using IPTables)
        3. Hardware-accelerated encryption for data at rest on storage cells
    Oracle is uniquely positioned to provide the security necessary for implementing Exadata because security has been a core focus since the company's beginning. In addition to the security capabilities inherent in Exadata, Oracle security products are all certified to run in an Exadata environment. Database Vault: Oracle Database Vault helps organizations increase the security of existing applications and address regulatory mandates that call for separation of duties, least privilege and other preventive controls to ensure data integrity and data privacy. Oracle Database Vault proactively protects application data stored in the Oracle database from being accessed by privileged database users. A unique feature of Database Vault is the ability to segregate administrative tasks, including when a command can be executed, or allowing the DBA to manage the health of the database and its objects while not being able to see the data. Advanced Security helps organizations comply with privacy and regulatory mandates by transparently encrypting all application data or specific sensitive columns, such as credit cards, social security numbers, or personally identifiable information (PII). By encrypting data at rest and whenever it leaves the database over the network or via backups, Oracle Advanced Security provides the most cost-effective solution for comprehensive data protection. Label Security is a powerful and easy-to-use tool for classifying data and mediating access to data based on its classification.
    Designed to meet public-sector requirements for multi-level security and mandatory access control, Oracle Label Security provides a flexible framework that both government and commercial entities worldwide can use to manage access to data on a "need to know" basis in order to protect data privacy and achieve regulatory compliance. Data Masking reduces the threat of someone in the development organization taking data that has been copied from production to the development environment for testing, upgrades, etc. by irreversibly replacing the original sensitive data with fictitious data, so that production data can be shared safely with IT developers or offshore business partners. Audit Vault and Database Firewall: Oracle Audit Vault and Database Firewall serves as a critical detective and preventive control across multiple operating systems and database platforms to protect against the abuse of legitimate access to databases, which is responsible for almost all data breaches and cyber attacks. Consolidation, cost savings, and performance can now be achieved without sacrificing security. The combination of built-in protection and Oracle's industry-leading data protection solutions makes Exadata an ideal platform for Federal, State, and local governments and agencies.

    Read the article

  • Desktop Applications Versus Web Applications

    Up until the advent of the internet, programmers really only developed one type of application used by end-users. This type of application was called a desktop application. As the name implies, these applications ran strictly from a desktop computer and were limited by the resources available to that computer. Initially, this type of application did not need resources outside the scope of the computer on which it was installed. The problem with this type of application is that if multiple end-users need to access the same desktop application, then the application must be installed on each end-user's computer. In this age of software development, security was not as big a concern as it is today with other types of applications. This is primarily due to the fact that an end-user must have access to the computer where the software is installed in order to access the application. In addition, developers could also password-protect the application just in case an unauthorized end-user was able to gain access to the computer. With the birth of the internet, a second form of application emerged, because developers were trying to solve inherent issues with the preexisting desktop application. One of the solutions to overcome some of the shortcomings of desktop applications is the web application. Web applications are hosted on a centralized server, and clients only need network access and a web browser in order to access the application. Because a web application can be installed on a remote server, it removes the need for individual installations of the same application on each end-user's computer. The main benefit of an application being hosted on a server is increased accessibility, because nothing has to be installed on a desktop computer for an end-user to be able to access the application. In addition, web applications are much easier to maintain, because any change to the application is applied on the server and is inherently applied to any end-user trying to use the application. This removes the time needed to install and maintain individual installations of a desktop application. However, with the increased accessibility there are additional costs compared to a desktop application, because of the cost and maintenance of the server hosting the application. Typically, after a desktop application is purchased there are no additional recurring fees associated with the application. When developing a web-based application there are additional considerations that must be addressed compared to a desktop application. The added benefit of increased accessibility also adds a new failure point when trying to gain access to an application: an end-user now must have network connectivity in order to access the application. This issue is not a concern for desktop applications, because their resources are typically bound to the computer on which they run. Since the availability of an application is increased with the use of the client-server model in a web-based application, additional security concerns now come into play. As stated before, a desktop application is bound by the end-user's access to the computer on which the application is installed. This is not the case with web-based applications, because they could potentially be accessed from anywhere with the proper internet/network connection. Additional security steps are required to ensure the integrity of the application and its data.
    Examples of these steps include, but are not limited to, the following:
        Restricted/Password Areas - this form of security is used when specific information can only be accessed by end-users based on a set of accessibility rules.
        IP Restrictions - this form of security is used when only specific locations need to access an application; it is applied from within the web server or a firewall.
        Network Restrictions (Firewalls) - this form of security is used to contain access to an application within a specific subset of a network.
        Data Encryption - this form of security is used to transform personally identifiable information into something unreadable so that it can be stored safely for future use.
        Encrypted Protocols (HTTPS) - this form of security is used to prevent others from reading messages being sent between applications over a network.

    Read the article

  • Connection Error:Oracle.DataAccess.Client.OracleException ORA-12170

    - by psyb0rg
    This has taken many hours of mine. I have to get this .NET app to run on an XP system. Someone seems to have messed up some files, so conn.Open() in the C# code is causing this error: Connection Error: Oracle.DataAccess.Client.OracleException ORA-12170: TNS:Connect timeout occurred at Oracle.DataAccess.Client.OracleException.HandleErrorHelper(Int32 errCode, OracleConnection conn, IntPtr opsErrCtx, OpoSqlValCtx* pOpoSqlValCtx, Object src, String procedure) at Oracle.DataAccess.Client.OracleException.HandleError(Int32 errCode, OracleConnection conn, IntPtr opsErrCtx, Object src) at Oracle.DataAccess.Client.OracleConnection.Open() at Service.connect(Int32 sql_choice, String databaseIdentifier, String authenticationKey) in c:\Documents .... This is my sqlnet.ora file:
        # sqlnet.ora Network Configuration File: C:\oracle\product\10.2.0\db_1\network\admin\sqlnet.ora
        SQLNET.AUTHENTICATION_SERVICES= (NTS)
        NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
        SQLNET.INBOUND_CONNECT_TIMEOUT = 180
        SQLNET.SEND_TIMEOUT = 180
        SQLNET.RECV_TIMEOUT = 180
    This is tnsnames.ora:
        # tnsnames.ora Network Configuration File: C:\oracle\product\10.2.0\db_1\network\admin\tnsnames.ora
        # Generated by Oracle configuration tools.
        ORACLE2 =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = dell)(PORT = 1521))
            (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = oracle2)
            )
          )
        ORCL =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = dell)(PORT = 1521))
            (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = orcl)
            )
          )
        EXTPROC_CONNECTION_DATA =
          (DESCRIPTION =
            (ADDRESS_LIST =
              (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
            )
            (CONNECT_DATA =
              (SID = PLSExtProc)
              (PRESENTATION = RO)
            )
          )
    This is listener.ora:
        # listener.ora Network Configuration File: C:\oracle\product\10.2.0\db_1\network\admin\listener.ora
        # Generated by Oracle configuration tools.
        SID_LIST_LISTENER =
          (SID_LIST =
            (SID_DESC =
              (SID_NAME = PLSExtProc)
              (ORACLE_HOME = C:\oracle\product\10.2.0\db_1)
              (PROGRAM = extproc)
            )
          )
        LISTENER =
          (DESCRIPTION_LIST =
            (DESCRIPTION =
              (ADDRESS = (PROTOCOL = TCP)(HOST = dell)(PORT = 1521))
            )
          )
    I've tried changing the host name to localhost and 127.0.0.1, but neither works. I can execute queries from SQL*Plus. There are NO firewalls on the system, and the .NET app and DB are on the same machine. Anyone?
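    ORA-12170 is a plain TCP timeout before the listener ever answers, so the first thing worth confirming is that the host name from tnsnames.ora resolves and that port 1521 accepts connections from the machine running the app; a minimal sketch (tnsping dell does much the same from the Oracle client):

        # Hedged sketch: check name resolution and TCP reachability of the listener.
        import socket

        host, port = "dell", 1521
        try:
            print("resolves to:", socket.gethostbyname(host))
            socket.create_connection((host, port), timeout=10).close()
            print("listener port reachable")
        except OSError as e:
            print("failed:", e)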

    Read the article

  • kanban scrumish tool(s) to get started

    - by Davide
    After investigating scrum and kanban a little bit, I finally read this answer and decided to start using kanban, picking some things from scrum (note that I'm working mostly by myself, and I have read this question and its answers). Now, my question is: which tool would be best to get started?
        whiteboard and post-its
        agilezen.com
        JIRA with GreenHopper
        a spreadsheet (possibly on Google Docs)
        brightgreenprojects.com
        Agilo
        Target Process
        something else (please specify)
    Notes about each:
        I would lean towards the whiteboard, but there are several drawbacks (e.g. it cannot make automatic charts, time measurements, or metrics, and sometimes I work from home - where I need it most - and it's not convenient to carry :-)
        I don't want to remember another username/password (I promised myself to sign up only to OpenID-enabled services)
        My employer has JIRA but my group doesn't use it - I might ask for an account (it shouldn't require another password) and maybe later involve the rest of the group, but I don't know if they are using GreenHopper and if installing it is a big deal
        I generally hate spreadsheets
        maybe overkill?
        I'd be happy to have a localhost instance, but it could be problematic to give access to the whole group (per network/firewalls) - not a deal-breaker but surely a concern
    What I'd like to get from this:
        being more productive
        tracking how much time I spend on any given task, possibly discussing the issue with my supervisor
        tracking what "blocks" me most often
        immediately seeing where I am compared to my schedule
        managing my long todo list in a better way (e.g. answering the "what should I do next?" question faster)
    Do you have any suggestions? Note on the scrumish tag: read Henrik Kniberg's PDF; he first introduced the definition of scrumish on page 9.

    Read the article

  • Static IP for dynamic IP

    - by scape279
    I have a dynamic IP address. I would like to have a static IP, but Virgin Media don't allow static IPs for residential broadband services, even if you ask them really nicely and offer to pay for it without switching to a business tariff. I am already registered with a dynamic DNS service which is updated by my router, e.g. me.example.com will always resolve to my dynamic IP. This is fine for some circumstances, but not if you can only enter an IP address into configuration files or hardware, such as firewalls, subversion services, etc. Is there a way I can have a static IP address 'forwarding' to my dynamic IP? Would a possible solution involve tunnelling? Setting up a private proxy? Please note the following: I am able to buy an IP address from my web host. I have access to a webserver and I am able to create custom DNS zones. I'm happy to have a webserver running at home if necessary as well. I do not wish to change broadband providers. I have zero control over the services that require the IP address to be entered, so I cannot tackle the problem that way round (the services I need to access are at work). PS: I've tried googling this issue, but it is very difficult to search for, as most results are related to dynamic DNS (which I already have set up and isn't quite what I'm after).

    Read the article

  • TimeoutException occurs over a network but not locally

    - by Gibsnag
    I have a program with three WCF services, and when I run them locally (i.e. server and clients are all on localhost) everything works. However, when I test them across a network I get a TimeoutException from two services but not the other. I've disabled the firewalls on all the machines involved in the test. I can both ping the server and access the WSDL "You have created a service" web page from the client. The service that works uses a BasicHttpBinding with streaming, and the two which don't work use WSDualHttpBinding. The services that use WSDualHttpBinding both have callback contracts. I apologise for the vagueness of this question, but I'm not really sure what code to include or where to even start looking for the solution to this. Non-working bindings:
        public static Binding CreateHTTPBinding()
        {
            var binding = new WSDualHttpBinding();
            binding.MessageEncoding = WSMessageEncoding.Mtom;
            binding.MaxBufferPoolSize = 2147483647;
            binding.MaxReceivedMessageSize = 2147483647;
            binding.Security.Mode = WSDualHttpSecurityMode.None;
            return binding;
        }
    Exception stack trace: Unhandled Exception: System.TimeoutException: The open operation did not complete within the allotted timeout of 00:01:00. The time allotted to this operation may have been a portion of a longer timeout. Server stack trace: at System.ServiceModel.Channels.ReliableRequestor.ThrowTimeoutException() at System.ServiceModel.Channels.ReliableRequestor.Request(TimeSpan timeout) at System.ServiceModel.Channels.ClientReliableSession.Open(TimeSpan timeout) at System.ServiceModel.Channels.ClientReliableDuplexSessionChannel.OnOpen(TimeSpan timeout) at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout) at System.ServiceModel.Channels.ServiceChannel.OnOpen(TimeSpan timeout) at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout) at System.ServiceModel.Channels.ServiceChannel.CallOpenOnce.System.ServiceModel.Channels.ServiceChannel.ICallOnce.Call(ServiceChannel channel, TimeSpan timeout) at System.ServiceModel.Channels.ServiceChannel.CallOnceManager.CallOnce(TimeSpan timeout, CallOnceManager cascade) at System.ServiceModel.Channels.ServiceChannel.EnsureOpened(TimeSpan timeout) at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout) at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs) at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation) at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message) Exception rethrown at [0]: at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg) at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type) at IDemeService.Register() at DemeServiceClient.Register() at DemeClient.Client.Start() at DemeClient.Program.Main(String[] args)

    Read the article

  • DotNetOpenAuth OpenID on ISA 2006 Reverse Proxy problem

    - by userb00
    I am trying to host my site, which uses DotNetOpenAuth (OpenID), behind ISA 2006 (reverse proxy). After it authenticates with a provider (such as Google), it returns with a URL containing %253A, and the ISA HTTP filter rejects the request. What I did was: on the ISA web publishing rule, right-click, Configure HTTP policy properties, and uncheck "Verify Normalization" - and it worked. Is this a problem with ISA 2006 generally? Do other firewalls have similar problems? Or is it an OpenID or DotNetOpenAuth issue? Is it safe to disable normalization checking on ISA? According to MSDN, quote: "Web servers receive requests that are URL encoded. This means that certain characters may be replaced with a percent sign (%) followed by a particular number. For example, %20 corresponds to a space, so a request for http://myserver/My%20Dir/My%20File.htm is the same as a request for http://myserver/My Dir/My File.htm. Normalization is the process of decoding URL-encoded requests. Because the % can be URL encoded, an attacker can submit a carefully crafted request to a server that is basically double-encoded. If this occurs, Internet Information Services (IIS) may accept a request that it would otherwise reject as not valid. When you select Verify Normalization, the HTTP filter normalizes the URL two times. If the URL after the first normalization is different from the URL after the second normalization, the filter rejects the request. This prevents attacks that rely on double-encoded requests. Note that while we recommend that you use the Verify Normalization function, it may also block legitimate requests that contain a %."
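    What ISA objects to is exactly the double decoding described in the MSDN quote: %253A becomes %3A after one decoding pass and ':' after a second, so the two normalization passes differ. A short illustration (the URL is made up, but the dnoa parameter is the kind of OpenID return value that triggers it):

        # Illustration of why "Verify Normalization" fires on %253A: the URL decodes
        # to a different string on the first and second pass.
        from urllib.parse import unquote

        raw = "https://example.com/openid?dnoa.userSuppliedIdentifier=https%253A%252F%252Fwww.google.com"
        first = unquote(raw)     # ...https%3A%2F%2Fwww.google.com
        second = unquote(first)  # ...https://www.google.com
        print(first)
        print(second)
        print("normalization differs:", first != second)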

    Read the article

  • Relay WCF Service

    - by Matt Ruwe
    This is more of an architectural and security question than anything else. I'm trying to determine if a suggested architecture is necessary. Let me explain my configuration. We have a standard DMZ established that essentially has two firewalls. One that's external facing and the other that connects to the internal LAN. The following describes where each application tier is currently running. Outside the firewall: Silverlight Application In the DMZ: WCF Service (Business Logic & Data Access Layer) Inside the LAN: Database I'm receiving input that the architecture is not correct. Specifically, it has been suggested that because "a web server is easily hacked" that we should place a relay server inside the DMZ that communicates with another WCF service inside the LAN which will then communicate with the database. The external firewall is currently configured to only allow port 443 (https) to the WCF service. The internal firewall is configured to allow SQL connections from the WCF service in the DMZ. Ignoring the obvious performance implications, I don't see the security benefit either. I'm going to reserve my judgement of this suggestion to avoid polluting the answers with my bias. Any input is appreciated. Thanks, Matt

    Read the article

  • RDP through TCP Proxy

    - by johng100
    Hi, first time on Stack Overflow and I'm hoping someone can help me. I'm looking at a proof of concept to pass RDP traffic through a TCP proxy/tunnel which will pass through firewalls using HTTPS. The problem has to do with deploying images to machines, so it can't be assumed that the .NET framework will be present, which is why C++ is being used at the deployment end of the connection. The basic system I have at present is a program which listens for client connections on a port, then passes any data to a WCF service which stores it as a byte array. A deployment machine (using gSOAP and C++) polls the WCF service for messages and, if it finds them, passes the data on to the target server process via sockets. I know this sounds horrible, but it works for passing data to and from simple test client and server programs via this WCF/C++/C# proxy layer. But I have to support traffic from RDP, VNC and possibly others, so I need a transparent proxy to do this, and I'm wondering whether the above approach is worth pursuing. I've read up on SSH tunneling and that seems a possibility. My basic question is: is it possible to tunnel RDP traffic over HTTPS using custom code? Thanks, John
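    Tunnelling RDP this way is feasible, but the relay has to behave like a transparent byte pipe in both directions at once rather than a store-and-poll hop; a hedged sketch of the core of such a relay, leaving out the HTTPS framing and authentication that would ride on top (addresses are placeholders):

        # Hedged sketch: a minimal transparent TCP relay. Bytes are pumped in both
        # directions concurrently, which is what RDP/VNC need to stay responsive.
        import socket, threading

        LISTEN = ("0.0.0.0", 8443)       # where the client side connects (placeholder)
        TARGET = ("rdp-host", 3389)      # the real RDP server (placeholder)

        def pump(src, dst):
            try:
                while True:
                    data = src.recv(65536)
                    if not data:
                        break
                    dst.sendall(data)
            finally:
                src.close()
                dst.close()

        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(LISTEN)
        srv.listen(5)
        while True:
            client, _ = srv.accept()
            upstream = socket.create_connection(TARGET)
            threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pump, args=(upstream, client), daemon=True).start()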

    Read the article
