Search Results

Search found 13222 results on 529 pages for 'security gate'.


  • Connection between Asp.Net and Oracle 10g Express Edition

    - by l3gion
    Hello, I'm struggling to find a way to connect my ASP.NET + C# application to my Oracle 10g Express Edition. Here's my scenario: I'm on Mac OS and I have two virtual machines, one for Win 7 (the VS 2010 app) and another with a Parallels Virtual Appliance with Oracle 10g Express Edition 1.1. Which provider (OLE DB, ODP.NET, etc.) should I use? How do I make the connection to the server in C#? Right now I have this: <appSettings> <add key="conn" value="Data Source=10.211.55.11;Persist Security Info=True;User ID=l3gion;Password=l3gion;" /> </appSettings> And in the .cs file: SqlCommand cmd = new SqlCommand("insert_thing", new SqlConnection(ConfigurationManager.AppSettings["conn"])); cmd.CommandType = CommandType.StoredProcedure; *insert_thing is a stored procedure. Using this I got this error: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) I've searched for some possible solutions and tried some, including disabling the firewall and allowing remote connections to Oracle Express Edition with this command line ("EXEC DBMS_XDB.SETLISTENERLOCALACCESS(FALSE);"). The error persists. Can anyone guide me in the right direction? I'm a newbie with this kind of thing. Thank you for your patience. Regards.
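
    As a rough, hedged illustration (not from the original post): the SqlClient classes used above only talk to SQL Server, which is why the error mentions SQL Server at all. A minimal sketch of calling the same stored procedure through the framework's System.Data.OracleClient provider might look like this; the EZConnect-style data source, port, and parameter name are assumptions, and the Oracle client libraries must be installed on the Win 7 VM:

      // Minimal sketch: calling an Oracle XE stored procedure from C#.
      // The data source format and parameter name below are assumptions.
      using System;
      using System.Data;
      using System.Data.OracleClient;

      class OracleXeDemo
      {
          static void Main()
          {
              string connStr = "Data Source=//10.211.55.11:1521/XE;User ID=l3gion;Password=l3gion;";
              using (OracleConnection conn = new OracleConnection(connStr))
              using (OracleCommand cmd = new OracleCommand("insert_thing", conn))
              {
                  cmd.CommandType = CommandType.StoredProcedure;
                  // Hypothetical parameter; match the procedure's actual signature.
                  cmd.Parameters.Add("p_value", OracleType.VarChar).Value = "some value";
                  conn.Open();
                  cmd.ExecuteNonQuery();
              }
          }
      }

    ODP.NET (Oracle.DataAccess.Client) exposes near-identical OracleConnection/OracleCommand types, so the same shape applies with that provider.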

    Read the article

  • choosing hosting for custom ecommerce site, shared, dedicated, what to look for?

    - by spirytus
    Hi, I have (almost) finished developing a website for my client and now need to decide on hosting. Most of the users of the site will be located in Australia, as are my client and I. Now, I want to consider everything before deciding on a host, and a few questions come to mind: I cannot afford the website being down, and all hosts say something like "99% uptime guaranteed". Should that alone be enough, or should I ask hosts for some stats? Does it make any difference if the servers and the whole hosting company are located in Australia or outside? I've been hosting a few sites with JustHost.com on shared hosting (cheapest plan, servers in the US I believe) and have never seen any delays, but could that be an issue? I would prefer an Australian company so I can actually go to them and give them a piece of my mind if something goes wrong, but US servers seem cheaper. Would shared hosting do? It's a custom-built ecommerce PHP application, and I know there are security issues with sessions etc. on shared hosting. I will take precautions of course, but could shared hosting be an issue? Would dedicated be a worthwhile option considering that my knowledge of servers is very limited? I need to run PHP/MySQL, preferably with unlimited bandwidth, as with my experience I cannot tell what amount of traffic would be sufficient. Please let me know if I didn't provide enough information for you to answer my questions; I will gladly explain further. Thanks in advance for any answers :)

    Read the article

  • Windows service fails to start with local user until password is entered again in logon tab

    - by Nick
    Basically we have a service where we use a local account as its logon. It has all the proper permissions, and everything is working fine: the service starts and runs and all is good. Then one day, after rebooting, the service fails to start. The logs show an incorrect password. Our technicians resolve the issue by simply retyping the password into the "Log On" tab in services.msc. Unfortunately we have not been able to find the root cause. I suspect that the password that is stored for the service is lost somehow. Does anyone know where the password hash might be stored so we can check it? The only activities that seem to be possibly related are patching with Microsoft security patches, but we have multiple servers running the same service, we have never seen more than one at a time, and it's usually a different one each time this occurs. I believe this to be the same issue as this: Windows service fails to start with custom user until started once with local user. But I was unable to add comments, and it's really old.

    Read the article

  • Office Compatibility Pack and File Permissions

    - by hymie
    MS isn't my thing, so I hope somebody can give me a pointer. We have a Windows domain, with a Server-2003-SP1-Enterprise file server. One of the specific files is a MS Excel 2007 (XLSX) file created by user LK. In the "Security" preferences setting, about a half-dozen users (including me) have access to this file. LK is the owner and has "full control", while the rest of us have "Read" , "Read & Execute", and "Write" permission. LK is also the owner of the directory that this file resides in. I don't know if that's relevant. So far so good. My desktop machine has Windows XP SP3 , and Excel 2003 SP3 , and the "Office Compatibility Pack" which lets me read and write the new XLSX files. However, whenever I write the file, the permissions are changed. The newly-written file only has permissions for LK and me, and both are "Full control" So in short, what am I doing wrong, and how should I set this up to do it right, keeping the permissions on the file that were there when I started?

    Read the article

  • Random "not accessible" "you might not have permission to use this network resource"

    - by Jim Fred
    A couple of computers, both Win7-64 can connect to shares on a NAS server, at least most of the time. At random intervals, these Win7-64 computers cannot access some shares but can access others on the same NAS. When access is denied, a dialog box appears saying "\\myServer\MyShare02 not accessible...you might not have permission to use this network resource..." Other shares, say \\myServer\MyShare01, ARE accessible from the affected computers and yet other computers CAN access the affected shares. Reboots of the affected computers seem to allow the affected computer to connect to the affected shares - but then, getting a cup of coffee seems to help too. When the problem appears, the network seems to be ok e.g. the affected computers can access other shares on the affected server and can ping etc. Also Other computers can access the affected shares. The NAS server is a NetGear ReadyNas Pro. The problem might be on the NAS side such as a resource limitation but since only 2 Win7-64 PCs seem to be affected the most, the problem could be on the PC side - I'm not sure yet. I of course searched for solutions and found several tips addressing initial connection problems (use correct workgroup name, use IP address instead of server name, remove security restrictions etc) but none of those remedies address the random nature of this problem.

    Read the article

  • Enter network credentials as part of batch script

    - by Michael
    WinXP: I have several system services that are needed to run some machinery in my lab. The machine these services run on uses a lab login that has administrator rights. Our IS department, unfortunately, has it set up so that at some point during the night the login "loses" the privilege level to start/stop these services. The account stays logged in, but the software controlling my hardware becomes unresponsive. In order to get things back up and running, I have to stop the system services and restart them. Because of the security settings, however, I have to re-enter the user password to start the service (even though the user was never logged out). When that happens, I get "This service cannot be started due to a logon failure" and I have to enter the password. What would be ideal is to have a batch script run before anyone gets into work that stops all of the necessary services, enters the user credentials when prompted, and then restarts them so that everything is ready for the first shift to run. I assumed that using the Task Scheduler in Windows would work, as it allows you to run batch files with a user's name and password, but this didn't seem to do the trick. With this setup I would arrive to find that all the services were stopped but not started again (presumably because the authentication failed). The batch file is about as simple as it gets; all I have is: net stop "Service1" net stop "Service2" etc., then restart in reverse order based on dependency: net start "Service2" net start "Service1" What would it take to accomplish what I'm trying to do and restart the services?
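
    One hedged sketch of such a batch file (not from the original post) re-applies the service logon credentials with sc config before starting, so the start never prompts; the account and password are placeholders, and note that sc expects the service's key name (shown by sc query), which can differ from the display name net stop accepts:

      @echo off
      rem Hypothetical service names and lab credentials -- substitute the real ones.
      net stop "Service2"
      net stop "Service1"
      rem Re-apply the logon account and password so the start does not hit a logon failure.
      sc config "Service1" obj= ".\labuser" password= "LabPassword"
      sc config "Service2" obj= ".\labuser" password= "LabPassword"
      rem Start in dependency order.
      net start "Service1"
      net start "Service2"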

    Read the article

  • Centos 6.2 postfix install dependency issues

    - by Mishari
    I am administering a VPS running cPanel and I'm trying to install postfix. /etc/redhat-release says the version is CentOS release 6.2 (Final) and uname -a says: Linux server.mydomain.com 2.6.32-220.el6.i686 #1 SMP Tue Dec 6 16:15:40 GMT 2011 i686 i686 i386 GNU/Linux This is how I'm installing postfix (I had tried to solve the problem earlier by installing epel). # yum install postfix Loaded plugins: fastestmirror, security Loading mirror speeds from cached hostfile * epel: mirror.cogentco.com Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package postfix.i686 2:2.6.6-2.2.el6_1 will be installed --> Processing Dependency: mysql-libs for package: 2:postfix-2.6.6-2.2.el6_1.i686 --> Finished Dependency Resolution Error: Package: 2:postfix-2.6.6-2.2.el6_1.i686 (centos-burstnet) Requires: mysql-libs You could try using --skip-broken to work around the problem Attempts to install mysql-libs tell me several files conflict with "MySQL-server-5.1.61-0.glibc23.i386". I'm not sure why or how this is happening; does anyone know how to resolve this? Surely CentOS 6.2 could not have shipped with a broken postfix.
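
    A few diagnostic commands (a sketch, nothing specific to this VPS) can show what is pulling in mysql-libs and what the cPanel-built MySQL package provides:

      # Show postfix's dependency tree, including which package would satisfy mysql-libs
      yum deplist postfix
      # List what the cPanel-installed MySQL package (named in the conflict message) provides
      rpm -q --provides MySQL-server
      # List every installed package with "mysql" in its name
      rpm -qa | grep -i mysql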

    Read the article

  • Connecting to Server 2008 shares fails

    - by Chris J
    I'm having problems getting a reliable share working on an x64 Server 2008 R1 SP1 server. All works well after a reboot, but after some time (within a day) the shares become unavailable to XP and Server 2003 servers. Interestingly, they remain available to other Server 2008 servers. On trying to access \\server\share, Server 2003 returns immediately and simply gives me the message "The specified network name is no longer available", while XP takes a minute or two to time out before giving the same message. There doesn't seem to be anything in the event logs indicating a problem. Doing some googling over the last day or two I've seen the following blamed: bad network drivers (I've updated to the latest drivers with no result); Symantec anti-virus (we're not using it - currently no AV on the server); receive window auto-tuning (I've disabled it with netsh int tcp set global autotuninglevel=disabled and netsh int tcp set global rss=disabled). None of these have had an effect. Windows Firewall is currently disabled. As other Server 2008 boxes (both x32 and x64) can connect, I can only assume that there's some new security configuration that's not quite right - or there's an AD issue that I need to trace, but I don't know where to start. Even if no one knows how to resolve this, knowing what to look for with Wireshark would be a help.
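
    As a starting point for that capture (a sketch with a placeholder server address), limiting the trace to SMB traffic between one affected client and the file server keeps it manageable; the interesting part is what happens on ports 445/139 at the moment the share drops (TCP resets, SMB error responses):

      # Capture only SMB/CIFS traffic between an affected client and the 2008 server
      # (192.0.2.10 is a placeholder; the interface name is an assumption).
      tshark -i eth0 -f "host 192.0.2.10 and (port 445 or port 139)" -w smb-failure.pcap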

    Read the article

  • Slow, choppy video playback with nVidia 8600GT

    - by user5351
    I have an nVidia 8600GT card (made by EVGA) on a machine with Windows Vista (AMD Athlon X2 processors) and four gigs of RAM. It runs pretty well, but I have had some slow/choppy/stuttering video playback issues whenever watching Flash videos on YouTube or other sites. The problem is there with both Firefox and IE Flash videos, but is maybe worse with Firefox. I also tried Linux with nVidia's binary drivers and it was about the same. I downloaded EVGA Precision, which allows me to control stuff like the fan and clock speed. The card's temp (in both Vista and Linux) is usually at 66C when idle (not playing a game or watching anything). It goes up a little when watching a video (maybe 68-72C). Any ideas on how to fix this? UPDATE: The issues occur with both full-screen and embedded Flash videos. I have Flash 10.0.32.18 (I always make sure I use the most recent for security), and the CPU is an AMD Athlon 64 X2 Dual Core Processor 4000+ at 2.11 GHz. The current GPU driver installed is the most recent GeForce one from last July.

    Read the article

  • Developer hardware autonomy in a managed desktop environment [closed]

    - by Troy Hunt
    I’m looking for some feedback on how developer PCs are managed within environments that have a strict managed desktop policy (normally large corporations). For example, many corporate environments control the installation of software and the deployment of patches and virus updates through a centralised channel. This usually also means dictating the OS version and architecture (32 bit versus 64 bit), which will likely also mean standardised hardware configurations. I’m particularly interested in feedback from developers who work in this sort of environment but have a high degree of autonomy over their machines. This might mean choosing your own hardware vendor, OS type and version, and perhaps how the machines are built and maintained. I have several specific questions: How do you satisfy the needs of security, governance etc. whilst maintaining your autonomy? For example, how do you address concerns about keeping virus definitions and OS patches up to date? Do you have a process for gaining exemption from standard desktop builds and, if so, what do you need to demonstrate in order to get this? How have you justified this need to the decision makers? Essentially, what is the benefit to your role as a developer of having this degree of autonomy? Thanks very much everyone. Update: There's a great post from Jean-Paul Boodhoo which addresses the developer tool component of the question here: http://blog.jpboodhoo.com/TheFallacyOfTheStandardizedDeveloperMachineimage.aspx

    Read the article

  • Cannot write to directory after taking ownership

    - by jeff charles
    I had a directory on an internal hard drive that was created in an old Windows 7 install. After re-installing my operating system, when I try to create a new directory inside that directory, I get an Access Denied message. This isn't a protected directory, just a random directory I created at the drive root (that drive was not the C drive in either install). I tried to take ownership by opening folder properties, going to the Security tab, clicking on Advanced, going to the Owner tab, clicking on Edit, selecting my user account, checking Replace owner on subcontainers and objects, and clicking Apply. There were no error messages and I closed the dialogs. I rebooted, checked the owner on that folder and a couple of subfolders, and it appears to be set correctly. However, I am still getting an Access Denied message when trying to create a directory in it. I've also tried using attrib -R . in an admin command prompt to remove any possible read-only attribute inside the directory, but am still unable to create a directory using a non-admin prompt (it does work in an admin prompt). Is there anything I can do to get write access to that folder and its contents in a non-elevated context without disabling UAC?
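
    The same take-ownership-and-grant sequence can also be done from an elevated command prompt (a sketch; the folder path and account name are placeholders). Taking ownership alone does not add a Modify/Full Control entry for the account, which is one reason the Access Denied can persist:

      rem Take ownership of the folder and everything beneath it
      takeown /f D:\OldData /r /d y
      rem Grant the account full control, inherited by subfolders and files
      icacls D:\OldData /grant "MyUser":(OI)(CI)F /t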

    Read the article

  • Windows 2008 Server can't connect to FTP

    - by stivlo
    I have Windows Server 2008 R2, and I am trying to install FTP services. My problem is that I can't connect from outside; FileZilla complains with: Error: Connection timed out Error: Could not connect to server Here is what I did. With the Server Manager, I've installed the roles FTP Server, FTP Service and FTP Extensibility. In Internet Information Services version 7.5, I've chosen Add FTP Site, enabled Basic Authentication, and allowed a user to connect with Read and Write permissions. In FTP Firewall Support on the main server, just after the start page, I've set the Data Channel Port Range to 49100-49250 and set the external IP address to the one I see from outside. If I click on FTP IPv4 Address and Domain Restrictions, and click on Edit Feature Settings, I see that access for unspecified clients is set to Allow, so I click OK without changing those defaults. In FTP SSL Policy, I've set it to Require SSL connections; the certificate is self-signed. I tried to connect with FileZilla from the same host and it works; however, it doesn't work remotely, as I said above. I've enabled pfirewall.log, but apparently nothing gets logged. The server is in Amazon EC2, and in the security group's inbound firewall rules, I've set port 21 and ports 49100-49250 to accept connections from everywhere. What else should I be checking to solve the problem?
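
    For completeness, the Windows Firewall on the instance would also need inbound rules for the control port and the passive data-channel range; a sketch (the rule names are arbitrary), in addition to the EC2 security group, which filters traffic before it ever reaches Windows:

      rem Allow the FTP control channel and the configured passive data-channel range
      netsh advfirewall firewall add rule name="FTP control" dir=in action=allow protocol=TCP localport=21
      netsh advfirewall firewall add rule name="FTP passive data" dir=in action=allow protocol=TCP localport=49100-49250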

    Read the article

  • Nginx Server Block Not Working? - Already running other vhosts just this one not working

    - by daveaspinall
    I'm running a Debian 6 LEMP server with multiple virtual hosts and everything has been fine for 5 or so sites. But I've just tried adding another, and for some reason it's just not working. By not working I mean that in Chrome I get the "Oops! Google Chrome could not connect to subdomain.domain.net" error. I've changed the domain for security to subdomain.example.com and the IP is masked. Hosts file (I have multiple subdomains): xxx.xxx.xx.xxx *.example.com *.example Server Block: server { listen 80; server_name subdomain.example.com; access_log /srv/www/subdomain.example.com/logs/access.log; error_log /srv/www/subdomain.example.com/logs/error.log; root /srv/www/subdomain.example.com/public_html; location / { index index.html index.htm index.php; } location ~ \.php$ { include fastcgi_params; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } } I've created the symlink to the file in the /etc/nginx/sites-enabled/ directory and restarted/reloaded nginx. DNS seems fine: # ping -c 2 subdomain PING subdomain.example.com (xxx.xxx.xx.xxx) 56(84) bytes of data. 64 bytes from www.example.com (xxx.xxx.xx.xxx): icmp_req=1 ttl=64 time=0.035 ms 64 bytes from www.example.com (xxx.xxx.xx.xxx): icmp_req=2 ttl=64 time=0.048 ms Checking the file with cURL works: # curl http://subdomain.example.com HTML - OK I emptied the browser cache but still no dice. Anything I'm missing? Like I mentioned, I have a few sites running fine on the server currently, so php-fpm etc. are working. Any help would be much appreciated! Cheers, Dave
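
    A quick way to confirm nginx actually loaded the new server block (a sketch; the paths assume the Debian sites-available/sites-enabled layout described above):

      # Confirm the symlink exists and points at the right vhost file
      ls -l /etc/nginx/sites-enabled/
      # Check that the configuration parses cleanly, then reload
      nginx -t && /etc/init.d/nginx reload
      # Verify nginx is listening on port 80
      netstat -plnt | grep ':80'

    If curl works locally but Chrome cannot connect at all, the failure is usually in front of nginx (external DNS for the new subdomain, or a firewall), since a missing server block would serve the default site rather than refuse the connection.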

    Read the article

  • How to allow users to transfer files to other users on linux

    - by Jon Bringhurst
    We have an environment of a few thousand users running applications on about 40 clusters ranging in size from 20 compute nodes to 98,000 compute nodes. Users on these systems generate massive files (sometimes 1PB) controlled by traditional unix permissions (ACLs usually aren't available or practical due to the specialized nature of the filesystem). We currently have a program called "give", which is a suid-root program that allows a user to "give" a file to another user when group permissions are insufficient. So, a user would type something like the following to give a file to another user: > give username-to-give-to filename-to-give ... The receiving user can then use a command called "take" (part of the give program) to receive the file: > take filename-to-receive The permissions of the file are then effectively transferred over to the receiving user. This program has been around for years and we'd like to revisit things from a security and functional point of view. Our current plan of action is to remove the bit rot in our current implementation of "give" and package it up as an open source app before we redeploy it into production. Does anyone have another method they use to transfer extremely large files between users when only traditional unix permissions are available?
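
    For readers unfamiliar with the pattern, a purely conceptual sketch of what a give/take-style handoff amounts to is below; the paths are hypothetical and the post does not describe the tool's internals, so treat this as an illustration rather than the actual implementation. The key property is that moves within one filesystem are renames, so no petabyte-scale copy occurs:

      # "give": the privileged helper moves the file into a root-owned,
      # per-recipient spool directory (a rename on the same filesystem, not a copy)
      mv /scratch/alice/results.dat /var/spool/give/bob/results.dat
      chown root:root /var/spool/give/bob/results.dat
      chmod 600 /var/spool/give/bob/results.dat

      # "take": the helper hands ownership to the recipient and renames the file
      # into their own directory
      chown bob:bob /var/spool/give/bob/results.dat
      mv /var/spool/give/bob/results.dat /scratch/bob/results.dat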

    Read the article

  • Losing SQL connections

    - by john pavelka
    SQL Server 2005 Standard; one dedicated SQL Server (VM); Windows Server 2003; small databases. About once a week we lose all SQL connections. It seems to fix itself after about 5-10 minutes. System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. --- System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. We don't have a fully qualified DBA; it's kind of a joint effort here. Can somebody give me some general ideas for troubleshooting the network side and the application side? We already ran a few tuning profiles and ran through Database Tuning Advisor to apply indexing recommendations. It would sure be nice if there was a way to take a snapshot of what was running on SQL Server when these 100% CPU spikes occurred, but sometimes we're not around. Is it common to throttle CPU for certain processes? Can this be done with Windows Server 2003? For example, if security apps were making the CPU spike to 100%, is there a way to limit their CPU usage? Any advice is appreciated. Thanks.
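
    One way to capture what is running during a spike (a sketch, not from the original post) is to log the output of SQL Server 2005's dynamic management views on a schedule, for example from a SQL Agent job; sessions 50 and below are system sessions, so they are filtered out here:

      -- Snapshot of currently executing requests and their SQL text
      SELECT r.session_id,
             r.status,
             r.cpu_time,
             r.total_elapsed_time,
             r.wait_type,
             t.text AS sql_text
      FROM sys.dm_exec_requests AS r
      CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
      WHERE r.session_id > 50
      ORDER BY r.cpu_time DESC;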

    Read the article

  • iptables, blocking large numbers of IP Addresses

    - by Twirrim
    I'm looking to block IP addresses in a relatively automated fashion if they look to be 'screen scraping' content from websites that we host. In the past this was achieved by some ingenious Perl scripts and OpenBSD's pf. pf is great in that you can provide it nice tables of IP addresses and it will efficiently handle blocking based on them. However, for various reasons (before my time) they made the decision to switch to CentOS. iptables doesn't natively provide the ability to block large numbers of addresses (I'm told it wasn't unusual to be blocking 5000+), and I'm a bit cautious about adding that many rules to iptables. ipt_recent would be awesome for doing this, plus it provides a lot of flexibility for just severely slowing down access, but there is a bug in the CentOS kernel that is stopping me from using it (reported, but awaiting a fix). Using ipset would entail compiling a more up-to-date version of iptables than comes with CentOS, which, whilst I'm perfectly capable of doing, I'd rather not do from a patching, security, and consistency perspective. Other than those two, it looks like nfblock is a reasonable alternative. Is anyone aware of other ways of achieving this? Are my concerns about several thousand IP addresses in iptables as individual rules unfounded?
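
    For reference, the ipset approach mentioned above looks roughly like this once a matching iptables is in place (a sketch; older ipset releases spell the commands ipset -N/-A and use --set rather than --match-set):

      # Create a hash-based set and add the addresses to block
      ipset create scrapers hash:ip
      ipset add scrapers 192.0.2.15
      ipset add scrapers 198.51.100.7
      # A single iptables rule then matches the whole set, however many entries it holds
      iptables -I INPUT -m set --match-set scrapers src -j DROP

    Because the match is a hash lookup inside the kernel, thousands of entries do not grow the rule chain the way individual -s rules would.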

    Read the article

  • Redirecting or routing all traffic to OpenVPN on a Mac OS X client

    - by sdr56p
    I have configured an OpenVPN (2.2.1) server on an Ubuntu virtual machine in the Amazon elastic compute cloud. The server is up and running. I have installed OpenVPN (2.2.1) on a Mac OS X (10.8.2) client and I am using the openvpn2 binary to connect (in opposition to other clients like Tunnelblick or Viscosity). I can connect with the client and successfully ping or ssh the server through the tunnel. However, I can't redirect all internet traffic through the VPN even if I use the push "redirect-gateway def1 bypass-dhcp" option in the server.conf configurations. When I connect to the server with these configurations, I get a successful connection, but then an infinite series of error messages: "write UDPv4: No route to host (code=65)". Traffic routing seems to be compromised because I am not able to access anything anymore, not even the OpenVPN server (by pinging 10.8.0.1 for instance). This is beyond me. I am finding little help on the web and don't know what to try next. I don't think it is a problem of forwarding the traffic on the server since, first, I have also took care of that and, second, I can't even ping the VPN server locally through the tunnel (or ping anything at all for that matter). Thank you for your help. Here is the server.conf. file: port 1194 proto udp dev tun ca ca.crt cert ec2-server.crt key ec2-server.key # This file should be kept secret dh dh1024.pem server 10.8.0.0 255.255.255.0 ifconfig-pool-persist ipp.txt push "redirect-gateway def1 bypass-dhcp" client-to-client keepalive 10 120 comp-lzo persist-key persist-tun status openvpn-status.log verb 3 And the client.conf file: client dev tun proto udp remote servername.com 1194 resolv-retry infinite nobind persist-key persist-tun ca ca.crt cert Toto5.crt key Toto5.key ns-cert-type server comp-lzo verb 3 Here is the connection log with the error messages: $ sudo openvpn2 --config client.conf Wed Mar 13 22:58:22 2013 OpenVPN 2.2.1 x86_64-apple-darwin12.2.0 [SSL] [LZO2] [eurephia] built on Mar 4 2013 Wed Mar 13 22:58:22 2013 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables Wed Mar 13 22:58:22 2013 LZO compression initialized Wed Mar 13 22:58:22 2013 Control Channel MTU parms [ L:1542 D:138 EF:38 EB:0 ET:0 EL:0 ] Wed Mar 13 22:58:22 2013 Socket Buffers: R=[196724->65536] S=[9216->65536] Wed Mar 13 22:58:22 2013 Data Channel MTU parms [ L:1542 D:1450 EF:42 EB:135 ET:0 EL:0 AF:3/1 ] Wed Mar 13 22:58:22 2013 Local Options hash (VER=V4): '41690919' Wed Mar 13 22:58:22 2013 Expected Remote Options hash (VER=V4): '530fdded' Wed Mar 13 22:58:22 2013 UDPv4 link local: [undef] Wed Mar 13 22:58:22 2013 UDPv4 link remote: 54.234.43.171:1194 Wed Mar 13 22:58:22 2013 TLS: Initial packet from 54.234.43.171:1194, sid=ffbaf343 d0c1a266 Wed Mar 13 22:58:22 2013 VERIFY OK: depth=1, /C=US/ST=CA/L=SanFrancisco/O=Fort-Funst ... ost.domain Wed Mar 13 22:58:22 2013 VERIFY OK: nsCertType=SERVER Wed Mar 13 22:58:22 2013 VERIFY OK: depth=0, /C=US/ST=CA/L=SanFrancisco/O=Fort-Funst ... 
ost.domain Wed Mar 13 22:58:23 2013 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key Wed Mar 13 22:58:23 2013 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Wed Mar 13 22:58:23 2013 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key Wed Mar 13 22:58:23 2013 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Wed Mar 13 22:58:23 2013 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA Wed Mar 13 22:58:23 2013 [ec2-server] Peer Connection Initiated with 54.234.43.171:1194 Wed Mar 13 22:58:25 2013 SENT CONTROL [ec2-server]: 'PUSH_REQUEST' (status=1) Wed Mar 13 22:58:25 2013 PUSH: Received control message: 'PUSH_REPLY,route 10.8.0.0 255.255.255.0,topology net30,ping 10,ping-restart 120,ifconfig 10.8.0.6 10.8.0.5' Wed Mar 13 22:58:25 2013 OPTIONS IMPORT: timers and/or timeouts modified Wed Mar 13 22:58:25 2013 OPTIONS IMPORT: --ifconfig/up options modified Wed Mar 13 22:58:25 2013 OPTIONS IMPORT: route options modified Wed Mar 13 22:58:25 2013 ROUTE default_gateway=0.0.0.0 Wed Mar 13 22:58:25 2013 TUN/TAP device /dev/tun0 opened Wed Mar 13 22:58:25 2013 /sbin/ifconfig tun0 delete ifconfig: ioctl (SIOCDIFADDR): Can't assign requested address Wed Mar 13 22:58:25 2013 NOTE: Tried to delete pre-existing tun/tap instance -- No Problem if failure Wed Mar 13 22:58:25 2013 /sbin/ifconfig tun0 10.8.0.6 10.8.0.5 mtu 1500 netmask 255.255.255.255 up Wed Mar 13 22:58:25 2013 /sbin/route add -net 10.8.0.0 10.8.0.5 255.255.255.0 add net 10.8.0.0: gateway 10.8.0.5 Wed Mar 13 22:58:25 2013 Initialization Sequence Completed ^CWed Mar 13 22:58:30 2013 event_wait : Interrupted system call (code=4) Wed Mar 13 22:58:30 2013 TCP/UDP: Closing socket Wed Mar 13 22:58:30 2013 /sbin/route delete -net 10.8.0.0 10.8.0.5 255.255.255.0 delete net 10.8.0.0: gateway 10.8.0.5 Wed Mar 13 22:58:30 2013 Closing TUN/TAP interface Wed Mar 13 22:58:30 2013 SIGINT[hard,] received, process exiting toto5:ttntec2 Dominic$ sudo openvpn2 --config client.conf --remote ec2-54-234-43-171.compute-1.amazonaws.com Wed Mar 13 22:58:57 2013 OpenVPN 2.2.1 x86_64-apple-darwin12.2.0 [SSL] [LZO2] [eurephia] built on Mar 4 2013 Wed Mar 13 22:58:57 2013 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables Wed Mar 13 22:58:57 2013 LZO compression initialized Wed Mar 13 22:58:57 2013 Control Channel MTU parms [ L:1542 D:138 EF:38 EB:0 ET:0 EL:0 ] Wed Mar 13 22:58:57 2013 Socket Buffers: R=[196724->65536] S=[9216->65536] Wed Mar 13 22:58:57 2013 Data Channel MTU parms [ L:1542 D:1450 EF:42 EB:135 ET:0 EL:0 AF:3/1 ] Wed Mar 13 22:58:57 2013 Local Options hash (VER=V4): '41690919' Wed Mar 13 22:58:57 2013 Expected Remote Options hash (VER=V4): '530fdded' Wed Mar 13 22:58:57 2013 UDPv4 link local: [undef] Wed Mar 13 22:58:57 2013 UDPv4 link remote: 54.234.43.171:1194 Wed Mar 13 22:58:57 2013 TLS: Initial packet from 54.234.43.171:1194, sid=a0d75468 ec26de14 Wed Mar 13 22:58:58 2013 VERIFY OK: depth=1, /C=US/ST=CA/L=SanFrancisco/O=Fort-Funst ... ost.domain Wed Mar 13 22:58:58 2013 VERIFY OK: nsCertType=SERVER Wed Mar 13 22:58:58 2013 VERIFY OK: depth=0, /C=US/ST=CA/L=SanFrancisco/O=Fort-Funst ... 
ost.domain Wed Mar 13 22:58:58 2013 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key Wed Mar 13 22:58:58 2013 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Wed Mar 13 22:58:58 2013 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key Wed Mar 13 22:58:58 2013 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Wed Mar 13 22:58:58 2013 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA Wed Mar 13 22:58:58 2013 [ec2-server] Peer Connection Initiated with 54.234.43.171:1194 Wed Mar 13 22:59:00 2013 SENT CONTROL [ec2-server]: 'PUSH_REQUEST' (status=1) Wed Mar 13 22:59:00 2013 PUSH: Received control message: 'PUSH_REPLY,redirect-gateway def1 bypass-dhcp,route 10.8.0.0 255.255.255.0,topology net30,ping 10,ping-restart 120,ifconfig 10.8.0.6 10.8.0.5' Wed Mar 13 22:59:00 2013 OPTIONS IMPORT: timers and/or timeouts modified Wed Mar 13 22:59:00 2013 OPTIONS IMPORT: --ifconfig/up options modified Wed Mar 13 22:59:00 2013 OPTIONS IMPORT: route options modified Wed Mar 13 22:59:00 2013 ROUTE default_gateway=0.0.0.0 Wed Mar 13 22:59:00 2013 TUN/TAP device /dev/tun0 opened Wed Mar 13 22:59:00 2013 /sbin/ifconfig tun0 delete ifconfig: ioctl (SIOCDIFADDR): Can't assign requested address Wed Mar 13 22:59:00 2013 NOTE: Tried to delete pre-existing tun/tap instance -- No Problem if failure Wed Mar 13 22:59:00 2013 /sbin/ifconfig tun0 10.8.0.6 10.8.0.5 mtu 1500 netmask 255.255.255.255 up Wed Mar 13 22:59:00 2013 /sbin/route add -net 54.234.43.171 0.0.0.0 255.255.255.255 add net 54.234.43.171: gateway 0.0.0.0 Wed Mar 13 22:59:00 2013 /sbin/route add -net 0.0.0.0 10.8.0.5 128.0.0.0 add net 0.0.0.0: gateway 10.8.0.5 Wed Mar 13 22:59:00 2013 /sbin/route add -net 128.0.0.0 10.8.0.5 128.0.0.0 add net 128.0.0.0: gateway 10.8.0.5 Wed Mar 13 22:59:00 2013 /sbin/route add -net 10.8.0.0 10.8.0.5 255.255.255.0 add net 10.8.0.0: gateway 10.8.0.5 Wed Mar 13 22:59:00 2013 Initialization Sequence Completed Wed Mar 13 22:59:00 2013 write UDPv4: No route to host (code=65) Wed Mar 13 22:59:00 2013 write UDPv4: No route to host (code=65) Wed Mar 13 22:59:01 2013 write UDPv4: No route to host (code=65) Wed Mar 13 22:59:01 2013 write UDPv4: No route to host (code=65) Wed Mar 13 22:59:01 2013 write UDPv4: No route to host (code=65) Wed Mar 13 22:59:02 2013 write UDPv4: No route to host (code=65) Wed Mar 13 22:59:02 2013 write UDPv4: No route to host (code=65) Wed Mar 13 22:59:02 2013 write UDPv4: No route to host (code=65) Wed Mar 13 22:59:02 2013 write UDPv4: No route to host (code=65) Wed Mar 13 22:59:02 2013 write UDPv4: No route to host (code=65) ... 
The routing table after a connection WITHOUT the push redirect-gateway (all traffic is not redirected to the VPN and everything is working fine, I can ping or ssh the OpenVPN server and access all other Internet resources through my default gateway): Destination Gateway Flags Refs Use Netif Expire default user148-1.wireless UGSc 50 0 en1 10.8/24 10.8.0.5 UGSc 2 7 tun0 10.8.0.5 10.8.0.6 UH 3 2 tun0 127 localhost UCS 0 0 lo0 localhost localhost UH 6 6692 lo0 client.openvpn.net client.openvpn.net UH 3 18 lo0 142.1.148/22 link#5 UCS 2 0 en1 user148-1.wireless 0:90:b:27:10:71 UHLWIir 50 0 en1 76 user150-173.wirele localhost UHS 0 0 lo0 142.1.151.255 ff:ff:ff:ff:ff:ff UHLWbI 0 2 en1 169.254 link#5 UCS 1 0 en1 169.254.255.255 0:90:b:27:10:71 UHLSWi 0 0 en1 71 The routing table after a connection with the push redirect-gateway option enable as in the server.conf file above (all internet traffic should be redirected to the VPN tunnel, but nothing is working, I can't access any Internet ressources at all): Destination Gateway Flags Refs Use Netif Expire 0/1 10.8.0.5 UGSc 1 0 tun0 default user148-1.wireless UGSc 7 0 en1 10.8/24 10.8.0.5 UGSc 0 0 tun0 10.8.0.5 10.8.0.6 UHr 6 0 tun0 54.234.43.171/32 0.0.0.0 UGSc 1 0 en1 127 localhost UCS 0 0 lo0 localhost localhost UH 3 6698 lo0 client.openvpn.net client.openvpn.net UH 0 27 lo0 128.0/1 10.8.0.5 UGSc 2 0 tun0 142.1.148/22 link#5 UCS 1 0 en1 user148-1.wireless 0:90:b:27:10:71 UHLWIir 1 0 en1 833 user150-173.wirele localhost UHS 0 0 lo0 169.254 link#5 UCS 1 0 en1 169.254.255.255 0:90:b:27:10:71 UHLSW 0 0 en1

    Read the article

  • How Would I Restrict a Linux Binary to a Limited Amount of RAM?

    - by Ken S.
    I would like to be able to limit an installed binary to only be able to use up to a certain amount of RAM. I don't want it to get killed if it exceeds it, only that that would be the max amount that it could use. The problem I am facing is that I am running an Apache 2.2 server with PHP and some custom code that a developer is writing for us. The problem is that somewhere in there code they launch a PHP exec call that launches ImageMagick's 'convert' to create a resized image file. I'm not privy to a lot of details to the project or the code, but need to find a solution to keep them from killing the server until they can find a way to optimize the code. I had thought that I could do this with /etc/security/limits.conf and setting a limit on the apache user, but it seems to have no effect. This is what I used: www-data hard as 500 If I understand it correctly, this should have limited any apache user process to a maximum to 500kb, however, when I ran a test script that would chew up a lot of RAM, this actually got up to 1.5GB before I killed it. Here is the output of 'ps auxf' after the setting change and a system reboot: USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 5268 0.0 0.0 401072 10264 ? Ss 15:28 0:00 /usr/sbin/apache2 -k start www-data 5274 0.0 0.0 402468 9484 ? S 15:28 0:00 \_ /usr/sbin/apache2 -k start www-data 5285 102 9.4 1633500 1503452 ? Rl 15:29 0:58 | \_ /usr/bin/convert ../tours/28786/.…. www-data 5275 0.0 0.0 401072 5812 ? S 15:28 0:00 \_ /usr/sbin/apache2 -k start Next I thought I could do it with Apache's RlimitMEM setting, but get the same result of it not getting limited. Here is what I have in my apache.conf file: RLimitMEM 500000 512000 It wasn't until many hours later that I figured out that if the process actually reached that amount that it would die with an OOM error. Would love any ideas on how to set this limit so other things could function on the server, and all of them could play together nicely.
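
    One stopgap (a hedged sketch, not from the original post) is to point the PHP exec() call at a small wrapper that caps the process's address space before ImageMagick starts; the 512 MB figure and the wrapper idea are assumptions, and ImageMagick's own -limit options can additionally keep its pixel cache on disk instead of in RAM:

      #!/bin/sh
      # Hypothetical wrapper invoked instead of calling convert directly.
      # Cap virtual memory for this process and its children (value is in KB, ~512 MB here).
      ulimit -v 524288
      # Ask ImageMagick to keep its in-memory pixel cache small and spill to disk beyond that.
      exec convert -limit memory 256MiB -limit map 512MiB "$@"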

    Read the article

  • How can I install .NET framework 3.5 on XP machines without internet connection?

    - by EricSchaefer
    I want to install .NET Framework 3.5 on a couple of machines that do not have internet access. If I install the "no internet access" package it still wants to download something. How can I figure out what is missing? Are there other installation packages? Edit: I would present screenshots but I cannot upload anything from here, and the shots would be in German, so I present only the text translated back to English... Installing the "full redistributable package": At the bottom of the license agreement page it displays this text: Size of download file: 67 MB Approximate download time: 2h 44min (56KBit/s) 18min (512KBit/s) It shows this text even though I installed Windows Installer 3.1. After agreeing it displays the "Download and Installation Status" dialog with a progress bar labeled "Download:" and Status: Connection to server attempted (try X of 5). Total Download Status: 56MB/67MB I tried it in a VM with no network connection. It tries 5 times while the progress bar shows progress. Later the progress bar is labeled "Installation:". Even later it reports problems during setup and provides two buttons, "Send Report Later" and "Don't Send". Now here it comes: "Setup completed" and "Microsoft .NET Framework 3.5 has been deinstalled successfully." (Emphasis is mine.) "It is recommended to install current service packs and security updates. More information at Windows Update (link)." Edit2: Installed Service Pack 3, but still no success.

    Read the article

  • W7-pro indexing mydoc on disk partition does not work

    - by Yvan Thery
    I am working on an HP-7100 mini tower running Windows 7 Pro 64-bit. My local HD includes C:/ plus 2 disk partitions: all my documents are located on disk partition L:/ and all my media files are on disk partition M:/. The indexing process works well on C:/ and M:/ but no longer indexes L:/, even though all of them are allowed to be indexed and SYSTEM is present on every drive's Security tab. I have tried rebuilding the index with a new setting that includes a few directories from drives C/M/L, but L: still does not work. One more thing I can tell you is that even after rebuilding the index, I can find some residual directories or files which are outside the test selection. It is as if unerased components remain in the indexing database. As I do not know precisely how the indexing process works, it is hard to know what to do... Recently I had a bad time after using a restore to an earlier point... maybe it corrupted the index? If I start indexing the whole L:/ disk partition, the system stops at 39 items indexed even though many more exist. Can anyone advise on the process to create a new indexing database? Any idea how to get out of this mess? Many thanks for assistance. Yvan

    Read the article

  • Can I restore one of my user's profiles in Vista?

    - by Rod
    My youngest daughter uses my 4 year old laptop, which has Windows Vista installed. Somehow she got some Trojan (Vista Internet Security). (I'd love to know how that happened, seeing as how she is a standard user, and I have VIPRE as my AV.) Anyway, I ran a deep anti-virus scan using VIPRE, which identified it. I decided to delete everything that it identified. Now she cannot use anything in her profile. If she tries to bring up the browser, it recycles over and over again a dialog box asking which program to use. If I try to run any program at all, it doesn't know what to do. For example, it is totally lost trying to run the command line. If I bring up Windows Explorer and navigate to Windows\System32 and try to run the command line, or anything at all from there, it goes "Huh?" What in heck has happened?? Is it possible to fix this, and if so, how? As an aside, I can log into my account (my account on that machine is an administrator) and it works fine.

    Read the article

  • VPN service into 192 network

    - by tophersmith116
    I'm thinking about setting up a security testing lab. I work on a switched network, and that just makes for unnecessary headaches when doing testing. I'd like to create a 192 network with a few machines inside for DBs and AppServers etc. I will need a pivot machine that connects to both the outer network and the 192 (for automation purposes). But I'd like to be able to connect into the 192 network with my own machine from the outer network as the "attacking" machine (rather than have dedicated attack machines inside the 192 network). Therefore, I'd like to have the pivot server be a VPN server as well, so that my machine can VPN into the 192 network from the outer network. First off, is this even possible? Can I have a single computer with two NICs where a VPN service allows remote connections into the 192? Secondly, I'd like to have multiple outer clients connect to the VPN. Does anyone have any suggestions? I've used Hamachi well before, but I've also seen some good stuff from OpenVPN.

    Read the article

  • NGINX returning 404 error on a valid url

    - by Harrison
    We have a site that runs PHP-FPM and NGINX. The application sends invitations to site members that are keyed with 40 character random strings (alphanumerics only -- example below). Today for the first time we ran into an issue with this approach. The following url: http://oursite.com/notices/response/approve/1960/OzH0pedV3rJhefFlMezDuoOQSomlUVdhJUliAhjS is returning a 404 error. This url format has been working for 6 months now without an issue, and other urls following this exact format continue to resolve properly. We have a very basic config with a simple redirect to a front controller, and everything else has been running fine for a while now. Also, if we change the last character from an "S" to anything other than a lower-case "s", no 404 error and the site handles the request properly, so I'm wondering if there's some security module that might see something wrong with this specific string... Not sure if that makes any sense. We are not sure where to look to find out what specifically is causing the issue, so any direction would be greatly appreciated. Thanks! Update: Adding a slash to the end of the url allowed it to be handled properly... Would still like to get to the bottom of the issue though. Solved: The problem was caused by part of my configuration... Realized I should have posted, but was headed out of town and didn't have a chance. Any url that ended in say "css" or "js" and not necessarily preceded by a dot (so, for example, http://site.com/response/somerandomestringcss ) was interpreted as a request for a file and the request was not routed through the front controller. The problem was my regex for disabling logging and setting expiration headers on jpgs, gifs, icos, etc. I replaced this: location ~* ^.+(jpg|jpeg|gif|css|png|js|ico)$ { with this: location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ { And now urls ending in css, js, png, etc, are properly routed through the front controller. Hopefully that helps someone else out.

    Read the article

  • windows: force user to use specific network adapter

    - by Chad
    I'm looking for a configuration/hack to force a particular application, or all traffic from a particular user, to use a specific NIC. I have a legacy client/server app that has a "security feature" that limits connections based on IP address. I'm trying to find a way to migrate this app to a terminal server environment. The simple solution is for the development team to update the code in the application; however, in this case that's not an option. I was thinking I might be able to install a VMware NIC for each user on the terminal server and do some type of scripting to force that user account to use a specific NIC. Anybody have any ideas on this? EDIT 1: I think I have a hack to work around my specific problem; however, I'd love to hear of a more elegant solution. I got lucky in that the software reads the server IP address out of a config file. So I'm going to have to make a config file for each user and a custom copy of the program files for each user. Then add a VMware NIC for each user and make each server IP address reside on a different subnet. That will force the traffic for a particular user to a particular IP address; however, it's really messy and all the VM NICs will slow down the terminal server. I'll set up a proof of concept Monday and let the group know how it affects performance.

    Read the article

  • Managed LAMP platform for maximizing availability and global reach, not scalability

    - by user66819
    Assume a Linux/Apache/MySQL/PHP application for a small base of registered users. With a small userbase, there are no traffic peaks, so the scalability that cloud platforms offer is not imperative. But the system is mission-critical, so availability is the primary goal. Users are also distributed across Asia, Europe, and the US, so multiple server locations that minimize users' network hops would be highly desirable. The dream: a managed VPS platform where we would configure a single server (uploading PHP and other files, manipulating the database, etc.), and the platform would automatically mirror the server in a handful of key places around the world (say one on each US coast, one in Europe, one in East Asia). File system synchronization and MySQL replication would happen automatically. The core operating system is managed, so we don't need to do full system administration and security, and low-level backups are handled by the service provider, though we do our own backups as well. Couple this with some sort of DNS geo-detection, so users are routed to the nearest operational server... with support for HTTPS, of course. Does such a dream exist? If not, what are some approaches to accomplish the same end with minimal time investment and minimal monthly hosting costs?

    Read the article
