Search Results

Search found 21090 results on 844 pages for 'webservices client'.

  • nginx & php-fpm and custom header

    - by nixer
    I would like to pass a custom header (ACCESS_TOKEN) from a client RESTful application (JS) to the application server (php-fpm). I had read that nginx should pass all HTTP headers through to PHP, but somehow this one never reaches my PHP :( I can see it in Firebug (http://o7.no/N6DM7q), but it simply does not exist in the $_SERVER array. I'm thinking that I need to pass it manually. My config currently looks like this:

        location @php-fpm {
            include /etc/nginx/fastcgi_params;
            fastcgi_pass unix:/tmp/php5-fpm.sock;
            fastcgi_param REQUEST_URI /index.php$request_uri;
            fastcgi_param SCRIPT_FILENAME /htdocs/index.php;
            fastcgi_param PATH_INFO $fastcgi_script_name;
            fastcgi_param DOCUMENT_ROOT /htdocs;
        }

    When I add a new line to the location definition:

        location @php-fpm {
            include /etc/nginx/fastcgi_params;
            ...
            fastcgi_param ACCESS_TOKEN $http_access_token;
        }

    or even add it to the fastcgi_params file, it does not help :( With that fastcgi_param line in place, the value in PHP is empty :( How can I pass a custom header from the client to the backend (PHP) via nginx?
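
    One plausible culprit (my assumption; the poster's full server block isn't shown): nginx silently drops client header fields whose names contain underscores unless underscores_in_headers is enabled, so ACCESS_TOKEN would vanish before any fastcgi_param mapping runs. A minimal sketch of that fix:

        server {
            # Allow header names such as ACCESS_TOKEN; without this,
            # nginx discards fields containing underscores.
            underscores_in_headers on;

            location @php-fpm {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass unix:/tmp/php5-fpm.sock;
                # Shows up in PHP as $_SERVER['HTTP_ACCESS_TOKEN']
                fastcgi_param HTTP_ACCESS_TOKEN $http_access_token;
            }
        }

    Alternatively, sending the header as ACCESS-TOKEN (dash instead of underscore) sidesteps the restriction entirely, since nginx maps it to the same $http_access_token variable.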

  • 500 Internal Server Error when setting up Apache on localhost

    - by Martin Hoe
    I downloaded and installed XAMPP, and to keep my projects nicely separated I want to create a VirtualHost for each one based on its future domain name. For example, for my first project (we'll say it's project.com) I've put this in my Apache configuration:

        NameVirtualHost 127.0.0.1
        <VirtualHost 127.0.0.1:80>
            DocumentRoot C:/xampp/htdocs/
            ServerName localhost
            ServerAdmin admin@localhost
        </VirtualHost>
        <VirtualHost 127.0.0.1:80>
            DocumentRoot C:/xampp/htdocs/sub/
            ServerName sub.project.com
            ServerAdmin [email protected]
        </VirtualHost>
        <VirtualHost 127.0.0.1:80>
            DocumentRoot C:/xampp/htdocs/project/
            ServerName project.com
            ServerAdmin [email protected]
        </VirtualHost>

    And this in my hosts file:

        # development
        127.0.0.1 localhost
        127.0.0.1 project.org
        127.0.0.1 sub.project.org

    When I go to project.com in my browser, the project loads up successfully. Same if I go to sub.project.com. But if I navigate to http://project.com/register (one of my site pages), I get this error:

        Internal Server Error
        The server encountered an internal error or misconfiguration and was
        unable to complete your request.

    The error log shows this (twice):

        [Sun May 20 02:05:54 2012] [error] [client 127.0.0.1] Request exceeded
        the limit of 10 internal redirects due to probable configuration error.
        Use 'LimitInternalRecursion' to increase the limit if necessary. Use
        'LogLevel debug' to get a backtrace., referer: http://project.com/

    Any idea what config items I got wrong, or how to get this working? It happens on any page that's not in the root directory of project.com. Thanks.
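
    The "limit of 10 internal redirects" message usually points at a rewrite rule that keeps matching its own output, most often a front-controller rule in the project's .htaccess that lacks file/directory guards. The poster's .htaccess isn't shown, so this is a sketch of the standard guarded pattern rather than a confirmed fix:

        RewriteEngine On
        RewriteBase /
        # Stop rewriting once the request maps to a real file or directory;
        # otherwise the rule below re-matches its own output forever.
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php/$1 [L]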

  • Use subpath internal proxy for subdomains, but redirect external clients if they ask for that subpath?

    - by HostileFork
    I have a VirtualHost that I'd like to have several subdomains on. (For the sake of clarity, let's say my domain is example.com and I'm just trying to get started by making foo.example.com work, and build from there.) The simplest way I found for a subdomain to work non-invasively with the framework I have was to proxy to a sub-path via mod_rewrite. Thus paths would appear in the client's URL bar as http://foo.example.com/(whatever), while under the hood they'd actually be served from http://foo.example.com/foo/(whatever). I've managed to do that inside my VirtualHost config file like this:

        ServerAlias *.example.com
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^foo\.example\.com [NC]  # <---
        RewriteCond %{REQUEST_URI} !^/foo/.*$ [NC]        # AND is implicit with above
        RewriteRule ^/(.*)$ /foo/$1 [PT]

    (Note: it was surprisingly hard to find that particular working combination. Specifically, the [PT] seemed to be necessary on the RewriteRule. I could not get it to work with examples I saw elsewhere using [L] or just [P]; it would either show nothing or get into loops. Also, some browsers seemed to cache the response pages for the bad loops once they got one, so a page reload after fixing it wouldn't show it was working! Feedback welcome, in any case, if this part can be done better.)

    Now I'd like to make what http://foo.example.com/foo/(whatever) provides depend on who asked. If the request came from outside, I'd like the client to be permanently redirected by Apache so they get the URL http://foo.example.com/(whatever) in their browser. If it came internally from the mod_rewrite, I want the request to be handled by the web framework, which is unaware of subdomains. Is something like that possible?
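
    One way to tell the two cases apart (a sketch, untested against this setup): %{THE_REQUEST} holds the request line exactly as the client sent it, so it still contains /foo/ only when the client itself asked for the sub-path; an internal rewrite does not alter it. That allows an external-redirect rule to sit ahead of the internal proxy rule:

        RewriteEngine on
        # If the *client* literally requested /foo/..., bounce it back to
        # the clean URL; internal rewrites never change THE_REQUEST.
        RewriteCond %{HTTP_HOST} ^foo\.example\.com [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]+\ /foo/ [NC]
        RewriteRule ^/foo/(.*)$ http://foo.example.com/$1 [R=301,L]
        # Otherwise map the clean URL onto the sub-path as before.
        RewriteCond %{HTTP_HOST} ^foo\.example\.com [NC]
        RewriteCond %{REQUEST_URI} !^/foo/.*$ [NC]
        RewriteRule ^/(.*)$ /foo/$1 [PT]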

  • Simple way to set up port knocking on Linux?

    - by Ace Paus
    There are well-known benefits to port-knocking utilities when used in combination with firewall iptables rules. Port knocking is best used to provide an additional layer of security over other tools such as the OpenSSH server. I would like some help setting it up on an Ubuntu server. I looked at the port-knocking implementations listed at http://www.portknocking.org/view/implementations ("Port Knocking: a system for stealthy authentication across closed ports"). fwknop looked good: I found an Android client for it, and fwknop (both client and server) is in the Ubuntu repos. Unfortunately, setting it up (on the server) looks difficult. I do not have iptables set up, and my proficiency with iptables is limited (though I understand the basics). I'm looking for a series of simple steps to set it up. I only want to open the SSH port in response to a valid knock. Alternatively, I would consider other port-knocking implementations if they are much simpler to set up and the desired Linux and Android clients are available.
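
    For what it's worth, the server side of fwknop is smaller than it first appears: a default-drop rule for SSH plus one stanza in access.conf. The following is a sketch only (the key and timeout are illustrative; check access.conf(5) on the packaged version for the exact stanza syntax):

        # iptables: drop SSH by default; fwknopd inserts a temporary ACCEPT
        # rule for the knocking source after a valid SPA packet arrives.
        iptables -A INPUT -p tcp --dport 22 -j DROP

        # /etc/fwknop/access.conf
        SOURCE              ANY
        OPEN_PORTS          tcp/22
        KEY                 changeme-use-a-real-key
        FW_ACCESS_TIMEOUT   30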

  • Network connection keeps dropping - bad hardware?

    - by Bill Sambrone
    Hello all, I've run into a bit of a wall with a client of mine. In an office of 20 people, he is the only one who experiences broken connections to his mapped network drives. I have everyone set up with about 6 mapped drives, all pointing to the same server (no DFS), and everyone else can access them lightning fast. The environment consists of a mix of Windows 7 and XP machines, all 32-bit. The server holding the data everyone is mapping to is running Server 2008 R2 and is a domain controller. We recently swapped out their old 10/100 switch for a shiny new Dell PowerConnect gigabit switch, and replaced an old dying SonicWall with a shiny new one. Everything is running on an ESX host except for the DC, where everyone is getting data from. In my client's office, we have done the following:

        - Swapped out his computer (Win7 and XP box)
        - Swapped out the desktop switch in his office
        - Removed the desktop switch in his office
        - Changed out the network cable going to the wall
        - Ran 'net config server /autodisconnect:-1' on the server
        - Disabled remote differential compression on his current Win7 box

    When we swapped out his network cable, everything seemed fine for about 4 days. Normally I would get a phone call a couple of times per day letting me know that Outlook has crashed (there is a 9GB PST living on the server he is always connected to), or that the software he runs from his L: drive has crashed. I almost thought I had this solved, but after we rebooted the DC the other night he suddenly couldn't stay connected to his mapped network drives for more than 10 minutes. When I ran 'net use' from the command prompt, it listed all the network drives, which were randomly in a state of 'OK', 'Disconnected', or 'Reconnecting'. What else should I try? Maybe there is bad wiring in the wall, the patch panel, or a bad port in the new switch I have in the server room?

  • VLAN Tagging Traffic on Cisco Switch

    - by David W
    I have a situation where I'm setting up multiple VLANs on a pfSense firewall, on the same physical interface, for a client. So in pfSense I now have VLAN 100 (employees) and VLAN 200 (students, for the student computer lab). Downstream from pfSense I have a Cisco SG200 switch, and coming off of the SG200 is the student lab, running on a Catalyst 2950 (yes, that's old, but it works, and this is a poor nonprofit we're talking about). What I'd like to do is tag everything on the network as VLAN 100, except for the student computer lab. Earlier today, when I was on-site with the client, I went into the old Catalyst 2950 and assigned all of its ports to access VLAN 200 (switchport access vlan 200) without setting up a trunk on the Catalyst or on the SG200. Looking back on it, I now understand why internet in the lab broke. I reverted the lab back to the default VLAN 1 (we're still running on a different firewall, we haven't deployed pfSense yet, and the traffic is still separated physically). So my question is: what do I need to do in order to properly deploy this scenario? I believe the correct answer is:

        1. Ensure VLANs 100 and 200 are set up in pfSense, and that DHCP is operating correctly (on separate subnets).
        2. Set up a trunk port that allows both VLAN 100 and 200 traffic, and plug that port directly into pfSense.
        3. Set up a VLAN 200 trunk port on the SG200 (it's not running IOS, but if it were, the command would be switchport trunk native vlan 200), which will then plug into the Catalyst 2950.
        4. Set up a VLAN 200 trunk port on the Catalyst 2950 (plugged into the SG200 VLAN 200 port, with the same command: switchport trunk native vlan 200).
        5. Set up the rest of the ports on the old Catalyst 2950 in the lab as access ports on VLAN 200.

    Is there anything I'm missing, or do I need to tweak any of these steps, in order to properly segment the network traffic?
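
    For reference, the 2950 side of that plan would look roughly like the sketch below (port numbers are illustrative, and the 2950 only speaks 802.1Q):

        ! Catalyst 2950: uplink to the SG200
        vlan 200
        interface FastEthernet0/1
         switchport mode trunk
         switchport trunk native vlan 200
        ! Lab ports as access ports in VLAN 200
        interface range FastEthernet0/2 - 24
         switchport mode access
         switchport access vlan 200

    Since everything behind the 2950 lives in VLAN 200 anyway, a simpler variant is to make the SG200-to-2950 link an access port in VLAN 200 and skip trunking on the 2950 entirely; the tagging then only has to be right between pfSense and the SG200.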

  • mod_fcgid produces random 500 errors

    - by DmitrySemenov
    PHP 5.4.7 via mod_fcgid. When I run the site, sometimes it works and sometimes it crashes with a 500 Internal Error. This is what I see in error.log every time I run the script:

        [Mon Sep 24 18:50:43 2012] [warn] [client 68.231.194.198] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Mon Sep 24 18:50:43 2012] [error] [client 68.231.194.198] Premature end of script headers: api.php

    Any ideas? vhost config:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot "/home/www/sites/test.com/html/development"
            ServerName test.com
            ServerAlias www.test.com
            ErrorLog "/home/www/sites/test.com/logs/error_log"
            CustomLog "/home/www/sites/test.com/logs/access_log" common
            <IfModule mod_fcgid.c>
                <Directory /home/www/sites/test.com/html/development>
                    Options +ExecCGI
                    AllowOverride All
                    AddHandler fcgid-script .php
                    FCGIWrapper /home/www/php-fcgi-scripts/php-fcgi-starter .php
                    Order allow,deny
                    Allow from all
                </Directory>
                FcgidMaxRequestLen 1073741824
            </IfModule>
        </VirtualHost>

    fcgid.conf:

        LoadModule fcgid_module modules/mod_fcgid.so
        # Use FastCGI to process .fcg .fcgi & .fpl scripts
        AddHandler fcgid-script fcg fcgi fpl
        # Sane place to put sockets and shared memory file
        FcgidIPCDir /var/run/mod_fcgid
        FcgidProcessTableFile /var/run/mod_fcgid/fcgid_shm
        IdleTimeout 300
        BusyTimeout 300
        ProcessLifeTime 7200
        IPCConnectTimeout 300
        IPCCommTimeout 7200
        PHP_Fix_Pathinfo_Enable 1

    php-fcgi-starter:

        #!/bin/sh
        PHP_CGI=/usr/local/php547/bin/php-cgi
        PHP_INI=/etc/php547-fastcgi.ini
        export PHP_FCGI_TIMEOUT=1200
        #export PHP_FCGI_CHILDREN=6
        export PHP_FCGI_MAX_REQUESTS=1000
        exec $PHP_CGI -c $PHP_INI
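
    One pattern worth checking (an educated guess from the "Connection reset by peer" line, not a confirmed diagnosis): PHP_FCGI_MAX_REQUESTS=1000 makes each php-cgi process exit after 1000 requests, and if mod_fcgid doesn't know about that limit it can hand a request to a worker that is already shutting down, which produces exactly this kind of intermittent 500. The usual guard is to cap mod_fcgid at the same number in fcgid.conf:

        # Keep mod_fcgid's per-process request limit in sync with
        # PHP_FCGI_MAX_REQUESTS so a worker is retired by mod_fcgid
        # before php-cgi exits out from under it.
        FcgidMaxRequestsPerProcess 1000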

  • VLAN on DD-WRT v24: file sharing without internet

    - by user148888
    I'm planning to set up a VLAN with file sharing but no internet access. The VLAN works correctly up to the point of blocking internet access, but I cannot share files: I can ping from 172.168.1.x to 172.168.2.x, but not the other way around. Can you help me, please? This is my config:

        WRT54G, DD-WRT v24
        vlan0: ports 1,2,3 - 172.168.1.1/24
        vlan1: WAN
        vlan2: port 4 - 172.168.2.1/24 (no internet wanted here, just LAN connectivity)

        My PC: 172.168.1.10/24, gateway 172.168.1.1 (with internet)
        Ubiquiti Nanostation Loco M2 (AP): 172.168.2.20/24, connected to port 4 / vlan2 (no internet wanted)
        Ubiquiti Nanostation Loco M2 (client): 172.168.2.21/24 (no internet wanted)
        Friend's PC: 172.168.2.115/24, connected via the client (no internet wanted)

    (The post attached screenshots: a LAN map, three VLAN config pages, and the command used to block internet access; the image links are dead.)
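
    A guess at the missing piece (assuming the internet block was done with a FORWARD rule, as the attached command screenshot suggests): one-way ping usually means new connections are allowed from the first subnet into the second but not back. Rules along these lines in the DD-WRT firewall script would let the two subnets talk to each other while still denying vlan2 a path to the WAN:

        # Allow the two LANs to reach each other in both directions
        iptables -I FORWARD -s 172.168.1.0/24 -d 172.168.2.0/24 -j ACCEPT
        iptables -I FORWARD -s 172.168.2.0/24 -d 172.168.1.0/24 -j ACCEPT
        # ...but drop anything from vlan2 heading out the WAN interface
        iptables -I FORWARD -i vlan2 -o vlan1 -j DROP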

  • CentOS 5 VPN Server won't work

    - by Miro Markarian
    I have a CentOS 5 server configured as both an L2TP server and a PPTP server, plus a RADIUS server hosting the AAA. My problem: L2TP works great and I can connect to it, but I can't connect over PPTP; every time it fails with error 619 when it gets to the verifying-username-and-password stage. Here is the log I got from /var/log/messages:

        Dec 17 07:40:02 serverdl pptpd[8570]: CTRL: Client 5.52.247.62 control connection started
        Dec 17 07:40:03 serverdl pptpd[8570]: CTRL: Starting call (launching pppd, opening GRE)
        Dec 17 07:40:03 serverdl pppd[8571]: Plugin radius.so loaded.
        Dec 17 07:40:03 serverdl pppd[8571]: RADIUS plugin initialized.
        Dec 17 07:40:03 serverdl pppd[8571]: Plugin radattr.so loaded.
        Dec 17 07:40:03 serverdl pppd[8571]: RADATTR plugin initialized.
        Dec 17 07:40:03 serverdl pppd[8571]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded.
        Dec 17 07:40:03 serverdl pppd[8571]: pptpd-logwtmp: $Version$
        Dec 17 07:40:03 serverdl pppd[8571]: pppd 2.4.4 started by root, uid 0
        Dec 17 07:40:03 serverdl pppd[8571]: Using interface ppp0
        Dec 17 07:40:03 serverdl pppd[8571]: Connect: ppp0 <--> /dev/pts/2
        Dec 17 07:40:03 serverdl pptpd[8570]: GRE: read(fd=7,buffer=80515e0,len=8260) from network failed: status = -1 error = Protocol not available
        Dec 17 07:40:03 serverdl pptpd[8570]: CTRL: GRE read or PTY write failed (gre,pty)=(7,6)
        Dec 17 07:40:03 serverdl pppd[8571]: Modem hangup
        Dec 17 07:40:03 serverdl pppd[8571]: Connection terminated.
        Dec 17 07:40:03 serverdl pppd[8571]: Exit.
        Dec 17 07:40:03 serverdl pptpd[8570]: CTRL: Client 5.52.247.62 control connection finished

    Just yesterday, before I had set up L2TP, PPTP was working great. Then I uninstalled it, removed all its config from /etc/*, installed L2TP first and PPTP after it, and PPTP stopped working. I believe it must be a radiusclient issue, because both the PPTP and L2TP services use RADIUS to authenticate. Another thing I think might be the issue: when assigning IPs to the PPP interfaces I used the following config. Is that right?

        # For L2TP:
        localip 10.10.10.1
        remoteip 10.10.10.2-254
        # For PPTP:
        localip 10.10.9.1
        remoteip 10.10.9.2-254
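
    One thing worth checking (a hypothesis based on the "GRE: read ... error = Protocol not available" line; the poster's firewall isn't shown): error 619 at that stage is usually the GRE data channel being lost, not RADIUS. PPTP needs TCP/1723 plus GRE (IP protocol 47), and on a conntrack/NAT host the PPTP helper modules must be loaded. A sketch of the usual checklist:

        # Load the PPTP connection-tracking helpers
        # (named ip_conntrack_pptp / ip_nat_pptp on CentOS 5 kernels)
        modprobe ip_conntrack_pptp
        modprobe ip_nat_pptp
        # Permit the control channel and the GRE data channel
        iptables -A INPUT -p tcp --dport 1723 -j ACCEPT
        iptables -A INPUT -p gre -j ACCEPT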

  • Remote mouse pointer not visible in VNC

    - by aef
    I have used VNC desktops as a kind of collaboration server, as a shared planning and pair-programming environment, for a long time. My latest iteration uses a KVM guest running Fedora 17 "Beefy Miracle", the Cinnamon desktop environment, and an x11vnc server. The x11vnc server is automatically started with the desktop environment using the following command:

        x11vnc -localhost -many -shared -display :0 -bg

    My problem is that, depending on the VNC client, the mouse pointer of the remote system shown through VNC is not synchronized to my client. I really need this, so I can see what my partner is doing on the desktop. When using Vinagre 3.2.1 on Ubuntu Oneiric Ocelot (11.10) or Vinagre 2.3.0.3 on Debian Squeeze (6.0), if my local mouse pointer is not inside the VNC view, I cannot see the mouse pointer of the remote system, nor its movement. When using TightVNC on Windows 7, I can see a brief mouse pointer trace for a very short time after moving the mouse, but it is not clearly visible. Using UltraVNC on Windows 7, the mouse pointer is clearly visible all the time. With Gnome 2 I never had any problems with remote pointer synchronization, using exactly the same clients. I suspect this could have something to do with Cinnamon's dependency on 3D acceleration; on the other hand, starting Cinnamon's fallback environment, Cinnamon 2D, doesn't change anything. Update: same effect when I use Gnome 3.
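
    One knob that may be worth trying (a suggestion, not a confirmed fix): x11vnc can paint the pointer directly into the exported framebuffer instead of relying on each viewer's cursor-shape support, which is precisely what differs between the clients listed above. A sketch of the adjusted invocation:

        # Draw the cursor into the framebuffer itself so every viewer sees
        # it, even ones without cursor-shape/XFIXES support.
        x11vnc -localhost -many -shared -display :0 -bg \
               -cursor most -nocursorshape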

  • Server doesn't produce SYN-ACK

    - by steve
    I have a small program that takes packets from the nfqueue, changes ip.dst to my server's address (and the TTL), recalculates the checksums, and returns the packet to the nfqueue. The server and the client are both Linux, and an Apache web server runs on the server, listening on port 80. I open telnet on the client to a fake IP on port 80. The packet is rewritten by my program and sent to the server, but the target server (the new dst IP) receives the SYN and does not generate a SYN-ACK. (The server also belongs to me, so I can see that it gets the SYN with a correct checksum but doesn't answer.) If I do the same with the real server IP as the destination, the TCP handshake completes correctly; in that case I only change the TTL and checksum, and the TTL change is just a test to confirm my checksum calculation is OK. I compared the SYNs but didn't find any difference. Any ideas? P.S. I saw this topic: "Server not sending a SYN/ACK packet in response to a SYN packet". I set all the flags the same, but it didn't help. Thank you.
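
    One server-side setting worth ruling out (an assumption on my part; nothing in the post confirms it): Linux reverse-path filtering silently drops a SYN whose source address doesn't fit the server's routing table, which matches "checksum correct, no SYN-ACK, no error" exactly. A quick way to observe and test:

        # Log packets failing the reverse-path test, then relax the filter
        sysctl -w net.ipv4.conf.all.log_martians=1
        sysctl -w net.ipv4.conf.all.rp_filter=0
        sysctl -w net.ipv4.conf.eth0.rp_filter=0   # adjust to the real interface
        # Re-run the test and watch the kernel log for "martian source" lines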

  • How could one archive all emails sent from employees?

    - by Schnapple
    My client runs a small business with a small number of employees. They are currently hosted through GoDaddy for web and email. For legal reasons, the client would like to archive emails sent by their employees. Currently all email is done over POP3, so the mail is basically housed in files on individual machines (remember, small business). It's been proposed that an inexpensive solution would be to have all emails BCC'd to a main account, so that conversations with the outside world could be archived and tracked. I have not investigated it personally, but apparently GoDaddy can do something along these lines for all incoming email, though not for outgoing email. Is there a way to set up email accounts for a particular domain so that a specified admin user is copied on all outgoing email? UPDATE: I've modified the title to reflect employees, not users. The goal is to archive sent emails for legal reasons; this is something the employees will be cognizant of and on board with. The bottom line is to emulate a feature of a larger-class platform on a smaller, cheaper platform. If the answer is "can't do it, buy an Exchange license", that's fine. My apologies for phrasing this so poorly; I understand why there was so much confusion.
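
    For reference, on a mail server you control this is a one-line feature; the catch is that GoDaddy's shared POP3 hosting doesn't expose it. If outgoing mail were routed through a self-hosted or VPS Postfix relay (an assumption, not the current setup; the addresses below are illustrative), the archive copy would look like this:

        # /etc/postfix/main.cf
        # BCC every message the relay handles to an archive mailbox:
        always_bcc = archive@example.com
        # Or archive only mail from your own senders:
        sender_bcc_maps = hash:/etc/postfix/sender_bcc

        # /etc/postfix/sender_bcc
        @example.com    archive@example.com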

  • Failure to connect to admin share pops up dialog

    - by Jan
    I'm having an issue with a curious error message when accessing the administrative share on a remote machine. Specifically, the client is logged in as the domain administrator on machine A and runs some code that tries to access the admin share on B (a domain member). The access is done in .NET, along these lines (though I am not sure whether the method of access makes a difference):

        string path = @"\\B\admin$";
        if (Directory.Exists(path))
        {
            try
            {
                path += @"\temp\";
                if (!Directory.Exists(path))
                {
                    Directory.CreateDirectory(path);
                }
                path += "myfile_remote";
                File.Copy("myfile", path);
            }
            catch (IOException)
            {
                // fall back to another transfer method
            }
        }

    Now, on some machines this fails. That is not a big problem, as we have a fallback. I'd like to know why, but it is not the real issue. The problem is that running this piece of code causes a dialog box to pop up for the logged-in user on B, saying "network error trying to access \\B\admin$\temp\myfile_remote. Contact the network administrator and ask for the correct permissions". Unfortunately, it is a foreign-language Windows, so I'll spare you all a screenshot; it is skinned like a standard Windows dialog box. Why exactly is that dialog box popping up for the user, and is there anything I can do about it? Edit to add: B is a Windows 7 Enterprise installation. The client is not aware of any GPO policies being installed. There is AV from Trend Micro installed.

  • Queries passed to SQL Server are getting corrupted

    - by adrianbanks
    We are experiencing a bizarre error with our application at a customer site. We have managed to narrow it down to the point where we can replicate the behaviour using just Management Studio and SQL Server. We have two machines, A and B:

        +------------+                  +--------------------+
        |    [A]     |                  |        [B]         |
        | Management | ---------------- | SQL Server 2008 R2 |
        |   Studio   |                  |   Enterprise x64   |
        +------------+                  +--------------------+

    We are running a SQL script in Management Studio on machine A against the SQL Server instance on machine B. We are not actually executing the script, just parsing it. Most of the time, the parse operation works fine. Occasionally (seemingly randomly), the parse operation fails with a syntax error. The error message shows the part of the script with the error, which appears as some SQL from the original script that has been truncated and has random characters appended to it. An example. The original SQL:

        SELECT DISTINCT ST.TABLE_NAME as TableName
        FROM INFORMATION_SCHEMA.TABLES AS ST
        INNER JOIN INFORMATION_SCHEMA.COLUMNS AS SC
            ON SC.TABLE_NAME = ST.TABLE_NAME
        WHERE ST.TABLE_TYPE = 'BASE TABLE'
            AND SC.COLUMN_NAME = 'Identity'
            AND ST.TABLE_NAME != 'dtproperties'
        ORDER BY ST.TABLE_NAME

    The SQL that is in error (as reported by SQL Server):

        SELECT DISTINCT ST.TABLE_NAME as TableName
        FROM INFORMATION_SCHEMA.TABLES AS ST
        INNER JOIN INFORMATION_SCHEMA.COLUMNS AS SC
            ON SC.TABLE_NAME = Sa?

    The above example shows how the query is being corrupted. It doesn't always happen, and it is not always the same bit of SQL that causes the error. Parsing this script against another SQL Server instance produces no errors, showing that the script is fine. It appears that something is corrupting the SQL that is being received by the server. This leads me to think that the problem lies either at the client end or in the transmission of the SQL from the client to the server. I have a SQL trace from the period where an error occurs, which shows the SQL has already been corrupted when SQL Server receives it. We have been unable to track down any possible cause of this behaviour, and so cannot find a fix. Because the errors occur seemingly randomly, it is also very hard to generate reproduction steps to submit a bug report. Any ideas?

  • How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment [closed]

    - by lmttag
    Possible Duplicate: How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment

    We have an ASP.NET Web API server that serves up a SQL Server data-driven website. The API uses JSON to transfer data from SQL Server to the front end. We need to move it to an internal production environment (nothing will be exposed on the public Internet) and we're having problems, or are just not understanding what needs to be done. There are two domains: the corporate domain, where all users log in normally, and the process domain, which contains the database the Web API needs to access. The IT staff wants to put a DMZ between the two domains to house the IIS app and shield the users on the corporate domain from having direct access into the process domain. The ideal configuration is:

        corp domain (end users) <-> firewall (open port 80) <-> DMZ (web server running IIS) <-> firewall (open port 80 or 1433?) <-> process domain (IIS for Web API and SQL Server)

    We don't really understand how to deploy our browser/Web API application in this scenario:

        - Do we need to break up our application so that all the client code is on the IIS server in the DMZ, while the Web API gets installed on the server in the process domain?
        - Does the entire app (client code and Web API) stay together on the IIS server in the DMZ, which then somehow accesses the SQL Server instance to get data?
        - From the IIS server and app in the DMZ, would you simply access the Web API on the server in the process domain by going to http://server/appname/api/getitems?
        - In the second firewall, between the DMZ and the process domain, would you have to open port 1433, or just port 80, since the Web API is an HTTP endpoint?
        - Or is there some better way to deploy this (i.e., how are ASP.NET Web API single-page applications, written entirely in HTML5 and JavaScript, supposed to be deployed to production environments)?

    NB: The servers are Win2k8 R2, SQL Server 2k8 R2, and IIS 7.5.

  • How to stop NAT dropping idle connections?

    - by WGH
    I have a TCP connection that can be idle for many hours. The traffic flows from the server to the client only; one might say it's a kind of push notification. My home router, however, tends to drop the connection silently after 20 minutes (the value of /proc/sys/net/netfilter/nf_conntrack_tcp_timeout_established). The server detects the loss once it tries to send anything (I assume it receives an RST from the router itself). As the client never sends anything, it never detects the loss. RFC 5382, "NAT Behavioral Requirements for TCP", states the following:

        A NAT can check if an endpoint for a session has crashed by sending a
        TCP keep-alive packet and receiving a TCP RST packet in response.

    It makes sense. It's much more effective than sending keep-alives from the host itself (as only the NAT knows its own timeout), and probably not hard to implement. Are there any NAT solutions implementing this? It would be great if there were a way to enable this in iptables.
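
    Short of a NAT that probes on its own, the usual workaround is for the server end to emit TCP keep-alives more often than the NAT's 20-minute idle cutoff (a sketch, assuming a Linux server and sockets opened with SO_KEEPALIVE):

        # First probe after 10 idle minutes (default 7200s), then every 60s
        sysctl -w net.ipv4.tcp_keepalive_time=600
        sysctl -w net.ipv4.tcp_keepalive_intvl=60
        sysctl -w net.ipv4.tcp_keepalive_probes=5
        # Note: these apply only to sockets that set SO_KEEPALIVE.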

  • What is the fastest way to clone an INNODB table within the same server?

    - by Vic
    Our development server is a replication slave of our production server. We have a script that developers use when they want to run their applications/bug fixes against fresh data. That script looks like this:

        dbs=( analytics auth logs users )
        server=localhost
        conn="-h ${server} -u ${username} --password=${password}"

        # Stop the replication client so we don't encounter weird data.
        echo "STOP SLAVE" | mysql ${conn}

        # Bunch of bulk insert optimizations
        echo "SET autocommit=0" | mysql ${conn}
        echo "SET unique_checks=0" | mysql ${conn}
        echo "SET foreign_key_checks=0" | mysql ${conn}

        # Restore all databases and tables.
        for sourcedb in ${dbs[*]}
        do
            destdb=${prefix}${sourcedb}
            echo "Dropping database ${destdb}..."
            echo "DROP DATABASE IF EXISTS ${destdb}" | mysql ${conn}
            echo "CREATE DATABASE ${destdb}" | mysql ${conn}

            # First, all the tables.
            for table in `echo "SHOW FULL TABLES WHERE Table_type <> 'VIEW'" | mysql $conn $sourcedb | tail -n +2`; do
                if [[ "${table}" != 'BASE' && "${table}" != 'TABLE' && "${table}" != 'VIEW' ]] ; then
                    createTable=`echo "SHOW CREATE TABLE ${table}" | mysql -B -r $conn $sourcedb | tail -n +2 | cut -f 2-`
                    echo "Restoring ${destdb}/${table}..."
                    echo "$createTable ;" | mysql $conn $destdb
                    insertData="INSERT INTO ${destdb}.${table} SELECT * FROM ${sourcedb}.${table}"
                    echo "$insertData" | mysql $conn $destdb
                fi
            done
        done

        echo "SET foreign_key_checks=1" | mysql ${conn}
        echo "SET unique_checks=1" | mysql ${conn}
        echo "COMMIT" | mysql ${conn}

        # Restart the replication client
        echo "START SLAVE" | mysql ${conn}

    All of these operations are, as I mentioned, within the same server. Is there a faster way to clone the tables that I'm not seeing? They're all INNODB tables. Thanks!
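
    One commonly faster route (a sketch; the gain depends on table sizes and disk, and ${table} below stands for each table in the loop): dump to tab-separated files and reload with mysqlimport / LOAD DATA INFILE, which avoids pushing a giant INSERT ... SELECT through the SQL layer row by row:

        # Dump schema + tab-separated data (the directory must be writable
        # by mysqld, since the server writes the .txt files itself)
        mysqldump --tab=/tmp/clone ${conn} ${sourcedb}
        # Recreate each table, then bulk-load its data
        mysql ${conn} ${destdb} < /tmp/clone/${table}.sql
        mysqlimport --use-threads=4 ${conn} ${destdb} /tmp/clone/${table}.txt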

  • Generalized strategy for file server virtualization in Xenserver

    - by Jamie
    I'm not shopping so much as looking for guidance on good-idea/bad-idea strategies; I'm sure I'm not in the "best practices" budget range. Currently I have three Dell PowerEdges running XenServer in a pool. Each node has an Ubuntu file server serving about 6TB: one is the primary, the other two are rsync targets for backup. The 6TB is stored on each node's local storage disks as an LVM of 3x2TB virtual disks, and the fileserver VM disks are also stored on the node-local disks. Each node also runs a smattering of lightweight VMs for web, development, Windows, and the like. Several of those VMs' disks reside on a QNAP NAS to play with live migration. These VMs are often clients of the primary file server (all the mail, web content, and user files are stored on the file server, not on the mail, web, and Samba VMs). This all works fine, and is a major step up for us. The downside is that the QNAP is a single point of failure, and the only thing the QNAP is doing is serving migratable VM images, not client data. Someday the PowerEdge local arrays will be full, and we will have to reinvent ourselves again. Is it wise to have heavyweight VMs (like the fileserver, with its 6+ TB of disks) on a SAN or NAS? Would it be better to keep the VMs lightweight, put the VM images on a SAN or NAS, and use two or more NAS units as NFS-serving file appliances? Or a hybrid SAN/NAS that can serve iSCSI for images and NFS for the client VMs? It seems like "live migration" would be a misnomer if you have to migrate a fileserver with its entire 6+ TB disk. I recognize there are plenty of ways to skin the cat; we've already skinned it a few ways. What makes sense?

  • Install MS Office on Windows Server 2008 - in support of Quickbooks RemoteApp

    - by steampowered
    How can I install MS Office on Windows Server 2008? The purpose is to enable QuickBooks to export to Excel. QuickBooks is set up to run as a RemoteApp in a Terminal Server environment. QuickBooks senses whether Excel is installed and will not allow the user to create an Excel report unless Excel is actually installed on the client running QuickBooks. Since the client and the server are the same machine in a Terminal Server environment, Excel must be installed on Windows Server for the QuickBooks Excel-export feature to work in this setup. There is no need to actually use Excel in the Terminal Services environment; we only need to generate the Excel files on the server, after which we can use an installed copy of Excel on a regular Windows 7 machine to work with them. MS Office does not normally install on Windows Server. Is there any way to buy a special license? Or could we somehow fool QuickBooks into thinking Excel is installed, if that would work?

  • Iptables rules, forward between two interfaces

    - by Marco
    I'm having some difficulty configuring my Ubuntu server firewall. My situation is this:

        eth0 - internet
        eth1 - lan1
        eth2 - lan2

    I want clients in lan1 to be unable to communicate with clients in lan2, except for some specific services. E.g., I want clients in lan1 to be able to ssh into a client in lan2, but only that; any other communication is forbidden. So I added these rules to iptables:

        # Block all traffic between LANs, but permit traffic to the internet
        iptables -I FORWARD -i eth1 -o ! eth0 -j DROP
        iptables -I FORWARD -i eth2 -o ! eth0 -j DROP
        # Accept ssh traffic from lan1 to client 192.168.20.2 in lan2
        iptables -A FORWARD -i eth1 -o eth2 -p tcp --dport 22 -d 192.168.20.2 -j ACCEPT

    This didn't work. Doing iptables -L FORWARD -v I see:

        Chain FORWARD (policy DROP 0 packets, 0 bytes)
         pkts bytes target prot opt in    out   source   destination
           33   144 DROP   all  --  eth1  !eth0 anywhere anywhere
            0     0 DROP   all  --  eth2  !eth0 anywhere anywhere
        23630   20M ACCEPT all  --  any   any   anywhere anywhere    state RELATED,ESTABLISHED
            0     0 ACCEPT all  --  eth1  any   anywhere anywhere
          175  9957 ACCEPT all  --  eth1  any   anywhere anywhere
          107  6420 ACCEPT all  --  eth2  any   anywhere anywhere
            0     0 ACCEPT all  --  pptp+ any   anywhere anywhere
            0     0 ACCEPT all  --  tun+  any   anywhere anywhere
            0     0 ACCEPT tcp  --  eth1  eth2  anywhere server2.lan tcp dpt:ssh

    All packets are dropped, and the packet count for the last rule is 0. How do I have to modify my configuration? Thank you. Regards, Marco
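
    The counters in that listing point at rule order (a reading of the output above, not a tested fix): -I inserts both DROP rules at the top of the chain, while -A appends the ssh ACCEPT to the bottom, so lan1-to-lan2 ssh traffic hits a DROP before it can ever match the ACCEPT. Inserting the exception above the drops, and letting replies back through, would look roughly like this:

        # Exception first: lan1 may ssh to 192.168.20.2 in lan2
        iptables -I FORWARD 1 -i eth1 -o eth2 -p tcp --dport 22 -d 192.168.20.2 -j ACCEPT
        # Let reply packets of allowed connections flow back
        iptables -I FORWARD 2 -m state --state RELATED,ESTABLISHED -j ACCEPT
        # Then the inter-LAN blocks
        iptables -I FORWARD 3 -i eth1 -o eth2 -j DROP
        iptables -I FORWARD 4 -i eth2 -o eth1 -j DROP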

  • "one-off" use of http_proxy in a Chef remote_file resource

    - by user169200
    I have a use case where most of my remote_file resources and yum resources download files directly from an internal server. However, there is a need to download one or two files with remote_file that are outside our firewall and must go through an HTTP proxy. If I set the http_proxy setting in /etc/chef/client.rb, it adversely affects the recipe's ability to download yum and other files from internal resources. Is there a way to have a remote_file resource download a remote URL through a proxy without setting the http_proxy value in /etc/chef/client.rb? In my sample code below, I'm downloading a redmine bundle from rubyforge.org, which requires my servers to go through a corporate proxy. I came up with a ruby_block before and after the remote_file resource that sets the http_proxy and "unsets" it. I'm looking for a cleaner way to do this.

        ruby_block "setenv-http_proxy" do
          block do
            Chef::Config.http_proxy = node['redmine']['http_proxy']
            ENV['http_proxy'] = node['redmine']['http_proxy']
            ENV['HTTP_PROXY'] = node['redmine']['http_proxy']
          end
          action node['redmine']['rubyforge_use_proxy'] ? :create : :nothing
          notifies :create_if_missing, "remote_file[redmine-bundle.zip]", :immediately
        end

        remote_file "redmine-bundle.zip" do
          path "#{Dir.tmpdir}/redmine-#{attrs['version']}-bundle.zip"
          source attrs['download_url']
          mode "0644"
          action :create_if_missing
          notifies :decompress, "zipp[redmine-bundle.zip]", :immediately
          notifies :create, "ruby_block[unsetenv-http_proxy]", :immediately
        end

        ruby_block "unsetenv-http_proxy" do
          block do
            Chef::Config.http_proxy = nil
            ENV['http_proxy'] = nil
            ENV['HTTP_PROXY'] = nil
          end
          action node['redmine']['rubyforge_use_proxy'] ? :create : :nothing
        end
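
    One cleaner angle, assuming the internal hosts share a predictable domain (the names below are hypothetical): leave http_proxy set globally in client.rb and list the internal hosts in no_proxy, which Chef consults when deciding whether to proxy a request. That avoids toggling the run's environment at all:

        # /etc/chef/client.rb
        http_proxy "http://proxy.corp.example:3128"
        no_proxy   "*.internal.example,10.*"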

  • How do I configure VMware View location-based printing to use Active Directory Groups?

    - by Jason Pearce
    I am attempting to configure VMware View 4.5's location-based printing, which leverages an included OEM version of ThinPrint, to assign printers to Active Directory groups. The location-based printing feature maps printers that are physically near client systems to VMware View desktops. I am using the Active Directory group policy setting "AutoConnect Location-based Printing for VMware View", which is located in the Microsoft Group Policy Object Editor in the Software Settings folder under Computer Configuration. The setting appears to be just a name translation table: it permits me to assign a specific printer or printers to an IP range, client name, MAC address, user, or user group. I'm attempting to assign printers to Active Directory user groups. I have created a new Active Directory group for each printer that I intend to use in VMware View desktop pools, and will assign users to the groups that represent each network printer.

    Example: doej is a member of the PTR-FLOOR2-NORTH-ROOM255 Active Directory group. Using AutoConnect, I assigned the group to receive a network printer by adding PTR-FLOOR2-NORTH-ROOM255 in the User/Group column.

    Problem: when doej logs in to his VDI session, the printer is not present. However, if I use a wildcard "*" in the User/Group column instead of the specific PTR-FLOOR2-NORTH-ROOM255 group, the printer is present and functions as designed.

    Alternatives: I have tried assigning printers to Active Directory groups within AutoConnect in the following ways, all unsuccessful:

        PTR-FLOOR2-NORTH-ROOM255
        domainexample\PTR-FLOOR2-NORTH-ROOM255
        domainexample.local\PTR-FLOOR2-NORTH-ROOM255

    Confirmation: the information used to map the printer to the VMware View desktop is stored in a registry entry on the View desktop in HKEY_LOCAL_MACHINE\SOFTWARE\Policies\thinprint\tpautoconnect. For each of these examples, I have reviewed the registry entry and can confirm that the desktop is receiving the information from the AutoConnect translation table.

    Summary: can anyone provide an example of how to configure VMware View 4.5's location-based printing so that I may assign network printers to Active Directory groups via the included AutoConnect tool? I would welcome a clear example of a working configuration. Thank you.

  • mount.nfs: access denied by server while mounting (Kerberos authentication)

    - by Nick
    There are plenty of references to this error on Google, and even a question here with the same title, but it seems that "access denied by server while mounting" is a catch-all error. I've tried suggestions that others have used to fix this problem, but they did not work in my case. I'm trying to set up a Kerberos-based NFS file server with shared homes for a Linux network, using Ubuntu 11.04 for both servers and clients. When trying to mount a share using:

        mount 192.168.1.115:/export/home/ /media/tmp

    I get:

        mount.nfs: access denied by server while mounting 192.168.1.115:/export/home/

    This is the same whether I mount it from a client machine or from the server itself. On the server, in /var/log/syslog I get:

        Aug 25 06:22:37 nfs mountd[1580]: authenticated mount request from 192.168.1.115:835 for /export/home (/export/home)
        Aug 25 06:22:37 nfs mountd[1580]: authenticated unmount request from 192.168.1.115:766 for /export/home (/export/home)

    Which is odd, since it says it has authenticated the request, not denied it. /etc/exports:

        /export *(rw,fsid=0,crossmnt,insecure,async,no_subtree_check,sec=krb5p:krb5i:krb5)
        /export/home *(rw,insecure,async,no_subtree_check,sec=krb5p:krb5i:krb5)

    On the client:

        me@dt1:/$ rpcinfo -p 192.168.1.115
           program vers proto   port
            100000    2   tcp    111  portmapper
            100024    1   udp  37320  status
            100024    1   tcp  48460  status
            100003    2   tcp   2049  nfs
            100003    3   tcp   2049  nfs
            100003    4   tcp   2049  nfs
            100227    2   tcp   2049
            100227    3   tcp   2049
            100003    2   udp   2049  nfs
            100003    3   udp   2049  nfs
            100003    4   udp   2049  nfs
            100227    2   udp   2049
            100227    3   udp   2049
            100021    1   udp  58625  nlockmgr
            100021    3   udp  58625  nlockmgr
            100021    4   udp  58625  nlockmgr
            100021    1   tcp  49616  nlockmgr
            100021    3   tcp  49616  nlockmgr
            100021    4   tcp  49616  nlockmgr
            100005    1   udp  45627  mountd
            100005    1   tcp  60265  mountd
            100005    2   udp  45627  mountd
            100005    2   tcp  60265  mountd
            100005    3   udp  45627  mountd
            100005    3   tcp  60265  mountd

    Any suggestions I could try?
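
    One detail that stands out (a guess from the commands shown): the exports only offer sec=krb5p:krb5i:krb5, yet the mount command doesn't request any Kerberos flavor, so the server refuses the default sys-auth mount. Asking for a matching flavor explicitly would look like this:

        # Request a Kerberos flavor matching the export; with fsid=0 on
        # /export, the NFSv4 pseudo-root means /export/home is mounted
        # as :/home (rpc.gssd must be running on the client).
        mount -t nfs4 -o sec=krb5 192.168.1.115:/home /media/tmp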

  • Opera's problem with magnet links

    - by Dmitriy Matveev
    I've encountered the following problem while using the Opera web browser. When I click a magnet link on some web page, like:

        magnet:?xt=urn:tree:tiger:CXW6MJFRNOEFU2STCBWWOIYZLVCR2FTR37SQCXY&xl=352342016&dn=ER%20-%207x16%20-%20Witch%20Hunt.avi

    Opera asks me whether I want to open that link with my DC++ client. If I click the 'Yes' button, my DC++ client correctly "opens" the clicked magnet link and performs some action on it. There is also a "Do not show this dialog again" option in Opera's dialog, but it doesn't seem to work correctly: if I check that option before answering 'Yes' and then click another magnet link of the same kind, Opera will again ask me how to open the new link. I haven't found a protocol association in 'Control Panel > Default Programs > Set Associations' in my Windows Vista settings, but if I paste a magnet link into the "Run" dialog, Vista handles it perfectly. I tried to find out how to manually set protocol associations in Opera and found the 'Programs' page of the browser's advanced settings. There I discovered that instead of storing protocol-to-application associations, Opera tries to store per-link associations (there are several entries with the exact links as they appeared on the web page as the value of the protocol field). If I click on links that are already stored in Opera's protocol associations, the browser asks me about them again. I haven't found any information on how to resolve this problem on the internet; maybe someone on this site can help me.

  • Multiple servers vs. one big server: performance

    - by pistacchio
    Hi to all! My team of developers has suggested a server structure for an upcoming project we are developing. Our structure is "logical", meaning that the various logical components of the application (it is a distributed one) rely on different servers. Some components are more critical than others and will be subjected to more load. Our proposal was to have one server per component, but the hardware guys suggested replacing the various machines with a single, bigger one running virtual servers; they're going to use blade servers. Now, I'm not an expert at all, but my question to them was: if we need, for example, three 2GHz-CPU / 2GB-RAM machines and you give me one machine with three 2GHz CPUs and 6GB of RAM, is it the same? They told me it is. Is this accurate? What are the advantages and disadvantages of each solution? What are the generally accepted best practices? Could you point out some URL references dealing with the problem? Thank you in advance! EDIT: some more info. The (internet/intranet) application is already layered: we have some servers in the DMZ that expose pages to the internet, and the databases are on their own machines. What we want to split (and they want to consolidate) are some web servers that mainly expose web services. One is a DAL that communicates with the database layer, one is our Single Sign-On / user profile application that gets called once per page, and one is a clone of the internet-facing site to be used on our LAN.
