Search Results

Search found 5566 results on 223 pages for 'behind'.

Page 193/223 | < Previous Page | 189 190 191 192 193 194 195 196 197 198 199 200  | Next Page >

  • HAProxy causing delay

    - by user1221444
    I am trying to configure HAProxy to do load balancing for a custom webserver I created. Right now I am noticing an increasing delay with HAProxy as the size of the return message increases. For example, I ran four different tests; here are the results:

        Response 15kb through HAProxy:    Avg. response time: .34 secs | Transaction rate: 763 trans/sec  | Throughput: 11.08 MB/sec
        Response 2kb through HAProxy:     Avg. response time: .08 secs | Transaction rate: 1171 trans/sec | Throughput: 2.51 MB/sec
        Response 15kb directly to server: Avg. response time: .11 secs | Transaction rate: 1046 trans/sec | Throughput: 15.20 MB/sec
        Response 2kb directly to server:  Avg. response time: .05 secs | Transaction rate: 1158 trans/sec | Throughput: 2.48 MB/sec

    All transactions are HTTP requests. As you can see, the difference in response times is much bigger when the response is larger than when it is smaller. I understand there will be a slight delay when using HAProxy. Not sure if it matters, but the test itself was run using siege, and during the test there was only one server behind the HAProxy (the same one that was used in the direct-to-server tests). Here is my haproxy.config file:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            maxconn 10000
            user haproxy
            group haproxy
            daemon
            #debug

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            option httpclose
            maxconn 10000
            contimeout 10000
            clitimeout 50000
            srvtimeout 50000
            balance roundrobin
            stats enable
            stats uri /stats

        listen lb1 10.1.10.26:80
            maxconn 10000
            server app1 10.1.10.200:8080 maxconn 5000

    I couldn't find much in terms of options in this file that would help my problem. I have heard suggestions that I may have to adjust a few of my sysctl settings. I could not find a lot of information on this, however; most of the sysctl documentation is for Linux 2.4 and 2.6, and I am running 3.2 (Ubuntu Server 12.04), which seems to do auto-tuning, so I have no clue what I should or shouldn't be changing. Most settings changes I tried had no effect or a negative effect on performance. Just a notice, this is a very preliminary test, and my hope is that at deployment time my HAProxy will be able to balance 10k-20k requests/sec to many servers, so if anyone could provide information to help me reach that goal, it would be much appreciated. Thank you very much for any information you can provide. And if you need any more information from me please let me know, I will get you anything I can.
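
    Since the question explicitly mentions sysctl tuning, the snippet below is a minimal sketch of the TCP buffer settings usually looked at first when large responses slow down through a proxy (the proxy adds a second TCP hop whose windows also have to grow). The values are assumptions to benchmark against, not a recommendation:

        # /etc/sysctl.d/90-proxy-tuning.conf -- example values only
        net.core.rmem_max = 16777216              # max socket receive buffer
        net.core.wmem_max = 16777216              # max socket send buffer
        net.ipv4.tcp_rmem = 4096 87380 16777216   # min/default/max TCP receive window
        net.ipv4.tcp_wmem = 4096 65536 16777216   # min/default/max TCP send window

    Apply with sysctl -p /etc/sysctl.d/90-proxy-tuning.conf and re-run the siege tests. On the HAProxy side, swapping option httpclose for option http-server-close in the defaults section is also worth trying, since it keeps client-side connections alive instead of closing them after every response.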

  • VPN pptp connection Unable to pass through linux iptables

    - by user221844
    I have set up a Windows VPN server behind a Linux (Ubuntu) box that is working as a firewall and proxy server. Now I want people from outside to be able to connect to the VPN server, but the connection is not being established and the client gets error 619. I have looked the problem up on the internet and it seems to be a firewall issue. What should I do to get the connection established through the firewall? Here is the information about my setup:

        Firewall-External-IF-IP: 172.16.1.100
        Firewall-LAN-IF-IP: 192.168.1.1
        VPN-Server-IP: 192.168.1.10

    And below is my iptables file content:

        # Generated by iptables-save v1.4.12 on Thu May 29 12:40:18 2014
        *filter
        :INPUT ACCEPT [162000:140437619]
        :FORWARD ACCEPT [23282:27196133]
        :OUTPUT ACCEPT [185778:143961739]
        :LOGGING - [0:0]
        -A INPUT -p gre -j ACCEPT
        -A INPUT -s 192.168.1.10/32 -p tcp -m tcp --sport 1723 -j ACCEPT
        -A INPUT -s 192.168.1.10/32 -p udp -m udp --sport 1723 -j ACCEPT
        -A FORWARD -s 192.168.1.0/24 -o EXT_IF -j ACCEPT
        -A FORWARD -s 192.168.1.0/24 -i EXT_IF -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A FORWARD -d 192.168.1.10/32 -i EXT_IF -o INT_IF -p tcp -m tcp --dport 1723 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        -A FORWARD -s 192.168.1.10/32 -i INT_IF -o EXT_IF -p tcp -m tcp --sport 1723 -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A FORWARD -d 192.168.1.10/32 -i EXT_IF -o INT_IF -p gre -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        -A FORWARD -s 192.168.1.10/32 -i INT_IF -o EXT_IF -p gre -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A OUTPUT -p gre -j ACCEPT
        -A OUTPUT -d 192.168.1.10/32 -p tcp -m tcp --dport 1723 -j ACCEPT
        -A OUTPUT -d 192.168.1.10/32 -p udp -m udp --dport 1723 -j ACCEPT
        COMMIT
        # Completed on Thu May 29 12:40:18 2014
        # Generated by iptables-save v1.4.12 on Thu May 29 12:40:18 2014
        *nat
        :PREROUTING ACCEPT [17865:1053739]
        :INPUT ACCEPT [5490:357281]
        :OUTPUT ACCEPT [3723:223677]
        :POSTROUTING ACCEPT [3726:223870]
        -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128
        -A PREROUTING -d 172.16.1.100/32 -i EXT_IF -p tcp -m tcp --dport 1723 -j DNAT --to-destination 192.168.1.10
        -A PREROUTING -d 172.16.1.100/32 -i EXT_IF -p gre -j DNAT --to-destination 192.168.1.10
        -A PREROUTING -i -h
        -A POSTROUTING -s 192.168.1.0/24 -o EXT_IF -j MASQUERADE
        COMMIT
        # Completed on Thu May 29 12:40:18 2014
        # Generated by iptables-save v1.4.12 on Thu May 29 12:40:18 2014
        *mangle
        :PREROUTING ACCEPT [22695965:17811993005]
        :INPUT ACCEPT [13818180:11522330171]
        :FORWARD ACCEPT [8527694:6271564562]
        :OUTPUT ACCEPT [14748508:11899678536]
        :POSTROUTING ACCEPT [23271280:18170828012]
        COMMIT
        # Completed on Thu May 29 12:40:18 2014

    Hope I find the solution here!
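
    One thing worth checking before touching the rules further (a sketch of the usual PPTP-behind-NAT fix, not a verified solution for this exact box): PPTP needs the GRE data channel to be tracked and rewritten together with the TCP 1723 control channel, which Linux does through the PPTP connection-tracking helper modules. Also note that the line -A PREROUTING -i -h in the nat table looks incomplete, since -i expects an interface name, and iptables-restore would reject it.

        # load the PPTP conntrack/NAT helpers so GRE (IP protocol 47) is handled
        # together with the TCP 1723 control connection
        modprobe nf_conntrack_pptp
        modprobe nf_nat_pptp

        # make them persistent across reboots
        echo nf_conntrack_pptp >> /etc/modules
        echo nf_nat_pptp >> /etc/modules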

  • Establishing a web page bookmarking process - looking for ideas to improve

    - by Matt
    Like many others, I have a process for bookmarking web pages to read later. My requirements for web page bookmarking are: Ability to bookmark pages must be available from all (within reason) platforms - PC/browser, mobile device, etc. Bookmarks must be centrally stored (implicit from #2) so that I can read the bookmarks from anywhere/any device Full text of web pages must be stored Bonus features would be: Bookmarks and page content should be full text searchable Maintain an archive indefinitely Distinguish between what's read vs. unread Bookmarked page content is cleaned up, e.g. ads eliminated, unnecessary html removed, pages better formatted for reading My current process (which addresses most of these requirements) is as follows: I set up a Gmail account with 2 labels, "Bookmarks Unread" and "Bookmarks Read" Gmail filters set up such that depending on the form of the address (using Gmail's '+string' functionality in addresses), the incoming bookmark gets labeled appropriately On each of my browsers/devices, I have an address book entry for [email protected] and [email protected]. If I want to clean up the page content, I use the Readability bookmarklet which does a great job of giving me the essential content only Anywhere I have Firefox, I use the Send Page by Email extension which, with 2 clicks, allows me to send the cleaned-up Readability page URL and content to one of the above email addresses. Where I don't have Firefox (e.g. iPhone or other mobile device) I use the native ability to send the current link via email (most/all apps have them, including the browser, RSS readers, NYTimes, etc.). In most cases (unless it's built into the particular app), this won't include the page body. The process is almost perfect. I've got the central access and ubiquitous access of Gmail as the storage mechanism, full text searchability (due to Gmail, but of course only for the URLs I send from that Firefox extension), a cleaned up page due to Readability, ability to read offline (assuming I use an IMAP client against Gmail) and permanent archiving of content, including what's been read vs. unread. The missing pieces are: The Send Page by Email Firefox extension seems to only send X bytes of a web page. Or some portion. So it limits my full text searchability. Where I don't have Firefox, I can only send the link, so no full text search at all in those cases. Instapaper looks like it meets most of my requirements (and bonus items). The only downside to me (personal preference) is that central storage is based on Instapaper vs. something more broad like Gmail, which as a generalized service and with Google behind it pretty much means it's permanent. I'm not too hung up on this, but I would definitely prefer to keep Gmail if possible. An upside of Instapaper is that it does the page clean-up as well as stores the entire page content, unlike my Firefox extension. Thoughts on addressing the gaps and improving this process further?

  • RFC 1918 address on open internet?

    - by longneck
    In trying to diagnose a failover problem with my Cisco ASA 5520 firewalls, I ran a traceroute to www.btfl.com and, much to my surprise, some of the hops came back as RFC 1918 addresses. Just to be clear, this host is not behind my firewall and there is no VPN involved. I have to connect across the open internet to get there. How/why is this possible?

        asa# traceroute www.btfl.com
        Tracing the route to 157.56.176.94
         1  <redacted>
         2  <redacted>
         3  <redacted>
         4  <redacted>
         5  nap-edge-04.inet.qwest.net (67.14.29.170) 0 msec 10 msec 10 msec
         6  65.122.166.30 0 msec 0 msec 10 msec
         7  207.46.34.23 10 msec 0 msec 10 msec
         8  * * *
         9  207.46.37.235 30 msec 30 msec 50 msec
        10  10.22.112.221 30 msec 10.22.112.219 30 msec 10.22.112.223 30 msec
        11  10.175.9.193 30 msec 30 msec 10.175.9.67 30 msec
        12  100.94.68.79 40 msec 100.94.70.79 30 msec 100.94.71.73 30 msec
        13  100.94.80.39 30 msec 100.94.80.205 40 msec 100.94.80.137 40 msec
        14  10.215.80.2 30 msec 10.215.68.16 30 msec 10.175.244.2 30 msec
        15  * * *
        16  * * *
        17  * * *

    and it does the same thing from my FiOS connection at home:

        C:\>tracert www.btfl.com

        Tracing route to www.btfl.com [157.56.176.94] over a maximum of 30 hops:

          1     1 ms    <1 ms    <1 ms  myrouter.home [192.168.1.1]
          2     8 ms     7 ms     8 ms  <redacted>
          3    10 ms    13 ms    11 ms  <redacted>
          4    12 ms    10 ms    10 ms  ae2-0.TPA01-BB-RTR2.verizon-gni.net [130.81.199.82]
          5    16 ms    16 ms    15 ms  0.ae4.XL2.MIA19.ALTER.NET [152.63.8.117]
          6    14 ms    16 ms    16 ms  0.xe-11-0-0.GW1.MIA19.ALTER.NET [152.63.85.94]
          7    19 ms    16 ms    16 ms  microsoft-gw.customer.alter.net [63.65.188.170]
          8    27 ms    33 ms     *     ge-5-3-0-0.ash-64cb-1a.ntwk.msn.net [207.46.46.177]
          9     *        *        *     Request timed out.
         10    44 ms    43 ms    43 ms  207.46.37.235
         11    42 ms    41 ms    40 ms  10.22.112.225
         12    42 ms    43 ms    43 ms  10.175.9.1
         13    42 ms    41 ms    42 ms  100.94.68.79
         14    40 ms    40 ms    41 ms  100.94.80.193
         15     *        *        *     Request timed out.

  • ISC DHCP - Force clients to get a new IP address, instead of the being re-issued their previous lease's IP

    - by kce
    We are in the middle of a migration of our DHCP and DNS services from a Debian-based server to a Windows Server 2008 R2 implementation. The Debian server is running isc-dhcpd-V3.1.1. All of the workstations are configured to have fixed-addresses between .3 and .40 (the motivation behind that choice is mostly management/political, much like here). DHCP leases are given out in the range of .100 to .175. Statically configured servers live in the .200 block and above (which is mostly empty). When we move to the Windows platform, management/political considerations require me to move the IP ranges around again. We would like to keep .1 - .10 reserved for network appliances, switches, and other infrastructure. .200 will remain designated for servers. The addressing space in between should be available to clients and IPs should be dynamically allocated (Edit: instead of automatic as originally mentioned) by the server. My Address Pool on the Windows Server looks like this:

        192.168.0.1   192.168.0.254  (Address range for distribution)
        192.168.0.1   192.168.0.10   (IP addresses excluded from distribution)
        192.168.0.200 192.168.0.254  (IP addresses excluded from distribution)

    Currently, we have all of our clients still on the .3 - .40 range, and a few machines still active in the .100 - .175 range (although there are lots of devices that are powered off that still have expired leases with IPs from that range). Since the lease "database" isn't shared between the old and new DHCP server, how can I prevent clients from receiving a lease with an IP address that is currently being held by a client with a non-expired lease from the old DHCP server? If I just expand the range on the Debian DHCP server to be 192.168.0.10 - 192.168.0.199, is there a way to force clients to not re-use their old IP address when they send their DHCPDISCOVER? Can I make the Windows DHCP server be authoritative like the ISC implementation? The dhcpd.conf from the Debian server:

        ddns-update-style none;
        authoritative;
        default-lease-time 43200; #12 hours
        max-lease-time 86400; #24 hours

        subnet 192.168.0.0 netmask 255.255.255.0 {
            option routers 192.168.0.1;
            option subnet-mask 255.255.255.0;
            option broadcast-address 192.168.0.255;
            range 192.168.0.100 192.168.0.175;
        }

        host workstation-1 {
            hardware ethernet 00:11:22:33:44:55;
            fixed-address 192.168.0.3;
        }

        ... and so on until 192.168.0.40
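
    One way to keep the two servers from handing out overlapping addresses during the cutover, sketched below under the assumption that the Windows scope really is 192.168.0.0/24 as shown: exclude the ranges the old server and the fixed-address workstations still occupy, then drop the temporary exclusion once the old leases have expired.

        rem on the Windows Server 2008 R2 DHCP server
        netsh dhcp server scope 192.168.0.0 add excluderange 192.168.0.3 192.168.0.40
        netsh dhcp server scope 192.168.0.0 add excluderange 192.168.0.100 192.168.0.175

        rem after max-lease-time (24 h) has passed on the Debian server
        netsh dhcp server scope 192.168.0.0 delete excluderange 192.168.0.100 192.168.0.175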

  • 502 Bad Gateway with nginx + apache + subversion + ssl (SVN COPY)

    - by theplatz
    I've asked this on stackoverflow, but it may be better suited for serverfault... I'm having a problem running Apache + Subversion with SSL behind an Nginx proxy and I'm hoping someone might have the answer. I've scoured google for hours looking for the answer to my problem and can't seem to figure it out. What I'm seeing are "502 (Bad Gateway)" errors when trying to MOVE or COPY using subversion; however, checkouts and commits work fine. Here are the relevant parts (I think) of the nginx and apache config files in question:

    Nginx

        upstream subversion_hosts {
            server 127.0.0.1:80;
        }

        server {
            listen x.x.x.x:80;
            server_name hostname;

            access_log /srv/log/nginx/http.access_log main;
            error_log /srv/log/nginx/http.error_log info;

            # redirect all requests to https
            rewrite ^/(.*)$ https://hostname/$1 redirect;
        }

        # HTTPS server
        server {
            listen x.x.x.x:443;
            server_name hostname;

            passenger_enabled on;
            root /path/to/rails/root;

            access_log /srv/log/nginx/ssl.access_log main;
            error_log /srv/log/nginx/ssl.error_log info;

            ssl on;
            ssl_certificate server.crt;
            ssl_certificate_key server.key;

            add_header Front-End-Https on;

            location /svn {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

                set $fixed_destination $http_destination;
                if ( $http_destination ~* ^https(.*)$ ) {
                    set $fixed_destination http$1;
                }
                proxy_set_header Destination $fixed_destination;

                proxy_pass http://subversion_hosts;
            }
        }

    Apache

        Listen 127.0.0.1:80

        <VirtualHost *:80>
            # in order to support COPY and MOVE, etc - over https (443),
            # ServerName _must_ be the same as the nginx servername
            # http://trac.edgewall.org/wiki/TracNginxRecipe
            ServerName hostname
            UseCanonicalName on

            <Location /svn>
                DAV svn
                SVNParentPath "/srv/svn"

                Order deny,allow
                Deny from all
                Satisfy any

                # Some config omitted ...
            </Location>

            ErrorLog /var/log/apache2/subversion_error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog /var/log/apache2/subversion_access.log combined
        </VirtualHost>

    From what I could tell while researching this problem, the server name has to match on both the apache server as well as the nginx server, which I've done. Additionally, this problem seems to stick around even if I change the configuration to use http only.
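
    A related workaround from the Trac/nginx recipe linked in the Apache comments, shown here only as a sketch (it assumes mod_headers is enabled and is untested against this exact setup): the Destination header can also be fixed on the Apache side, so the backend always sees a scheme and host it considers local.

        # inside the <VirtualHost *:80> that serves /svn
        RequestHeader edit Destination ^https: http: early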

  • XP shared folders not accessible after BIOS changed

    - by stijn
    Here's what worked for over a year: PC A runs Windows 7, PC B runs Windows XP. Both are on the same subnet behind a router. A uses user account X, but logs in to PC B using the Administrator account. PC B is a Dell Precision 470. A known problem with these is that sometimes, when plugging in the power cable, the machine somehow loses all BIOS settings. This happened yesterday. After this happens Windows won't boot, because the default BIOS setting is 'RAID ON' while there is no RAID configured. No problem though, changing the BIOS setting to 'RAID OFF' makes it boot without problems. Note that in the meantime, nothing config-related was changed on machine A. It wasn't even on. Indeed after doing this, everything is fine. Everything includes all normal operations, remote desktop from PC A to PC B, running Synergy between A and B, accessing shared folders from B to A. But accessing the shared folders on B from A does not work any more. I tried pretty much everything I found via Google (fiddling with policies/registry keys/...) but to no avail.

        > ping -a 192.168.2.2

        Pinging A [192.168.2.2] with 32 bytes of data:
        Reply from 192.168.2.2: bytes=32 time<1ms TTL=128

        > net view \\192.168.2.2
        System error 5 has occurred.
        Access is denied.

        > net use /persistent:no K: \\A\myshare /user:A\USERNAME PASSWORD
        > net use /persistent:no K: \\192.168.2.2\myshare /user:192.168.2.2\USERNAME PASSWORD
        > net use /persistent:no K: \\192.168.2.2\myshare /user:USERNAME PASSWORD
        System error 86 has occurred.
        The specified network password is not correct.

    A solution to this would be great: I haven't been able to do any work since yesterday ;]

    Update: after taking the hard drive out of B and putting it in another Precision 470 with almost exactly the same hardware (at first sight, only the video card differs), the shared folders work. Putting the disk back into B, the same problem remains. Why does this depend on hardware, and more important, on which hardware?
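
    A sketch of the first checks that suggest themselves here, not a confirmed diagnosis: a BIOS reset usually also resets the clock on B, stale credentials on A can keep an old session half-alive, and error 86 with a known-good password is often an authentication-level mismatch rather than a real password problem.

        rem on PC A: drop cached connections and list stored credentials for B
        net use * /delete
        cmdkey /list

        rem compare clocks; a CMOS reset can leave B's date years off
        net time \\192.168.2.2
        date /t & time /t

        rem check the LAN Manager authentication level on both machines
        rem (lower values are more permissive; 5 means NTLMv2 only)
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel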

  • New monitor connected to HDMI adaptor doesn't show output after booting

    - by Paul
    Hello out there in the multiple monitors’ world. I am a very old newbie in your world and need help. I just purchased a new Asus VH236H monitor and hooked it up the HDMI port of an ATI Radeon HD4300 / 4500 Series display adaptor. I left the old Princeton LCD19 (TMDS) hooked up to the DVI port of the same display adaptor. Both monitors displayed the boot sequence, after I fired good old Sarastro2 (Asus P5Q Pro Turbo – Dual Core E5300 – 2.60 GHz) up. The Asus lacked one half of a second behind the Princeton until the Windows 7 Ultimate SP 1 boot up was complete. Then the Asus displayed “HDMI NO SIGNAL” and went into hibernation. The Princeton stayed lit up as before. Both monitors are displayed on the “Screen Resolution Setup Display” and I plaid around with them for a while. The only thing I accomplished was to shove the desktop icons from the Princeton to the still hibernating Asus. The “Multiple displays:” is set to “Extend these displays”, the Orientation is “Landscape” and the Resolutions are set on both to the “recommended” one. Both monitors show that they work properly in the advanced Properties display. What am I doing wrong, what am I missing? Never mind the opinions about the different resolutions of the two monitors. I always can unhook the Princeton and give it to a Goodwill Store if I do not like the setup. I just would like to make it work. Any constructive help is very much appreciated, Thank you. Thank you Anees Bakrain Only the ATI Radeon HD 4300/4500 Series adapter is displayed in the Device Manager, for that reason I have to assume that the onboard display adaptor is not active. All 40 drivers of Sarastro2 are up to date and the HDMI cable can not be the problem because both monitors displayed the boot sequence up to the moment when Windows 7 was loaded completely. This was the moment, when the Asus monitor lost its signal. Both connectors, HDMI and DVI are connected and removing the DVI connector would not solve my problem of running both monitors simultaneously. However, your suggestions shifted my seventy one year old brain into the next gear. The only question remaining is; “Why the signals to the Asus monitor stop after the sequence is complete”. The ATI Radeon HD 4300/4500 Series adapter seems to be capable of sending simultaneous HDMI and DVI signals, what is done during the boot sequence. Why do the signals change after the boot sequence is complete is the key question or der springende Punkt? Is this a correct assumption slhck?

  • VBA + Polymorphism: Override worksheet functions from 3rd party

    - by phi
    my company makes extensive use of a data provider using a (closed source) VBA plugin. In principal, every query follows follows a certain structure: Fill one cell with a formula, where arguments to the formula specify the query the range of that formula is extended (not an arrray formula!) and cells below/right are filled with data For this to work, however, a user has to have a terminal program installed on the machine, as well as a com-plugin referenced in VBA/Excel. My Problem These Excelsheets are used and extended by multiple users, and not all of them have access to the data provider. While they can open the sheet, it will recalculate and the data will be gone. However, frequent recalculation is required. I would like every user to be able to use the sheets, without executing a very specific set of formulas. Attempts remove the reference on those computers where I do not have terminal access. This generates a NAME error i the cell containing the query (acceptable), but this query overrides parts of the data (not acceptable) If you allow the program to refresh, all data will be gone after a failed query Replace all formulas with the plain-text result in the respective cells (press a button and loop over every cell...). Obviously destroys any refresh-capabilities the querys offer for all subsequent users, so pretty bad, too. A theoretical idea, and I'm not sure how to implement it: Replace the functions offered by the plugin with something that will be called either first (and relay the query through to the original function, if thats available) or instead of the original function (by only deploying the solution on non-terminal machines), which just returns the original value. More specifically, if my query function is used like this: =GETALLDATA(Startdate, Enddate, Stockticker, etc) I would like to transparently swap the function behind the call. Do you see any hope, or am I lost? I appreciate your help. PS: Of course I'm talking about Bloomberg... Some additional points to clarify issues raise by Frank: The formula in the sheets may not be changed. This is mission-critical software, and its way too complex for any sane person to try and touch it. Only excel and VBA may be used (which is the reason for the previous point...) It would be sufficient to prevent execution of these few specific formulas/functions on a specific machine for all excel sheets to come This looks more and more like a problem for stackoverflow ;-)

  • Unextending Sharepoint 2007 Web Application from a zone

    - by dunxd
    When our Sharepoint was migrated from Sharepoint 2003 to Sharepoint 2007 (both fully paid versions), the consultants who carried it out extended each web app into two IIS sites/zones (e.g. the original Web App was http://intranet, then http://newintranet and http://intranet would be created for Sharepoint 2007 - each with its own IIS site). The idea was that during the migration period we would set up DNS to point the old url to SP2003 servers and the new one to SP2007, then once the migration was complete, do a DNS change so the SP2007 would recieve the requests to the http://intranet type URLs. Unfortunately the contractors did not tidy up the application extensions and IIS sites after the migration, and for some time both URLs were in use, resulting in many document links pointing to the http://newintranet type URLs. This means I need to maintain these URLs. Due to a rejig of organisation structure we now need to relocate some Sharepoint sites, and I'd like to use the RDA Collaboration Sharepoint URL Redirector feature. However a limitation of this is that it doesn't work for Web Applications which have been extended into multiple zones. So I have a need to tidy up the situation that our consultants left behind. I think the right thing to do is use the "Remove Sharepoint from IIS Web Site" page in Central Admin to remove the zone for the newintranet type sites, and select the option to also delete the IIS site. That should result in having no IIS sites listening for http://newintranet type URLs. Is this the right procedure? Once I have done that I need to set up Sharepoint to receive requests sent to the http://newintranet type URLs so they will continue to work. I am not sure if I should do this: using Alternative Access Mappings or, by adding a host header to the IIS site or, creating a non Sharepoint IIS site for each http://newintranet type URL, and use IIS redirection to forward the requests to the new URL using variables to pass the path to the Sharepoint site. Does anyone have any thoughts on these options, or any other way of achieving this? Sharepoint 2007 is running on Windows 2003 with IIS6. We don't currently have plans/budget to upgrade to Sharepoint 2010.
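
    If the Alternate Access Mappings route is taken, the mapping can be scripted with stsadm rather than clicked through Central Admin. The sketch below uses the example URLs from the question and assumes the Intranet zone is free on the web application; a matching host header binding on the remaining IIS site is still needed so requests for the old URL reach SharePoint at all.

        rem make http://newintranet an internal URL of the http://intranet web application
        stsadm -o addalternatedomain -url http://intranet -incomingurl http://newintranet -urlzone Intranet

        rem verify the mappings
        stsadm -o enumalternatedomains -url http://intranet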

  • hosting 2 webapps under 1 apache/tomcat

    - by mkoryak
    I am trying to host multiple webapps under Tomcat 6 behind Apache 2 via mod_jk. I am at my wits' end with this. The problem I am facing is that both domains seem to point to a single Tomcat 'domain'. My server.xml looks like this:

        <Service name="Catalina">
            <Connector port="8080" protocol="HTTP/1.1"
                       connectionTimeout="20000"
                       URIEncoding="UTF-8"
                       redirectPort="8443" />
            <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
            <Connector port="8010" protocol="AJP/1.3" redirectPort="8443" />

            <Engine name="Catalina" defaultHost="dogself.com">
                <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
                       resourceName="UserDatabase"/>

                <Host name="dogself.com" appBase="webapps-dogself"
                      unpackWARs="true" autoDeploy="true"
                      xmlValidation="false" xmlNamespaceAware="false">
                </Host>

                <Host name="natashacarter.com" appBase="webapps-natashacarter.com"
                      unpackWARs="true" autoDeploy="true"
                      xmlValidation="false" xmlNamespaceAware="false">
                </Host>
            </Engine>
        </Service>

    My workers.properties looks like this:

        worker.list=dogself,natashacarter

        worker.dogself.port=8009
        worker.dogself.host=dogself.com
        worker.dogself.type=ajp13

        worker.natashacarter.port=8010
        worker.natashacarter.host=natashacarter.com
        worker.natashacarter.type=ajp13

    Finally, my Apache vhosts look like this:

        <VirtualHost 69.164.218.75:80>
            ServerName dogself.com
            DocumentRoot /srv/www/dogself.com/public_html/
            ErrorLog /srv/www/dogself.com/logs/error.log
            CustomLog /srv/www/dogself.com/logs/access.log combined
            JkMount /* dogself
        </VirtualHost>

    and

        <VirtualHost 69.164.218.75:80>
            ServerName natashacarter.com
            DocumentRoot /srv/www/dogself.com/public_html/
            ErrorLog /srv/www/dogself.com/logs/error.log
            CustomLog /srv/www/dogself.com/logs/access.log combined
            JkMount /* natashacarter
        </VirtualHost>

    When I log into the manager webapp on both dogself.com and natashacarter.com, I can deploy to a context path on dogself, and that same context path will appear on natashacarter - so I know for a fact that this is the same Tomcat domain.

    Edit: just found this in my mod_jk log:

        [Sun Feb 20 21:15:43 2011] [28546:3075521168] [warn] map_uri_to_worker_ext::jk_uri_worker_map.c (962): Uri * is invalid. Uri must start with /
        [Sun Feb 20 21:16:44 2011] [28548:3075521168] [info] ajp_send_request::jk_ajp_common.c (1496): (dogself) all endpoints are disconnected, detected by connect check (1), cping (0), send (0)

    but I'm not sure why dogself wouldn't respond. Please help a brother out.
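
    A sketch of the single-worker layout this kind of setup usually ends up with (assuming Tomcat runs on the same machine as Apache): both AJP connectors feed the same Engine, so the worker port does not choose the webapp; the Host: header that mod_jk forwards does, and it has to match a <Host name> or an <Alias> exactly. With that in mind, one connector and one worker are enough and both vhosts can share it:

        # workers.properties
        worker.list=tomcat
        worker.tomcat.type=ajp13
        worker.tomcat.host=localhost
        worker.tomcat.port=8009

    Each Apache vhost then keeps its own ServerName and uses JkMount /* tomcat. The "Uri * is invalid" warning also suggests there is a JkMount or uriworkermap entry written as * instead of /* somewhere, which is worth grepping for.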

  • OS X server large scale storage and backup

    - by user135217
    I really hope this question doesn't come across as trolling or asking for buying advice. It's not intended. I've just started working for a small ad agency (40 employees). I actually quit being a system administrator a few years ago (too stressful!), but the company we're currently outsourcing our IT stuff to is doing such a bad job that I've felt compelled to get involved and do what I can to improve things. At the moment, all the company's data is stored on an 8TB external firewire drive attached to a Mac Mini running OS X Server 10.6, which provides filesharing (using AFP) for the whole company. There is a single backup drive, which is actually a caddy containing two 3TB hard drives arranged in RAID 0 (arrggghhhh!), which someone brings in as and when and copies over all the data using Carbon Copy Cloner. That's the entirety of the infrastructure, and the whole backup and restore strategy. I've been having sleepless nights. I've just started augmenting the backup process with FreeBSD, ZFS, sparse bundles and snapshot sends to get everything offsite. I think this is a workable behind the scenes solution, but for people's day to day use I'm struggling. Given the quantity and importance of the data, I think we should really be looking towards enterprise level storage solutions, high availability and so on, but the whole company is all Mac all the time, and I cannot find equipment that will do what we need. No more Xserve; no rack storage; no large scale storage at all apart from that Pegasus R6 that doesn't seem all that great; the Mac Pro has fibre channel, but it's not a real server and it's ludicrously expensive; Xsan looks like it's on the way out; things like heartbeatd and failoverd have apparently been removed from Lion Server; the new Mac Mini only has thunderbolt which severely limits our choices; the list goes on and on. I'm really, really not trying to troll here. I love Macs, but I just genuinely don't know where I'm supposed to look for server stuff. I have considered Linux or FreeBSD and netatalk for serving files with all the server-y goodness those OSes bring, but some the things I've read make me wonder if it's really the way to go. Also, in my own (admittedly quite cursory) experiments with it, I've struggled to get decent transfer speeds. I guess there's also the possibility of switching everyone off AFP and making them use SMB or NFS, but I understand that this can cause big problems with resource forks and file locks. I figure there must be plenty of all Mac companies out there. If you're the sysadmin at one, what do you use? Any suggestions very gratefully received.

  • How did what appears to be a virus get on my computer? (explanation of situation enclosed)

    - by Massimo
    My system is Windows XP SP3, updated with the latest patches. The PC is connected to a Cisco 877 ADSL router, which does NAT from the internal network to its single static public IP address. There are no forwarded ports, and the router's management console can only be accessed from the inside. I was doing two things: working on a remote office machine via VPN and browsing some web pages on the Cisco web site. The remote network is absolutely safe (it's a lab network, four virtual servers, no publicly accessible services and no users at all; also, none of what I'm going to describe ever happened there). The Cisco web site... well, I suppose is quite safe, too. Suddenly, something happened. Strange popups appears anywhere; programs claiming they're "antimalware", "antispyware" et so on begins autoinstalling; fake Windows Update and Security Center icons pop up in the system tray. svchost.exe began crashing repeatedly. Then, finally, after some minutes of this... BSOD. And, upon rebooting, BSOD again. Even in safe mode. Ok, that was obviously some virus/trojan/whatever. I had to install a new copy of Windows on another partition to clean things up. I found strange executables, services and DLLs almost anywhere. Amongst the other things, user32.dll and ndis.sys had been replaced. A fake software called "Antimalware Doctor" had been installed. There were services with completely random names or even GUIDs (!), and also ones called "IpSect" and "Darkness". There were executable files without an .exe extension. There were even two boot-class drivers, which I'm quite sure are the ones that finally caused the system to crash. A true massacre. Ok, now the questions: What the hell was that?!? It was something more than a simple virus! How did it manage to attack my computer, as I am behind a firewall and was not doing anything even only potentially harmful on the web at the time?

  • Why Photoshop CS5's photomerge's result immediately disappear?

    - by koiyu
    I have a bunch of JPG-files which I want to stitch together with Photoshop's Photomerge function. I choose File → Automate → Photomerge... and browse for the files. Photoshop opens the files and starts analyzing. I see the process bar filling and different phases are mentioned on the process bar. Nothing weird there. When the merging is done (and if I don't blink my eyes), I can see layers-palette is populated with the chosen files and, by quickly judging from the layer thumbnails, they're properly aligned. Sometimes the image window itself can be seen, but not always. Problem is that the layers and the image disappear in a flash. There is no error message. Everything is like prior starting the photomerge. No file has been changed. I could continue to use Photoshop normally. This is what I've tried so far: Loaded folder which has 38 JPG images, 4272 x 2848 and ˜ 5 megabytes per file Loaded the same files, but chose Use Files instead of Use Folder in the photomerge's window Loaded 19 JPG images, 4272 x 2848 and ˜ 5 megabytes per file Loaded 10 JPG images, ⇑ see above Loaded 5 JPG images, see above Loaded 3 JPG images, see above Scaled the images to 2256 x 1504 and ˜< 1 megabytes per file Loaded in a set of 38, 19, 10, 5, 3 Following steps are tested with these smaller files and with a set of 5 images Read Adobe's forums and reduced the amount of RAM Photoshop uses gradually from ˜ 80 % to 50 % (though I didn't understand the logic behind this) Would've reduced cache tile size to 128K, but it was set so already Disabled OpenGL Scaled the images to 800 x 533 and ˜ 100 kilobytes per file, loaded a set of 5 Read more unanswered threads around the internet In between each test I closed and reopened Photoshop. This is the first time I've even tried using photomerge. Am I doing something wrong? How can I locate what is the problem? How do I fix this? Photoshop is 64 bit Extended CS5 version. I'm on a mid-2010 quad-core (i5) iMac with up-to-date Mac OS X 10.6.6. Edit: Weird. First loading the images into one file via File → Scripts → Load Files into Stack… and then using Edit → Auto-Align Layers…, which, effectively, is the same as photomerge (even the dialog looks kind of the same), works! Even with the original JPGs without any issues. This doesn't fix photomerge, though.

  • NFS4 / ZFS: revert ACL to clean/inherited state

    - by Keiichi
    My problem is identical to this Windows question, but pertains to NFS4 (Linux) and the underlying ZFS (OpenIndiana) we are using. We have this ZFS shared via NFS4 and CIFS for Linux and Windows users respectively. It would be nice for both user groups to benefit from ACLs, but the one missing puzzle piece goes thusly: Each user has a home, where he sets a top-level, inherited ACL. He can later on refine permissions for the contained files/folders iteratively. Over time, sometimes permissions need to be generalized again to avoid increasing pollution of ACL entries. You can tweak the ACL of every single file if need be to obtain the wanted permissions, but that defeats the purpose of inherited ACLs. So, how can an ACL be completely cleared like in the question linked above? I have found nothing about what a blank, inherited ACL should look like. This use case simply does not seem to exist. In fact, the Solaris chmod manpage clearly states:

        A-    Removes all ACEs for current ACL on file and replaces current ACL
              with new ACL that represents only the current mode of the file.

    I.e. we get three new ACL entries filled with stuff representing the permission bits, which is rather useless for cleaning up. If I try to manually remove every ACE, on the last one I get:

        chmod A0- <file>
        chmod: ERROR: Can't remove all ACL entries from a file

    Which by the way makes me think: and why not? In fact, I really want the whole file-specific ACL gone. The same holds for Linux, which enumerates ACEs starting with 1(!), and verbalizes its woes less diligently:

        nfs4_setacl -x 1 <file>
        Failed setxattr operation: Unknown error 524

    So, what is the idea behind ACLs under Solaris/NFS? Can they never be cleaned up? Why does the recursion option for the ACL setting commands pollute all children instead of setting a single ACL and making the children inherit? Is this really the intention of the designers? I can clean up the ACLs using a Windows client perfectly well, but am I supposed to tell the Linux users they have to switch OS just to consolidate permissions?
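
    What comes closest to a "clean" state on the ZFS side, sketched below with a placeholder path (full_set and read_set are the compact permission-set names used by the Solaris chmod; verify against the OpenIndiana manpage): chmod A= replaces the entire ACL in one operation, so the home can be reset to a single inheritable set of entries.

        # replace the whole ACL on the home with one inheritable set of entries
        /usr/bin/chmod A=owner@:full_set:file_inherit/dir_inherit:allow,group@:read_set:file_inherit/dir_inherit:allow,everyone@:read_set:file_inherit/dir_inherit:allow /export/home/alice

        # check the result and the dataset's inheritance policy
        /usr/bin/ls -Vd /export/home/alice
        zfs get aclinherit tank/home

    Note that NFSv4/ZFS ACL inheritance is applied when a file is created, so existing children keep their ACLs; generalizing them again means running the same A= over the tree (for example with find), which is exactly the pollution the question complains about.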

  • mixing different technologies using ARR reverse proxy

    - by Jaepetto
    I'm currently trying to put together a proof of concept on mixing various technologies onto one web site in order to ease migrations and add flexibility. The idea is to create one 'mashup' site behind an IIS 7.5 ARR reverse proxy. For the time being the ARR reverse proxy forwards all requests to our main site. The requests are as follows:

        client -> ARR:      Get /
        ARR -> Server 1:    Get /
        Server 1 -> ARR:    200: /index.htm
        ARR -> client:      200: /index.htm

    ...so far so good. Let's say I want to add a new site (root of another server) as a subsite of my main website. A simple inbound rule does the trick:

        <rule name="sub1" stopProcessing="true">
            <match url="^mySubsite(.*)" />
            <conditions logicalGrouping="MatchAll" trackAllCaptures="false" />
            <action type="Rewrite" url="http://server2/{R:1}" />
        </rule>

    The requests now are:

        client -> ARR:      Get /mySubsite
        ARR -> Server 2:    Get /
        Server 2 -> ARR:    200: /index.htm
        ARR -> client:      200: /index.htm

    ... still ok. The issue comes when the site on server2 sends a redirection (e.g. to a login page). In the case of SharePoint, it will redirect the user to /_layouts/Authenticate.aspx?Source=%2F ...which does not exist:

        client -> ARR:      Get /mySubsite
        ARR -> Server 2:    Get /
        Server 2 -> ARR:    301: /_layouts/Authenticate.aspx?Source=%2F
        ARR -> client:      301: /_layouts/Authenticate.aspx?Source=%2F
        client -> ARR:      Get /_layouts/Authenticate.aspx?Source=%2F
        ARR -> client:      404: Not Found

    Does anyone know a way to write the outbound rule to rewrite the response from server 2 "301: /_layouts/Authenticate.aspx?Source=%2F" to "301: /mySubsite/_layouts/Authenticate.aspx?Source=%2FmySubsite%2F"?
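
    A sketch of the outbound-rule half, using the URL Rewrite module's ability to rewrite response headers; the rule and precondition names are made up, the pattern would still need tightening so it neither double-prefixes URLs already under /mySubsite nor touches redirects from other backends, and the Source query-string value would need its own handling:

        <outboundRules>
            <rule name="PrefixServer2Redirects" preCondition="IsRedirect">
                <!-- rewrite the Location header of proxied 3xx responses -->
                <match serverVariable="RESPONSE_Location" pattern="^/(.*)" />
                <action type="Rewrite" value="/mySubsite/{R:1}" />
            </rule>
            <preConditions>
                <preCondition name="IsRedirect">
                    <add input="{RESPONSE_STATUS}" pattern="^3\d\d$" />
                </preCondition>
            </preConditions>
        </outboundRules>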

  • Group traffic shaping with traffic control?

    - by mmcbro
    I'm trying to limit the output bandwidth generated by an application with Linux tc. This application sends me the source port of the request, which I use as a filter to limit each user to a given download speed. I feel that my setup could be managed way better if I had a better knowledge of Linux tc. At the application level users are categorized as members of a group, and each group has a limited bandwidth. Example:

        Members of group A: 512kbit/s
        Members of group B: 1Mbit/s
        Members of group C: 2Mbit/s

    When a user connects to the application, it retrieves the source port of the user's request and sends me the source port and the bandwidth at which the user must be limited depending on the group to which he belongs. With this information I must add the appropriate rules so that the user (the source port in reality) is limited to the right bandwidth. If the user that connects isn't a member of any group, he should be limited to a default bandwidth. I'm actually managing this with a self-made daemon that adds or removes rules when it receives a request from the application. With my little knowledge of tc I'm not able to limit other users (ones that aren't in a group, all others in fact) at a default speed, and my configuration seems awful to me. Here is the base of my tc qdisc and classes:

        tc qdisc add dev eth0 root handle 1: htb
        tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbps ceil 125mbps

    To classify a user at a given speed I have to add one subclass and then associate one filter to it:

        # a member of group A
        tc class add dev eth0 parent 1:1 classid 1:11 htb rate 512kbps ceil 512kbps
        # its associated filter to match his source port
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 50001 flowid 1:11

        # a member of group A again
        tc class add dev eth0 parent 1:1 classid 1:12 htb rate 512kbps ceil 512kbps
        # its associated filter to match his source port
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 61524 flowid 1:12

        # a member of group B
        tc class add dev eth0 parent 1:1 classid 1:13 htb rate 1000kbps ceil 1000kbps
        # its associated filter to match his source port
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 57200 flowid 1:13

    I already know that a source port could be the same if it's coming from a different IP address; the thing is, the application is behind a proxy, so I don't have to manage any IP addresses in that situation. I would like to know how to make sure that all other users (request/source port, whatever you name it) can be limited to a given speed each. I mean that each connection should be able to use at most 100kbit/s for example, not a shared 100kbit/s. I also would like to know if there is a way to simplify my rules. I don't know if it is possible to use only one class per group and associate multiple filters with the same class, so each user could be handled by one class and not one class per user. I appreciate any advice, thanks.
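
    For the "everyone else gets a default limit" part, HTB already has a hook for it: the root qdisc can send every packet that matches no filter to a designated class. A sketch with placeholder rates:

        # unclassified traffic falls into class 1:20
        tc qdisc add dev eth0 root handle 1: htb default 20
        tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbps ceil 125mbps

        # default class for users that belong to no group
        tc class add dev eth0 parent 1:1 classid 1:20 htb rate 1mbit ceil 1mbit
        # share the default class fairly between its flows
        tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10

        # per-user classes/filters keep being added by the daemon as before, e.g.
        tc class add dev eth0 parent 1:1 classid 1:11 htb rate 512kbit ceil 512kbit
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 50001 0xffff flowid 1:11

    Note that the default class caps the unclassified users as a group, not at 100kbit/s each; a true per-connection limit still needs one class per source port, which is what the daemon already does. Conversely, one class per group with many filters pointing at it is legitimate if a shared per-group cap is acceptable.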

  • Frequent and weird wifi disconnections

    - by Sidou
    How would you explain, troubleshoot (and solve) the following problem? Wifi ADSL modem router D-link 2640R installed in living room at about 1.8m height. Working fine, synchronising and getting/serving stable internet connection. First situation: -Laptop 01 in other end of the house, let's say in room01 southern to the living room, distant by about 15m. Getting stable signal of good to very good quality. No disconnection. -Laptop 02 in room02 opposite to room01 (5m West) which makes it almost at the same distance and direction from the router located 15m North. Getting stable signal of good to very good quality. No disconnection. Second situation: -Laptop 01 moved to room03 Northern to the living room (actually just 3m behind the wall where the router lies). Getting stable signal of excellent quality. No disconnection. -Laptop 02 still in room02 but now experiences frequent disconnections (actually almost impossible to get the Internet even though the signal level is still very good. Either no Internet with the wifi icon appearing connected to access point or no connection established at all which happens every 2 minutes and that means virtually no Internet at all as I can just get a timeframe of 1 minute or so to load any website or even get to the router's web based control panel. If Laptop 01 is completely shut down or its wifi adapters shut down or even still working but its wifi MAC address forbidden, then Laptop 02 has no problem at all. If Laptop 02 is moved to a nearer location to the router, in the living room for instance, then no connection problem occurs even if Laptop 01 is also connected. And also if we move back Laptop 01 to its original location (room 01), then no problem as well. I'm completely lost and don't know how to address this issue. I tried to change the Wifi channel and even tried the auto channel scan but that didn't solve it. I know that the problem is probably coming from Laptop 01 being in its new location or some sort of interference as the problem occurs only under the described condition but I have no idea how to solve it! I also scanned the neighborhood for wifi jam using InSSIDer, there are few other access points but they don't seem to affect the situation. Any ideas about the steps to follow or tools to use ?

  • Add Machine Key to machine.config in Load Balancing environment to multiple versions of .net framework

    - by davidb
    I have two web servers behind an F5 load balancer. Each web server has identical applications to the other. There was no issue until the config of the load balancer changed from source address persistence to least connections. Now in some applications I receive this error:

        Server Error in '/' Application.

        Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster,
        ensure that <machineKey> configuration specifies the same validationKey and validation
        algorithm. AutoGenerate cannot be used in a cluster.

        Description: An unhandled exception occurred during the execution of the current web request.
        Please review the stack trace for more information about the error and where it originated
        in the code.

        Exception Details: System.Web.HttpException: Validation of viewstate MAC failed. If this
        application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration
        specifies the same validationKey and validation algorithm. AutoGenerate cannot be used
        in a cluster.

        Source Error: The source code that generated this unhandled exception can only be shown
        when compiled in debug mode. To enable this, please follow one of the below steps, then
        request the URL:

        1) Add a "Debug=true" directive at the top of the file that generated the error. Example:
               <%@ Page Language="C#" Debug="true" %>
        or:
        2) Add the following section to the configuration file of your application:
               <configuration>
                   <system.web>
                       <compilation debug="true"/>
                   </system.web>
               </configuration>

        Note that this second technique will cause all files within a given application to be
        compiled in debug mode. The first technique will cause only that particular file to be
        compiled in debug mode. Important: Running applications in debug mode does incur a
        memory/performance overhead. You should make sure that an application has debugging
        disabled before deploying into production scenario.

    How do I add a machine key to the machine.config file? Do I do it at server level in IIS or at website/application level for each site? Do the validation and decryption keys have to be the same across both web servers, or are they different? Should they be different for each machine.config version of .NET? I cannot find any documentation of this scenario.
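
    For reference, the element involved looks like the sketch below (the key values are truncated placeholders, not usable keys). It can live in each application's web.config or in machine.config / the root web.config of the .NET version the sites run under, but the essential point for a web farm is that the same explicit keys are present on both servers behind the F5:

        <system.web>
            <!-- must be identical on every server in the farm; generate real keys -->
            <machineKey
                validationKey="A1B2C3D4E5F6...128-hex-chars..."
                decryptionKey="F6E5D4C3B2A1...48-hex-chars..."
                validation="SHA1"
                decryption="AES" />
        </system.web>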

  • Echo 404 directly from nginx to improve performance

    - by user64204
    I am in charge of production servers serving static content for a website. Those servers are constantly being crawled by bots looking for potential exploits (which isn't that much of a problem security-wise because no application can be reached behind the web server), but this generates thousands of 404s per day, sometimes per hour. I am looking into ways of blocking those requests, but it's tricky (you want to make sure you don't block legitimate traffic, and these bots are becoming more and more clever at looking like they're legit) and is going to take me a while to find an acceptable solution. In the meantime I would like to reduce the performance impact of serving those 404 pages. Indeed, we're using nginx, which by default is configured to serve its 404 page from disk (this can be changed using the error_page directive, but in the end the 404 will either have to be served from disk or from another external source, e.g. an upstream application, which would be worse), which isn't ideal. I ran a test with ab on my local machine with a basic configuration: in one case I echo a message directly from nginx so the disk isn't touched at all, in the other case I hit a missing page and nginx serves its 404 from disk.

        server {
            # [...] the default nginx stuff

            location / {
            }

            location /this_page_exists {
                echo "this page was found";
            }
        }

    Here are the test results (my laptop has an Intel(R) Core(TM) i7-2670QM + SSD in case you're wondering why they are so high):

        $ ab -n 500000 -c 1000 http://localhost/this_page_exists
        Requests per second:    25609.16 [#/sec] (mean)

        $ ab -n 500000 -c 1000 http://localhost/this_page_doesnt_exists
        Requests per second:    22905.72 [#/sec] (mean)

    As you can see, returning a value with echo is 11% ((25609-22905)÷22905×100) faster than serving the 404 page from disk. Accordingly I would like to echo a simple "404 Page not Found" string from nginx. I tried many things so far but they all failed; essentially the idea was this:

        location / {
            try_files $uri @not_found;
        }

        location @not_found {
            echo "404 - Page not found";
        }

    The problem is that as soon as the echo directive is used, the HTTP response code is set to 200. I tried changing that by doing error_page 200 = 400 but that breaks the configuration. How can I serve a 404 page directly from nginx? (Without hacking the source, which may be my next step.)
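
    One direction that avoids both the disk and the echo module's status problem, sketched below and untested against this exact config: nginx's stock return directive can send a status code together with a short in-memory body, so the named location can answer the 404 entirely from memory.

        location / {
            try_files $uri @not_found;
        }

        location @not_found {
            default_type text/plain;
            return 404 "404 - Page not found\n";
        }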

  • Create and manage child name servers (glue records) within my domain?

    - by basilmir
    Preface I use a top level domain provider that only allows me to add "normal" third-party name servers (a list where i can add "ns1.hostingcompany.com" type entries... nothing else) AND "child name servers" which i can later attach to my parent account ( ns1.myowndomain.com and an ip address). They do not provide other means of linking up. I want to host my own server and dns, even with just one name server (at first). My setup: Airport Extreme - get's a static ip address from my ISP Mac Mini Server - sits behind the Airport and get's a 10.0.1.2 My problem is that i can't seem to configure DNS correctly. I added a "child nameserver" with my airport's external static ip address at the top level provider, so to my understanding i should have all DNS traffic redirected to my Airport. I've opened port 53 UDP to let the traffic in. Now, what i don't get is this. My Mini Server is sitting on a 10.0.1.2 address and i have setup dns correctly, with an A record to point and resolve my server AND a reverse lookup to that 10.0.1.2. So it's ok for "internal stuff". Here is the clicker... How, when a request comes from the exterior for a reverse lookup, does the server "know" ... well look i have everything in 10.0.1.2 but the guy outside needs something from my real address. I can't begin to describe the MX record bonanza... How do i set this "right"? Do i "need" my Mini Server to sit on the external address directly (i can see how this could be the preferred solution, being close to a "real" server i have in my mind). If not... do i need a PTR record on the 10.0.1.2 server but with the external address in there? My dream: I will extend this "setup" with multiple Mini's in different cities where i work. I want a distributed something (Xgrid comes to mind). PS. Be gentle, i've read 2 books and the subject, and bought both the Lynda Essentials and DNS and Networking to boot, still i'm far from being on top of things.
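
    As a sketch of how the pieces usually fit together (BIND-style notation with placeholder names and the documentation address 203.0.113.10 standing in for the real static IP): the glue record at the registrar only says "ns1.myowndomain.com = public IP"; the zone served from the Mac Mini must then also answer with the public address, even though the box itself sits on 10.0.1.2, and the AirPort has to forward TCP as well as UDP port 53 to it. The 10.0.1.2 records belong in a separate internal zone or view used only on the LAN.

        $TTL 3600
        @      IN SOA ns1.myowndomain.com. hostmaster.myowndomain.com. (
                      2014010101 ; serial
                      3600       ; refresh
                      900        ; retry
                      1209600    ; expire
                      3600 )     ; negative-caching TTL
               IN NS  ns1.myowndomain.com.
        ns1    IN A   203.0.113.10   ; the public static IP, not 10.0.1.2
        @      IN A   203.0.113.10
        mail   IN A   203.0.113.10
        @      IN MX  10 mail.myowndomain.com.

    The reverse (PTR) zone for the public address is delegated to the ISP's name servers, so reverse lookups from outside are answered by them unless they delegate that single IP; a PTR for 10.0.1.2 only ever matters on the LAN.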

  • How to configure sendmail to relay through a specific server

    - by ErebusBat
    I have a tiny home server set up behind my cable modem (Bresnan Communications). I want this box to be able to send out email (not receive) for notifications and whatnot. What I have already done: I have installed and configured sendmail, and I have added mail.bresnan.net as my SMART_HOST directive. What I believe the problem is: when I attempt to send an email I get the following in my mail log:

        Dec 22 10:24:17 batcave sendmail[1530]: oBMHOHrs001530: from=aburns, size=140, class=0, nrcpts=1, msgid=<[email protected]>, relay=aburns@localhost
        Dec 22 10:24:17 batcave sm-mta[1531]: oBMHOHWZ001531: from=<[email protected]>, size=397, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA-v4, relay=localhost [127.0.0.1]
        Dec 22 10:24:17 batcave sendmail[1530]: oBMHOHrs001530: to=<[email protected]>, ctladdr=aburns (1000/1000), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30140, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (oBMHOHWZ001531 Message accepted for delivery)
        Dec 22 10:24:18 batcave sm-mta[1517]: oBMH9mVv001357: to=<[email protected]>, ctladdr=<[email protected]> (1000/1000), delay=00:14:30, xdelay=00:00:42, mailer=relay, pri=300339, relay=pmx0.bresnan.net. [69.145.248.1], dsn=4.0.0, stat=Deferred: Connection timed out with pmx0.bresnan.net.

    You can see where the message is accepted for delivery by my sendmail server, then where it attempts to hand off to Bresnan's server and times out. This is where my question is. Astute readers will notice that pmx0.bresnan.net is not what I have my SMART_HOST directive set to. It is the (outside?) MX server for the bresnan.com/net domain. Apparently Bresnan has their network configured so that you cannot access this server from within their own network and instead must use the mail.bresnan.net server (which I can connect to). The problem is that I don't know how to tell sendmail to use this server and not the domain.
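
    A direction worth trying, sketched on the sendmail.mc side: square brackets around the smart host tell sendmail to connect to that exact host instead of MX-expanding it, which matches the symptom of delivery attempts going to pmx0.bresnan.net even though SMART_HOST says mail.bresnan.net.

        dnl in sendmail.mc -- brackets make sendmail connect to this exact host
        dnl instead of looking up its MX records
        define(`SMART_HOST', `[mail.bresnan.net]')dnl

    After editing, rebuild sendmail.cf with the distribution's usual mechanism (sendmailconfig on Debian/Ubuntu, or the make target your layout uses), restart sendmail, and watch the mail log for the relay= field switching to mail.bresnan.net.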

  • If Nvidia Shield can stream a game via wifi, why can I not do the same via ethernet to any other PC?

    - by Enigma
    I think it absurd that a wireless game streaming solution is the *first to hit the market when a 1000mbps+ Ethernet connection would accomplish the same feat with roughly 6x the available bandwidth. I can only assume that there must be some reason behind this or a limitation preventing this, but what? 150mbps wifi is in no way superior to a 1000mbps LAN connection aside from well wireless mobility. Not only that but I have a secondary laptop and desktop which should by hardware comparison completely outperform anything the Tegra in the Nvidia Shield can do. Is this all just a marketing scheme to force people to buy the shield for the streaming benefit? Chief among these is that NVIDIA’s Shield handheld game console will be getting a microconsole-like mode, dubbed “Shield Console Mode”, that will allow the handheld to be converted into a more traditional TV-connected console. In console mode Shield can be controlled with a Bluetooth controller, and in accordance with the higher resolution of TVs will accept 1080p game streaming from a suitably equipped PC, versus 720p in handheld mode. With that said 1080p streaming will require additional bandwidth, and while 720p can be done over WiFi NVIDIA will be requiring a hardline GigE connection for 1080p streaming (note that Shield doesn’t have Ethernet, so this is presumably being done over USB). Streaming aside, in console mode Shield will also support its traditional local gaming/application functionality. - http://www.anandtech.com/show/7435/nvidia-consolidates-game-streaming-tech-under-gamestream-brand-announces-shield-console-mode ^ This is not acceptable for me for a number of reasons not to mention the ridiculousness of having a little screen+controller unit sitting there while using a secondary controller and screen instead. That kind of redundant absurdity exemplifies how wrong of a solution that is. They need a second product for this solution without the screen or controller for it to make sense... at which point your just buying a little computer that does what most other larger computers do better. All that is required, by my understanding, is the ability to decode H.264 video compression and transmit control/feedback so by any logical comparison, one (Nvidia especially) should have no difficulty in creating an application for PC's (win32/64 environment) that does the exact same thing their android app does. I have 2 video cards capable of streaming (encoding) H.264 so by right they must be capable of decoding it I would think. I haven't found anything stating plans to allow non-shield owners to do this. Can a third party create this software or does it hinge on some limitation that only Nvidia can overcome? (*) - perhaps this isn't the first but afaik it is the first complete package.

  • How to install RAID drivers on already installed Windows 7?

    - by happysencha
    64-bit Windows 7 Ultimate
    6 GB RAM
    Intel i7 920
    Intel X25-M SSD 80 GB 2.5"
    Club 3D Radeon HD5750
    GA-EX58-UD4P motherboard

    I've been running fine with Windows 7 installed on the SSD. I wanted to create a mirrored RAID-1 setup for backups using two hard disks, so I ordered two Samsung HD203WI. This motherboard supports two different RAID controllers, Intel's ICH10R and Gigabyte's SATA2 SATA controller. There are 6 SATA ports behind the ICH10R and 2 SATA ports for the Gigabyte controller. I googled around and it seemed that the ICH10R is the better choice, and since then I've been trying to make it work. When I activate the [RAID] mode from the BIOS, Windows 7 gives a BSOD exactly as described by this guy: "Windows 7 will start to boot, it gets to the screen where there are 4 colors coming together and it blue screens and restarts no matter what I do."

    First thing I did: turned off the RAID, booted to Windows and tried to install the SATA RAID drivers from Gigabyte. I launch the driver installation program and it gives a "This computer does not meet the minimum requirements for installing the software" error. I then tried Intel's Rapid Storage Technology drivers (which apparently are the same as the ones offered on Gigabyte's site), but it resulted in exactly the same error. I then detached the new Samsung hard disks from the SATA ports, but left [RAID] enabled in the BIOS. To my surprise, it still BSOD'd, so at this point I knew it is an OS/driver issue. Also, I tried with the Gigabyte RAID enabled (while the ICH10R RAID was disabled) and it booted just fine.

    So then I thought that maybe I can't install the RAID drivers from within the OS. So I caused the BSOD on purpose once again, and then, with ICH10R RAID activated and the Samsung hard disks attached, I chose the Windows 7 Recovery mode in the boot menu. It sees some problem(s), tries to repair, does not succeed and does not ask for drivers (which I put on a USB stick) to install. I also tried to use the command line in the recovery environment:

        rundll32 syssetup, SetupInfObjectInstallAction DefaultInstall 128 iaStor.inf

    but it gave "Installation failed." So I'm clueless about how I should proceed. Do I really need to re-install Windows 7 and load the RAID drivers in the Win7 setup? I don't want to install any OS on the RAID; Windows 7 is and will be on the SSD. I just want to have a RAID-1 backup using those two hard disks. I mean, why would I need to re-install the operating system to add a RAID setup?
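
    The usual way to add the missing controller driver to an already-installed Windows 7 without reinstalling, sketched below and worth double-checking for this exact chipset: the in-box Intel RAID miniport service (iaStorV) ships disabled, so when the BIOS is flipped to RAID the boot controller comes up with no started driver and Windows bluescreens (typically STOP 0x7B). Enabling the service first, while still booted with RAID off, usually lets the next RAID-mode boot succeed, after which the full Rapid Storage Technology package can be installed.

        rem run from an elevated command prompt while SATA is still in non-RAID mode
        reg add "HKLM\SYSTEM\CurrentControlSet\services\iaStorV" /v Start /t REG_DWORD /d 0 /f

        rem the equivalent service for plain AHCI mode is msahci:
        rem reg add "HKLM\SYSTEM\CurrentControlSet\services\msahci" /v Start /t REG_DWORD /d 0 /f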
