Search Results

Search found 16987 results on 680 pages for 'second'.

  • ASA DHCP relay configuration

    - by Jeff
    I have locations in different cities connected by two Cisco ASA devices. My main location (corporate) uses 192.168.1.x; the second location (a remote store) uses 192.168.3.x. I have a DHCP server (192.168.1.254) at the corporate location, and its scope for 192.168.1.x works fine there. I created a scope for the remote subnet (192.168.3.x) on the same DHCP server and tried to configure DHCP relay on the remote ASA: I disabled the DHCP server on the inside interface, enabled DHCP relay on the inside with "set route" set to yes, and added my DHCP server (192.168.1.254) to the global DHCP relay servers (up to four servers can be specified). I wrote these settings to the ASA and gave it a try; it didn't do anything. Am I missing or forgetting something? I'm not really sure what I'm doing wrong. DHCP settings on the remote ASA:

        dhcp-client update dns server both
        dhcpd dns 192.168.1.254
        dhcpd ping_timeout 750
        dhcpd domain JEWELS.LOCAL
        dhcpd auto_config outside
        dhcpd update dns both
        !
        dhcpd address 192.168.3.2-192.168.3.33 inside
        !
        dhcprelay server 192.168.1.254 outside
        dhcprelay enable inside
        dhcprelay setroute inside

    On my local ASA I have two ACLs for UDP ports 67 and 68 permitting any inbound traffic from the remote location's IP ...

        dhcprelay timeout 120
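
    One likely culprit, judging from the config dump above rather than anything confirmed: the ASA generally refuses to run its built-in DHCP server and DHCP relay at the same time, and the dump still contains dhcpd commands, including an address pool on the inside interface. A minimal sketch of a relay-only config for the remote ASA; these are standard ASA CLI commands, but verify the behavior against your ASA version:

        ! remove the leftover DHCP-server configuration first
        no dhcpd address 192.168.3.2-192.168.3.33 inside
        no dhcpd auto_config outside
        ! then the relay itself
        dhcprelay server 192.168.1.254 outside
        dhcprelay enable inside
        dhcprelay setroute inside
        dhcprelay timeout 120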

  • Problem booting crusty old Windows XP

    - by Carson Myers
    I have an Acer Aspire laptop running Windows XP Home. I believe I have some virus on it; I'm not sure, since I mostly just run Linux in a VM on it, so I wasn't too worried, and I don't know whether the virus caused this problem. The laptop wasn't recognizing my USB hard drive for some reason, so I decided to restart it. On startup it got past the memory test and the boot screen (though it paused there on a blank screen for a while), flashed the desktop once (like it does just before the login screen), and then crashed. I got a quick BSOD, then it restarted, tried to boot again, and so on: an infinite loop of failure. Before trying safe mode, I disabled automatic restart on system crash so I could read the blue screen. There wasn't anything important on it; it said

        *** STOP: 0x00000000 (0xC0000000 0x,.... )
        beginning physical memory dump
        physical memory dump complete

    That's not verbatim (obviously), but it didn't help me. So I booted into safe mode; it stopped on the driver gagp30kx.sys and then restarted (infinite loop of failure again). I burned a recovery CD, booted it, and went into repair mode. I ran chkdsk and then disabled the AGP driver. Same thing on booting into safe mode, except it stopped at mup.sys instead. I re-enabled the AGP driver and ran chkdsk again from the CD. It said it found problems but didn't say it fixed them. So I ran it a second time, and it printed "performing additional checking or recovery" many times (I can't tell how many; they scrolled off the top of the screen). I tried booting again with no luck. Every time I run chkdsk after a failed boot, it says it found and fixed more errors. I think the culprit might be whatever driver loads after the AGP driver, but I don't know what it is or how to find out. Can anyone help me fix this?

  • How powerful of a PC do you need to edit HD videos?

    - by Xeoncross
    I have a Core2Quad Q8200 (2.3GHz) with 4GB of RAM, a 512MB PCIe video card, and a SATA-2 HD. Yet it still isn't fast enough to edit 720i/p video in Sony Vegas or Adobe Premiere/After Effects. My RAM usage never peaks over 1.6GB, but my CPU cores hit 95% quickly! Right now the preview panes in all these programs lag too badly to actually work on the videos; I see 1-3 frames every second or two! So how fast do I have to go? At what point will my CPU be fast enough to actually edit these videos? I have to assume that regular people and their regular sub-$2k computers can actually work with this footage. Another way to answer this: how fast is the PC you use to edit videos? Update: It's worth noting that now that I have Adobe Premiere/After Effects CS4, I am more interested in getting that working than my older Vegas 6. If you didn't have to re-run the RAM preview every single time you made one change, that would be my answer. But since I like to test many filters and effects before choosing one, I have to re-render a one-second section of footage over and over, and it drives me nuts waiting. Perhaps a motherboard with dual Xeon chips would be able to handle this. It would probably cost as much as a dual-CrossFire setup and would also speed up other applications.

  • SQL Server 2008 cluster freezing

    - by Ed Leighton-Dick
    We have run into a strange situation in which a SQL Server 2008 single-node cluster hangs. As background, we are rebuilding a Windows Server 2003/SQL Server 2005 two-node cluster using Windows 2008 and SQL Server 2008. Here's the timeline:

        1. Evicted the passive node (server B) from the Windows 2003/SQL 2005 cluster. The active node now functions as a single-node cluster with no problems.
        2. Wiped server B's disks and installed Windows 2008 and SQL Server 2008 as a single-node cluster. Since we do not want the two clusters to communicate yet, we left the cluster's private "heartbeat" network adapter unconfigured. The cluster comes up and functions normally.
        3. Moved all databases to the new cluster. The cluster continues to function normally.
        4. Turned off server A (old cluster) in preparation for rebuilding it as the second node of the new cluster. The SQL Server instance on server B (new cluster) locks up, even though it should have no knowledge of or interaction with server A.
        5. Restarted server A. The SQL Server instance on server B (new cluster) immediately begins working again.

    Things we have tried:

        - The new cluster's name responds to ping and NetBIOS requests, even while SQL Server is hung.
        - We have confirmed that no IP address is assigned to the old heartbeat adapter, and it is not pulling an address from DHCP. Disabling the heartbeat's network card has the same effect.
        - No errors were generated in any logs, Windows or SQL.
        - When the error first occurred, the instance sat in the hung state for quite some time (well over 10 minutes) before anyone figured out what was going on. This would seem to rule out any normal cluster timeout in which it searches for the other node (even if one had been configured).

    Server B is running Windows 2008 SP2, fully patched, and SQL Server 2008 SP1 CU7 (10.0.2775).

  • Oracle with Kerberos authentication and Windows 2003 Server as KDC

    - by Supaplex
    I am running Oracle 10.2 on a Windows 2003 Server SP2 box which is also the domain controller on the network. I wish to switch the authentication method from NTS to Kerberos. I have spent a lot of time trying to configure Oracle for Kerberos authentication via the Oracle Advanced Security section of the Net Manager utility. I have disabled NTS so that Kerberos is promoted as the preferred authentication method. But as soon as the configuration is saved from Net Manager and I restart the Oracle server service, Oracle will not start. I don't know what Oracle is complaining about, because I don't know where to look for the Oracle error log. My first question: how can I figure out what's bugging Oracle? My second question: is there a good tutorial for setting up Oracle on Windows 2003 with Kerberos authentication, where the Windows 2003 Server is the KDC? Maybe there is a book I can get? I have read Oracle's own guide, but it is mostly for Linux/Unix. Thanks a lot!
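
    For orientation: the settings Net Manager writes live in sqlnet.ora, and on 10.2 the database's alert log, a first place to look when the service won't start, is under the background_dump_dest directory, by default %ORACLE_BASE%\admin\<SID>\bdump\alert_<SID>.log. A minimal sketch of the sqlnet.ora Kerberos entries; the service name and paths here are illustrative, not taken from the question:

        SQLNET.AUTHENTICATION_SERVICES = (KERBEROS5)
        SQLNET.AUTHENTICATION_KERBEROS5_SERVICE = orasrv
        SQLNET.KERBEROS5_CONF = C:\oracle\krb5\krb5.conf
        SQLNET.KERBEROS5_KEYTAB = C:\oracle\krb5\v5srvtab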

  • How can I link an Oracle user to a Business Objects user

    - by Robert Speckmann
    I have a problem linking an Oracle user to a Business Objects user. I will try to explain in as much detail as possible. I have an Oracle database (10g) where a couple of users are defined. These users can run queries using application X, and the resulting records are written into the Oracle database. Each record written carries an ID that links it to the person who ran the query. I also have an Active Directory containing a couple of users: testuser1, testuser2. When those users log on and load a report in Business Objects XI, I want them to see the information that was created when the query was run earlier by that same user with application X. The name of the person in Active Directory and the name in the Oracle database are not the same, but I don't think that is a problem at this stage. So the steps I took: first, I ran a report in application X (with the account prodpim_rs), which fills my Oracle database with a record. The second step is logging on as testuser1 (from AD) and then logging in to Business Objects XI with that account. Now I want to load a report with the information in my Oracle database. So the prodpim_rs user and the test user must be linked somehow. I am wondering how to accomplish this. Can I link an account defined in an Oracle database with a BO user that is linked to my AD? Thank you in advance for your reply. Robert

  • Weird routing issue

    - by Joel Coel
    I'm having some weird internet problems on campus. I know it's something simple, but it's a case where I need another set of eyes. I think I can explain the problem best by posting a tracert:

        Tracing route to google.com [74.125.45.147] over a maximum of 30 hops:

          1     3 ms     3 ms     3 ms  192.168.8.1
          2     1 ms     1 ms     1 ms  elissaemily-pc.york.edu [192.168.10.5]
          3     2 ms     2 ms     2 ms  rrcs-76-79-19-33.west.biz.rr.com [76.79.19.33]
          4    31 ms     3 ms     2 ms  ge-1-1-0.lnclne00-mx41.neb.rr.com [76.85.220.109]
          5    20 ms    17 ms    17 ms  ge-7-3-0.chcgill3-rtr1.kc.rr.com [76.85.220.137]
          6    20 ms    20 ms    19 ms  ae-5-0.cr0.chi30.tbone.rr.com [66.109.6.112]
          7    19 ms    19 ms    24 ms  ae-1-0.pr0.chi10.tbone.rr.com [66.109.6.155]
          8    26 ms    24 ms    24 ms  74.125.48.109
          9    23 ms    24 ms    21 ms  216.239.46.246
         10    39 ms    39 ms    55 ms  209.85.242.215
         11    39 ms    39 ms    39 ms  209.85.254.243
         12    39 ms    40 ms    96 ms  209.85.253.145
         13    39 ms    39 ms    39 ms  yx-in-f147.1e100.net [74.125.45.147]

        Trace complete.

    Note the second entry. Not only is the host name a student's computer, but the IP address shouldn't exist: DHCP shows that host as having a different address, and you can't ping 192.168.10.5. Yet somehow it's routing packets for us (and not very well, either; things are slow right now). The basic network routing table looks like this:

        Destination      Subnet Mask      Gateway
        Default Route    --               10.1.1.5 (our firewall)
        10.0.0.0         255.0.0.0        --
        192.168.8.0      255.255.252.0    --
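
    One way to chase down whatever is answering at that hop, offered as a generic approach rather than a fix: note that with the 255.255.252.0 mask, 192.168.10.5 falls inside your local 192.168.8.0/22, so if a live device is claiming that address it should leave a MAC behind in the local ARP cache right after a trace. Re-run the trace and then inspect the cache, and look up the vendor of the MAC that shows up:

        tracert -d google.com
        arp -a | find "192.168.10.5"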

  • Parallel port blocking

    - by asalamon74
    I have a legacy Java program which drives a special card printer by sending binary data to the LPT1 port (no printer driver is involved; the Java program creates the binary stream). The program worked correctly on the client's old computer: the Java program sent all the bytes to the printer, and after sending the last byte the program was not blocked. It took another minute to finish printing the card, but the user was able to continue working with the program. After changing the client's computer (but not the printer or the Java program), the program now stays blocked until the card is ready, right up to the last second. It seems to me that LPT1 behaves differently now than it did before. Is it possible to change this in Windows? I've checked the BIOS parallel port settings: the port is set to EPP+ECP (but I also tried the other two options, Bidirectional and Output only). Maybe some kind of parallel port buffer is too small? How can I increase it?

  • How to disable 3rd party cookies in Chrome?

    - by David Nordvall
    I have both the "stop websites from storing local data" and the "block all third-party cookies without exception" settings enabled in Chrome 12 (I'm not sure of the exact English names of these settings, as I run Chrome with Swedish localization). I do however have two problems. The first: when I visit my local newspaper's site (and surely others), cookies from www.facebook.com are allowed for some reason. I suspect the reason is that I have added an exception for the www.facebook.com domain, but as the setting "block all third-party cookies without exception" implies, that shouldn't matter. The second: if I check what cookies are stored on my computer after browsing for a while, I have tons of cookies that are not on my whitelist, primarily from ad services. My expectation from enabling the above settings was that only cookies fulfilling both of the following requirements would be accepted:

        - the cookie must be from the domain in my address bar
        - the cookie must be from a domain on my whitelist

    Apparently this isn't the case. The question is: have I completely misunderstood the settings, or is this a bug? And, either way, is there a way to accomplish my desired behavior?

  • Put one monitor of a dual-monitor Windows system into standby

    - by Psycogeek
    Standby, not disabled! When running two monitors on Windows 7 or Windows XP, I would like to be able to put one monitor at a time into standby. The method can be manual. The second monitor is not always needed, and shutting it off with its own power switch does work. The problems with that: the delay with the monitor logo at turn-on, the power switch is not very accessible, and the switch might not survive being toggled so many times. Disable methods like devcon, Win-P, and the Display settings cause all the windows to move properly to the other monitor. While that is what a person would normally want (so they can still reach their windows), it is not what I want, and some things on the other monitor have to be rearranged after a re-enable. Putting the monitor into standby changes nothing except the monitor going into standby. Disconnecting the DVI cable likewise causes the system to (properly) shift all the windows over to the one monitor, just like the disable methods do. That makes such a mess of the windows that I would rather leave the monitor on, wasting power and hardware, when it could easily go into standby for a while. For both monitors together I am using a "MonitorOff" program that puts both into standby, but I cannot find a utility that will put only ONE monitor into standby. If someone comes along and suggests Ultramon, you must know for a fact that it will put either one of the monitors into actual standby; I tested it (it was nice), but it did not feel like a program I wanted. The two monitors run off an ATI 4890 card, both hooked up via DVI-I, and the OS is both Windows 7 (primary) and Windows XP. In addition, it would be interesting to have separate standby activity timers and follow-the-mouse standby behavior, but any manual method (shortcut, batch file, tray icon, or gadget) would be a good start.

  • How to deal with the Dell support system?

    - by Nishant Kumar
    We have purchased a Dell Optiplex 9010 SSFV for our organization's work. Since the first installation, two of the USB keyboard keys have not worked properly: I had to press those keys twice; the first press did nothing, and the second printed two characters (as if it were buffering the first character). The two keys that were not working properly:

        - Grave accent (below the Esc key: `)
        - Double quote (left of the Enter key: ")

    We registered our complaint with Dell and they suggested (in a hard-to-understand and weird English accent) some tests and tricks, such as switching to different ports and checking the keyboard on a different PC, and it worked well with a different PC (with Windows 7 Home Premium installed). It was clear that it was an OS fault, so they suggested reinstalling the OS. The problem began there: we have a project on the run and a video editing project currently set up on this system, so we can't reinstall the system in a hurry, and the Dell people offered no other solution, such as updating the keyboard driver. My argument: I am a software engineer and don't think reinstalling an entire system is a feasible solution for a simple problem like this, and since the problem has existed since the fresh system installation, I don't think a reinstall would solve it anyway. In the end I had to find the solution myself, and now I want to express my disappointment to Dell, or at least tell them that their support system should not advise reinstalling an entire system for such simple problems. Notes: we purchased five years of next-business-day support from Dell for around 8000 INR (not for that kind of solution), and it is Dell India support. So can anyone tell me how to raise this with Dell officially, so that they will pay more attention in the near future? Thanks

  • Is there any way to customize the Windows 7 taskbar auto-hide behavior? Delay activation? Timer?

    - by calbar
    I'm becoming increasingly frustrated with the way Windows 7 handles showing a hidden taskbar. It's incredibly over-eager to pop out and obscure what I'm really trying to interact with, requiring me to move the mouse away, wait for it to auto-hide again, and then resume what I was doing more deliberately. After closely examining the behavior, it appears that a hidden taskbar "peeks out" from the edge by 2 or 3 pixels, and slowly moving your mouse into this area activates it; you don't even need to touch the edge of the screen. I would love a way to customize or change this behavior. Ideally, the taskbar would only pop out if you are actively "pushing" against the edge of the screen it is hidden on, so activation would occur only once you've reached the screen's edge and continued to move the mouse past a customizable threshold. Alternatively, a simple activation delay would suffice: the taskbar would pop out only if the mouse remains in that 2-3 pixel area (i.e., on the taskbar) for longer than a customizable amount of time, which would only be a fraction of a second. Often the cursor simply "careens" off the edge of the screen while trying to focus on something nearby. Anyway, if there are any registry settings or utilities that can achieve either of these effects, that would be great! Thanks for your help.

  • Force netsh/ARP to bind a multicast IP address to a specific MAC address

    - by Olivier
    I would like to set up a binding from an IP address to a MAC address using netsh. The goal is to bind an IP address which is a multicast address (224.224.x.y) to a given MAC address which is NOT the one calculated from the multicast IP address (01:00:5e:X:Y:Z). This used to work with Windows XP (was it a bug that happened to be "perfect" for my needs?), but Windows 7/8/8.1 forces the MAC address to the calculated one instead of letting me set what I want! (http://nettools.aqwnet.com/ipmaccalc/ipmaccalc.php shows the MAC address calculation for a multicast IP address.) Thus I'm doing the following. Listing existing mappings:

        netsh.exe interface ip show neighbors "Ethernet"

        Interface 12 : Ethernet
        Internet address      Physical address     Type
        224.0.0.22            01-00-5e-XX-YY-ZZ    static

    Then adding my interface mapping manually:

        netsh.exe interface ip add neighbors "Ethernet" "224.xxx.yyy.zzz" "00-80-EE-UU-VV-WW"

    Finally, listing my mappings again:

        netsh.exe interface ip show neighbors "Ethernet"

        Interface 12 : Ethernet
        Internet address      Physical address     Type
        224.0.0.22            01-00-5e-XX-YY-ZZ    static
        224.xxx.yyy.zzz       01-00-5e-UU-VV-WW    static

    As you can see, the MAC address of the second entry (the one I just added) has been replaced by the calculated MAC address corresponding to my IP address. The calculation works as follows (displayed in hex): UU=(xxx-128), VV=yyy, WW=zzz. But I don't want that behavior. My IP address and MAC address cannot be changed, and I must associate them exactly as given. Does anybody know how to disable the MAC address substitution/calculation in netsh? Thanks, Olivier.
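
    For reference, the substitution Windows performs is the standard RFC 1112 mapping: the low-order 23 bits of the multicast IP address are copied into the fixed 01-00-5e OUI, which matches the UU=(xxx-128) rule above. A small sketch of the calculation; the address here is a made-up example, not taken from the question:

        # Python sketch of the RFC 1112 multicast IP-to-MAC mapping
        ip = (224, 170, 204, 17)                      # hypothetical multicast address
        mac = (0x01, 0x00, 0x5E, ip[1] & 0x7F, ip[2], ip[3])
        print("-".join("%02x" % b for b in mac))      # -> 01-00-5e-2a-cc-11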

  • Domain with www pointing to another site

    - by ntechi
    I recently set up multiple sites on my VPS, which runs 64-bit CentOS. Currently I have two sites live, and each works fine on its bare domain. Now the problem is with the www forms of the URLs. I have the following sites: http://mbas.co.in and http://u-k.in; mbas was the very first site on the VPS. If I type http://mbas.co.in or http://www.mbas.co.in, both take me to the mbas website. For the second website, http://u-k.in goes to the u-k website correctly, but http://www.u-k.in takes me to the mbas website. I have configured my DNS this way, see the image http://i55.tinypic.com/14vlpxl.jpg. And my multi-site config is this:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/html/www.mbas.co.in
            ServerName mbas.co.in
            ErrorLog logs/mbas.co.in-error_log
            CustomLog logs/mbas.co.in-access_log common
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/html/u-k.in
            ServerName u-k.in
            ErrorLog logs/u-k-error_log
            CustomLog logs/u-k-access_log common
        </VirtualHost>
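
    A likely explanation, offered as a sketch rather than a confirmed diagnosis: neither vhost lists the www hostnames, and when no ServerName matches a request, Apache serves it from the first vhost defined (here, mbas). Adding a ServerAlias for each www name should fix it, assuming the DNS records for www already point at the VPS:

        <VirtualHost *:80>
            ServerName u-k.in
            ServerAlias www.u-k.in
            ...
        </VirtualHost>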

  • Migrate installation from one HDD to another?

    - by dougoftheabaci
    I have a server with a 64 GB SSD inside. It runs just fine, but occasionally there's a hiccup that causes it to nearly fill up, and when that happens my server starts to lock up and generally misbehave. I'm looking to buy a bigger SSD (either 128 GB or 256 GB), but I'm a bit unsure how best to make the transition. For a start, I don't have an external monitor; if I need one I'll have to borrow it from work. Most of the time I just SSH into the server from my iMac. One solution would be to buy two FireWire 800 2.5" cases, boot from the 64 GB SSD, and clone it to the 128 GB SSD. That seems a bit excessive, but it might be my best option. I do have more than one SATA port on the server, but they're all currently being used for storage drives. Those don't mount by default, so I could unplug them, attach the two SSDs, and do the whole thing via SSH. This is the other option I'm considering. My main concern with either approach is how best to make sure everything goes across; I want a carbon copy of the first drive on the second. This is especially important because I have a ZFS volume (my storage) and I'm a bit unfamiliar with how to move everything across. I could just start fresh and reinstall everything on the new SSD, but that seems like extra trouble I don't need. So any advice on how best to achieve my goals would be appreciated. Thanks! The server is running Ubuntu Server 12.04; the iMac has 10.8.1.
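
    A rough sketch of the two-part move, under the assumption that the OS lives on the SSD and the ZFS pool lives on the separate storage drives (device names and the pool name "tank" are placeholders): clone the system disk block-for-block, and if the pool itself ever needs to move, use ZFS's own replication rather than dd:

        # clone the old SSD onto the new one (new disk must be at least as large;
        # grow the partition afterwards with parted/resize2fs)
        dd if=/dev/sdX of=/dev/sdY bs=4M conv=noerror,sync

        # for the ZFS side: snapshot everything recursively and replicate it
        zfs snapshot -r tank@migrate
        zfs send -R tank@migrate | zfs receive -F newtank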

  • Ubuntu 12.04 Server - eth0 1Gbps NIC eth1 10Gbps NIC - all traffic using eth0?

    - by James
    Ubuntu Server 12.04.1 x64. The primary role is an NFS file server for Mac OS X clients. Hardware:

        Eth0: 00:19.0 Ethernet controller: Intel Corporation 82579V Gigabit Network Connection (rev 04)
        Eth1: 07:00.0 Ethernet controller: MYRICOM Inc. Myri-10G Dual-Protocol NIC

    Config (ifconfig):

        eth0      Link encap:Ethernet  HWaddr <MACADDRESS>
                  inet addr:192.168.0.150  Bcast:192.168.0.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:460042020 errors:0 dropped:148 overruns:0 frame:0
                  TX packets:231906707 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:581431978417 (581.4 GB)  TX bytes:259057368617 (259.0 GB)
                  Interrupt:20 Memory:f7d00000-f7d20000

        eth1      Link encap:Ethernet  HWaddr <MACADDRESS>
                  inet addr:192.168.0.100  Bcast:192.168.0.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:6832208 errors:0 dropped:2 overruns:0 frame:0
                  TX packets:376 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:513826442 (513.8 MB)  TX bytes:33688 (33.6 KB)
                  Interrupt:59

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:507 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:507 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:45057 (45.0 KB)  TX bytes:45057 (45.0 KB)

    /etc/network/interfaces:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        iface eth0 inet static
            address 192.168.0.150
            netmask 255.255.255.0
            network 192.168.0.0
            broadcast 192.168.0.255
            gateway 192.168.0.1
            dns-nameservers 192.168.0.1 8.8.8.8

        # Second network interface
        auto eth1
        iface eth1 inet static
            address 192.168.0.100
            netmask 255.255.255.0
            network 192.168.0.0
            broadcast 192.168.0.255
            gateway 192.168.0.1
            dns-nameservers 192.168.0.1 8.8.8.8

    On the OS X clients I currently mount the NFS share as nfs://192.168.0.100/Volumes/Storage. My problem: why would all the data be going over the slower connection? (I have checked with various monitoring tools: bmon, iftop, glances, etc.) Also, ever since configuring /etc/network/interfaces as above, I always get a "waiting for network configuration" message at boot. Are the two connected?
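
    A sketch of one common explanation and workaround, hedged since I can't see the live routing table: with both NICs in the same /24 (and two default gateways), the kernel sends replies out whichever interface the routing table lists first, usually eth0, regardless of which address received the traffic. Source-based policy routing keeps eth1's traffic on eth1; the table name and number are arbitrary:

        # /etc/iproute2/rt_tables: add a table for the 10G NIC
        echo "200 eth1table" >> /etc/iproute2/rt_tables

        # route and rule: traffic sourced from 192.168.0.100 uses eth1
        ip route add 192.168.0.0/24 dev eth1 src 192.168.0.100 table eth1table
        ip rule add from 192.168.0.100 table eth1table

    Dropping the duplicate gateway line from the eth1 stanza may also clear the "waiting for network configuration" delay at boot.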

  • Redmine gives 404 error after installation

    - by Sankaranand
    I am using Debian Squeeze with nginx and MySQL. After running the Redmine rake tasks and loading the default data, when I try to visit Redmine in a browser at http://ipaddress:8080/redmine, I get a 404 error:

        Page not found
        The page you were trying to access doesn't exist or has been removed.

    My nginx configuration file is below:

        server {
            listen 8080;
            server_name localhost;
            server_name_in_redirect off;
            #charset koi8-r;
            #access_log logs/host.access.log main;

            location / {
                root html;
                index index.html index.htm index.php;
            }

            location ^~ /phpmyadmin/ {
                root /usr/share/phpmyadmin;
                index index.php;
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /usr/share/$fastcgi_script_name;
            }

            location /redmine/ {
                root /usr/local/lib/redmine-1.2/public;
                access_log /usr/local/lib/redmine-1.2/log/access.log;
                error_log /usr/local/lib/redmine-1.2/log/error.log;
                passenger_enabled on;
                allow all;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name;
                include fastcgi_params;
            }

            location /phpMyadmin {
                rewrite ^/* /phpmyadmin last;
            }
        }

    I don't know what the problem is; this is my second attempt to install Redmine on Debian with nginx.
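
    One hedged observation about the config above: with root /usr/local/lib/redmine-1.2/public, a request for /redmine/ is looked up at .../public/redmine/, which doesn't exist. For Phusion Passenger, the usual pattern for serving a Rails app under a sub-URI is a symlink into the site's document root plus passenger_base_uri; a sketch, assuming Passenger's nginx module is installed (paths illustrative):

        # in the server block (root must be the parent directory containing the symlink)
        root /var/www/html;
        passenger_enabled on;
        passenger_base_uri /redmine;

        # and on the filesystem:
        # ln -s /usr/local/lib/redmine-1.2/public /var/www/html/redmine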

  • WRTU54G-TM router with 3rd-party firmware; can custom firmware include stock binary portions?

    - by dlamblin
    I've been doing a lot of reading online about the Linksys WRTU54G-TM router model that I now own. It seems getting a custom firmware onto it is not a problem, but no one is talking about retaining the VoIP features (yet); so far everyone is disappointed that it isn't a SIP device and instead uses GSM over IPsec. Personally, I don't care about using it with non-T-Mobile service. If I take the original firmware, shouldn't I be able to extract it and its SquashFS image, and then move all of the T-Mobile-specific binaries that enable the calling features over to a custom firmware installation (maybe OpenWrt)? You might ask why, and the reason is that if I did this I could retain my calling features, which I do want, and also ssh to the router and use it to run additional software, as any OpenWrt router can. Does anyone know if this can be done, and how the firmware's binaries could be gotten at and installed correctly? Update: I have found someone working on third-party WRTU54G-TM firmware. I am still interested in the second part of my question: can't the stock firmware image be pulled apart and its closed-source binary kernel modules, if any, be moved into another, more flexible custom firmware?
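
    For the extraction half of the question, a sketch of the usual workflow with common firmware tools; the offset and file names vary per image, and whether the extracted binaries will actually run against a different kernel is exactly the open question:

        binwalk firmware.bin            # locate the SquashFS superblock inside the image
        dd if=firmware.bin of=fs.sqsh bs=1 skip=<offset-reported-by-binwalk>
        unsquashfs -d rootfs fs.sqsh    # unpack the root filesystem for inspection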

  • Solaris TCP stack tuning

    - by disserman
    We have a large web project (about 2-3k requests per second) using haproxy (http://haproxy.1wt.eu/) as a frontend and load balancer in front of the Java application servers. The frontend (haproxy) is running on Linux, but we are going to migrate it to Solaris 10, as all our other servers run Solaris. After switching the traffic over I see two things: a) the web site loads more slowly (5-10 seconds with images, compared to 2-3 seconds on Linux); b) sometimes haproxy fails to perform a "lifecheck" (fetch a special web page and analyze the HTTP response code) due to a socket timeout. After switching traffic back to Linux, everything is fine. I've tried tuning all the parameters I found in /dev/tcp, but no progress. I believe the problem is some open-socket limitation. If someone can point me to the answer, I would greatly appreciate it. P.S. haproxy runs under a Xen DomU on Linux (kernel 2.6.18, Debian 5) and under a zone on Solaris (10 u8). The only thing we changed on Linux was increasing ip_conntrack_max (I believe the Solaris option tcp_conn_req_max_q is the equivalent).
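
    For concreteness, the Solaris 10 knobs in question are read and set with ndd; a sketch of a typical starting point for a busy proxy, with values that are illustrative rather than recommendations:

        ndd -get /dev/tcp tcp_conn_req_max_q           # current listen backlog
        ndd -set /dev/tcp tcp_conn_req_max_q 4096      # completed-connection queue
        ndd -set /dev/tcp tcp_conn_req_max_q0 8192     # half-open (SYN) queue
        ndd -set /dev/tcp tcp_time_wait_interval 60000 # ms spent in TIME_WAIT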

  • Dual VGA monitors with Quadro FX 580?

    - by dentrasi
    I have a machine with a Quadro FX 580 card (one DVI output and two DisplayPort outputs). Attached to it are two 19" Acer screens, which are both (annoyingly) VGA. The first one works perfectly with a DVI-VGA adapter. The second one doesn't work: its VGA cable goes into a VGA-DVI converter, which then goes into a DVI-DisplayPort converter. Initially I was getting "Cable Unplugged" on the screen, and it couldn't be seen by Windows or the nVidia control panel. After swapping the VGA-DVI adapter (which works perfectly on another machine), Windows can now see the monitor, and the nVidia panel sees the model and native resolution, but I get a constant "No Signal" error. Switching to the other DisplayPort makes no difference. I suspect the card sees a DVI connection plugged into it (the nVidia control panel shows the monitor as having a DVI connection) and is only sending out a digital signal because of this. Does anyone know of a solution (other than trying to get a DisplayPort-VGA adapter), or of a way to force the card to treat the output as VGA? Thanks, ~Dentrasi

  • How to remove a bottleneck with the Squid caching proxy

    - by Volomike
    I'm more of a LAMP web developer trying to help the sysop. When I joined the project, I inherited some old PHP spaghetti code. Some of that code goes out to a third-party website (let's call it thirdparty.com) and pulls down content with an HTTP GET request. Unfortunately, the way the code is designed, it needs to do this several times a minute. When we looked at the bottlenecks on the server with 'netstat -a', we saw connections to thirdparty.com constantly running, when this content would be plenty fine gathered once a day. What I need to know is whether the Squid caching proxy server is the solution we need. I'm guessing it might let us have it pretend to be thirdparty.com on the network: if the web server needs to query thirdparty.com, it hits Squid instead, and Squid determines whether to supply content from cache or go to thirdparty.com for fresh content. Is this the solution we need? And second, is it easy to configure Squid to cache only the thirdparty.com requests?
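
    Squid can do this, though usually as an explicit forward proxy (the PHP side points its HTTP requests at Squid) rather than by transparently impersonating the remote host. A minimal squid.conf sketch for caching only that one domain, with an aggressive refresh rule; treat the numbers as illustrative:

        acl thirdparty dstdomain .thirdparty.com
        cache allow thirdparty
        cache deny all
        # serve cached copies for up to a day (1440 min), even if the origin says otherwise
        refresh_pattern -i \.thirdparty\.com 1440 100% 1440 override-expire ignore-reload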

  • DFSR NTFS permissions not working?!

    - by megadood
    I have two Windows 2008 Standard servers running DFSR without problems: I can create a file on one server and it replicates to the other, etc. The namespace shared folder on each server is shared with Full Control for Administrators and Change/Read permissions for Everyone. I then browse to the folder on server 1, e.g. \\server1\namespace\share\folder1, right-click the folder, and configure the NTFS permissions as I would like, for example: Administrators Full Control; one user with Read/Write access; no other users in the list. I save this and then check the second server, e.g. \\server2\namespace\share\folder1: right-clicking the same folder shows the NTFS permissions have replicated accordingly. I right-click the folder, go to Properties > Security > Advanced > Effective Permissions, and select a user that shouldn't be able to get into that folder, e.g. testuser. It agrees with the NTFS permissions and shows testuser with no ticks next to any permission, so access should be denied. I then log on to any network PC, or to the server, as testuser and browse to \\server1\namespace\share\folder1. It lets me straight in, with no access-denied message. The same applies to server2. It seems as though all my NTFS permissions are being ignored. I have one DFS share, and the subfolders are a mixture of private and public folders, so I really need the NTFS permissions to work. Any idea what's going on? Is this normal? From my tests, all users can access any DFSR folder under namespace\share, which is quite worrying. Thanks
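
    One low-level check worth doing, sketched with an illustrative path since the question doesn't give the real on-disk location of the replicated folder: dump the ACL with icacls against the folder's local path on the data drive, not through the namespace, and look for an inherited grant (for example BUILTIN\Users or Everyone) coming down from a parent folder:

        icacls D:\Shares\folder1

    If an inherited entry shows up there, that grant, rather than anything DFSR-specific, could be what lets testuser in, since the explicit ACL you set grants testuser nothing but also denies nothing.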

  • Multiple PHP versions on a single Apache installation

    - by getmizanur
    Hello, I have some old PHP scripts which run on PHP 5.2.x, and the current server has PHP 5.3.x. To get around this problem I have two options: downgrade to PHP 5.2.x, or install PHP 5.2.x and PHP 5.3.x side by side, with PHP 5.2.x served as CGI. I have decided to go for the second option. I followed this tutorial and got most of it working, except for the execution of the shell script which selects the php-cgi version; I cannot get Apache to execute this script. How do I get Apache to execute

        #!/bin/sh
        # you can change the PHP version here.
        version="5.2.6"

        # php.ini file location, */php-5.2.6/lib equals */php-5.2.6/lib/php.ini.
        PHPRC=/etc/php/phpfarm/inst/php-${version}/lib/php.ini
        export PHPRC

        PHP_FCGI_CHILDREN=3
        export PHP_FCGI_CHILDREN

        PHP_FCGI_MAX_REQUESTS=5000
        export PHP_FCGI_MAX_REQUESTS

        # which php-cgi binary to execute
        exec /etc/php/phpfarm/inst/php-${version}/bin/php-cgi

    My Apache vhost.conf:

        <VirtualHost *:80>
            ServerName 526.localhost
            DocumentRoot /home/getmizanur/public_html/www
            <Directory "/home/getmizanur/public_html/www">
                AddHandler php-cgi .php
                Action php-cgi /php-fcgi/php-cgi-5.2.6
            </Directory>
        </VirtualHost>

    Can someone tell me what I am doing wrong? Thanks in advance.

    Solution: if I ran a2dismod php5, the above configuration worked. With a2enmod php5 active, Apache was executing PHP 5.3 instead of PHP 5.2, even after being told to execute the PHP 5.2 shell script. To solve the problem, I had to change my virtual host configuration:

        <VirtualHost *:80>
            ServerName 526.localhost
            DocumentRoot /home/getmizanur/public_html/www
            DirectoryIndex index.php
            <Directory "/home/getmizanur/public_html/www">
                AddHandler php-cgi .php
                Action php-cgi /php-fcgi/php-cgi-5.2.6
                <FilesMatch "\.php">
                    SetHandler php-cgi
                </FilesMatch>
            </Directory>
        </VirtualHost>

    Presto, it started working.

  • How can I view the binary contents of a file natively in Windows 7? (Is it possible?)

    - by Shannon Severance
    I have a file, a little bigger than 500MB, that is causing some problems. I believe the issue is the end-of-line (EOL) convention used, and I would like to look at the file in its uninterpreted raw form (1) to confirm it. How can I view the "binary" of a file using something built in to Windows 7? I would prefer to avoid having to download anything additional. Based on other evidence, I believe the EOL convention is carriage return only. To know what is actually in the file, I would like to look at the binary contents, or at least the first couple thousand bytes, preferably in hex, though I could work with decimal or octal; just ones and zeros would be pretty rough to look at.

    (1) My coworker and I opened the file in text editors, and they show the lines as one would expect. But both text editors open files with different EOL conventions and interpret them automagically (TextEdit and Emacs 24.2). For Emacs I created a second file with just the first 4K bytes, using head -c4096 on a Linux box, and opened that from my Windows box. I attempted to use hexl-mode in Emacs, but when I switched to hexl-mode and back to text-mode, the contents of the buffer had changed, adding a visible ^M to the end of each line, so I'm not trusting that at the moment.
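
    Two built-in options that should work on a stock Windows 7 box, sketched from memory, so verify the exact flags: certutil can dump a file as hex (the 4K slice already cut with head is a good input; the file names here are placeholders), and PowerShell 2.0 can read raw bytes directly:

        certutil -encodehex first4k.bin first4k.hex
        more first4k.hex

        # PowerShell: hex-dump the first 4096 bytes of the big file
        Get-Content -Encoding Byte -TotalCount 4096 .\bigfile.dat | ForEach-Object { "{0:X2}" -f $_ }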

  • How to configure keepalived on Amazon EC2?

    - by oeegee
    I read some articles, such as "Keepalived over GRE tunnel for failover on VPS environment" (http://blog.killtheradio.net/how-tos/keepalived-haproxy-and-failover-on-the-cloud-or-any-vps-without-multicast/), but I don't know how to do the configuration, or what to call this architecture. All I know is how to set up a Master/Backup configuration in keepalived. What I want to know is how keepalived works, because I want to design this:

        XMPP Server (EC2)
              |
        -------------------------------------------------
        keepalived Master (EC2)  -  keepalived Backup (EC2)
        HAProxy #1                  HAProxy #2
        -------------------------------------------------
              |
        Cassandra#1  Cassandra#2  Cassandra#3  Cassandra#4

    Thanks! What I specifically want to know is how keepalived works with the unicast patch module; ELB is expensive. This was the first overall design (flow: ELB -> XMPP servers -> ELB -> Cassandra):

        ELB
         |
        XMPP#1  XMPP#2  XMPP#3  XMPP#4
         |
        ELB
         |
        Cassandra#1  Cassandra#2  Cassandra#3  Cassandra#4

    Then I changed the first design (flow: ELB -> XMPP servers -> HAProxy master (Cassandra farm) -> Cassandra):

        ELB
         |
        XMPP#1  XMPP#2  XMPP#3  XMPP#4
         |
        -------------------------------------------------
        keepalived Master (EC2)  -  keepalived Backup (EC2)
        HAProxy#1                   HAProxy#2
        -------------------------------------------------
         |
        Cassandra#1  Cassandra#2  Cassandra#3  Cassandra#4

    And this is the second alternative (flow: ELB -> HAProxy (XMPP farm) -> XMPP servers -> HAProxy (Cassandra farm) -> Cassandra). Is it OK?

        ELB
         |
        HAProxy#1  HAProxy#2  HAProxy#3  HAProxy#4
         |
        XMPP#1  XMPP#2  XMPP#3  XMPP#4
         |
        Cassandra#1  Cassandra#2  Cassandra#3  Cassandra#4
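
    Since EC2 blocks multicast, the usual approach is exactly the unicast mode the question mentions: each keepalived node lists its peer explicitly. A minimal sketch of the VRRP block; the addresses are placeholders, and note that on EC2 the floating address must additionally be moved with an API call in the notify scripts (e.g. reassigning a secondary private IP or Elastic IP), because the network won't honor gratuitous ARP:

        vrrp_instance VI_1 {
            state MASTER              # BACKUP on the peer
            interface eth0
            virtual_router_id 51
            priority 100              # lower on the peer
            unicast_src_ip 10.0.0.10  # this node
            unicast_peer {
                10.0.0.11             # the other HAProxy node
            }
            virtual_ipaddress {
                10.0.0.100            # the floating service address
            }
        }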
