Search Results

Search found 18096 results on 724 pages for 'let'.

  • How to setup RAID 1 with Intel RST on an existing Windows 7 system?

    - by instcode
    I'd like to set up RAID-1 using Intel Rapid Storage Technology on my Windows 7 64-bit system. I have a 1TB SATA HDD with the Windows 7 system installed on the first primary partition (leftmost, ~200GB); the rest of this HDD is unallocated (~800GB). I bought another 2TB SATA drive, created a primary partition (leftmost, ~500GB) and filled it with my data; the rest of this HDD is unallocated (~1.5TB). A quick disk layout (XXX is the unallocated region):

        HDD1 (1TB): [ 200GB C:\ SYSTEM | XXXXXXXXXXXX ]
        HDD2 (2TB): [ 500GB Z:\ PROGRAM | XXXXXXXXXXXXXXXXXXXXXX ]

    Now I want to create a 500GB RAID-1 volume (I'm not sure if "partition" is the correct word here) on the rightmost part of the two HDDs above, without losing any existing data on either disk. Here is the expected layout:

        HDD1 (1TB): [ 200GB C:\ SYSTEM | XXXXXX | 500GB D:\ DATA - RAID-1 ]
        HDD2 (2TB): [ 500GB Z:\ PROGRAM | XXXXXXXXXXXXXXXX | 500GB D:\ DATA - RAID-1 ]

    Setting data loss aside for a moment: is that final layout possible at all using Intel RST? I previously built this layout using dynamic disks and Windows software RAID, and it worked as expected; however, its resynching after an OS failure is ugly enough that I want to avoid it. If it is possible, is there a way to keep the data on the existing partitions untouched, or at least to keep the SYSTEM partition safe (I'm okay if the PROGRAM partition has to go)? And are there any strict or special steps I should follow in the Intel RST manager in order to achieve that? If the answer to all of the questions above is no, could you please suggest some other possible layouts that leave the C:\ SYSTEM partition untouched?

  • WSUS performance for unneeded updates

    - by mhouston100
    We have a WSUS server serving around 300 PCs and a couple of dozen servers, and a discussion came up at work as to which products to include. We have a single SQL 2005 instance on one of the servers and it has NEVER been updated. My first thought was to just tick the box for SQL 2005 and let WSUS do its thing, upgrading it to the latest service pack at least. One of the other guys here is of the opinion that having updates which are relevant to only a small selection of hosts would affect the performance of WSUS as a whole, claiming that each update does a "check" against all the hosts, or something similar. My argument is that manually updating these servers is obviously not working, as the admins are not paying attention to what is needed. So my questions are:

    Do updates that only apply to a subset of the hosts affect the overall performance of the WSUS server in relation to ALL the hosts? (Disk space is not an issue at this point.)

    Is there any performance justification for or against manually updating small numbers of products?

    Basically I need a rebuttal to his argument, and I'm unable to find any concrete documentation to prove him wrong.
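
    For what it's worth, one quick way to quantify how much extra approval work a product selection actually creates (a sketch, assuming the UpdateServices PowerShell module that ships with WSUS on Server 2012 and later; older consoles expose the same counts in the Updates view):

        # Count unapproved updates that at least one client still needs or has failed:
        Get-WsusUpdate -Approval Unapproved -Status FailedOrNeeded | Measure-Object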

  • Scenario - NTFS Symbolic Link or Junction?

    - by Unsigned
    Differences:

                        Absolute   Relative   File   Directory   UNC
        Symbolic link      ✓          ✓         ✓        ✓         ✓
        Junction           ✓          x         x        ✓         x

    Scenario: let's assume we're creating a reparse point for the redirect C:\SomeDir => D:\SomeDir. Since this scenario only requires local, absolute paths, either a junction or a symlink would work. In this situation, is there any advantage to using one or the other? Assume Windows 7 for the OS, disregarding backward compatibility (prior to Vista, symlinks are not supported).

    Update: I have found another difference.

    Symbolic link - the link's permissions only affect delete/rename operations on the link itself; read/write access (to the target) is governed by the target's permissions.

    Junction - the junction's permissions affect enumeration; revoking permissions on the junction will deny file listing through that junction, even if the target folder has more permissive ACLs.

    The permissions make it interesting, as symlinks can allow legacy applications to access configuration files in UAC-restricted areas (such as %ProgramFiles%) without changing existing access permissions, by storing the files in a non-restricted location and creating symlinks in the restricted directory.
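
    For reference, a minimal sketch of how the two reparse points in this scenario would be created (mklink is built into Vista and later; run from an elevated prompt, with D:\SomeDir already existing):

        rem As a directory symbolic link:
        mklink /D C:\SomeDir D:\SomeDir

        rem As a junction (always a directory, so no /D switch):
        mklink /J C:\SomeDir D:\SomeDir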

  • Nginx configuration leads to endless redirect loop

    - by brianthecoder
    So I've looked at every sample configuration I could find, and yet every time I try to view a page that requires SSL, I end up in a redirect loop. I'm running nginx/0.8.53 and Passenger 3.0.2. Here's the SSL config:

        server {
            listen 443 default ssl;
            server_name <redacted>.com www.<redacted>.com;
            root /home/app/<redacted>/public;
            passenger_enabled on;
            rails_env production;
            ssl_certificate /home/app/ssl/<redacted>.com.pem;
            ssl_certificate_key /home/app/ssl/<redacted>.key;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X_FORWARDED_PROTO https;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header X-Url-Scheme $scheme;
            proxy_redirect off;
            proxy_max_temp_file_size 0;

            location /blog {
                rewrite ^/blog(/.*)?$ http://blog.<redacted>.com/$1 permanent;
            }

            location ~* \.(js|css|jpg|jpeg|gif|png)$ {
                if (-f $request_filename) {
                    expires max;
                    break;
                }
            }

            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }
        }

    Here's the non-SSL config:

        server {
            listen 80;
            server_name <redacted>.com www.<redacted>.com;
            root /home/app/<redacted>/public;
            passenger_enabled on;
            rails_env production;

            location /blog {
                rewrite ^/blog(/.*)?$ http://blog.<redacted>.com/$1 permanent;
            }

            location ~* \.(js|css|jpg|jpeg|gif|png)$ {
                if (-f $request_filename) {
                    expires max;
                    break;
                }
            }

            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }
        }

    Let me know if there's any additional info I can give to help diagnose the issue.
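
    One hedged guess worth testing (a sketch, not a confirmed fix): proxy_set_header only applies to proxy_pass backends, not to Passenger, and Rails decides whether a request is already SSL from the X-Forwarded-Proto header spelled with dashes, not X_FORWARDED_PROTO. Under Passenger 3 the analogous setting in the 443 server block would be something like:

        passenger_set_cgi_param HTTP_X_FORWARDED_PROTO https;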

  • Faster, secure protocol/client/server required for long-distance transfer

    - by Chopper3
    I've run into a problem and I'm looking for a new secure protocol/client/server that's faster over a 1Gb/s fibre link - let me tell you the story...

    I have a pair of redundant, diversely-routed, 1Gb/s links over a distance of around 250 miles or so (not dark fibre, but a dedicated point-to-point link, not a mesh). At the 'client' end I have an HP DL380 G5 (2 x dual-core 2.66GHz Xeons, 4GB, Windows 2003EE 32-bit); at the 'server' end I have an HP BL460c G6 (2 x quad-core 2.53GHz Xeons, 48GB, Oracle Linux 5.3 64-bit). I need to transfer around 500 x 2GB files per week from the client to the server machine - but the transfer NEEDS to be secure. Using either iPerf or regular FTP I can get ~80MB/s of transfer pretty consistently, which is great. Using WinSCP or Windows SFTP I can't seem to get more than ~3-4MB/s; at this point the server's CPU is 3% busy while CPU0 of the client goes to ~30% utilised. We've tried editing various TCP window sizes with little success. Both ends are connected to quite low-usage Cisco Cat6509s with Sup720s. I can replace the client machine with a newer machine and/or move it to Linux - but this will take time. Clearly these single-threaded secure Windows clients are introducing too much latency doing their encryption. So, a few questions/thoughts:

    Are there any higher-performing secure protocols or client software for Windows that I could try? I'm pretty protocol-agnostic as long as it'll work between Windows and Linux.

    Should I be using hardware to do the encryption, either in the client or the network parts? If so, what would you recommend?

    I'm not convinced that just swapping the server would be that much faster - the CPU was only at 30%, but then again that's higher than I'd have expected given the load. Moving to Linux at the client end may be a better idea, but would be quite disruptive.

    Am I missing a trick? Thanks in advance.
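
    Since the suspicion is single-threaded encryption overhead, one cheap experiment before buying anything (a sketch using standard OpenSSH options; cipher availability depends on the builds at both ends, and arcfour in particular trades cryptographic strength for speed):

        # Force a lighter cipher and disable compression for one test copy:
        scp -c arcfour -o Compression=no testfile.dat user@server:/data/

    If that multiplies the throughput, the bottleneck really is the crypto, and patched clients built for long fat networks (the HPN-SSH patch set, for example) become worth a look.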

  • How can I confirm that the burn process completed successfully?

    - by infant programmer
    Just now I tried to make a duplicate copy of my Windows XP Pro SP3 CD. I set the speed to 32X, while the max speed allowed is 48X, and checked "Verify the data after burning". I didn't watch how far the burn got, but after 15 minutes (which is a sufficient period to burn the disc) I heard the "Fatal Error" sound. When I looked at the monitor, there was a message box showing "Error while burning Disc"; I saved the error log, which you can find here: click_me. How do I confirm whether the burning process actually completed? My PC is able to read the CD, and the CD is bootable too. I am not sure whether it was the burning process that failed or the verify process. Please let me know if you have come across or are aware of such a situation. As it is an XP installation CD, I don't want to find out by trial and error. Regards,
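
    One burner-independent way to check the disc is to hash it against the source image and compare (a sketch - the tool and device names are examples, and this assumes the copy was made from an ISO image whose exact size is known, since a CD read-back must be truncated to the image length before hashing):

        # On a Linux machine or live CD:
        md5sum original.iso
        dd if=/dev/cdrom bs=2048 count=$(( $(stat -c%s original.iso) / 2048 )) | md5sum

    Matching checksums mean the burn is byte-for-byte good, regardless of what the burning software reported.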

  • Microsoft Excel not graphing

    - by SmartLemon
    I'm not sure if this is a math question or an SU question. The experiment measured the period of one "bounce" when you hang a weight on a spring and let it bounce. I have this data here, one series being mass and one being time. The time is an average of 5 trials, each of which is an average of 20 bounces, to minimize human error:

        t (s): 0.3049 0.3982 0.4838 0.5572 0.6219 0.6804 0.7362 0.7811 0.8328 0.869

    The mass is the mass that was used in each trial (they aren't going up in exact steps because each weight is slightly off - nothing is perfect in the real world):

        m (g): 50.59 100.43 150.25 200.19 250.89 301.16 351.28 400.79 450.43 499.71

    My problem is that I need to find the relationship between them. I know m = (k/4π²)·T², so I can work out k that way, but we need to graph it. I can assume the relationship is a square-root relation, though I'm not sure on that one; it appears to be the reverse of a square, so should it be 1/x² then? Either way my problem remains: I have tried 1/x, 1/x², sqrt, and x², and none of them produce a straight line.

    The problem for SU is that when I go to graph the data in Excel, I set the y-axis data (the weights), and then when I go to set the x-axis (the time), it just replaces the y-axis with what I want to be the x-axis. This only happens when I have the sqrt of m as the y-axis and I try to set the x-axis to the time.

    The math problem is: am I even using the right relation? To get a straight line it would need to be x = y^(1/2), right? I thought I was doing the right thing - it's what we were told to do - I'm just not getting anything that looks right.
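
    For the math half, a worked check using the ideal-spring formula implied above (assuming T = 2π·sqrt(m/k), which rearranges to the stated m = (k/4π²)·T²): plotting T² on the y-axis against m on the x-axis should give a straight line through the origin with slope 4π²/k. Estimating the slope from the first and last data points:

        slope ≈ (0.869² − 0.3049²) / (499.71 − 50.59) ≈ 0.6622 / 449.12 ≈ 0.001474 s²/g
        k ≈ 4π² / slope ≈ 39.48 / 0.001474 ≈ 26,800 g/s² ≈ 26.8 N/m

    Either squaring the times (T² vs m) or square-rooting the masses (√m vs T) should linearize this data; T² vs m is the conventional choice, since its slope gives k directly.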

  • How to fix GMail time stamps in Outlook?

    - by SWB
    One of my email accounts is hosted at an ISP with unreliable IMAP support, and I can't change it. Fortunately, I have my personal email set up on Google Apps for Domains, so I created another GMail account there and turned on GMail's features that allow me to send and receive mail through the ISP account using GMail ("Send mail as" and "Get mail from other accounts" in GMail settings on the Accounts tab). I'm now using Outlook to retrieve mail from the GMail account through IMAP, which in turn is retrieving mail from the ISP account through POP3.

    This basically works great, except for one very significant issue: prior to setting this up, I already had several months of mail in the ISP account that I had been accessing via IMAP. GMail grabbed all of this mail via POP3 at, let's say, noon on April 5. In GMail's web interface (and on my iPod touch, and in Mozilla Thunderbird), all is well: the messages are all shown with their original time stamps. But when Outlook downloads these messages from GMail via IMAP, the time stamps are all set to noon on April 5 (the time GMail downloaded them from the ISP via POP3). That's not good, especially since we're talking about hundreds of messages here over a time span of several months.

    How can I fix this and get Outlook to display the original time stamps?

  • Change source address based on destination IP

    - by hgj
    We have several "router" machines that gather a lot of external IP addresses on the same host and redirect, NAT or proxy the traffic to the internal network. They also act as routers for the machines on the internal network. This works fine; however, I am unable to build the routing table so that the source address changes based on the destination a machine from the internal network wants to access.

    Let's say I have a router that has public addresses P1 (5.5.5.1/24) and P2 (5.5.5.2/24). All traffic goes through P1, but if necessary the host is reachable on P2 too. This looks like the following, and works fine:

        > ip addr
        ...
        1: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether aa:bb:cc:dd:ee:11 brd ff:ff:ff:ff:ff:ff
            inet 5.5.5.1/24 brd 5.5.5.255 scope global eth1
            inet 5.5.5.2/24 brd 5.5.5.255 scope global secondary eth1:p2
        ...

    Now I want to use P2 as the source address if I want to access the Google DNS service, for example (8.8.8.8). So I add a row to the routing table:

        > ip route add 8.8.8.8 via 5.5.5.254 dev eth1 src 5.5.5.2
        > ip route
        ...
        default via 5.5.5.254 dev eth1
        5.5.5.0/24 dev eth1 proto kernel scope link src 5.5.5.1
        8.8.8.8 via 5.5.5.254 dev eth1 src 5.5.5.2
        ...

    But this does not work. If I ping 8.8.8.8, the host still uses P1 as the source address, and does not use P2 at all for outgoing connections. Am I doing it right? I guess not...
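
    As a quick diagnostic (standard iproute2, shown with the addresses from the post), the kernel can be asked which route and source address it would pick for that destination:

        > ip route get 8.8.8.8
        8.8.8.8 via 5.5.5.254 dev eth1  src 5.5.5.2

    If the src hint from the host route is being honoured, the output shows src 5.5.5.2. Note that ping can also be forced onto a source address with ping -I 5.5.5.2 8.8.8.8, which helps separate "the route is wrong" from "the application picked its own source".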

  • Datacentre Rack naming convention with flexibility for reassignment of server roles

    - by g18c
    We are just shifting across to a new rack, and until now we have used names of cartoon characters. This is not going to work anymore, and we need a better naming convention. Physically, I would like to name the servers by location, and then have an alias for each one's actual function/customer.

    Physical name: LONS1R1SVR1, meaning London, suite 1, rack 1, server 1.

    Customer alias: since the servers can be reassigned from time to time, for the above physical server name I would keep an alias as a column in a spreadsheet, set to the customer's host name, e.g. www.customerserver1.com.

    Patching: for patching, I am looking at labelling the physical connections, i.e.:

        LONS1R1SVR1-PWR1
        LONS1R1SVR1-PWR2
        LONS1R1SVR1-ETH0
        LONS1R1SVR1-KVM

    Ultimately, if I am labelling cables, I really want to avoid putting something like LONS1R1SQLSVR on any patch cord, in case the server gets reformatted and changed from a SQL server to a WWW server, which would mean relabelling all the patch cords as well. In addition, once virtual machines are thrown in, I got confused very quickly. I appreciate that it may be confusing to have both a physical host name and a customer alias. Please let me know what you run with, and any other standards or best practices that I can follow.
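
    One common way to implement the alias layer so it survives role changes (a sketch, assuming an internal DNS zone; the zone name and IP are placeholders, the host names are the examples from above): publish the physical name as the A record and hang customer/function names off it as CNAMEs.

        ; dc.example.net zone - the physical name owns the address
        lons1r1svr1        IN A      10.1.1.21
        ; role/customer names are pointers, repointed on reassignment
        customerserver1    IN CNAME  lons1r1svr1

    Rack labels and patch-cord labels then only ever carry the physical name, and a role change becomes a one-line DNS edit.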

  • FreeNAS pool configuration - RAID1 + other drives

    - by trnelson
    Simple questions, really. I found this answer with a similar setup, but I'm not sure it answers my question - and if it does, I'm curious why, since the answer seems a bit unsure: ZFS Hard Drive Configuration in FreeNAS.

    I'm building a server which will be used primarily for backup, plus some media streaming, possibly with Plex. I think I understand most of what I need, but I'm still a bit confused about how pools work and how to configure them for my scenario. I will have 2x 2TB WD Red drives, which I plan on using in a mirrored setup (RAID1). This would be for backup, and I'd also like to do offsite backup from this array to my CrashPlan account. I also have a few other drives: 1.5TB, 320GB, 250GB. I'm not sure exactly what to do with them yet, but I'm looking for options. The FreeNAS OS will be running from a 16GB USB flash drive.

    Would it be wise to use the 1.5TB as a backup of the backup, essentially as a mirror or perhaps for snapshots of the 2TB RAID1? I'm still learning about snapshots.

    Should the 2TB mirrored drives be in their own pool? Should the other drives be set up in their own pools as well, or should they be JBOD in a single pool? They may or may not get much use, since the 2TB array is plenty for me.

    Does a dataset basically mimic the idea of a partition or a network share? In other words, would I map \\SERVER\Share to X: on my laptop?

    Let's say I wanted to use the 250GB drive as an encrypted drive to store all of my cat pictures. Would it have to be in its own pool?

    If I use jails apps, should they go on the backup RAID1, or somewhere else?

    Thank you!
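
    For orientation, here is roughly what FreeNAS does underneath when you build the mirrored pair, sketched as raw ZFS commands (the GUI drives this for you; the pool name and device names here are just examples):

        # One pool made of the two 2TB Reds mirrored:
        zpool create tank mirror /dev/ada1 /dev/ada2
        # Datasets subdivide the pool; each can be snapshotted and shared:
        zfs create tank/backup
        zfs create tank/media
        # A snapshot is per-dataset, instant, and space-efficient:
        zfs snapshot tank/backup@nightly

    In that sense a dataset is closer to a flexible partition than to a share; exporting it over CIFS is the separate step that makes it visible as \\SERVER\Share.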

  • How to safely move where iTunes saves music, iPhone apps, and metadata to another internal drive?

    - by GingerLee
    In the past, when I have moved my iTunes data from one computer to another, I have usually just copied the contents of two folders:

        %USERPROFILE%\Music\iTunes
        %USERPROFILE%\AppData\Roaming\Apple Computer

    following these steps:

    1) Install iTunes on the new computer, start it and close it (don't let it search for music).
    2) Copy all the files in the above folders from the old PC to the new PC.
    3) Start iTunes and authorize the new computer (and deauthorize the old one).
    4) Before syncing, update all iPhone apps to their current versions, both on my iPhone and in iTunes.
    5) Then sync.

    These steps always work for me; iTunes on my new PC basically behaves exactly as it did on the old PC.

    My question: in the hope of bypassing the above steps in the future, I would like iTunes to use another internal drive that I use for file storage (e.g. D:\) as the path for the above two directories. Then, if I move to a new PC again, I could just set up iTunes to use the correct path. Is that possible with minimal complications? If so, how?
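
    The iTunes Media folder itself can be relocated from Edit > Preferences > Advanced, but the two folders above are fixed paths, so the trick usually suggested is an NTFS junction (a sketch for Windows 7, not an Apple-documented procedure; run from an elevated prompt with iTunes closed, and D:\iTunes is a placeholder):

        rem Move the data, then leave a junction at the old location:
        robocopy "%USERPROFILE%\Music\iTunes" "D:\iTunes" /E /MOVE
        mklink /J "%USERPROFILE%\Music\iTunes" "D:\iTunes"

    iTunes keeps reading and writing the old path and never notices the data now lives on D:.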

  • Rackspace Ubuntu 12.04 server stuck in initramfs after kernel upgrade

    - by Znarkus
    I can't boot after I did an aptitude full-upgrade and let it update menu.lst (I did a diff first and it looked good). This is what I've done so far in the BusyBox shell:

        mkdir /tmp/xvda1
        mount /dev/xvda1 /tmp/xvda1
        chroot /dev/xvda1
        nano /boot/grub/menu.lst

    The file looks like this:

        title Ubuntu 12.04.1 LTS, kernel 3.2.0-31-virtual
        root (hd0,0)
        kernel /boot/vmlinuz-3.2.0-31-virtual root=UUID=/dev/xvda1 ro quiet splash
        initrd /boot/initrd.img-3.2.0-31-virtual

        title Ubuntu 12.04.1 LTS, kernel 3.2.0-31-virtual (recovery mode)
        root (hd0,0)
        kernel /boot/vmlinuz-3.2.0-31-virtual root=UUID=/dev/xvda1 ro single
        initrd /boot/initrd.img-3.2.0-31-virtual

        title Ubuntu 12.04.1 LTS, kernel 3.2.0-24-virtual
        root (hd0,0)
        kernel /boot/vmlinuz-3.2.0-24-virtual root=UUID=/dev/xvda1 ro quiet splash
        initrd /boot/initrd.img-3.2.0-24-virtual

        title Ubuntu 12.04.1 LTS, kernel 3.2.0-24-virtual (recovery mode)
        root (hd0,0)
        kernel /boot/vmlinuz-3.2.0-24-virtual root=UUID=/dev/xvda1 ro single
        initrd /boot/initrd.img-3.2.0-24-virtual

        title Ubuntu 12.04.1 LTS, kernel 3.2.0-24-generic
        root (hd0,0)
        kernel /boot/vmlinuz-3.2.0-24-generic root=UUID=/dev/xvda1 ro quiet splash
        initrd /boot/initrd.img-3.2.0-24-generic

        title Ubuntu 12.04.1 LTS, kernel 3.2.0-24-generic (recovery mode)
        root (hd0,0)
        kernel /boot/vmlinuz-3.2.0-24-generic root=UUID=/dev/xvda1 ro single
        initrd /boot/initrd.img-3.2.0-24-generic

        title Chainload into GRUB 2
        root (hd0,0)
        kernel /boot/grub/core.img

        title Ubuntu 12.04.1 LTS, memtest86+
        root (hd0,0)
        kernel /boot/memtest86+.bin

    From what I remember, the upgrade added the UUID= string. Should I remove it? Or rather, how do I get my system back online again? Thanks.

    Update: it seems I can't even edit the file:

        [ Error writing /boot/grub/menu.lst: Read-only file system ]
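
    Two hedged observations, offered as a sketch rather than a verified fix. First, the read-only error just means the rescue shell mounted the filesystem read-only, so remount it writable before editing:

        mount -o remount,rw /dev/xvda1 /tmp/xvda1

    Second, root=UUID= is supposed to carry an actual filesystem UUID (the value that blkid /dev/xvda1 prints), not a device path, so root=UUID=/dev/xvda1 would indeed fail to find the root device; either plain root=/dev/xvda1 or root=UUID=<real-uuid> would be self-consistent.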

  • Is there a monitoring software suite that will alert me if it has received no activity in a time period?

    - by matt b
    This might be a very basic question, but I am not very familiar with the exact features of Nagios versus Munin versus other monitoring tools.

    Let's say we have a process that needs to run daily for some very important infrastructure reasons. We've had cases where the process did not run, or was otherwise down, for a number of days before anyone noticed. I'd like to set up a system that will enable me to easily know when the daily run did not take place for some reason.

    I can set up this process to send an email on every successful run (or every failed run), but I do not trust that the people receiving this email would notice an absence of an "I'm OK" message. What I am envisioning is some type of "tripwire" service which this V.I.P. (very-important-process) can send a status message to each time it runs, whether successfully or not; and if the "tripwire" service has not received any word from the VIP within a configurable amount of time, it can then send an alert to someone. (The difference between what I envision and the first approach I outlined is a service that sends a message only in abnormal conditions, rather than a service that sends messages each day that the status is normal/OK.)

    Can Nagios be set up to send an alert like this, if it has not heard from a certain service/device/process in N days? Is there another tool out there which does have this feature?
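
    Nagios can model exactly this pattern with a passive service plus freshness checking (a sketch of the relevant directives; the host and service names are placeholders, and the usual template boilerplate is omitted). The VIP submits a passive OK result each run; if nothing arrives within the threshold, Nagios runs check_dummy and raises a CRITICAL:

        define service {
            use                     generic-service
            host_name               batch01
            service_description     vip-daily-run
            active_checks_enabled   0
            passive_checks_enabled  1
            check_freshness         1
            freshness_threshold     93600    ; 26 hours, in seconds
            check_command           check_dummy!2!"No heartbeat from VIP in 26h"
        }

    The process itself reports in through the external command file or NSCA at the end of each run.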

  • Google 2-step verification: Should my phone know my password? [closed]

    - by Sir Code-A-Lot
    Hi, I just enabled 2-step verification for my Google account. I have installed Google Authenticator on my Android phone, and I set up an application-specific password for the Google account associated with my phone. This works great when just using installed apps like Gmail, Calendar and Google Reader. But if I want to access Google Docs, Google Tasks or any other website that requires a Google login, I don't seem to be able to use an application-specific password; I have to use my real password and then use Google Authenticator to generate a code for the next step.

    This means that if my phone is stolen, revoking the phone's password is pointless: the phone has already been verified, and all that is needed is my password, which the phone's browser will have remembered. I realize that I can take measures to ensure the phone's browser doesn't remember my password, but that's just not convenient at all.

    Am I missing something, or is there no elegant solution to this? Should I just let my phone know my real password? As I see it, being able to log in with application-specific passwords on websites (which apparently isn't possible) is the only way I could revoke my phone's access in a meaningful way.

  • IDE compatibility with SATA image

    - by Ormis
    We had an old CNC machine's hard drive fail recently. The hard drive is an old 1275MB IDE (Seagate), and there were definitely bad sectors on it. I was able to image the contents of the drive onto a drive in my computer before it became completely unusable (I used dd, replacing all bad sectors with 0s). After running a couple of chkdsks, the SATA drive will boot off of the image. This is great, but there's one problem: the CNC machine is old and requires IDE. I've attempted to copy the currently booting image off of the SATA drive onto IDE drives numerous times in numerous ways, and every time I do so the machine reports that a boot device cannot be found.

    Some other information:

    - The file system is FAT32, running Windows 98.
    - The SATA drive is an 80GB drive.
    - I have tried copying the image to three 20GB and two 80GB IDE drives.
    - I have checked the jumpers on the back of the IDE drives when using them.

    If anyone has any ideas, questions, suggestions, etc., please let me know.

    P.S. I would just put a fresh install of Win98 on the machine if I had the installation media (so that's out of the question). And if it comes to it, this is my last week working here, so I'll leave that to my co-worker.

    EDIT: Also, I have tried using Clonezilla as well as straight-up dd to copy the image to the IDE drives.
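
    For reference, the tolerant-imaging invocation in question looks like this (a sketch; device and file names are examples - conv=noerror,sync is what keeps dd going past read errors and pads the unreadable blocks with zeros, matching what was described above):

        dd if=/dev/sda of=ide_disk.img bs=4096 conv=noerror,sync

    One hedged guess on the boot failure: machines of that era boot via CHS geometry stored in the MBR/partition table, so an image copied byte-for-byte onto a drive that the old BIOS sees with different geometry (or one beyond its size limits) can hold perfectly intact data and still report "no boot device".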

  • Windows 7 hangs on black screen for a while after log in

    - by steini
    I get the welcome screen, I click on my user, and I get the "logging on" screen. After that, all I get is a black screen with a mouse cursor. I can't even start Task Manager - no Ctrl+Alt+Del or Ctrl+Shift+Esc. It stays like this for about 10 minutes, then the desktop finally starts loading. According to the HDD LED on my case, Windows isn't even trying to access the hard drive for that whole time; it's just hanging, doing nothing, it seems.

    What I have tried:

    - Uninstalled the video driver and removed leftovers with Driver Sweeper
    - Disabled all startup programs and non-Microsoft services
    - Loaded "last known good configuration"
    - Ran the alleged "black screen fix" from Prevx against my better judgement (I don't really like running random EXEs without knowing what they do at all)

    None of that works. I can boot into safe mode normally. My specs:

    - i7 920
    - Gigabyte X58-UD3R
    - Gigabyte HD5870 1GB
    - 12GB Mushkin Silverline 1333MHz
    - Windows 7 Ultimate x64

    I'm also having another problem which I suspect is related. After I have gotten the computer up and running, everything works perfectly, but when it's been on for a while it starts behaving strangely when changing display modes: when I start up a game or anything else that changes the screen resolution, the computer freezes for about a minute, every time, until I reboot. I think this is probably related to the black screen problem. Just thought I'd check to see if anyone has had the same problem. Let me know if I should post any more details about my system to help diagnose this. Thanks in advance.

  • Ownership of hard drive / recovery of files in Windows 7

    - by Jeff
    Here is the issue: I have an old laptop that died that was running XP. I now have a Win 7 laptop, and I need to get the files off the old drive. Win 7 will not let me take ownership of the drive.

    I can run the command prompt in regular mode and do a dir on the drive, which shows up as Q: in regular mode; it gives me the volume label as C and the serial number. But I cannot take ownership in regular mode with the command prompt or by other means. The Microsoft site says to use safe mode with networking. So I go to safe mode, where the drive shows up as G:. I use the command takeown /f G: and get "the device is not ready" error.

    I am at a loss. All I want is to retrieve my files from this drive. Any ideas or suggestions? I don't see how you can dir and get some info in one mode and not access the drive in another. I have to get ownership and permissions fixed to get into the drive and retrieve my files. Thanks in advance.

    I might add that I am using a USB 3.0 to IDE/SATA cable adapter. Software came with the device, but I can't make heads or tails of the manual to know whether any of it can help me. The software is PCClone EX Lite and CloneDrive.
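
    For reference, once the drive itself is readable, the usual two-step for claiming an entire volume looks like this (standard Windows 7 commands, run from an elevated prompt; G: as in the post, with the recursive flags walking every file and answering the prompts automatically):

        takeown /f G:\ /r /d y
        icacls G:\ /grant Administrators:F /t

    A "device is not ready" error, though, usually points at the drive or adapter connection rather than at permissions, so the ACL step only matters once the volume mounts reliably.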

  • Memory overcommitment on VMware ESXi 5.0

    - by Tibor
    I would like to better understand the possibilities of VMware ESXi memory overcommitment. I've read this paper from VMware, so I am familiar with the general concepts, such as hypervisor swapping, memory ballooning and page sharing. It seems that a combination of these techniques allows for quite a large degree of overcommitment; however, I am not sure.

    I am deploying a virtual test lab comprising 4 identical sets of virtual servers and workstations and a couple of virtual router instances. Overall, I expect to be running around 20 virtual machines, with Windows XP, Windows 7 and Ubuntu for the workstation hosts, as well as CentOS and Windows 2008 Server instances for the servers. The problem, however, is that the host machine only has 12GB of RAM, and I don't have the option of adding more. I would like to know the best way to configure the hosts in order to achieve reasonable performance within the constraints. I have these options:

    - Allocate as little RAM as possible to each virtual machine.
    - Allocate an extraordinary amount (such as 4GB per instance) and let the balloon driver do the rest.
    - Something else?

    Which would work better? The machines will mostly be idle, so I don't have any major performance expectations, but they should run reasonably smoothly nevertheless.

  • Locked out of our Comcast Business Gateway for seemingly no reason

    - by Tyler
    A little backstory... Last fall we migrated our ISP from AT&T to Comcast at one of our offices. At that time, we received a new modem/router from Comcast and we configured everything to our liking. We've never really had very many issues with the router aside from having to restart it every once in a while.

    Here's the problem... About three months ago I changed the password on the router from the default. After that, I logged into the router several times to make changes with no issue. During May I logged into the router to add two new static routes, no problems. A week ago, I tried to log into the router and could not. I tried the non-default password that I changed it to, the default, anything and everything I could think of, and no luck. I restarted the router on Monday thinking it may just be locked up, but after the restart it would still not let me log in.

    This router is at our other office about 2 hours from here, and I want to avoid having to drive down there and reset to factory defaults, reconfigure, etc. Any ideas?

  • PHP on several servers with session-sharing

    - by Etu
    There are certainly other threads about this, but I have one more question. We are about to scale the website at work to have more than one server, and we need to share sessions between the servers. We have been looking into different solutions; one is Memcached, using it as the session handler in PHP. That will probably work. The idea would be to run memcached on every machine, let all webservers access every other server's memcached instance, and then we have shared sessions between the machines, yay. (We have no resources to set up sticky sessions yet - that's a later project; we need this running, and we need it running now. We will load-balance with DNS for a start.)

    But then... if I want to take one server down, say for maintenance, or a server crashes, or whatever the reason - I don't want the users to just lose their sessions and have to start from the beginning. That's why we need some kind of replication, which memcached does not support. Then I found repcached (http://repcached.lab.klab.org/), which does multi-master replication of memcached; that is great and is what I want. But does it work with 2 machines? Say 3, 5, 10? For future scaling. I also looked into redis (http://redis.io/), which also seems great, but it is a bit more "shaky" with the PHP session-handler support, and has no multi-master replication.

    The thing is that I like memcached, but I want to be able to power down one of two boxes without losing half of the sessions. Any suggestions?
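
    For reference, the PHP side of memcached-backed sessions is just configuration (a sketch for php.ini, assuming the pecl "memcached" extension; the older "memcache" extension spells things slightly differently, with tcp:// prefixes in save_path, and the IPs here are placeholders):

        session.save_handler = memcached
        session.save_path    = "10.0.0.1:11211,10.0.0.2:11211"

    Note that listing two plain memcached servers this way shards sessions across them rather than replicating them - which is exactly why the failover question above leads to repcached or similar.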

  • Broken Vista. Can't open Windows settings.

    - by serena
    My neighbor has a Lenovo laptop with Windows Vista Home Basic. She's a noob and just uses the laptop for internet purposes. She said she had to shut Windows down improperly (some time ago, maybe 6 months) because of a system freeze. She realized there was something wrong with her Windows when she tried to open the Windows Update settings. I took a look at the system and found the following errors:

    - When I click on Windows Updates, a bare white window opens for a second and closes immediately.
    - When I try to open Computer Properties, the same thing happens. (Windows+Break doesn't work either.)
    - When I try to open Bluetooth settings, the same thing happens.

    So Vista won't let me open any Windows settings, but installed programs work correctly (games, applications, etc.). She has no Windows Vista discs, since the laptop came with genuine Vista preinstalled, and she has no recovery discs either. I don't think there is a system restore point from a time when the system was stable. Now what can we do to solve this big problem?

  • How to safely send newsletters on VPS (SMTP) w/ non-hosted domain as "From" email?

    - by Andy M
    Greetings, I'm trying to understand the safest way to use SMTP. I'm considering purchasing a second virtual server mainly for email sending, on which I will set up PHPlist (a free open-source mailing program), so we have the freedom to send unlimited newsletters (...well, 10,000 per day at least, which requires a VPS rather than shared hosting).

    Here's my current setup with a paid mass-mailing service: I have a website - let's call it MyHostedDomain.org. I send newsletters with the From / Reply-To address as [email protected], which isn't hosted by me, but I have access to the email account.

    Can I more or less safely set this up with an SMTP server on a VPS? I.e., send messages using [email protected] as the visible address, but having it all go through my VPS SMTP? I cannot authenticate it, right? Is this too risky a practice? Is my only hope to use an address with a domain on the VPS, i.e. [email protected]?

    I already have a reverse DNS record for the domain hosted on my current VPS. I also see other suggestions, like SenderID and DKIM. But with all these things combined, will this still work? I don't want to get blacklisted, but the good thing is this is a somewhat private list, and users opt in to subscribe. So it's a self-made audience.

    (If it makes you feel better, this is related to a non-profit activity, not some marketing scam... it's for a good cause, I assure you!)
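
    On the SPF piece specifically, the mechanics look like this (a sketch; 203.0.113.25 is a placeholder for the VPS address, and example.org stands in for the From domain). The record has to be published in the DNS zone of the domain that appears in the From address - which is exactly the catch when that domain isn't hosted on the VPS:

        ; TXT record in the From-domain's zone, authorizing the VPS to send:
        example.org.  IN TXT  "v=spf1 ip4:203.0.113.25 ~all"

    If whoever controls that domain's DNS will add the VPS's IP (and later a DKIM key) for you, sending "as" their address becomes reasonably safe; without that, receivers have no way to tell your VPS from a spoofer.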

  • Disable internal display on Macbook Pro without closed lid mode?

    - by jslaker
    I have an early 2007 MacBook Pro running 10.5 that I've recently set up on a KVM with my primary desktop system. The problem I've run into is that I have a 20" 1680x1050 LCD, and OS X only provides options to mirror at the resolution of the built-in display or to span. Since the built-in display runs at 1440x900, this leads to running my LCD at a non-native resolution and a fuzzy picture. There isn't any option that I can find to simply disable the built-in display entirely and run the external LCD at its native resolution.

    I am aware of closed-lid mode, but the MBP was disassembled while in storage for about 6 months (I took it apart to pull the HDD), and the cable to the touchpad, which controls the sleep sensor, was damaged, meaning closed-lid mode won't work. I've looked into replacing the cable, but the cheapest I've been able to find it is $75-100, and I'm trying not to invest any more money into this computer, as it also has a completely dead battery and a few other minor problems.

    I've found the app SwitchResX, which appears to allow me to do what I need, but it has a lot of functionality I don't need and a ~$20 registration charge attached to it. An odd set of circumstances, I'm aware, but I was hoping somebody might know of an OS hack that would let me just disable the internal display and be done with it. :)

  • rdp allow client reconnect without password prompt after several hours

    - by Tom
    Let me describe the setup first:

    - a client PC with several RDP sessions to local servers, all opened from saved RDP sessions with stored passwords, using the standard Windows RDP client;
    - several Windows servers on the LAN, with varying server OS: Windows Server 2003, 2008, and even 2012 now.

    When I log onto my PC I open up RDP sessions to all those servers, and keep them open all the time for various reasons. Overnight the client PC is put into sleep or hibernate mode, thereby breaking the RDP connections. The next day, when I wake the client PC and log in again, the RDP sessions automatically try to reconnect to the servers, and this leads to the question:

    Starting with Server 2008, something apparently changed in the RDP server configuration: all servers with 2008, 2008 R2 and 2012 will prompt for the password in the RDP session, whereas the 2003 server RDP connections re-establish without the password prompt. Apparently there is a timeout setting on 2008+ that, when exceeded, requires reauthentication. Is there any way to set up the 2008+ servers to behave like 2003 did? I'd like the RDP sessions to reconnect without a password prompt even after a disconnect of several hours.
