Search Results

Search found 12887 results on 516 pages for 'small jam'.


  • Server 2008 R2 domain windows update strategy

    - by Joost Verdaasdonk
    Let me explain my question a bit. We are a small company that has just made the first move to a bigger network. For now the network consists of five Server 2008 R2 machines (DC, SQL, web, etc.). Everything we need is in place, but we cannot yet afford to finish the network by implementing redundant systems (secondary DC, DNS, SQL cluster, etc.). For some people this is hard to understand, but this is the current situation (we are aware of it and will fix it when we can). Because we want to keep our systems secure and up to date, I make sure that all of them are patched regularly. The problem, of course, is that the number of updates Microsoft rolls out that need a system reboot seems to keep growing (maybe I'm wrong and it just feels that way). In our domain, servers depend on each other for services (SQL, web, and so on), so rebooting a server at will is NOT a good idea. For now I install updates on all of them without rebooting; once they are all up to date, I bring them down in dependency order and then reboot them in the reverse order. I understand that if I DID have redundancy, updating and rebooting would not be such a problem, because another node could take over the server's tasks, but that is something we can only add later. So my question is: given the situation above, can you suggest update strategies or general ideas that could help me do this process in a better or faster way? Thanks for your thoughts!
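
    As a small illustration of the ordering step only (not the poster's actual topology): given a map of which server depends on which, a topological sort gives a safe startup order, and its reverse is the shutdown/reboot order. The host names and dependencies below are hypothetical placeholders.

    ```python
    # Minimal sketch: derive a safe shutdown/reboot order from a dependency map.
    # Host names and dependencies are hypothetical, not the poster's real setup.
    from graphlib import TopologicalSorter  # Python 3.9+

    # Each key depends on the servers in its value set.
    depends_on = {
        "dc01":  set(),
        "sql01": {"dc01"},
        "web01": {"sql01", "dc01"},
        "app01": {"sql01"},
    }

    # static_order() yields dependencies first, so it is the startup order;
    # the reverse is the order in which servers can safely be taken down.
    startup_order = list(TopologicalSorter(depends_on).static_order())
    shutdown_order = list(reversed(startup_order))

    print("shutdown/reboot order:", " -> ".join(shutdown_order))
    print("startup order:        ", " -> ".join(startup_order))
    ```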

    Read the article

  • How to organize deployment process in Chef-controlled environment?

    - by Alex
    I have a web Linux-based infrastructure which consists of 15 virtual machines and over 50 various services. It is fully controlled by Chef, and most of the services are developed internally. The current deployment process is triggered by a shell script. A build system (a mix of Python and shell scripts) packages the services as .deb files and puts them into a repo. It then runs apt-get update on all 15 nodes, because the standard Chef apt cookbook only runs it once per day and we definitely do not want to run apt-get update unconditionally on every chef-client wake-up. Finally, the build system restarts the chef-client daemons on all 15 nodes (we need this step because of Chef's pull nature). The current process has a number of drawbacks we want to address. First, it is asynchronous: the deployment script does not check the chef-client logs after the restart and does not even wait for the Chef clients to complete their run, so we don't know whether the deployment was successful. Second, we definitely do not want to force chef-client restarts on all nodes, because we usually deploy only a small number of packages. And third, I am not quite sure that using chef-client for deployment is legitimate at all; perhaps we have been doing it wrong from the start. Please share your thoughts/experience.
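
    As a sketch of the synchronous alternative (an assumption on my part, not the poster's tooling): instead of restarting the daemons, trigger an immediate chef-client run on each node over SSH and check the exit code, so the deploy script knows whether each converge succeeded. The host names below are placeholders.

    ```python
    # Minimal sketch: run chef-client synchronously on each node and report failures.
    # Assumes key-based SSH and sudo rights; host names are placeholders.
    import subprocess

    nodes = ["node01.example.com", "node02.example.com"]  # hypothetical inventory

    failures = []
    for node in nodes:
        # "--once" runs a single converge instead of relying on the daemon's schedule.
        result = subprocess.run(
            ["ssh", node, "sudo chef-client --once"],
            capture_output=True, text=True,
        )
        if result.returncode != 0:
            failures.append((node, result.stderr.strip()[-300:]))

    for node, err in failures:
        print(f"FAILED {node}: {err}")
    if failures:
        raise SystemExit(1)
    print("all nodes converged")
    ```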

    Read the article

  • Logitech Performance MX Mouse Jumps on OS X Lion (10.7.4)

    - by Adam Thompson
    I have a Logitech MX Revolution wireless mouse that I am trying to use with OS X Lion. Everything is working except for one problem: there is a small, but quite noticeable, jump when the mouse cursor is moved. The problem is most prevalent when dragging and dropping files or trying to highlight items, and it makes performing any task with the mouse accurately next to impossible. I did quite a bit of looking and found that all kinds of people have had mouse issues with OS X. I've tried all of the following with absolutely no success: using the official drivers from Logitech (these performed worse than the default mouse drivers in OS X); using SteerMouse as a third-party mouse driver (this worked ever so slightly better than the default driver, but still suffered quite frequently from the skipping problem); cleaning the sensor on the mouse and making sure the problem isn't caused by the surface it's being used on; testing the mouse on a Windows machine (it worked absolutely flawlessly there); and changing the channel my wireless router operates on, on the off chance my problems were the result of interference (this also had no effect). I can't think of anything else that could possibly interfere with the mouse. I am out of ideas on what to try, so I would really appreciate any suggestions. I should also mention that an old wired mouse I had lying around worked just fine when I plugged it in. That isn't really an acceptable solution, however, as I much prefer the MX Revolution.

    Read the article

  • Forcing programs to be installed to another drive

    - by zyboxenterprises
    I have an SSD as my main Windows drive, plus a 640GB 2.5" HDD partitioned to store programs and user settings and also to act as backup (it's the only thing I had lying around when building my PC). The goal was to make the PC as fast as possible while having extra storage capacity available for normal user data and for my small data recovery business. The problem is that whenever I install a program, it installs to C:\Program Files [or C:\Program Files (x86) for 32-bit programs], even though I have changed the environment variables. This wouldn't normally be an issue, except that every installer points its shortcuts at my 640GB HDD. The root layout of both drives: To clarify: program files get installed to C:\, while program shortcuts always point to Z:\, my 640GB HDD. Modifying the relevant environment variables doesn't do anything. I looked at this, but it only talks about modifying the registry and the environment variables, which I have already done. I install to the Z:\ drive when the installer lets me change the installation path, but sometimes it doesn't. Is there a way I can force every program to install to the relevant location on Z:\? Perhaps I'm missing something here? Edit: Found this program; would it be appropriate in my case? It would let me move the entire Program Files tree (and its x86 version) to Z:\ without hurting performance.
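
    For reference, here is a minimal sketch of checking (and optionally changing) the default Program Files locations in the registry. This is the same registry approach the poster says has already been tried; it is shown only to make the mechanism concrete. It must run from an elevated prompt, many installers hard-code C:\ and ignore these values anyway, and the Z:\ paths are hypothetical.

    ```python
    # Minimal sketch (Windows only): inspect and optionally redirect the default
    # Program Files locations. Many installers ignore these values, so treat this
    # as an experiment; the Z:\ paths below are hypothetical.
    import winreg

    KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        for name in ("ProgramFilesDir", "ProgramFilesDir (x86)"):
            print(name, "=", winreg.QueryValueEx(key, name)[0])

    # Uncomment to redirect future default installs (requires an elevated prompt):
    # with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    #     winreg.SetValueEx(key, "ProgramFilesDir", 0, winreg.REG_SZ, r"Z:\Program Files")
    #     winreg.SetValueEx(key, "ProgramFilesDir (x86)", 0, winreg.REG_SZ, r"Z:\Program Files (x86)")
    ```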

    Read the article

  • When should NTPd broadcast/broadcastclient be used instead of client/server or peer modes?

    - by Luke404
    The NTP daemon is often used in its simplest mode, client/server: you specify one or more server directives in your ntp.conf and your clients will use those servers. In addition, when you run your own NTP servers, it is good practice to peer them together, so that if one of them loses connectivity to its upstream servers, it can still get time from its peers. But ntpd can also work with broadcast and/or multicast distribution of time data, with the documentation stating: "broadcast and multicast modes are intended for configurations involving one or a few servers and a possibly very large client population". The documentation also says elsewhere: "It is possible and frequently useful to configure a host as both broadcast client and broadcast server. A number of hosts configured this way and sharing a common broadcast address will automatically organize themselves in an optimum configuration based on stratum and synchronization distance." I can see one obvious administrative benefit: you don't have to manually specify and update the list of NTP servers in the clients' ntp.conf, so it looks tempting to use broadcast mode even for a small client population (say 5+ clients with 3-4 servers). I expect network traffic to be a little higher with broadcasts than with client/server associations, but given the usual gigabit Ethernet LAN the impact should be negligible unless you have a very, very large number of hosts in the same broadcast domain. At the end of the day, when should broadcast mode be used or avoided? Are there pros and cons I haven't seen?

    Read the article

  • Strange problem with Google Mail and IMAP on Outlook 2007

    - by Alex C.
    I work for a small non-profit organization. We have about 35 administrative employees who use e-mail. We're on a Windows network with a domain, and everyone is running XP Pro and Office 2007 with all updates/patches. We used to use POP3 mail through a local provider; however, we recently signed up for a free Google Apps account and switched to IMAP mail through Google. Everyone uses Outlook 2007 as the client. For about ten days, everything was working fine. Yesterday afternoon, we suddenly developed a strange and annoying problem: every time you send an e-mail message, a copy of your outgoing message shows up in your inbox. It's as if you're adding your own address to the CC: line of every message. Nothing has changed on our end. I was hoping the problem was a temporary glitch that would resolve itself, but here we are about 24 hours later and it's still happening. I searched Twitter, and there were a handful of vague messages about issues with Google mail and IMAP, but I didn't see any references to this specific problem. Any thoughts on what's going on here and how to fix it?

    Read the article

  • My Mac OS X 10.5 netstat reveals a lot of open UDP connections.

    - by bboyreason
    Here are my netstat results (besides server-less connections):

        Active Internet connections
        Proto Recv-Q Send-Q  Local Address          Foreign Address         (state)
        tcp4       0      0  192.168.1.98.49224     r1.ycpi.vip.sp2..http   ESTABLISHED
        tcp4       0      0  192.168.1.98.49223     r1.ycpi.vip.sp2..http   ESTABLISHED
        tcp4       0      0  192.168.1.98.49203     lax04s01-in-f189.https  ESTABLISHED
        tcp4       0      0  192.168.1.98.49201     lax04s01-in-f19..https  ESTABLISHED
        tcp4       0      0  192.168.1.98.49198     lax04s01-in-f19..http   ESTABLISHED
        tcp4       0      0  192.168.1.98.49196     lax04s01-in-f19..https  ESTABLISHED
        tcp4       0      0  192.168.1.98.49194     lax04s01-in-f19..https  ESTABLISHED
        tcp4       0      0  192.168.1.98.49192     lax04s01-in-f19..https  ESTABLISHED
        tcp4       0      0  192.168.1.98.49183     r1.ycpi.vip.sp2..http   ESTABLISHED
        tcp4       0     37  192.168.1.98.49179     l1.login.vip.sp1.https  CLOSING
        tcp4       0      0  192.168.1.98.49175     lax04s01-in-f104.https  ESTABLISHED
        tcp4       0     37  192.168.1.98.49167     l1.login.vip.sp1.https  LAST_ACK
        tcp4       0      0  192.168.1.98.49164     lax04s01-in-f19..https  ESTABLISHED
        tcp4       0      0  192.168.1.98.49174     69.31.112.122.http      TIME_WAIT
        tcp4       0      0  192.168.1.98.49173     69.31.113.83.http       TIME_WAIT
        udp4       0      0  *.ipp                  *.*
        udp4       0      0  192.168.1.98.ntp       *.*
        udp4       0      0  *.49628                *.*
        udp4       0      0  *.51997                *.*
        udp4       0      0  *.64675                *.*
        udp4       0      0  *.61947                *.*
        udp4       0      0  *.65152                *.*
        udp4       0      0  *.55643                *.*
        udp4       0      0  *.51704                *.*
        udp4       0      0  *.59757                *.*
        udp4       0      0  *.53643                *.*
        udp4       0      0  *.65346                *.*
        udp4       0      0  *.61960                *.*
        udp4       0      0  *.*                    *.*
        udp6       0      0  localhost.ntp          *.*
        udp4       0      0  practivate.adobe.ntp   *.*
        udp6       0      0  localhost.ntp          *.*
        udp6       0      0  *.ntp                  *.*
        udp4       0      0  *.ntp                  *.*
        udp6       0      0  *.mdns                 *.*
        udp4       0      0  *.mdns                 *.*
        udp4       0      0  *.*                    *.*
        udp4       0      0  *.*                    *.*

    (I omitted a few asterisks; basically all the empty spots are asterisks.) What is up with all the UDP sockets listening on any port? Is that what this means? The only internet activity that should have been going on is that I connected via WPA to the Wi-Fi at a small restaurant, visited a few pages, and checked mail from a few different accounts; no new mail was received and no downloads were done.
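
    One way to see what those UDP sockets actually are is to map them to the processes that own them. A minimal sketch, assuming lsof is available (it ships with OS X); run it with sudo to see sockets belonging to other users:

    ```python
    # Minimal sketch: list which process owns each open UDP socket, using lsof.
    # -n/-P skip DNS and port-name lookups; run with sudo to see all processes.
    import subprocess

    out = subprocess.run(
        ["lsof", "-nP", "-i", "UDP"],
        capture_output=True, text=True,
    ).stdout

    for line in out.splitlines()[1:]:          # skip the header row
        fields = line.split()
        command, pid, name = fields[0], fields[1], fields[-1]
        print(f"{name:<30} {command} (pid {pid})")
    ```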

    Read the article

  • Could hybrid SSD + HDD be made with fixed internal partitions?

    - by Aaron
    I was pretty close to getting Seagate's Momentus XT but have been scared off by the many problems reported on forums and feedback sites, especially in MacBook Pros. So I'm waiting for mk 2, with some extra flash and better reliability, which I assume will come out this year. What would suit me better, though, is a 32GB + 500GB hybrid drive where I have more control over what is on the flash and what is on the disk: two physical partitions within the one 2.5" hard drive enclosure which use different media internally (32GB for core files and 500GB for data and multimedia). The partitions would be locked so they can't be changed. Or, even better, the disk controller just presents them to the OS as two disks sharing the same bus; perhaps it's OK if the BIOS only sees the first drive until the OS is loaded. Is either of these technically possible? Obviously it would be difficult to market outside the enthusiast segment. The SSD memory modules can be pretty small, right? So they could even be put on a card that plugs into a secondary connector in the enclosure. That would be good for computer builders as well as for upgrading and recovery. Then future operating systems could recognise these system SSD drives and automatically install the OS and swap files on them, while placing document libraries on the larger data drive. And while in the longer term the HDD will probably disappear, there will always be a trade-off between speed, storage size, and expense.

    Read the article

  • Suddenly getting lock timeouts with MySQL

    - by Marc Hughes
    We've got a web app hosted on Amazon Web Services. Our database is a multi-AZ RDS MySQL server running 5.1.57, and 3-4 app servers talk to it. Today we started seeing a lot of errors along the lines of "Lock wait timeout exceeded; try restarting transaction"; almost 1% of POST requests are seeing this. There have been no modifications to the code running on the site, no schema changes, and no big spike in traffic. I've been looking at the running processes, and none seem out of control. I tried scaling our RDS instance from a small to a large, with no effect. Two days ago Amazon had some outages; as part of the recovery, our RDS server and our app servers ended up in different availability zones, but all within the same region. Yesterday everything was fine, though, so I'm not convinced that's related. The lock timeouts occur in different types of requests and in different InnoDB tables. I have noticed that the number of open connections jumped when we started seeing problems, but that may be a symptom rather than a cause. What are my next steps in debugging this?
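
    A usual first debugging step is to look at what InnoDB itself reports about active and waiting transactions while the timeouts are happening. A minimal sketch, assuming the PyMySQL package and placeholder credentials (on MySQL 5.1 the INFORMATION_SCHEMA lock tables may not be available, so this just dumps the engine status and process list):

    ```python
    # Minimal sketch: dump InnoDB engine status and the current process list to look
    # for long-running or blocking transactions. Credentials are placeholders.
    import pymysql  # assumes the PyMySQL package is installed

    conn = pymysql.connect(host="your-rds-endpoint", user="admin",
                           password="secret", database="mysql")
    with conn.cursor() as cur:
        cur.execute("SHOW ENGINE INNODB STATUS")
        status_text = cur.fetchone()[2]        # row is (Type, Name, Status)
        # The TRANSACTIONS section lists active transactions and any lock waits.
        print(status_text.split("TRANSACTIONS")[-1][:4000])

        cur.execute("SHOW FULL PROCESSLIST")
        for row in cur.fetchall():
            print(row)
    conn.close()
    ```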

    Read the article

  • How do you optimize your Outlook Exchange + IMAP setup?

    - by Mike
    My company provides an Outlook/Exchange account we must use for mail and calendar. Like many companies, they unfortunately also provide a ridiculously small mail quota. I got tired of managing and backing up .pst files (since I'm always in my e-mail, there is never a good time to back it up), so I started storing my archived mail "in the cloud", using an IMAP server I set up on my Linux box. This has a few drawbacks for me: (1) IMAP (at least the implementation in Outlook) is very slow; furthermore, if I move a large number of messages to the IMAP server, it sometimes blocks the entire Outlook client for hours, which is quite annoying. (2) I can't use Exchange over HTTP for mail without launching a VPN session, because the client-side rules I use to organize my mail fail, and each rule gets disabled, if the IMAP server can't be reached. (3) If I reply to a message from my IMAP store, I have to specify an SMTP server willing to relay for me in order to send e-mail, unless I always remember to select my Exchange account while composing. The main advantage is that this setup is very easy to back up, with a couple of cron jobs that essentially do an rsync. Short of moving the IMAP server to my local host (which seems like it might have the same file-locking problems as using a .pst), my options seem limited for solving (1). I'd like to come up with a solution for (2) and (3), though. For problem (2), would it be possible to somehow tell Outlook that the IMAP server is "offline" and have it synchronize my changes during a periodic "send and receive"? If so, I wonder whether it would block the Outlook client, as it does in problem (1), and whether it would be compatible with the client-only rules I use to sort my mail into folders. I've looked all over the options menu and have not found a way to tell Outlook not to use a certain account for sending mail, which would solve (3). Is anyone else crazy enough to be doing something like this? Any ideas?

    Read the article

  • Which project management software for technophobes who've never worked with something like that?

    - by Ernst
    Hi, our director has asked me to get something to manage projects. Note that so far we haven't used anything of the sort. I haven't been given very clear instructions yet, probably because she doesn't know exactly what we need either; my guess is we'll only find out while using something. I've looked at a few: OpenWorkbench, GanttProject, and Microsoft Project. The latter has the advantage of easily importing users from Exchange; are there others that do that (even if not directly, then at least easily)? I don't think it's a critical requirement, though. We're using some other custom software where I have to add users manually anyway, and we're small enough that I only have to add or remove a user maybe once a month. In any case, I'm not in favour of buying anything, since I'm skeptical about us actually succeeding in putting it to good use, and even if we do, we will only discover what we really need during actual use. We're also not a tech shop; most people range from not very technically adept to technophobic, so we need something very user-friendly. I prefer to stay away from online solutions, since we deal with sensitive information and I prefer to keep that in-house, but I guess it would be acceptable for the trial period. An intranet site is an option, though, preferably something that is easy to set up with IIS. XPlanner Plus and Redmine I found too hard to set up for this experiment. Some other options I haven't yet tried to install, but which look too complex for our technophobes: Endeavour Software Project Management, Project-Open, Trac. Any suggestions? Thanks, Ernst

    Read the article

  • What are the replacement options for an IDE hard disk for a DOS based system?

    - by dummzeuch
    I have a few "embedded" systems running MS-DOS 6.2 which boot from, and store data on, IDE hard disks. Since these drives are nearing their end of life, the question is how we can replace them. The requirements are: DOS must be able to install on and boot from the drives; they must be able to sustain heavy (mostly write) access; and, if possible, they should survive moderate vibration (not too demanding, since the current hard disks have survived several years of it). I have considered the following options so far. Other IDE hard drives: unfortunately, modern IDE drives are too large for DOS to boot from, even if I create small partitions, and older IDE drives are just that: old, so they are probably not the most reliable choice any more. SSDs: there are a few SSDs with an IDE interface available; I have not yet tried them. Does anybody have any experience with them? They look like the ideal replacement, provided that DOS can boot from them and that write speed does not deteriorate too much (the old hard disks are no race cars either). CompactFlash: there are adapters for using CF cards with IDE controllers and they work fine; DOS can boot from them and they have no problems at all with vibration. What I am not sure about is their durability: DOS uses FAT, so the same few sectors are written every time the medium is written to. IDE-to-SATA converters: I have no idea whether they are any good. Has anybody tried them? It might be an option to use one of these to connect a SATA SSD to the system. Are there any alternatives that I have missed? (We are working on replacing these systems, but it will still take a few years.)

    Read the article

  • Formatting a memory stick with two partitions?

    - by Marius
    I have a 16GB memory stick which used to have a Linux partition. It therefore has two partitions: a 2GB FAT32 partition and a 14GB Linux boot partition. The Linux part stopped working, so I decided to reinstall it, but Windows can't see that partition. I tried formatting the whole disk, but I can only format one partition (the FAT32 one). There seems to be no way to combine the two partitions into one big one, and there seems to be no way for Windows to repartition the large part of the memory stick to put Linux on it. In the Windows partition manager, Windows sees the large unused partition and lets me delete it, but once I have deleted it I'm not allowed to format it. I also cannot delete or resize the small partition. So, to summarize: I have a memory stick with two partitions, Windows only sees one of them and won't let me use the other, and I would like to combine the two partitions so I can install Linux on the memory stick again.

    Read the article

  • LDAP Account Locked Out Sporadically after Password change - Finding the source of invalid attempts

    - by CityView
    On a small network of machines (<1000) we have a user whose account is being locked out at unpredictable intervals following a password change. We are having severe difficulty finding the source of the invalid logon attempts, and I would appreciate it greatly if some of you could go through the thought process and checks you would perform in order to fix the problem. All I know for sure is that the account is locked out several (5+) times a day; I can't even be sure it's due to failed login attempts, as there is no record of failure until the account is locked. So far I have tried: logging the account out of everything we can think of and back in with the new password; scanning the user's box for any non-standard software that might perform an LDAP lookup; checking all installed services on our production boxes to make sure none are attempting to run under the account; changing the user back to their old password (the problem persists, so perhaps the password change is a red herring); running Wireshark on a box where a lot of LDAP authentication is performed (rejects only occur after the account is already locked out); clearing the credential cache in Control Panel - User Accounts - Advanced; and looking at the local ... I'm at a loss for what to try next and am happy to try any suggestions you have in order to diagnose the issue. I think my question boils down to a simple request: I need a technique for finding the source (application/host) of the invalid login attempts that are causing the account to be locked. I'm not sure that's even possible, but I suspect there must be more I can try. Many thanks, CityView
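
    If the directory behind these accounts is Active Directory (an assumption; the post doesn't say which LDAP server is in use), lockout events on the domain controllers record which computer the bad attempts came from. A minimal sketch that shells out to wevtutil to pull recent lockout events (event ID 4740, whose "Caller Computer Name" field names the originating host); the DC name is a placeholder:

    ```python
    # Minimal sketch (Windows, Active Directory assumed): pull recent account-lockout
    # events (ID 4740) from a domain controller's Security log. The "Caller Computer
    # Name" field in each event shows where the failed attempts originated.
    import subprocess

    DC = "DC01"  # placeholder: query the DC holding the PDC emulator role

    result = subprocess.run(
        ["wevtutil", "qe", "Security",
         "/q:*[System[(EventID=4740)]]",
         "/f:text", "/c:20", "/rd:true",     # newest 20 events, plain text
         f"/r:{DC}"],
        capture_output=True, text=True,
    )
    print(result.stdout or result.stderr)
    ```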

    Read the article

  • Using Different Networks with Different Proxy Servers on Windows 7

    - by John
    Hi, I have a laptop running Windows 7 Professional. There are two wireless networks I connect to every day: home (no proxy server) and work (proxy server with authentication). On my iPad and iPhone I've got two Wi-Fi network profiles, one for home and one for work; the work one has the proxy server settings specified and the home one has no proxy. It all works great and I don't need to change any settings when I move from home to work or vice versa. On my laptop, however, I can't seem to get this going. I can certainly connect to both networks, but when I'm at work I have to go and change the proxy settings (in Internet Options) to be able to use the network, and when I'm back home I have to turn them off again. It's a small thing, but considering I have to do it every day, it's a bit annoying. Is there any way I can make Windows automatically switch proxy settings on or off based on the network I'm connected to? Thanks, John
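
    In the absence of a built-in per-network proxy setting, one workaround is a small script that checks which SSID is connected and flips the same WinINET proxy values that Internet Options edits. A minimal sketch; the SSID and proxy address are placeholders, and it would need to be run at logon or from a scheduled task triggered on network changes:

    ```python
    # Minimal sketch (Windows): enable the work proxy only when connected to the
    # work SSID. SSID and proxy address are placeholders.
    import subprocess
    import winreg

    WORK_SSID = "CorpWiFi"                       # placeholder
    WORK_PROXY = "proxy.corp.example.com:8080"   # placeholder

    out = subprocess.run(["netsh", "wlan", "show", "interfaces"],
                         capture_output=True, text=True).stdout
    ssids = [line.split(":", 1)[1].strip()
             for line in out.splitlines()
             if line.strip().startswith("SSID")]
    at_work = WORK_SSID in ssids

    key_path = r"Software\Microsoft\Windows\CurrentVersion\Internet Settings"
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, key_path, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "ProxyEnable", 0, winreg.REG_DWORD, 1 if at_work else 0)
        if at_work:
            winreg.SetValueEx(key, "ProxyServer", 0, winreg.REG_SZ, WORK_PROXY)
    print("proxy", "enabled" if at_work else "disabled")
    ```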

    Read the article

  • HTTP cache for my virtual machines

    - by MathematicalOrchid
    I have several Linux virtual machines running on my home PC. One of the quirks of Linux is that every time you run a package manager, it wants to "refresh" the configured software repositories - which basically means it wants to download a file from the Internet. If I revert to an earlier snapshot of the VM, then next time I run the package manager it will re-download the exact same data again [since it no longer exists in the VM]. It seems a shame to waste bandwidth endlessly downloading the same data over and over again, so I was wondering if there's some way I can set up some kind of HTTP proxy server that caches downloaded files. I have no idea how you would do such a thing though. In particular, it needs to be set up so that the VMs don't need to "know" that the cache is there; it needs to be transparent. But I don't know how to do that. Any suggestions on what software I'd need to use? It would be nice if I could run it under the Windows host OS, but running a small VM with a Linux guest is also possible...
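
    The usual approach is a caching proxy on the host (apt-cacher-ng or Squid, for example) with each VM's package manager pointed at it. Purely to illustrate the idea, here is a tiny single-file sketch of a disk-caching HTTP forward proxy; it caches every GET forever, which is fine for package files but not for repository index files, so treat it as a toy:

    ```python
    # Minimal, illustrative sketch of a disk-caching HTTP forward proxy: every GET
    # is cached forever on disk. Single-threaded on purpose to keep it simple.
    import hashlib
    import pathlib
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    CACHE_DIR = pathlib.Path("proxy-cache")
    CACHE_DIR.mkdir(exist_ok=True)

    class CachingProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # When a client is configured to use a proxy, the request line carries
            # the absolute URL, so self.path is the full http://... URL.
            entry = CACHE_DIR / hashlib.sha256(self.path.encode()).hexdigest()
            if not entry.exists():
                with urllib.request.urlopen(self.path) as upstream:
                    entry.write_bytes(upstream.read())
            body = entry.read_bytes()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 3128), CachingProxy).serve_forever()
    ```

    On Debian-based guests, pointing apt at such a cache is a one-line setting like Acquire::http::Proxy "http://192.168.1.10:3128"; in a file under /etc/apt/apt.conf.d/ (the address here is hypothetical), so the VMs themselves stay unaware of the cache beyond that one proxy setting.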

    Read the article

  • SQL 2008 R2 replication error: The process could not connect to Distributor

    - by Lance Lefebure
    I have two servers running SQL 2008 R2 Standard, each with an instance named "MAIN". I have a small test database on my primary server (one table, 13 rows) that I want to replicate to a second server as a proof of concept for some larger databases I want to replicate. I set up the primary server as publisher and distributor and configured the database for transactional replication. I copied the data to the second server via a backup/restore, not via a snapshot (which I'll have to do with the larger databases because of their size and our limited bandwidth). I followed the instructions here: http://gnawgnu.blogspot.com/2009/11/sql-2008-transactional-replication-and.html Now, on the subscriber, I go to Replication / Local Subscriptions / right-click / Properties on my subscription to the DB. The last synchronization shows the status: "The process could not connect to Distributor 'PRIMARYSERVER\MAIN'." Data IS replicating from the primary to the secondary; any record I add on the primary shows up on the secondary server within seconds. Is the Distributor part of the snapshot system that I'm not using, or is it part of transactional replication? Thanks, Lance

    Read the article

  • Synchronize two directories on linux pc

    - by Gab
    I need a distributed filesystem (or a synchronization tool) that can keep a directory synchronized across four PCs. My requirements are: offline access (data must be available offline on each PC); preservation of execution rights (some files are marked executable on a Linux partition, and this flag should be replicated); an efficient sync strategy (some of my files are 20GB and are changed quite often, but only in very small parts, e.g. VirtualBox images, so delta transmission is welcome); efficient use of space (no per-file history, and files shouldn't be copied to temp directories "just in case you break it"); propagation of deletions; and modifications can happen on any of the four PCs and should be propagated when the other PCs are connected. Other details of my situation: sync is over a LAN; the total amount of data to be synced is around 180GB, in some ten thousand files; changes are small but can happen in big files; at the moment I'm interested in a Linux-only solution; and conflicts either don't happen or are resolved with "last one wins". I haven't found a good solution yet. I've tried: unison (the only one working at the moment, but during the hashing phase it hangs my PC for some minutes, disk light steady on); SparkleShare (doesn't handle large files nicely and keeps a history of all your changes that grows indefinitely; they promise this will be fixed in future releases, but at the moment it still doesn't fit my needs); ownCloud (keeps a history of each file I change); Coda? (help! I couldn't set it up correctly!); and git-annex assistant (turns all your files into symlinks and marks the original files read-only, "just in case you make a mistake while you modify it"!; before you edit a file you have to issue a special command, "git-annex unlock", which creates a local copy of the file, and you have to remember to lock it again if you want it synchronized). What should I try next?

    Read the article

  • How can I redirect/forward all the UDP/TCP traffic on one interface to another interface in OpenWrt

    - by Sina Sou
    I am new to networking. I have a measurement device (D) that periodically sends all its readings over a few UDP multicast sockets (with different multicast IP addresses and different port numbers). The device also listens on a TCP socket, port 7234, through which its configuration can be modified. Since the device has only an Ethernet interface for communication and I want to make it wireless, I decided to attach a very small OpenWrt-based wireless router to the device (D) and redirect/forward all the network traffic (both UDP and TCP) to the router's wireless interface. To simplify the problem, assume that the device (D) has the following sockets open at the same time: UM_SOCK1: UDP multicast socket on 239.1.2.3, port 50620; UM_SOCK2: UDP multicast socket on 239.1.2.4, port 50640; TC_SOCK3: TCP, DHCP/static IP address 192.168.1.200, port 7234. D is connected to the OpenWrt router (R) via interface en01 (Ethernet), and the router has its own wireless interface, wlan01. I want all traffic to pass between the two interfaces in both directions: en01 <----> wlan01. What would be the minimum iptables (or ...) commands I need to make this possible? I am also wondering whether the redirection can be made simpler if it is not based on IP addresses (not desirable if the device gets its address via DHCP); I would rather have the redirection be interface-based (en01) or based on MAC address (the best solution, since my device has a unique MAC address). Thanks

    Read the article

  • System and Router configuration for setting up a home firewall based on Zentyal

    - by Ako
    I am not much of a system administrator, so please be patient if this looks too simple to you. I have several computers at home, and all of them connect through an ADSL modem/router (and wireless AP). I have been attacked several times (mainly from Russia and Ukraine), so I thought I should have some kind of firewall besides the ESET firewall on my Windows 7 machine. So now I have this (new) configuration: A small ADSL modem (Zyxel brand) with only one Ethernet port; it is used to connect to the internet, is configured in NAT mode, and its interface has IP address 192.168.1.1. An old PC on which I have installed Zentyal; it has two Ethernet ports, eth0 and eth1. Eth0 is connected to the Zyxel modem with IP 192.168.1.2 and is marked as the WAN (external) interface. Another ADSL modem, which is also a router with four Ethernet ports and a wireless AP; one of its Ethernet ports is connected to eth1 on the Zentyal box. That Ethernet port's IP is 192.168.2.1 and Zentyal's eth1 is 192.168.2.2. Now, I want the other computers to be able to connect to the internet through the router, both over wireless and over Ethernet. The problem is that I don't know how to configure the router so it routes connections to the Zentyal box. Does anyone have a clue? Again, I am sorry if this looks stupid. Thanks.

    Read the article

  • Can a Windows Domain play along with a Hosted Exchange service?

    - by benzado
    I'm setting up a computer network for a small (10-20 people) company. They are currently using a Hosted Exchange service they are totally happy with. Other than that, they are starting from scratch (office doesn't even have furniture yet). They will need some kind of file sharing server set up in their office. If I set up a machine as a file server and nothing more, users will have three passwords to deal with: local machine, file server, and email. If I set up a Domain Controller, identities for local machine and file server will be the same. But what about the Hosted Exchange server? Must the users have a separate email password, or is it possible to combine the two? (I realize it might depend on the specific hosting provider, but is it possible?) If not, it seems like I have these options: Deal with it: users have a separate email password. Host Exchange on the local server: more than they want to manage in-house? Purchase a hosted VPS, make it part of the domain, and host Exchange there. (Or can/should a VPS be a domain controller?) I realize I have a lot of questions in there. The main one: is there any reason to use a Hosted Exchange service if I'm setting up other Windows services?

    Read the article

  • Why am I seeing Zero errors in non-ECC RAM?

    - by Alexander Shcheblikin
    According to sources, memory errors are a very probable event: some say the probability of a DRAM error is 95% in just 3 days of operation for a computer with just 4 GB of RAM, others say 32% of servers experience at least one error per month, with 8% of DIMMs being at fault. Contrary to those horrors, in more than 10 years of personal computer use I have seen exactly zero memory errors. I admit I never paid special attention to the subject; however, I have ventured multi-hour memtest86 runs a couple of times and never seen an error either. Some factors that, in my opinion, should aggravate memory problems: I build my computers out of the most "bulk commodity" parts (mainstream budget motherboards and the next-to-cheapest memory); I usually max out the technology available (in the days of 32-bit OSes I used 4 GB of RAM, and with current desktop CPUs and the newer 64-bit OSes I use 32 GB); and memory usage is moderately heavy, with lots of virtual machines up and running small and big tasks 24/7/365. Nevertheless, I have never found a memory-related problem. How's that?
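
    For what it's worth, the expected error count scales linearly with both the amount of memory and the hours of operation, so a back-of-the-envelope check is easy. The error rate used below is a deliberately arbitrary placeholder, not a measured figure; the point is only the shape of the calculation:

    ```python
    # Back-of-the-envelope sketch: expected memory errors for an ASSUMED error rate.
    # The FIT figure below is an arbitrary placeholder, not a measured value.
    FIT_PER_MBIT = 100            # assumed failures per 10^9 hours, per megabit
    RAM_GB = 32
    HOURS_PER_YEAR = 24 * 365

    mbits = RAM_GB * 1024 * 8                       # 32 GB expressed in megabits
    expected_per_year = FIT_PER_MBIT * mbits * HOURS_PER_YEAR / 1e9
    print(f"expected errors per year: {expected_per_year:.1f}")
    ```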

    Read the article

  • Mac, VNC and multiple monitors

    - by MarqueIV
    I asked a similar question here before, but judging by the responses I apparently wasn't as clear as I had hoped, so I'll try again. I have a Mac Pro with four monitors which I would like to access remotely. I've been using VNC for this (either via Screen Sharing or a dedicated VNC client), which works, but the VNC protocol mirrors the physical layout and resolutions of the attached monitors. One of the things I like about Microsoft's Remote Desktop (Terminal Server) client is that when you connect, it blanks the local screens and sets the resolution to a client-specified setting. In other words, even though the remote Windows machine physically drives a 30" monitor flanked by two 24" monitors as well as a 21" Cintiq, I can set the Remote Desktop resolution to match my notebook's screen, giving me a native, single-monitor configuration. As soon as I disconnect (and log back in locally), the desktop un-blanks and the resolution resets back to the four physically attached monitors. Again, VNC works, and yes, I know I can use ports 5901, 5902, ...n to attach VNC to a specific monitor rather than the entire desktop, but I'm still at the mercy of trying to look at a 2560x1600 desktop on a 1280x800 screen. I'm left with either scaling (everything's too small) or panning/scrolling (it's like playing hide-and-seek with your documents!). So: does anyone know of any Mac-based remote access software (client and server) that will let me connect to my Mac Pro and have the resolution set by the client, just like you can in Windows, or am I SOL?

    Read the article

  • Telugu (unicode) font rendering in emacs

    - by Prakash K
    [I asked the following question on Stack Overflow and have been redirected here. I hope I can get some answers here. My question on Stack Overflow had two small images showing the example rendering of the text. As a new user at Super User I am not allowed to include them here, nor am I allowed to post more than one hyperlink, and I don't have enough reputation on SO to migrate the question. Please look at the Stack Overflow question for the images. Sorry about the inconvenience.] I sometimes edit text in the Telugu language. However, when I open the file (UTF-8 encoded) in GNU Emacs (version 23.1.50.1 on Ubuntu Jaunty), the text rendering is incorrect. The same text file opened in gedit is rendered correctly. Here's a snippet: ????????? ???? ???? ???????? Rendered in gedit: please see the SO question for the image showing Telugu text rendering in gedit. And the Emacs rendering of the same text: please see the SO question for the image showing Telugu text rendering in Emacs. Wherever glyphs need to be composited (not sure if that's the right word), Emacs (or whatever library it uses) is not doing it right. Is there any way to fix this? Perhaps by tuning some setting in my configuration? Any ideas, please?

    Read the article

  • Large, high performance object or key/value store for HTTP serving on Linux

    - by Tommy
    I have a service that serves images to end users over plain HTTP at a very high rate. The images vary between 4 and 64 kilobytes, and there are about 1,300,000,000 of them in total. The dataset is about 30TiB in size, and changes (new objects, updates, deletes) make up less than 1% of the requests. The number of requests per second varies from 240 to 9000 and is dispersed fairly evenly across the dataset, with few objects being especially "hot". As of now, these images are files on an ext3 filesystem, distributed read-only across a large number of mid-range servers. This poses several problems. Using a filesystem is very inefficient: the metadata is large, the inode/dentry cache is volatile on Linux, and some daemons tend to stat()/readdir() their way through the directory structure, which in my case becomes very expensive. Updating the dataset is very time-consuming and requires remounting between set A and set B. The only reasonable way to handle backup, copying, and so on is to operate on the block device. What I would like is a daemon that: speaks HTTP (GET, PUT, DELETE and perhaps UPDATE); stores data in an efficient structure (the index should stay in memory, and given the number of objects the per-object overhead must be small); can handle a massive number of connections with little (if any) ramp-up time, reading the index into memory at startup; and, ideally but not necessarily, provides statistics. I have experimented a bit with Riak, Redis, MongoDB, Kyoto and Varnish with persistent storage, but I haven't had the chance to dig in really deep yet.
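
    To make the "index in memory, data in one big file" idea concrete, here is a deliberately tiny sketch of such a daemon: HTTP GET/PUT/DELETE, blobs appended to a single data file, and a key -> (offset, length) dict as the index. It illustrates the data layout only; a real implementation would need index persistence, compaction, concurrency, and replication.

    ```python
    # Tiny illustrative sketch of an HTTP blob store: one append-only data file plus
    # an in-memory index mapping the request path to (offset, length). Not meant for
    # production -- no index persistence, no compaction, single-threaded.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    data_file = open("objects.dat", "ab+")
    index = {}  # path -> (offset, length); a real store would rebuild this at startup

    class ObjectStore(BaseHTTPRequestHandler):
        def do_PUT(self):
            body = self.rfile.read(int(self.headers["Content-Length"]))
            data_file.seek(0, 2)                       # jump to end of file
            index[self.path] = (data_file.tell(), len(body))
            data_file.write(body)
            data_file.flush()
            self.send_response(201)
            self.end_headers()

        def do_GET(self):
            if self.path not in index:
                self.send_response(404)
                self.end_headers()
                return
            offset, length = index[self.path]
            data_file.seek(offset)
            body = data_file.read(length)
            self.send_response(200)
            self.send_header("Content-Length", str(length))
            self.end_headers()
            self.wfile.write(body)

        def do_DELETE(self):
            index.pop(self.path, None)                 # space reclaimed by compaction
            self.send_response(204)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), ObjectStore).serve_forever()
    ```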

    Read the article
