Search Results

Search found 89257 results on 3571 pages for 'need fix at userlevel'.

Page 225/3571

  • Best motherboard/power supply combo for a Backblaze server

    - by jin14
    I'm building a Backblaze storage pod as described in this article: http://blog.backblaze.com/ So, 45 hard drives in one box. I'm making it an MSDPM 2010 server, so I actually don't even need RAID cards in there, as MSDPM will figure out how to use all of the hard drives on its own. So I need to know what motherboard, CPU and power supply I should get. Primary hard drive: 128 GB SSD. Storage: 45 1.5 TB SATA drives. OS: Windows 2008. Backup software: Microsoft System Center Data Protection Manager 2010. What I need to know: which motherboard to buy that will support 45 SATA hard drives (I don't need a RAID card), and which power supply can power all 45 hard drives, the SSD and the motherboard. The best set of equipment that meets my needs wins.

    Read the article

  • How do you recreate the System Recovery environment in Windows 7?

    - by Howiecamp
    I'm running Windows 7 Home Premium RTM (64-bit) and I want to take advantage of the system recovery tools (e.g. the Command Prompt) without using the Windows 7 DVD. My understanding is that this environment (WinRE) should be installed to the HDD by default as part of the Windows 7 installation. However, when I hit F8 on boot and select "Repair", I get: "Windows failed to start. A recent hardware or software change might be the cause. To fix the problem... Status: 0xc000000e Info: The boot selection failed because a required device is inaccessible." The "Info" line seems like the smoking gun. My next step was to boot from the Windows 7 DVD and choose "Repair". It indicated my Recovery Environment wasn't on the Windows 7 boot menu (perfect) and offered to fix it. I said yes and rebooted; however, I got the same issue as above. In addition, when I booted into Windows 7 and looked at the boot menu options, the recovery/repair option was not there, only my Windows installation. Finally, I ran the Disk Management tool (diskmgmt.msc) and took a look at the contents of my "System Reserved" partition (which was set to "Active" as normal). It's unclear to me what the contents should look like, but it is my understanding that the WinRE environment gets installed to this partition. (As part of the above troubleshooting I followed http://superuser.com/questions/25728/how-to-fix-windows-7-boot-process which led to http://www.sevenforums.com/tutorials/668-system-recovery-options.html.)
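
    For reference, the on-disk recovery environment can be inspected and re-registered with the reagentc tool that ships with Windows 7. A minimal sketch, run from an elevated Command Prompt; the C:\Recovery\WindowsRE path is only an assumption about where winre.wim lives on a given install:

      rem Show whether WinRE is currently enabled and where its image is registered
      reagentc /info
      rem Detach the broken registration, point it at the folder holding winre.wim, re-enable it
      reagentc /disable
      reagentc /setreimage /path C:\Recovery\WindowsRE
      reagentc /enable

    After reagentc /enable succeeds, reagentc /info should report the environment as enabled and the F8 "Repair Your Computer" entry should work again.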

    Read the article

  • Disable DHCP client on one interface

    - by Lopoc
    Hi to all. I'm encountering a problem on a server with two Ethernet interfaces (eth0 and eth1); it runs Ubuntu Server. I need eth1 not to make any DHCP requests, because I need it to be only a listening interface; obviously I need eth0 running normally. So how can I disable any DHCP client activity on eth1? Thanks in advance.
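
    For illustration, one common way to do this on Ubuntu Server (assuming the classic ifupdown setup in /etc/network/interfaces rather than NetworkManager) is to declare eth1 as "manual", so the link is brought up without any DHCP client ever being started; a minimal sketch:

      # /etc/network/interfaces: eth0 keeps its normal DHCP configuration
      auto eth0
      iface eth0 inet dhcp

      # eth1: link up, no address assigned, no dhclient started
      auto eth1
      iface eth1 inet manual
          up ip link set eth1 up
          down ip link set eth1 down

    After editing the file, sudo ifdown eth1 && sudo ifup eth1 (or a networking restart) applies it; eth1 then stays silent apart from whatever listening tool is attached to it.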

    Read the article

  • Domain changes required for SSL integration

    - by user131003
    Currently my site supports regular payment options (the user is taken to the Payment Gateway/PG website). Now I'm trying to implement "seamless" PG integration, and I need SSL for this. I have a dedicated server with 5 static IPs from Hostgator (HG). Options: (1) I get SSL for www.my_domain.com. According to HG, I would need to change the IP of the main site, as the current IP is not really dedicated (it is shared by cPanel etc.), so they would need to bind another dedicated IP to the main domain for SSL to work. This would require a DNS change for the main website and hence cause a few hours' downtime (which is OK). (2) I've noticed that most e-commerce websites use subdomains like secure.my_domain.com for SSL/HTTPS. This sounds like a better approach, but I've got a few doubts in this case: a) Would I need to re-register with the existing PGs (PayPal, Google Checkout, Authorize.net) if I switch to a subdomain? Re-registering is not an option for me. b) Would a DNS change be required for www.my_domain.com in this case? This confusion arose because of the following reply from HG: "If the sub domain secure.my_domain.com is added to an existing cPanel it will use the IP for that cPanel so as long as it is a Dedicated IP that will be fine. If secure.my_domain.com gets setup as its own cPanel it will need to be assigned to a Dedicated IP which would have a DNS change involved." Please advise.

    Read the article

  • Copy files between two Windows machines on separate domains

    - by Simon
    I need to copy several database backups between two computers. The source computer initiates the copy and is a Windows 2000 pc and is a member of domain1. The destination machine is running Windows Server 2000 and is a member of domain2. The machines are on separate networks physically connected via a firewall. The files are currently copied via ssh with http://sshwindows.sourceforge.net/ installed on the destination machine. There is no need to encrypt the contents during the copy, however the passwords should not be sent in the clear. I am looking for a way to copy the files without having to install a server on the destination. I specifically need help with how to set up the permissions and what ports would need to be opened on the firewall.
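
    For illustration, one way that avoids installing anything extra on the destination is to push the files over SMB with explicit domain2 credentials; SMB authenticates with an NTLM challenge/response, so the password itself is not sent in the clear. A rough sketch run from the source machine (server name, share, account and paths are placeholders; the firewall would need TCP 445, or TCP 139 for legacy NetBIOS sessions, open from source to destination):

      rem Map the destination share with domain2 credentials; the * prompts for the password
      net use Z: \\destserver.domain2.local\Backups /user:DOMAIN2\backupuser *

      rem Copy only newer backup files; xcopy is built in (robocopy from the Resource Kit also works)
      xcopy "D:\SQLBackups\*.bak" Z:\ /D /Y

      rem Drop the mapping so no credentials stay cached
      net use Z: /delete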

    Read the article

  • Backing up data from server to laptop?

    - by Patrick
    I need a tool to automatically back up my Drupal installations from my server to my laptop. In other words, I need to copy one folder (all my Drupal sites are inside this folder) and all the databases. So I was wondering if I just need to write a script on my laptop that connects to the server every week, copies the folder along with all the MySQL databases, and informs me by email whether the backup has been successful. Do you know if I can find a tutorial for this, or download such a script? Thanks
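
    A minimal sketch of that kind of weekly pull script, run from the laptop; the hostname, paths, MySQL account and e-mail address are all placeholders, and it assumes key-based SSH access to the server plus a working local mail command:

      #!/bin/bash
      # Weekly Drupal backup: pull the sites folder and all MySQL databases, then report by e-mail.
      SERVER=user@myserver.example.com
      REMOTE_DIR=/var/www/drupal-sites
      DEST=$HOME/backups/$(date +%F)
      MAIL_TO=me@example.com

      mkdir -p "$DEST" || exit 1

      # 1. Copy the folder that contains every Drupal installation
      rsync -az "$SERVER:$REMOTE_DIR/" "$DEST/files/"
      RSYNC_STATUS=$?

      # 2. Dump all databases on the server and stream the compressed dump home
      ssh "$SERVER" "mysqldump --all-databases -u backupuser -pPASSWORD | gzip" > "$DEST/all-databases.sql.gz"
      DUMP_STATUS=$?

      # 3. Report the result
      if [ "$RSYNC_STATUS" -eq 0 ] && [ "$DUMP_STATUS" -eq 0 ]; then
          echo "Drupal backup $(date +%F) completed" | mail -s "Backup OK" "$MAIL_TO"
      else
          echo "Drupal backup failed (rsync=$RSYNC_STATUS, mysqldump=$DUMP_STATUS)" | mail -s "Backup FAILED" "$MAIL_TO"
      fi

    A crontab entry on the laptop such as 0 3 * * 0 /home/me/bin/drupal-backup.sh would then run it every Sunday at 03:00.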

    Read the article

  • Adventures in Drupal multisite config with mod_rewrite and clean urls

    - by moexu
    The university where I work is planning to offer Drupal hosting to staff/faculty who want a Drupal site. We've set up Drupal multisite with clean URLs and it's mostly working, except for some weird redirects. If you have two sites where one is a substring of the other, then you'll randomly be redirected to the other site. I tracked the problem to how mod_rewrite does path matching, so with a config file like this:

      RewriteCond %{REQUEST_URI} ^/drupal
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

      RewriteCond %{REQUEST_URI} ^/drupaltest
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]

    a request for /drupaltest will match the /drupal condition and all of the links on the /drupaltest page will be rewritten to point to /drupal. If you put the end-of-string character ($) at the end of each rewrite condition then it will always match the correct site and the links will always be rewritten correctly. That breaks down as soon as a user logs in, though, because more gets appended to the URL, so the bare base URL no longer matches. You can also fix the problem by ordering the sites in the config file so that the smallest substring always comes last. I suggested storing all of the sites in a table and then querying, sorting, and rewriting the config file every time a Drupal site is requested so that we could guarantee the order; the system administrator thought that was kludgy and didn't address the root problem. Disabling clean URLs should also fix the problem, but the users really want them, so I'd prefer to keep them if possible. I think we could also fix it by using an .htaccess file in each site to handle the clean URL rewriting, but that also seems suboptimal since it would generate a higher load on the server, and the server is intended to host the majority of the university's external-facing web content. Is there some magic I can do with mod_rewrite to get it to work? Would another solution be better? Am I doing something the wrong way to begin with?
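
    For illustration, one mod_rewrite variant that avoids both the substring collision and the ordering requirement is to anchor each prefix so it must be followed by a slash or the end of the path; a sketch using the two site names from the question (only the first condition of each block changes):

      RewriteCond %{REQUEST_URI} ^/drupal(/|$)
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

      RewriteCond %{REQUEST_URI} ^/drupaltest(/|$)
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]

    With (/|$), a request for /drupaltest/node/1 no longer satisfies ^/drupal(/|$) (the character after "/drupal" is a "t"), while /drupal, /drupal/ and /drupal/user/login all still match, so neither a trailing $ anchor nor a particular ordering of the sites is needed.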

    Read the article

  • Exchange 2010 EMS - Total size of users' mailboxes within a particular OU

    - by Moif Murphy
    I'm doing some massive DB cleanups at the moment. We have two DBs, both approaching 400 GB, and I want to split the DBs up by department. To do that I need to know the total size of the mailboxes within an OU. I've run this: http://stackoverflow.com/questions/9796101/exchange-listing-mailboxes-in-an-ou-with-their-mailbox-size but it only gives me a list, and I need a combined TotalItemSize so I know how big the new DBs need to be. Thanks
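
    For illustration, a sketch of how that per-mailbox list can be rolled up into one total from the Exchange Management Shell; the OU path is a placeholder, and it assumes an EMS session where TotalItemSize exposes its ToMB() helper:

      Get-Mailbox -OrganizationalUnit "contoso.local/Departments/Sales" -ResultSize Unlimited |
          Get-MailboxStatistics |
          ForEach-Object { $_.TotalItemSize.Value.ToMB() } |
          Measure-Object -Sum

    The Sum property comes back in megabytes; dividing by 1024 gives a rough figure, in gigabytes, for how large the new department database needs to be.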

    Read the article

  • What's needed in a complete ASP.NET environment?

    - by Christian W
    We have an ASP 3.0 application with a few ASP.NET (2.0) ditties mixed in. (Our long-term goal is to migrate everything to ASP.NET, but that's not important for this issue.) Our current test/deploy workflow is like this:
    1. Use Notepad++ or VS2008 to fix a bug/feature (depending on what I have open).
    2. Open my virtual test server.
    3. Copy the fixed file over, either with Explorer or, if I can be bothered to open it, WinMerge.
    4. Test that the fix works.
    5. Close the virtual test server.
    6. Connect to our host with VPN.
    7. Use WinMerge to update the necessary files.
    8. Pray to higher powers that the production environment is not so different that something bombs.
    To make things worse, only I have access to my "test server", so I'm the only one testing it. I really want to make this a bit more robust; I even have a Subversion setup running. But I always forget to commit changes... and I don't even work in my checked-out folder, but in a copy of what is currently in production... Can someone recommend some good reading on deploying, testing, staging and the like? I currently use VS2008 and want to use Subversion or Git (or any other free VCS). Since I'm the only developer, Team System is not really an option (cost-related). I have found myself developing an "improved" feature, only to find a bug in the same feature in the production system. And since my "improved" feature involved deleting some old functionality, I have to fix bugs directly in production... That's not a fun feeling... (I have inherited this system recently, so it's not directly my fault that it is like this ;) )

    Read the article

  • What are the mandatory Linux kernel modules to run inside of ESXi

    - by Marcin
    I'm used to rolling my own kernels for servers, as it nicely minimizes the number of exploits (and the resulting patches) to take care of. In a traditional (bare metal) world, the whole process is about knowing what you have (hardware) and what you need (Ethernet, IPv4, iptables, etc.). In a virtualized environment, some things stay the same (you still need Ethernet and IPv4), some things go away (power management), and then there are some new needs (vmxnet3, or vmware-tools, even though that's compiled outside of the kernel). So my question mostly concerns itself with the last two categories: what can I remove completely, and what new stuff do I want? For example, what I/O scheduler do I want, if all my disk operations are going through another filesystem/scheduler/cache to get to the virtual disk? Do I need hyper-threading support enabled, or is the VM going to show each thread to me as a CPU anyway? Do I need Large Receive Offload turned on, or is that something that the hypervisor's network drivers are going to do for me?
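
    As a concrete illustration of the "new needs" category, a guest kernel for ESXi usually wants VMware's paravirtual drivers built in while the bare-metal-only subsystems go away; a hedged sketch of the relevant .config fragment (option names are as of roughly the 2.6.3x kernels, so verify them against your tree):

      # VMware paravirtual devices
      CONFIG_VMXNET3=y            # vmxnet3 NIC
      CONFIG_VMW_PVSCSI=y         # paravirtual SCSI HBA
      CONFIG_VMWARE_BALLOON=m     # memory ballooning driver used by the hypervisor
      CONFIG_FUSION=y             # LSI Logic controller, a common emulated SCSI adapter choice
      CONFIG_FUSION_SPI=y

      # Typical removals: cpufreq/ACPI power management, wireless, sound,
      # and nearly all physical NIC and storage drivers
      # CONFIG_CPU_FREQ is not set

    On the I/O scheduler question, a common practice is to boot the guest with elevator=noop and let the hypervisor's storage stack do the reordering, for exactly the reason given above.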

    Read the article

  • E-mail duplication problem

    - by Gavin Osborn
    I have taken out a hosting agreement with a well-respected hosting provider for a couple of internet-facing servers. We have deployed several applications to these servers which send various e-mails back to us for reporting purposes. Context: Each server runs Windows Server 2003 R2 with the IIS 6.0 SMTP service installed. Each application is configured to use the local instance of IIS to send e-mails. The external IP address of each server is mapped to a particular domain, e.g. server1.mydomain.com and server2.mydomain.com. These e-mails are sent from a company domain name and not the domain name of the hosted servers (e.g. [email protected]). Symptoms: A small number (<1%) of e-mails sent from these applications appear to be duplicated. These are exact duplicates in terms of both content and message headers. The fix: I contacted my hosting provider and they told me this was a common problem and instructed me to: (1) change the HELO response of the mail server service to an FQDN (server1.mydomain.com and server2.mydomain.com); (2) create a DNS A record that resolves the FQDN of the mail server to the primary IP address of the sending mail server; (3) create a PTR record that resolves the primary IP address back to the mail server's FQDN; (4) in the sending domain's (mycompanydomain.com) DNS zone file, add the appropriate SPF record for the hosted servers, e.g. v=spf1 a mx include:mydomain -all. The problem continues: I made all of the changes as prescribed above. I was a little hesitant because these steps seemed more aimed at stopping messages from getting blocked than at stopping them from being duplicated, but I am certainly no expert in these matters. It has been 5 days since I applied this fix and the problem still persists. I am certain these problems are not a bug in the software because there are 4 different applications installed on 2 different servers, all of which exhibit this strange behaviour. This behaviour has also not been seen in our UAT environment. Were my hosts correct to suggest this fix? If not, does anyone know what could be the cause of this problem? Many thanks

    Read the article

  • Is there a way to submit a batch of commands to a Cisco router and have them execute from the router?

    - by atroon
    I need to change the configuration of a remote (6 hours' drive) client's Cisco 871 (IOS 12.4.15T) from my location because of some new internet service at his location. To be more precise, I need to change the default route, ip address of the outside interface (Fa4) and disable the PPPoE setup there. Unfortunately, doing any of this will (obviously) break the connection to the router. I do not have an out-of-band management modem set up (I know, I know). Is there any way to enter the commands I need to have run and have them execute one after the other, from a file on flash:? I have never tried anything like that before. Essentially a DOS-style batch file is exactly what I need. Nothing like it seems to be out there except using kron to execute CLI commands, but that is specified here as only taking EXEC commands, not configuration ones. Is there hope, or do I travel?
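
    For illustration, one approach that needs nothing beyond what is already on the router is to stage the new commands in a file on flash: and merge them with an EXEC-level copy, wrapped in a timed reload as a safety net; a sketch (file names, the TFTP source and timings are illustrative):

      ! Stage the change (new default route, new Fa4 address, PPPoE removal) as a text file on flash:
      copy tftp://192.0.2.10/newwan.cfg flash:newwan.cfg
      ! Schedule an automatic rollback: if you lose the router, it reloads to the old startup-config
      reload in 15
      ! Merge the staged commands into the running configuration
      copy flash:newwan.cfg running-config
      ! Still reachable over the new service? Then keep the change
      reload cancel
      copy running-config startup-config

    If the change has to fire unattended at a set time, the same copy flash:newwan.cfg running-config line can be scheduled with kron (kron only accepts EXEC commands, but copy is one) or with an EEM applet where the 12.4(15)T feature set includes it.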

    Read the article

  • Invoking an MMC Snap-in function from Windows command shell: is it possible?

    - by robob
    I need to execute an MMC snap-in function from the command shell of a Windows computer, and I need to schedule this command on the same Windows PC and have it execute in the background. This question might seem a little strange, but I have a program that creates a debug log only through its MMC snap-in console, and I need to automate this task so I can programmatically read that log. Does anyone know how to do this? Thanks

    Read the article

  • Stop sharing network shares without removing them?

    - by lance
    I have several shares on my Win7 machine. I need them to disappear from the network (and stay gone across reboots), without my having to actually remove the shares (because they need to re-appear in a week). Is there something I can disable/stop on my machine (I'm thinking a service?) that will get this done? There are no network shares I need to keep available to the network (during this time).
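
    For illustration, one service-level way to do this (it hides every share on the machine at once, which fits the "no shares needed" constraint) is to stop the Server service; the share definitions stay in the registry under HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Shares and come back when the service does. A sketch, run from an elevated command prompt:

      rem Stop sharing now and keep it off across reboots
      net stop lanmanserver
      sc config lanmanserver start= disabled

      rem A week later: bring every share back exactly as it was
      sc config lanmanserver start= auto
      net start lanmanserver

    net stop may ask to stop dependent services (such as Computer Browser or HomeGroup Listener) as well, and note that stopping the Server service also disables the administrative shares and remote file management for the duration.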

    Read the article

  • What is the best server or IP address to use for prolonged testing?

    - by eldorel
    I usually run uptime/latency tests against (and from) two servers that we own at different sites, and until recently I've used the Google DNS servers as a control group. However, I've realized there is a potential problem with monitoring latency over extended periods of time: almost all of the major service providers are using anycast. For short tests this doesn't matter, but I need to run a set of tests for at least a week to try and catch an intermittent problem, and a change in the anycast priority while testing latency will cause the latency values for that server to change accordingly. Since I'm submitting graphs of this data to the ISP, I need to avoid/account for as many variables as possible. Spikes in the data for only one of the tested servers will only cause headaches. So can anyone recommend servers that:
    1. are not using anycast
    2. are owned by an entity with a good uptime reputation (so they can't claim the problem is server-side)
    3. will respond to ICMP requests
    4. have an available service that runs on TCP/UDP (HTTP or DNS preferably)
    5. won't consider an automated request every 10 minutes to be abuse
    6. are accessible from anywhere in the world
    7. are not local to the ISP (consider this an investigation of a hostile party)
    Thanks in advance. Edit: added #6 and #7 above. More info: I am attempting to demonstrate a network problem for an entire node of our local ISP's network. They are actively blaming the issue on the equipment installed at the customer sites (our backup site is one of these) and refuse to escalate the problem, even though two of these businesses have ISP-provided modems and all of us have completely different routers/services running. I am already quite familiar with the need to test an ISP-controlled IP, but they are actively dropping all packets targeted at gateway IP addresses and only passing traffic addressed beyond the gateways. So to demonstrate the issue, I am sending packets to other systems in the same node, to systems one hop away from the affected node, and to systems completely outside the network. Unfortunately, all of the systems I currently have are administered either directly by myself or by people who are biased enough to assist me. I need several systems included in the trace/log/graphs that are 100% not in the control of either myself or the ISP, so that the graphs have a stable/unbiased control group. These requirements are straight from legal; I'm just trying to make sure that everything that could be argued to invalidate the data is already covered. In summary: I need to be able to show TCP/UDP/ICMP as 3 separate data points, and I need to be able to show the connections inside the local node, from the local node to another nearby node, from those 2 nodes to the internet, and through the internet to both verifiable servers and a control group that I have no control over whatsoever. Again, Google/OpenDNS/Yahoo/MSN/Facebook etc. all use anycast, which throws the numbers off every time the anycast caches expire, so I need suggestions for an IP or server that is available for this type of testing. I was hoping someone knew of a system run by someone such as ISC or ICANN, or perhaps even a .gov server (FCC or NSA maybe?) set up for this type of testing. Thanks again.
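
    For illustration, a sketch of the kind of scheduled probe described above, producing the three separate ICMP/TCP/UDP data points; the target addresses are placeholders, and the --tcp/--udp options assume a reasonably recent mtr build, so treat the exact flags as an assumption to check against mtr --help:

      #!/bin/sh
      # Run from cron every 10 minutes; appends one timestamped report per protocol per target.
      TARGETS="198.51.100.10 203.0.113.25"
      for t in $TARGETS; do
          date >> "icmp-$t.log"; mtr --report --report-cycles 10 "$t" >> "icmp-$t.log"
          date >> "tcp-$t.log";  mtr --report --report-cycles 10 --tcp --port 80 "$t" >> "tcp-$t.log"
          date >> "udp-$t.log";  mtr --report --report-cycles 10 --udp --port 53 "$t" >> "udp-$t.log"
      done

    A crontab line such as */10 * * * * /usr/local/bin/latency-probe.sh then gives the week-long series the question asks for.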

    Read the article

  • How to Set Up an Ubuntu Mail Server with Google Apps?

    - by Apreche
    I have a domain, let's call it foobar.com. All of the MX records for foobar.com point to Google's mail servers because I am using Google Apps for your domain to manage it. It's great because everyone gets all the advantages of GMail, but our e-mail addresses aren't @gmail.com. I also have a server. Primarily, it's a web server, but it also serves other things. One of the things it serves is the web site for foobar.com and also sites for various virtual hosts such as shop.foobar.com and forum.foobar.com. The server is running Ubuntu 8.04, because I like using LTS releases in production. The thing is, there are various applications running on the server that need the ability to send out emails. Various applications, like the cron jobs, send me e-mails in case of errors. Some of the web applications need to send e-mail to users when they forget their passwords, to confirm new registered users, etc. Lastly, it's nice to be able to send e-mail from the command line using the mail command, or mutt. How can I setup the mail on the web server to go through the Google apps mail servers? I don't need the web server to receive mail, though that would be cool. I do need it to be able to send mail as any legitimate address @foobar.com. That way the forum application can send mails with [email protected] in the from field, and the ecommerce application will have [email protected] in the from field. Also, by sending the mail through the Google servers, we can avoid a lot of the problems with the e-mails being blocked by various spam filters on the web. Google's SMTP servers are trusted a lot more than mine would be. I'm pretty good with administering Linux systems, but I am absolutely brain dead when it comes to e-mail. I need step by step directions from beginning to end on how to set this up. I need to know every thing to install, and every single change to the configuration files that is necessary. I have tried following various howtos and guides in the past, but none of them were quite right. Either they didn't work at all, or they offered a configuration that is not what I wanted. Please help. Thanks.
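
    For illustration, one common way to do this, assuming Postfix as the MTA on the Ubuntu box and an ordinary foobar.com mailbox used as the relay account, is to send everything through Google's authenticated submission port; a sketch of the relevant pieces (the file paths are the Postfix defaults):

      # /etc/postfix/main.cf (relevant lines only)
      myhostname = server.foobar.com
      relayhost = [smtp.gmail.com]:587
      smtp_sasl_auth_enable = yes
      smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
      smtp_sasl_security_options = noanonymous
      smtp_tls_security_level = encrypt
      smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt

      # /etc/postfix/sasl_passwd, then run: postmap /etc/postfix/sasl_passwd && /etc/init.d/postfix restart
      [smtp.gmail.com]:587    relay-account@foobar.com:the-account-password

    Cron, mail/mutt and the web applications then deliver through /usr/sbin/sendmail as usual. One caveat worth testing: Google tends to rewrite the From: header to the authenticated account unless the other addresses (for example a hypothetical donotreply@foobar.com) are added as "Send mail as" aliases on that account.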

    Read the article

  • Installing Chameleon RC4

    - by user36912
    I have installed Windows XP on C: and a Hackintosh on F:. Currently Windows XP boots by default. If I want to boot into the Hackintosh I need the Empire EFI boot CD. I want to install the Chameleon boot loader so that I no longer need the EFI boot CD. How can I install Chameleon? Should I install it from Mac OS X or from Windows XP? What are the steps?

    Read the article

  • VLAN ACLs and when to go Layer 3

    - by wuckachucka
    I want to: a) segment several departments into VLANs with the hope of restricting access between them completely (Sales never needs to talk to Support's workstations or printers, and vice versa), or b) restrict access between them to certain IP addresses and TCP/UDP ports, i.e. permitting the Sales VLAN to access the CRM web server in the Server VLAN on port 443 only. Port-wise, I'll need a 48-port switch and another 24-port switch to go with the two existing 24-port Layer 2 switches (Linksys); I'm looking at going with D-Link or HP ProCurve as Cisco is out of our price range. Question #1: From what I understand (and please correct me if I'm wrong), if the Servers (VLAN 10) and Sales (VLAN 20) are all on the same 48-port switch (or two stacked 24-port switches), afaik the switch "knows" what VLANs and ports each device belongs to and will switch packets between them; I can also apply ACLs to restrict access between VLANs at this point. Is this correct? Question #2: Now let's say that Support (VLAN 30) is on a different switch (one of the Linksys switches). I'm assuming I'll need to trunk (tag) switch #2's VLANs across to switch #1, so switch #1 sees switch #2's VLAN 30 (and vice versa). Once switch #1 can "see" VLAN 30, I'm assuming I can then apply ACLs as stated in Question #1. Is this correct? Question #3: Once switch #1 can see all the VLANs, can I achieve the seemingly "Layer 3" ACL filtering of restricting access to the Server VLAN on only certain TCP/UDP ports and IP addresses (say, only permitting 3389 to the Terminal Server, 192.168.10.4/32)? I say "seemingly" because some of the Layer 2 switches mention the ability to restrict ports and IP addresses through their ACLs; I (perhaps mistakenly) thought that in order to have Layer 3 ACLs (packet filtering), I'd need at least one Layer 3 switch acting as a core router. If my assumptions are incorrect, at which point do you need a Layer 3 switch for inter-VLAN routing vs. inter-VLAN switching? Is it generally only when you need that higher-level packet filtering ability between your departments?
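
    To make Question #3 concrete: the filtering described (Sales reaching only specific hosts and ports in the Server VLAN) is normally written as an extended ACL applied to the routed VLAN interface, which is exactly where a Layer 3 switch enters the picture. A sketch in Cisco-style syntax purely for illustration; the Terminal Server address follows the question, while the VLAN subnets and the CRM server address are assumptions:

      ! Assumed addressing: Server VLAN 10 = 192.168.10.0/24, Sales VLAN 20 = 192.168.20.0/24
      ip access-list extended SALES-TO-SERVERS
       remark Sales may reach only these services in the server VLAN
       permit tcp 192.168.20.0 0.0.0.255 host 192.168.10.4 eq 3389
       permit tcp 192.168.20.0 0.0.0.255 host 192.168.10.5 eq 443
       remark everything else toward the server subnet is blocked, other traffic is left alone
       deny   ip 192.168.20.0 0.0.0.255 192.168.10.0 0.0.0.255
       permit ip any any
      !
      interface Vlan20
       ip access-group SALES-TO-SERVERS in

    Some Layer 2 switches can match IPs and ports in their ACLs too, but the policy has to sit wherever traffic is routed between VLANs (the SVI above), so the usual design is one L3-capable switch at the core with the existing L2 switches trunked into it.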

    Read the article

  • Offline web font optimization tool

    - by avok00
    I have a few web fonts on my web site that I want to reduce in size. I tried http://www.fontsquirrel.com/fontface/generator with very good results, but I need an offline, professional tool to rely on. Can somebody recommend such a tool? I am not a specialist font creator, so I need something like a wizard that can guide me through font optimization. Any suggestion is much appreciated! EDIT: To make myself more clear, I need a font subsetting tool.

    Read the article

  • SQL Server 2008 Client Access Licenses

    - by thushya
    Hi. Case 1: I have one user who makes 10 connections from a single computer, so the maximum number of connections at any given time is 10. How many CALs do I need here? Case 2: I have 10 users who have access to only 1 computer, and those 10 users connect from that single computer; the maximum number of connections at any given time is 1. How many CALs do I need here? Case 3: I have 10 users using 10 computers, all 10 making a total of 5 connections maximum at any given time. How many CALs do I need here? Thanks.

    Read the article

  • Setting up dovecot on OpenBSD

    - by Jonas Byström
    I'm a *nix n00b who just installed Dovecot (the package without LDAP, MySQL or PostgreSQL support) on OpenBSD 4.0, and I want to set it up for IMAP use, but I'm having a hard time finding documentation that I can understand. It's currently running on port 143 (checked with telnet), but from there I need to do the following: I need some accounts; the ones already on the system are fine if I can get those working (there seemed to be a Dovecot option for this somehow?), or just adding a few manually is OK too. Was there a setting for this in the default /etc/dovecot.conf? passdb bsdauth {} is uncommented by default... I need to create IMAP folders, or subfolders. How can I do that? Hopefully not, but is there anything else I need to do? I want to run without certificate validation and without SSL/TLS; would this work by default (client-side settings)?
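
    For illustration, a sketch of the handful of /etc/dovecot.conf lines that cover those points (Dovecot 1.x style, matching the default OpenBSD package; the mail_location value is an assumption, so adjust it to wherever mail is actually delivered on the system):

      # IMAP only, no imaps listener
      protocols = imap
      # Allow plain-text logins, since SSL/TLS is not used
      disable_plaintext_auth = no
      # Dovecot 1.x directive to turn SSL/TLS off entirely
      ssl_disable = yes
      # Where mailboxes live (an assumption; adjust to the local delivery setup)
      mail_location = mbox:~/mail:INBOX=/var/mail/%u

      # Authenticate the existing system accounts; the default passdb bsdauth and
      # userdb passwd blocks already provide exactly this
      passdb bsdauth {
      }
      userdb passwd {
      }

    IMAP folders and subfolders are then created from the mail client (or by creating mailbox files under ~/mail); nothing has to be pre-created on the server side.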

    Read the article

  • Replacing a Windows File Server

    - by Keltari
    We have a Windows file server that needs to be replaced. Unfortunately, too many custom-built applications, shares, and random things rely on the existence of that file server to simply stand up a new one, so we need to make the transition as seamless as possible. We decided to copy the contents of the old server to the new one and then rename the new one with the same name. I have made a list of everything I need to do. Am I missing anything?
    1. The new file server is up and running.
    2. I copied the directories of the old file server to the new one.
    3. I set up all the shares/permissions for those directories in advance.
    4. Copy the contents from oldserver to newserver: robocopy \\vash\d$\nasshare\ \\vash2\l$\originalvash\nasshare\ /E /ZB /copyall /dcopy:T /FP /x /v /fp /np /mt:8 /eta /log:robocopy.log /tee
    5. Rename and re-IP oldserver to something else (I need it available if something is missing).
    6. Rename and re-IP newserver to oldserver.
    Obviously, I need to test that the shares are working properly. Are there any steps I'm missing?

    Read the article

  • Join multiple filesystems (on multiple computers) into one big volume

    - by jm666
    Scenario: I have 10 computers, each with 12x2 TB HDDs (currently) in a raidz2 (10+2) configuration, so in each computer I have one volume of approx. 20 TB. Now I need to join those 10 separate computers (separate RAID groups) into one big volume. What is the recommended solution? I'm thinking about FCoE (10 Gb Ethernet). So, buying an FCoE (10 Gb Ethernet) card for each computer and then what more is needed on the hardware side? (Probably another computer and an FCoE switch, like a Cisco Nexus?) The main question is: what do I need to install and configure on each computer? Currently they run FreeBSD with raidz2, but it is possible to change to Linux/Solaris if needed. Any helpful resource that talks about how to build big volumes from smaller RAID groups (on the software side) is very welcome. So: what OS, what filesystem, what software, etc. In short: I want to get one storage pool of approx. 200 TB (in one filesystem) from the already existing computers/storage. I don't need fast writes, but I do need good read performance (it's a big file server), and it should work transparently, so when storing data I don't want to care about which computer the data goes to (i.e. not 10 mount points, but one big logical filesystem). Thanks.

    Read the article

  • Why does Windows Event Log stop logging events before maximum log size is reached?

    - by Tuure Laurinolli
    I have a service that produces a lot of event log output. Currently the event log is configured to overwrite old events as needed, to keep the log from ever getting full. We have also increased the event log size considerably (to about 600 MB). Recently the service started reporting errors to its clients, and the error message it was sending them is "The event log file is full". How can this be, when the event log is configured to overwrite as necessary? In our hurry to get the service back up we cleared the event log without saving its contents, but most likely it had not reached 600 MB yet, judging from the sizes of some earlier log dumps. There is also MS KB entry 312571, which reports that a hotfix for a similar issue is available, but the configuration that the fix applies to is not exactly the same as ours; specifically, the fix only applies if event logs are configured to never overwrite old events. I wonder if this has something to do with the fact that the log files apparently are memory-mapped. What happens if the system runs out of address space to map files to?

    Read the article
