Search Results

Search found 14764 results on 591 pages for 'interview questions'.


  • git : The remote end hung up unexpectedly - too many simultaneous users?

    - by Pritam Barhate
    I asked this first on Stack Overflow and was told I should ask it here. We have a self-hosted Git server (gitolite) on a VPS account (CPU: 2.68 GHz, RAM: 1824 MB). The same VPS is also used to publish our web apps under development for client demos (very little traffic), so the main use of the server is as a Git server only. This Git server is accessed by a team of 30-40 people for various projects.

    Our problem is that during the day, when 6-7 people are trying to access the server (sometimes the same repo), we get this frequent error message:

        ssh: connect to host xxx.xxx.xx.xx port 22: Bad file number
        fatal: The remote end hung up unexpectedly

    After trying for 10-15 minutes it generally succeeds. During early mornings and late nights, when there are only 1-2 people, Git commands work with a 100% success rate. I would also note that if I access the other files hosted on the server through HTTP, they work fine.

    I found a couple of questions on Stack Overflow and on other sites regarding this, but most people point towards SSH key setup or conflicts between msysgit and Cygwin SSH. I don't think that is the problem in our case, as we get this behavior on Windows (using msysgit only) as well as on Mac machines. Also, if it were an SSH configuration issue it shouldn't work at all; in our case it works after 10-15 minutes. I think it might be too many simultaneous connections to the same server (or the same repo), or something like that. Is there a setting or a conf file that needs to be modified to solve this problem? Please help me solve it or point me in the right direction. Thanks in advance. Pritam.
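
    If the bottleneck really is simultaneous SSH connections, one sshd knob worth checking is MaxStartups, which caps concurrent unauthenticated connections and drops the excess; that would match failures that appear at peak times and vanish off-peak. A minimal sketch (the values are illustrative, not a recommendation):

        # /etc/ssh/sshd_config
        # start:rate:full - begin refusing ~30% of new unauthenticated
        # connections at 20 pending, scaling up to all of them at 100
        MaxStartups 20:30:100

        # then reload sshd so the change takes effect, e.g.:
        /etc/init.d/ssh reload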

  • Effective backup using RAID / Win7 backup

    - by Job
    I have a stand-alone PC system with two 2 TB hard discs set up as RAID 1, i.e. mirroring. The operational drive is partitioned. I use an external 1 TB hard disc for backup via the Windows 7 Backup facility; it is swapped weekly and stored on other premises. I back up all partitions AND allow a system backup. All application software is on the C: partition.

    Questions:

    1. How can I see whether RAID 1 is working, i.e. doing its job? All I see now is a status message in the start-up procedure saying its status is Normal.
    2. How can I see used and available space on the RAID 1 volume?
    3. The Windows 7 backup allows only one schedule as far as I can see. I want daily backups of data, but due to the single schedule I am forced to run the time-consuming system backup and C: backup as well. Is there a way to set up two schedules, allowing a frequent (daily) data backup plus a system backup with C: drive backup on, say, a weekly basis? Of course it can be forced by hand, but I am likely to forget that.

    I am not the programming type of person, so I am looking for simple and controllable solutions. Thank you; any help is appreciated.
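
    On question 3, one common workaround (a hedged sketch, not a tested recipe) is to leave the built-in schedule doing the weekly system backup and add a separate daily task that drives wbadmin, the command-line side of Windows Backup, through Task Scheduler. The drive letters here are assumptions:

        rem daily 23:00 task backing up the data partition D: to the external disc E:
        schtasks /Create /TN "Daily data backup" /SC DAILY /ST 23:00 ^
            /TR "wbadmin start backup -backupTarget:E: -include:D: -quiet"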

  • Certificates required for WHQL-certified drivers

    - by Kasius
    The 64-bit Windows 7 image that we deploy to machines at our site does not contain all of the certificates included on a default Windows image. Automatic root certificate installation is also disabled, per policy from higher up in the organization. We have had a lot of trouble installing many WHQL-certified drivers from reputable companies (e.g. HP, Lexmark, Dell), and I hypothesize that a required certificate is missing from one of the certificate stores on the machine. The error we typically get is:

        The driver cannot be installed because it is either not digitally signed or not signed in the appropriate manner.

    I know that it is signed. A .CAT file is included, and it has the following tree from top to bottom:

        Microsoft Root Authority (thumbprint a4 34 89 15 9a 52 0f 0d 93 d0 32 cc af 37 e7 fe 20 a8 b4 19)
        Microsoft Windows Hardware Compatibility PCA (thumbprint 93 b8 d8 82 0a 32 db 20 a5 ea b6 8d 86 ad 67 8e fa 14 ea 41)
        Microsoft Windows Hardware Compatibility Publisher (thumbprint b0 50 45 45 42 4e be 2c 16 2f 62 5b bf 5a e6 9b 96 bf 0b 0b)

    What certificates are required to install WHQL-certified drivers? Is it possibly something other than certificates? Thanks!

    NOTE: I have posted this question on TechNet as well, but honestly, I've never had a lot of luck posting questions on the TechNet forums.
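
    If the root of that chain really is missing, one way to test the hypothesis is to export "Microsoft Root Authority" from a machine that installs these drivers successfully and import it into the Trusted Root store of an affected machine with certutil. A hedged sketch; the .cer file name is hypothetical:

        rem on the affected machine, check whether the root is present
        certutil -store Root | findstr /i "Microsoft Root Authority"

        rem import a copy exported from a working machine
        certutil -addstore Root MicrosoftRootAuthority.cer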

  • VPS to replace MobileMe or Google Apps.

    - by Alex
    All: yes, this has been touched on in other questions, but I can't find one similar enough. I currently have Google Apps hosting personal email, calendars, contacts, etc. I do like the other Google services, but they're outside of Apps; one of the little Google irritations is that I have to maintain a separate account for Picasa, etc. So I'm thinking about moving away from Google, purely over personal privacy-type issues. Do I really like the ads, the email snooping, and so on? I've had, and liked, MobileMe, back when it was iTools and then .Mac, but it doesn't offer that much, really.

    How easily can I replicate it all on a VPS? I don't want to host it myself at home; I'd lose all the wonderful datacenter goodness. This isn't about personal geekery in my own basement, just about taking a little control back from Google. So, email is fine: an IMAP server, a nice front end, etc. What about calendars and contacts? And how easily can they be set up to sync to the desktop and iPhone? Thanks.

  • User http does not have write permissions on directory?

    - by dwieeb
    I have a bit of an odd setup, I think. I have a group for each domain my server hosts, and I add the user http to each domain group along with the users that should have access to that group's domain. In a PHP script running from a directory public_html, I try creating a file:

        <?php
        $output = "";
        print exec('touch test 2>&1', $output);

    But I get:

        touch: cannot touch `test': Permission denied

    and the file is not created. Yet here, clearly stated, the group has all permissions on the directory:

        drwxrwxr-x 5 dwieeb example.com 1024 Feb  4 05:19 public_html

    And here are the permissions on the PHP file in public_html that is trying to use the exec function:

        -rw-rw-r-- 1 dwieeb example.com 59 Feb  4 05:19 test.php

    How is this possible if http is part of the example.com group (as seen from a cat of /etc/group) and the directory has full permissions for the group?

        example.com:x:1000:dwieeb,http

    I'm stumped.

    EDIT (since apparently I'm not cool enough to answer my own questions yet): Ah, I found the problem. Yes, I restarted Nginx, but the php-fpm daemon must be restarted as well when http is added to the group for my domain. On Arch Linux:

        rc.d restart php-fpm
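
    The general rule at work here: a process reads its supplementary groups when it starts, so a long-running daemon keeps the group list it was launched with. A hedged way to confirm that kind of mismatch before restarting anything (the path is illustrative):

        id http                                  # what the group database says now
        sudo -u http touch public_html/test     # what a *new* process running as http can do

    If the manual touch succeeds while the script still fails, the daemon simply has not picked up the new group membership yet, which is exactly what the php-fpm restart above fixes.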

  • /usr/bin/install hangs, apparently due to SELinux

    - by Cooper
    I'm trying to use the GNU coreutils install utility, however it is hanging:

        /usr/bin/install -v test_file test_dir/
        `test_file' -> `test_dir/test_file'

    I see the same behavior whether I run as a normal user, or root/sudo. I ran an strace -f, and this is the end of the output:

        ...
        read(4, "<username>\t-d\tsystem_u:object_r:ho"..., 4096) = 2197 <0.000012>
        brk(0x6e3b1000) = 0x6e3b1000 <0.000009>
        mmap(NULL, 29138944, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2abd831ae000 <0.000014>
        munmap(0x2abd815dd000, 29138944) = 0 <0.003466>

    The read() is reading from /etc/selinux/targeted/contexts/files/file_contexts.homedirs, apparently successfully. It appears that the process is hanging right after the munmap, but continues to eat 100% CPU.

    My two questions are:

    1) Any good way to see what is going on with the process? I'm currently too lazy to compile a debug version of install I can run gdb on, but a strong suggestion in an answer here may motivate me to do so if needed.
    2) Any idea what the SELinux issue could be? I'm not too familiar with SELinux.

    Additional info of possible relevance:

        # ls -Z
        drwxr-xr-x  my_user 7001 user_u:object_r:user_home_t test_dir
        -rw-r--r--  my_user 7001 user_u:object_r:user_home_t test_file
        # id
        ... context=user_u:system_r:unconfined_t
        # uname -a
        Linux hostname 2.6.18-238.1.1.el5 #1 SMP Tue Jan 4 13:32:19 EST 2011 x86_64 x86_64 x86_64 GNU/Linux

    I am suspicious that SELinux + Quest Authentication Services (QAS) is causing the issue. QAS is generally well behaved, but it did cause the /etc/selinux/targeted/contexts/files/file_contexts.homedirs to get quite large (~18k users, @23 lines per user).

    Update: install -v -Z user_u:object_r:user_home_t file dir/ seems to work. Can anyone suggest why, given that SELinux is in permissive mode (see comments)?
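
    One hedged way to probe the QAS hypothesis: install has to compute a default SELinux context for the new file by matching the destination path against the file_contexts regexes, and at roughly 18k users times 23 lines that is over 400,000 patterns, which could plausibly spin the CPU for a very long time even in permissive mode. matchpathcon performs just that lookup:

        getenforce                              # confirm Permissive vs Enforcing
        time matchpathcon test_dir/test_file    # how long the context lookup alone takes

    If matchpathcon crawls too, the lookup is the bottleneck, which would also explain why supplying an explicit context with -Z avoids the hang.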

  • Windows Explorer Hangs on Right-Click

    - by Bryan
    I am not sure if this is the right site to post this on, as I typically post coding questions on Stack Overflow, but I'll ask anyway and hopefully someone can move it if it's incorrect. I have a custom-built PC with an Intel i7 chip, a 1300 W PSU, 8 GB of RAM, and two video cards. Originally I had one video card (NVIDIA) that used the PSU connectors and had two DVI outputs. After purchasing a third monitor I installed a second (ATI) graphics card that needs no PSU connectors. After installing it and restarting, I noticed that when I right-click on my desktop, or in Windows Explorer, Explorer hangs, freezes, then restarts. Sometimes after Windows Explorer restarts the problem dissipates.

    I checked to make sure everything was connected properly, and it was. I repaired the ATI Catalyst Control Center to see if that had an issue, and I checked whether either video card required updated drivers. Nothing worked. I tried restarting my PC; that didn't work. I tried using ShellExView (I forgot what it's actually called) and tried closing processes, but that didn't work either. Does anyone have any idea what could have caused this, or possible solutions I should try? Thanks in advance.

  • Django on Apache + mod_wsgi with virtualenv

    - by ArgsKwargs
    I have some questions about running multiple Django sites on a VPS. I have a server that uses OpenPanel to automatically create VirtualHosts within apache2. My ideal situation is to have multiple virtualenvs with different dependencies installed, so the Python dist-packages directory isn't contaminated across different Django sites. For example:

        /home/user/virtualenv1
        /home/user/virtualenv2

    My Django applications reside at /var/www, for example:

        /var/www/djangosite1
        /var/www/djangosite2

    Having read the OpenPanel docs, the best thing to do seems to be to create a django.conf file inside the mydomain.com.inc folder, i.e. /etc/apache2/openpanel.d/mydomain.com.inc/django.conf, looking something like:

        DocumentRoot /var/www/djangosite1/project
        WSGIScriptAlias / /var/www/djangosite1/project/wsgi.py
        WSGIDaemonProcess mydomain python-path=/home/user/virtualenv1/lib/python2.6/site-packages
        <Directory /var/www/djangosite1/project>
            Order allow,deny
            Allow from all
        </Directory>
        Alias /static /var/www/djangosite1/project/static-root

    My problem is that this setup seems unable to find the virtualenv's site-packages, and thus does not recognize any dependencies available in the given virtualenv. Also, commenting out this line doesn't seem to break or change a thing:

        WSGIDaemonProcess mydomain python-path=/home/user/virtualenv1/lib/python2.6/site-packages

    For example:

        > service apache2 start
        ImportError: No module named South

    When I install South outside the virtualenv, everything works.
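
    The fact that commenting out WSGIDaemonProcess changes nothing is a strong hint, hedged: that directive only defines a daemon process group, and nothing in the config above ever assigns the application to it, so the app keeps running in Apache's embedded interpreter with the system Python path. A sketch of the missing wiring, reusing the names from the config:

        WSGIDaemonProcess mydomain python-path=/home/user/virtualenv1/lib/python2.6/site-packages
        WSGIProcessGroup mydomain

    With WSGIProcessGroup pointing at the named group, the python-path should actually take effect (newer mod_wsgi versions also accept python-home= pointed at the virtualenv root, which is the cleaner form).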

  • Can a website company that builds 4-5 websites a year afford dedicated hosting?

    - by Petras
    We manage about 30 websites that use shared ASP.NET/SQL Server web hosting. These are typical small/medium business websites and they perform fine in this environment. Recently I was looking at VPS hosting in this thread: http://serverfault.com/questions/128329/how-do-you-host-multiple-public-facing-websites-on-a-vps

    After contacting a provider from one of the replies, I was told that VPS hosting is not recommended for 30 sites, even small ones; the resource requirements might be too great even for a VPS, so I should turn to dedicated hosting. The lowest-cost dedicated hosting is $219 per month (see http://www.serverintellect.com/dedicated/pentiumdservers.aspx), but that is only a single processor, which seems too light for a machine running both IIS and SQL. In our office all the developers work on quad cores, so I assume I'd really need the quad processor; however, that starts at $599 monthly.

    Now, I won't be able to transfer all of our 30 sites to this machine, only say 5 or 6. Moving forward, though, I'd be able to host all future sites on it, which amounts to 4-5 per year. Let's look at the economics. Shared hosting costs are typically $16.95 monthly (see http://www.crystaltech.com/dotnet.aspx). So here's the dilemma:

        First month's costs: $599
        First month's revenue: 6 x $16.95 = $101.70
        Loss in first month: $497.30

        First year's costs: $599 x 12 = $7,188
        First year's revenue: 6 x $16.95 x 12 + 5 x $16.95 x 6 (averaged) = $1,728.90
        Loss in first year: $5,459.10

    Clearly it is going to take years for this server to pay for itself. It just doesn't seem economical! Am I missing something here, or is dedicated not the way to go with the number of sites we build?

  • 503 Error After Microsoft Application Request Routing Is Installed - 32-bit/64-bit madness

    - by KenB
    I have a requirement to install the Microsoft Application Request Routing component for IIS 7.5 running on a Windows 2008 R2 SP1 64-bit machine. After installing it via the Web Platform Installer, our ASP.NET 4.0 application gets:

        HTTP Error 503. The service is unavailable.

    The Windows event log error details say:

        The Module DLL 'C:\Program Files\IIS\Application Request Routing\requestRouter.dll' could not be loaded due to a configuration problem. The current configuration only supports loading images built for a AMD64 processor architecture. The data field contains the error number. To learn more about this issue, including how to troubleshooting this kind of processor architecture mismatch error, see http://go.microsoft.com/fwlink/?LinkId=29349.

    I can make this error go away by changing the application pool to run in 32-bit mode, setting "Enable 32-Bit Applications" to true, but I would prefer not to do that. My questions are:

    1. Why is the Application Request Routing feature trying to load a 32-bit version? Isn't there a 64-bit version of it?
    2. How do I resolve this issue without having to change my application pool to 32-bit mode?

  • Memory Usage of SQL Server

    - by Ashish
    The SQL Server instance on my server is using almost all the memory available on the physical server: if I have 8 GB of RAM, SQL Server is using 7.8 GB of it. I have read articles, and many similar questions on this forum, and I understand that the memory is reserved and in use. But I have two identical servers and two SQL Servers; why is this happening on one SQL instance and not on the other?

    Also, when I run DBCC MEMORYSTATUS, it shows:

        VM Reserved   8282008
        VM Committed   537936

    So from this we know that SQL Server reserved the whole 8 GB of memory, but why does this VM Committed figure keep increasing? What I understand is:

        VM Committed: This value shows the overall amount of VAS that SQL Server has committed. VAS that is committed has been associated with physical memory.

    So this is the memory SQL Server has committed (from this I understand it is the physical memory the instance is actually using). I would like to know the reason behind this ever-increasing VM Committed memory on one server and not on the other. Thanks in advance.
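
    A hedged side note: committed memory normally keeps climbing until it reaches the instance's max server memory setting, which is effectively unbounded by default, because the buffer pool only releases memory under external pressure. Comparing (and, if desired, capping) that setting on the two servers is a cheap first check; the 6144 MB value below is purely illustrative:

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)';   -- compare the run_value on both servers
        -- EXEC sp_configure 'max server memory (MB)', 6144; RECONFIGURE;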

  • Shuffling in Windows Media Player

    - by Crazy Buddy
    I think Windows Media Player has several issues indeed. You see, I hear songs most of the time using WMP 11 (on WinXP SP3). Today, while I was wasting my time poking some sleepy questions on SE, I also noticed this. My Now Playing list contains some 500 MP3s (doesn't matter). I've enabled both Shuffle and Repeat, and I play those songs. When I get irritated with some song (say, the 10th song), I change it. Something mysterious happened (and happens even now): a sequence of at least 3 songs already played before the 10th song repeats again, in the same order, following the selected one. Then I skip those somehow and arrive at another boring song (say now, the 20th), and by then the sequence has grown to about 5 songs (sometimes). Sometimes I even notice a specific sequence of songs (including the skipped one) repeating again and again. I doubt most guys would've noticed.

    This makes me ask a question: why? There are a lot of songs in my playlist; why the same sets of songs? Does WMP really choose a sequence at the start and follow it, and once a change is encountered, start the sequence again after several songs? Is it so? Feel free to shoot it down; I don't know whether it's acceptable here. Just curious about it.

    Note: this is only observed when both Shuffle and Repeat are enabled. To confirm, I tried it on two other PCs of mine (thereby dumping 2 hours). BTW, I also didn't observe this magic in VLC, Winamp, K-Lite, or even my Nokia cellphone. I think I'm not a good Googler, and so I can't find any such issues :-)

  • How do I calculate the cost of printing a given page?

    - by Alenanno
    I have seen questions like "How much does a square inch of ink cost?" and "How much more will a high-DPI image cost to print?", but mine is asking neither about a specific case nor about how much something costs, as that would depend on the toner, for example. Rather, I was wondering how I should go about calculating the cost of printing a given page. Note that "given page" should be seen as a sort of x; i.e., the answer should be applicable in any case. I'd like this question to provide a good reference for those who want to calculate this cost.

    What should be taken into consideration? The cost of a single page (the paper only) is easily checkable: you divide the cost of the whole package by the number of pages in the package. But how do I calculate the cost of the ink/toner? Which could translate to: how do I calculate the ink density [1] for a given printer? I know it depends on the quality of the printer itself, the type, the quality of the image being printed, the very nature of what I'm going to print, etc. But again, the focus of my question is not on the variables of this case but on the constants, hoping the math simile works for this case too.

    [1] Total amount of ink in one area of the page.
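
    A hedged sketch of the usual first-order estimate: toner cartridge yields are rated at a standard 5% page coverage (ISO/IEC 19752 for mono laser), so the consumable cost scales with your actual coverage. All figures below are illustrative:

        cost_per_page = paper_price / pages_per_pack
                      + (cartridge_price / rated_yield) x (actual_coverage / 5%)

        e.g.  $5 / 500 sheets                  = $0.010 paper
              ($60 / 2000 pages) x (10% / 5%)  = $0.060 toner
              total                            = $0.070 per page

    Coverage is the hard "constant" to pin down; some drivers and page-analysis tools can estimate it for a given document, which keeps the rest of the calculation mechanical.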

  • RAID 5 with hot spare or RAID 10 with no hot spare?

    - by Boden
    Yes, this is one of those "do my job for me" questions; have some pity. :) I'm at the limit of what I can do with the number of hard drives in this server without spending a substantial amount of money. I have four drives left to configure, and I can either set them up as a RAID 5 with a dedicated hot spare, or as a RAID 10 with no hot spare. The usable size will be the same either way, and the RAID 5 will offer enough performance. I'm RAID 5 shy, but I also don't like the idea of running without a hot spare. I'm not so worried about degraded performance as about the amount of time the system would be without adequate redundancy. The server and drives are under a 13x5, 4-hour-response contract (although I happen to know that the nearest service provider is at least 2-3 hours away by car in the winter). I should note that the server also has two RAID 1 arrays which would also be protected by the hot spare. Why don't they make drive cages with 9 bays! Heh.

  • Standards for documenting/designing infrastructure

    - by Paul
    We have a moderately complex solution for which we need to construct a production environment. There are around a dozen components, and here I'm using a definition of "component" that means "can fail independently of other components": e.g. an Apache server, a WebLogic web app, an FTP server, an ejabberd server, etc. There are a number of WebLogic web apps, and one thing we need to decide is how many WebLogic containers to run them in. The system needs to be highly available, and communications in and out of the system are typically secured by SSL. Our datacentre team will handle things like VLAN design, racking, and server specification and build. So the kinds of decisions we still need to make are:

    1. How to map components to physical servers (and WebLogic containers).
    2. How to identify all communication paths and ensure each is either resilient or has a resilient "upstream" comms path whose failover covers the single points of failure "downstream" of it.
    3. Where to terminate SSL (on the load balancers, or on the Apache servers, for instance).

    My question isn't really about how to make the decisions, but whether there are any standards for documenting (especially in diagrams) the design questions and the design decisions. It seems odd, for instance, that Visio doesn't have a template for something like this: it has templates for more physical layouts and for more logical/software architecture diagrams. So right now I'm using a basic Visio diagram to represent each component and the comms between them, with plans to augment this with hostnames, ports, whether each comms link is resilient, etc. This all feels like something that must have been done many times before. Are there standards for documenting this?

  • IPv6 seems to be enabled - How do I configure it without interfering with IPv4?

    - by Mister IT Guru
    I noticed that some of my CentOS boxes have IPv6 enabled and seem to have addresses. I have no problem with this, but I would like to get a handle on it, and even connect to them using IPv6. This would really help if DHCP has a hiccup for any reason. But I'm a bit lost as to where the configuration on my CentOS box is. (I am also on Google researching this, but I like Server Fault! :) )

    I am hoping to be able to log into these systems via the VPN, because every now and then the DHCP device has a bad morning and needs to be restarted. (I'm also looking into that issue, but someone else handles it; management separation gone mad!) It's a remote client, so it would be a lot easier for me to connect to these systems, which seem to self-configure, and use that as a pivot via SSH tunnels to get to the other remote devices and continue to manage them while our main route is fixed.

    I guess my questions are:

    1. How can I configure IPv6 without interfering with IPv4?
    2. On CentOS, can I influence this auto-configuration I seem to be seeing?
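
    For context, hedged: the addresses you are seeing are most likely kernel-generated link-local addresses (fe80::/10) or SLAAC addresses derived from router advertisements; that is the auto-configuration part, and it runs independently of IPv4. Static IPv6 on CentOS lives in the usual sysconfig files, and adding it does not touch the IPv4 directives at all. A sketch with a documentation-range address:

        # /etc/sysconfig/network
        NETWORKING_IPV6=yes

        # /etc/sysconfig/network-scripts/ifcfg-eth0
        IPV6INIT=yes
        IPV6ADDR=2001:db8:1::10/64
        IPV6_AUTOCONF=no     # yes keeps accepting router advertisements

        # then: service network restart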

  • Odd log entries when starting up PostgreSQL

    - by Shadow
    When restarting pgSQL, I get the following log entries:

        2010-02-10 16:08:05 EST LOG: received smart shutdown request
        2010-02-10 16:08:05 EST LOG: autovacuum launcher shutting down
        2010-02-10 16:08:05 EST LOG: shutting down
        2010-02-10 16:08:05 EST LOG: database system is shut down
        2010-02-10 16:08:07 EST LOG: database system was shut down at 2010-02-10 16:08:05 EST
        2010-02-10 16:08:07 EST LOG: autovacuum launcher started
        2010-02-10 16:08:07 EST LOG: database system is ready to accept connections
        2010-02-10 16:08:07 EST LOG: connection received: host=[local]
        2010-02-10 16:08:07 EST LOG: incomplete startup packet
        2010-02-10 16:08:07 EST LOG: connection received: host=[local]
        2010-02-10 16:08:07 EST FATAL: password authentication failed for user "postgres"
        2010-02-10 16:08:08 EST LOG: connection received: host=[local]
        2010-02-10 16:08:08 EST FATAL: password authentication failed for user "postgres"
        2010-02-10 16:08:08 EST LOG: connection received: host=[local]
        2010-02-10 16:08:08 EST FATAL: password authentication failed for user "postgres"
        2010-02-10 16:08:09 EST LOG: connection received: host=[local]
        2010-02-10 16:08:09 EST FATAL: password authentication failed for user "postgres"
        2010-02-10 16:08:09 EST LOG: connection received: host=[local]
        2010-02-10 16:08:09 EST FATAL: password authentication failed for user "postgres"
        2010-02-10 16:08:10 EST LOG: connection received: host=[local]
        2010-02-10 16:08:10 EST FATAL: password authentication failed for user "postgres"
        2010-02-10 16:08:10 EST LOG: connection received: host=[local]
        2010-02-10 16:08:10 EST FATAL: password authentication failed for user "postgres"
        2010-02-10 16:08:11 EST LOG: connection received: host=[local]
        2010-02-10 16:08:11 EST FATAL: password authentication failed for user "postgres"
        2010-02-10 16:08:11 EST LOG: connection received: host=[local]
        2010-02-10 16:08:11 EST FATAL: password authentication failed for user "postgres"
        2010-02-10 16:08:12 EST LOG: connection received: host=[local]
        2010-02-10 16:08:12 EST FATAL: password authentication failed for user "postgres"
        2010-02-10 16:08:12 EST LOG: connection received: host=[local]
        2010-02-10 16:08:12 EST FATAL: password authentication failed for user "postgres"
        2010-02-10 16:08:12 EST LOG: connection received: host=[local]
        2010-02-10 16:08:12 EST LOG: incomplete startup packet

    My question regarding a potential consequence of this is posted here: http://stackoverflow.com/questions/2238954/mdb2-says-connection-failed-db-logs-say-otherwise , but I didn't realize this was happening when I asked that question, and I figured this [part of the] problem is for SF.

    Edit: I can connect to the database and manipulate things normally with the psql CLI and the postgres user.

  • Value of Itanium or SPARC over x86_64 for Oracle Deployment

    - by Antitribu
    We are looking at a new environment to run our Oracle database, which runs on SUSE (potentially migrating to Red Hat). Our database is approximately 100 GB and performs adequately on our current x86_64 hardware with approximately 6 GB of RAM allocated to it. We are growing quickly, however, and will require more performance shortly. Given the cost of Oracle licenses, we would like to maximize the value from each license by choosing the most appropriate CPU to run the software on. The questions are:

    1. Are there substantial benefits to looking at Itanium or SPARC hardware, and are there any drawbacks?
    2. Is there a point where one starts to scale out better than the other?
    3. What are the long-term support options for Itanium?
    4. Given the dominance of x86, would it be safer long term to stick with x86?
    5. On average, what would be the performance benefit of implementing an Oracle database on Itanium or SPARC over x86_64? Is this an issue at all, or will other factors (I/O, RAM) cap out first?

    If anyone can point me towards solid documentation comparing the platforms, with good case analysis of when to choose which, I'm more than happy to accept that as an answer.

    Edit: added SPARC as an option, as it was previously not considered; with the recent Oracle-Sun acquisition it seems very relevant.

  • User authentication -- username mismatch in IIS in ASP.NET application

    - by Cory Larson
    Last week, an employee's Active Directory username was changed (or a new one was created for them). For the purposes of this example, let's assume these usernames:

        Old: Domain\11111
        New: Domain\22222

    When this user now logs in with their new username and attempts to browse to any one of a number of ASP.NET applications using only Windows authentication (no Anonymous enabled), the system authenticates them, but our next layer of database-driven permissions prevents them from being authorized. We tracked it down to a mismatch of usernames between their logon account and who IIS thinks they are. Below are the values of several ASP.NET variables from apps running in a Windows 2008 / IIS 7.5 environment:

        Request.ServerVariables["AUTH_TYPE"]: Negotiate
        Request.ServerVariables["AUTH_USER"]: Domain\11111
        Request.ServerVariables["LOGON_USER"]: Domain\22222
        Request.ServerVariables["REMOTE_USER"]: Domain\11111
        HttpContext.Current.User.Identity.Name: Domain\11111
        System.Threading.Thread.CurrentPrincipal.Identity.Name: Domain\11111

    From the above, only the LOGON_USER server variable has the correct value: the account the user used to log on to their machine. However, we use the AUTH_USER variable for looking up database permissions. In a separate testing environment (a completely different server: Windows 2003, IIS 6), all of the above variables show Domain\22222. So this seems to be a server-specific issue, as if the credentials are somehow being cached either on the user's machine or on the server (the former seems more plausible).

    So the question is: how do I confirm whether it's the user's machine or the server that is botching the request, and how should I go about fixing it? I looked at the following two resources and will be giving the first one a try shortly:

        http://www.interworks.com/blogs/jvalente/2010/02/02/removing-saved-credentials-passwords-windows-xp-windows-vista-or-windows-7
        http://stackoverflow.com/questions/2325005/classic-asp-request-servervariableslogon-user-returning-wrong-username/5299080#5299080

    Thanks.
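
    If stale credentials cached on the client are the suspect, the stored-credential list is quick to inspect and prune from a command prompt on the user's machine (a hedged sketch of what the first resource walks through; the target name is an assumption):

        rem list saved credentials, including domain entries
        cmdkey /list

        rem remove a stale entry for the app server
        cmdkey /delete:appserver.example.com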

  • External storage for 2 TB of backups and 4 TB of data: RAID level? HW vs software?

    - by Jerry Mayers
    I have a Mac mini set up as a media center / file server. Currently I just have a hodgepodge mess of external drives for storage. I'm maxed out, I have some new laptops on the way with much larger drives, and I need to work out a good storage solution for backing them up as well as storing media on the server. I need around 2 TB of storage for the Time Machine backups from my various systems and around 2 TB more for media, and I would like to build this to handle around 6 TB total so I have some growing room. Since I'm using a Mac mini as the server, I need external enclosure(s) that support USB 2, FireWire 800 (preferred), or gigabit Ethernet. Performance isn't a huge concern, since the majority of access from other computers is over 802.11n. I plan on using 2 TB drives for the final version, but initially I'll try to use my existing two 1 TB drives plus some new 2 TB drives, swapping the 1 TB ones out as I fill up.

    As to the actual questions:

    1. Should I use hardware RAID in some enclosure? If the enclosure dies, I have to find an identical one to get to my data, right? Wouldn't software RAID be better, as I could then use any method of connecting the drives to the system? Remember, OS X Server is my OS. What if I had to reinstall OS X; can I restore the software RAID easily?
    2. What RAID level should I use? For the 2 TB used for the Time Machine disk I don't see why I need RAID; a single 2 TB drive is enough, since it's already the backup. But the remaining 4 TB would be the only copy of the data, so I should build in some redundancy.
    3. I had a RAID 5 setup years ago, using a cheap RAID PCI card running a 2 TB array, and when a drive died it wanted 48 hours to rebuild. Is this crazy slow for a setup of this size, or is it to be expected?
    4. Any suggestions as to drive enclosures?
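
    On the software-RAID question, hedged: OS X's built-in AppleRAID stores the set metadata on the member discs themselves, so a freshly reinstalled OS should recognize the set as soon as the drives are attached, and it is managed entirely with diskutil. A sketch; the disk identifiers are assumptions to verify with diskutil list first:

        diskutil list                                     # find the member disc identifiers
        diskutil appleRAID create mirror MediaRAID JHFS+ disk2 disk3
        diskutil appleRAID list                           # check the status of the set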

  • Slackware - Assigning routes (IP address ranges) to one of many network adapters

    - by Dogbert
    I am using a Slackware 13.37 virtual machine within VirtualBox (current). I currently have a number of Ubuntu VMs on a single server, along with this Slackware VM. All VMs have been set up to use "Internal Network" mode, so they are on a private LAN and can see each other (i.e. share files amongst themselves) while remaining private from the outside world. One of these VMs (the Slackware one) needs access to both this private network and the internet at large. The first suggestion I found for handling this is to add another virtual network adapter to the VM and set it to NAT, resulting in the Slackware VM having the following network adapter setup:

        NIC#1: Internal Network
        NIC#2: NAT

    I want to set up the first network adapter (NIC#1) to handle all traffic on the following subnets:

        10.10.0.0/255.255.0.0
        192.168.1.0/255.255.255.0

    and I want the second virtual network adapter (NIC#2) to handle everything else (i.e. internet access). May I please have some assistance in setting this up on my Slackware VM? Additionally, I have searched for similar questions on Super User and Stack Overflow, but none seems to pertain to my situation (they all refer to OS X, or to Ubuntu via the use of some UI-based tool). I'm trying to do this on Slack, specifically via the command line. Thanks!
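
    A hedged sketch of the routing side, assuming NIC#1 is eth0, NIC#2 is eth1, and VirtualBox's default NAT gateway of 10.0.2.2; all three are assumptions to verify with ifconfig and route:

        # send the private subnets out the internal-network adapter
        route add -net 10.10.0.0 netmask 255.255.0.0 dev eth0
        route add -net 192.168.1.0 netmask 255.255.255.0 dev eth0

        # everything else goes out the NAT adapter
        route add default gw 10.0.2.2 eth1

    On Slackware these can be made persistent by appending them to /etc/rc.d/rc.local (the address configuration itself lives in /etc/rc.d/rc.inet1.conf).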

  • Why can't I see all of the client certificates available when I visit my web site locally on Windows 7 IIS 7?

    - by Jay
    My team has recently moved to Windows 7 for our developer machines, and we are attempting to configure IIS for application testing. Our application requires SSL and client certificates in order to authenticate.

    What I've done: I have configured IIS to require SSL and to require (I also tried "accept") certificates under SSL Settings. I have created the https binding and set it to the proper server certificate. I've installed all the root and intermediate chain certificates for the soft certificates properly, in both the Current User and Local Machine stores.

    The problem: when I browse to the web site, the SSL connection is established and I am prompted to choose a certificate. The issue is that the only certificate offered is one created by my company that would be invalid for use in the application. I am not given the soft certificates that I have installed using MMC and IE. We are able to use the soft certs from our development machines against the Windows 2008 servers that host the application.

    What I did: I have attempted to copy the root CA into every folder location in the Current User and Local Machine stores that the company certificate's root is in.

    My questions:

    1. Could I be mishandling the certs anywhere else?
    2. Could there be a local/group policy that could be blocking the other certs from use?
    3. What (if anything) has to be done differently on Windows 7 compared to 2008 with regard to IIS?

    Thanks for your help.
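
    One Windows 7 / 2008 R2 specific behaviour worth ruling out, hedged: Schannel sends the server's list of trusted issuers during the TLS handshake, and the browser only offers client certificates chaining to an issuer on that list, so if the list is truncated or missing your CA, the soft certs are silently filtered out. The server can be told not to send the list at all:

        reg add HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL ^
            /v SendTrustedIssuerList /t REG_DWORD /d 0 /f

    followed by a reboot, or at least an IIS restart; the registry path is the standard Schannel key.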

  • Http-Only cookies in WebLogic: what versions support them/how and why are they supported?

    - by John
    We want to make all cookies set by our webapp HTTP-only. I only have a basic understanding of the benefits of doing this, but I'm told by security people that it's a Good Thing (tm). Our app is running under JDK 1.6.05 and WebLogic 10.3.0. After way too much digging around Oracle's website for documentation, I've found good evidence that the first version of WebLogic to support HTTP-only cookies is 10.3.1. By "support" I mean the cookie-http-only deployment-descriptor element. Before we go about upgrading, it would be nice to have these questions answered:

    1a) Is it accurate that WebLogic 10.3.1 is the first version to support HTTP-only cookies, and that we're out of luck with 10.3.0?
    1b) If we do indeed need to upgrade, is there an easy way to do so under Windows? I've heard people mention an "upgrade jar" that you just stick in the classpath, but I can't find any mention of this by Oracle. Does an easy way exist, or do we need to do a full install of the new version?
    2) What does the cookie-http-only deployment-descriptor element do when enabled? Will it ensure all cookies set by the application have the HttpOnly attribute? Will it do more or less? Is there anything I'll have to do programmatically?
    3) Is there anything in general I should know about HTTP-only cookies, getting my web app to take advantage of them, or other security concerns?
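
    For reference, the deployment-descriptor element in question goes in the web app's WEB-INF/weblogic.xml; on a WebLogic version that supports it, this is the whole change (a sketch of the documented element, not a tested config):

        <session-descriptor>
            <cookie-http-only>true</cookie-http-only>
        </session-descriptor>

    One hedged caveat: descriptor settings like this govern the container's session cookie (JSESSIONID); cookies your own code adds via response.addCookie() may still need the flag applied separately.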

  • Windows 7 Automatically Connecting To Unsecured Wireless Networks On Startup

    - by Xtend
    Most of the questions on this topic relate to folks connecting to somebody else's wireless network when their own was available, who could remedy the situation by going to their connections and unchecking the "connect automatically" box. See "Avoid automatically connecting to wireless network on Windows 7" as an example.

    In my situation, I've noticed that Windows 7 will automatically connect to any unsecured wifi network, even one I have never connected to in the past. If I am traveling and boot Windows 7, it will start up and connect to whichever unsecured network appears to have the best signal, without prompting me for confirmation (note: in the above link, "Naveen" seems to have the same problem). Obviously, that is a security concern to me. Further, when I open "Network and Sharing" and "Manage wireless networks", the network is not displayed (probably because I labelled it a public network). Again, these are new wireless networks, never connected with before. I always promptly disconnect from them, but I don't want to have to be on constant guard against an automatic connection to a malicious network.

    This began about a month ago. As I recall, Windows 7 did not behave like this in the past; I didn't monkey with wifi settings, and I don't use a third-party connection manager. I did have to download some internet security certificates for army website access, but I don't think that should mess with network settings. Any ideas how I can tell Windows 7 to cease automatically connecting to networks or, at least, to prompt me for confirmation before connecting?
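
    When a profile doesn't show up in the Manage Wireless Networks UI, the netsh wlan view of the same data is worth checking; it lists every stored profile and can flip any of them to manual connection (the profile name below is a placeholder):

        netsh wlan show profiles
        netsh wlan set profileparameter name="SomeOpenNetwork" connectionmode=manual
        netsh wlan delete profile name="SomeOpenNetwork"

    If show profiles comes back empty yet the auto-connections continue, that would point away from stored profiles and toward a driver or third-party utility doing the connecting; a hedged diagnosis, not a definite one.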

  • Partitioning & Linux

    - by Zac
    Every tutorial on Linux-based partitioning schemes (or just partitioning in general) will tell you that a PC can have either 4 primary partitions, or 3 primaries and 1 extended. They will all also tell you that Linux (in my case, Ubuntu) can be installed on either. It's also come to my attention that it is not too atypical for FHS directories, such as usr/, tmp/, etc/, home/ or var/, to be mounted separately on other partitions. Several questions I am unable to find the answers to, purely for my own edification:

    1. By "PC", are we really talking about common PC disk types, like IDE or SATA? I guess I'm wondering why PC users are limited to 4 primaries or 3 primaries + 1 extended.
    2. I'm choking on some basic OS concepts: it is said that a partition can be mounted by a file system or an OS. So I assume this means I can somehow instruct Ubuntu to mount one partition, and then have any part of, say, ReiserFS mounted on another partition? How?
    3. (a) What about creating swap partitions? Is there too much of a good thing with swap partitioning? If I have 4 GB RAM over a 320 GB disk, what should my swap partition size be, and why? (b) Are swap files the only way to create swap space? Wouldn't a Linux partitioning utility allow me to define a partition as being for virtual memory only?
    4. Why are partitions limited to being "mounted" by just OSes and file systems? Why couldn't I write a program to take up its own, say, 512 MB partition, and then have it invoked or used by an OS installed on another partition?

    Thanks for shedding any light here... it's not critical that I know this stuff, but it's got me thinking incessantly. And when I think incessantly, I... can't... sleep...
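
    On 3(b): no. A dedicated swap partition is the traditional arrangement, and any Linux partitioning tool can create one (partition type 82 in fdisk); swap files are the alternative, not the only option. A hedged sketch, with the device name as an assumption:

        mkswap /dev/sda5          # write a swap signature to the partition
        swapon /dev/sda5          # enable it immediately
        swapon -s                 # confirm the active swap areas

        # /etc/fstab entry so it is enabled at boot
        /dev/sda5   none   swap   sw   0   0

    For sizing, the old "2x RAM" rule has softened; with 4 GB of RAM, something in the 2-4 GB range is a common choice, larger if you want hibernation (which writes RAM contents to swap).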
