Search Results

Search found 1451 results on 59 pages for 'theory'.

Page 41/59 | < Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >

  • Zero downtime uploads / Rollback in IIS

    - by NickatUship
    I'm not sure if this is the right way to ask this question, but here's basically what I'd like to do: 1) push a changeset to a site in IIS, 2) don't interrupt the users, and 3) be able to roll back effortlessly. There are a few things that I know have to happen, and two are already handled: out-of-proc session and out-of-proc cache. The questions that remain: 1) How do I keep from interrupting the users? If I just upload the files to bin, the app recycles and takes 10+ seconds to come back online. 2) How do I roll back effortlessly? One possible solution would be to have two sites set up in IIS, one public and one private. Uploads go to the private site and get warmed up; after warmup, the sites are swapped. A rollback then only entails swapping back to the private site without an upload. This seems sound in theory, but I'm not sure of the mechanics. Any ideas?
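
    One way to script the swap idea, as a rough sketch only: it assumes IIS 7 or later (appcmd is not available on IIS 6, where the same swap would be done against the metabase or in IIS Manager) and hypothetical site names "MySite-Blue" and "MySite-Green" plus example host headers.

        rem Warm up the staging (green) site on its private binding first, e.g. http://staging.local/
        rem Then swap the public host header between the two sites:
        %windir%\system32\inetsrv\appcmd set site "MySite-Blue" /bindings:http/*:80:staging.local
        %windir%\system32\inetsrv\appcmd set site "MySite-Green" /bindings:http/*:80:www.example.com
        rem Rollback = run the two commands again with the bindings reversed.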

    Read the article

  • Strange IIS hits originating from Trend Micro

    - by TesterTurnedDeveloper
    I'm trying to trace through an error on an extranet site I maintain. I've had a look through the logs, and I'm seeing hits originate from these IP addresses: 216.104.15.130 216.104.15.138 216.104.15.142 216.104.15.13 150.70.84.49 150.70.84.44. Network-tools.com gives 'TREND MICRO INCORPORATED' as the owner of all of these IPs. The hits fail because they aren't sending any cookies (and therefore aren't considered logged in). The hits are to pages with URLs that only a logged-in user would see, e.g. ImageEdit.aspx?ImageId=467424. In other words, the server isn't guessing these URLs; someone would have to log into the site to know they exist. Theory: the Trend Micro antivirus client grabs URLs and sends them to the server for 'extra processing'? Googling around gives me this: http://www.forumpostersunion.com/showthread.php?p=51272 - where people are reporting comment spam from these addresses. The article says their servers had been hacked (a few months ago, presumably fixed now?). A hacked server wouldn't explain how the URLs were plucked off the users' PCs. Has anyone seen this before? Is anything nefarious going on here?

    Read the article

  • Ruby Passenger + Nginx or lighttpd + FastCGI for shared hosting

    - by devnull
    I have set up a Passenger + nginx installation and I plan to offer free non-commercial hosting (or in fact on-the-fly deployment) for Rack-based frameworks (e.g. Camping, Sinatra). I am facing an "issue" with Passenger. For each application you need to configure nginx.conf (it would be the same with Apache, so it is not an nginx issue) with: server { ... passenger_base_uri /app1; passenger_base_uri /app2; passenger_base_uri /app3; } Now this is not inherently bad as, in theory, I could allow a user to run just one app on his webspace, but even in this case I need to create a new server entry in nginx (e.g. user.domain.com). As this will mainly be used to deploy apps, the behaviour I am looking for is the possibility to auto-map several apps (e.g. app1, app2, app3, app4) under the same server (yourapp.com/app1, yourapp.com/app2) without having to update the nginx or Apache file each time. This seems to be a limitation in Passenger. As such I am thinking about an alternative with lighttpd and FastCGI. Would this allow immediate deployment without touching the lighttpd config file, e.g. I create a new directory with app2 and it runs immediately? What is your experience of the performance difference between Passenger + nginx vs. lighttpd + FastCGI? Thanks in advance. Scenario details: on nginx + Passenger, a user cannot add a new sub-folder and run another Sinatra/Camping app without declaring the path in nginx.conf and restarting the server; the desired behaviour with the new setup is that a user can add a new folder with a new app and it runs on lighttpd + FastCGI without any extra configuration of the web server.
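
    For reference, the sub-URI layout Passenger expects looks roughly like this (a sketch with hypothetical paths and app names; each app's public directory is symlinked under the server's document root, and each still needs its own passenger_base_uri line plus a reload):

        # nginx.conf (sketch)
        server {
            listen 80;
            server_name user.example.com;
            root /var/www/user/public;    # /var/www/user/public/app1 -> /home/user/app1/public (symlink)
            passenger_enabled on;
            passenger_base_uri /app1;
            passenger_base_uri /app2;
        }

    As far as I know, lighttpd + FastCGI has the same shape of problem: each Rack app normally needs its own fastcgi.server entry unless everything is routed through a single dispatcher, so "drop a folder in and it just runs" is hard to get with either stack.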

    Read the article

  • Default Browser hangs (IE, Chrome)

    - by Craig Hinrichs
    IE was my default browser about three months ago when I started experiencing this issue. Intermittent hangs would occur when I opened a new main page or a new tab to a site I knew was up. What I mean by a hang: the browser would open and say "Waiting for site " and do nothing more. If I closed the window and reopened it, it would immediately connect. Over time I would have to close and reopen the window to get to the page. This would happen on any page, including Google. I finally got sick of it and started using Chrome, and I will never go back. I recently upgraded my antivirus and now I am experiencing the same issue with Chrome. I use AVG for my antivirus. Empirically, it seems that if I don't make Chrome my default browser I don't experience the issue; I tested this theory for over two hours yesterday. Possible causes I have found but not confirmed yet: 1) MTU settings are not correct; 2) I am infected but my antivirus has not caught it (unlikely but possible). I would like to think this is related to my antivirus but I am unsure how to verify that, and I don't like the idea of killing my antivirus if #2 is a possibility. I am looking for tips on how I can troubleshoot the possible causes. I am on Windows XP SP3. Thanks in advance.

    Read the article

  • IE9 appears to be ignoring RewriteRule in htaccess file

    - by mouli
    I have a site that uses SEF URLs and htaccess RewriteRules to serve up the pages. This has worked fine for several years until the arrival of IE9. Now it appears that the links are not being rewritten and the site is dead in the water. I have tried different compatibility modes, to no avail, and I've played with the rewrite rules over and over, tried different doctypes and a few other browser settings. I agree that in theory it cannot be a browser-specific problem if the problem is with the htaccess file, but this site works in IE8, Firefox and Chrome. I have run the RewriteRule through a validator and it looks fine. The site is www.marlboroughsounds.co.nz, a sample link is http://www.marlboroughsounds.co.nz/walking/freedom-walk-queen-charlotte-track/4dfw and the rewrite rule that's not working looks like this: RewriteRule ^walking/.*/([a-z0-9_]*)/?$ /walking.php?act_code=$1 [L] The link fails and it serves up a browser 404 page, not even the custom 404 I have for the site. Any ideas would be much appreciated as I am stumped.
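
    To see whether IE9's requests even reach the rule, mod_rewrite logging in Apache 2.2 can be turned on temporarily (a sketch; these directives go in the server config or vhost, not in .htaccess, and the log path is just an example):

        # Apache 2.2 vhost (sketch): log rewrite decisions while reproducing the IE9 request
        RewriteEngine On
        RewriteLog /var/log/httpd/rewrite.log
        RewriteLogLevel 5
        # The rule under test, unchanged:
        RewriteRule ^walking/.*/([a-z0-9_]*)/?$ /walking.php?act_code=$1 [L]

    Comparing the logged request line from IE9 against one from IE8 should show whether the two browsers are actually sending the same URL to the server.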

    Read the article

  • Make case-sensitive SMB share case-insensitive

    - by fungs
    I am running a legacy XP app that I would like to move onto a network share. It is very simple and works in theory, but the server providing the share is Linux-based (which I cannot configure) and the software does not work correctly because, it seems, it was written assuming case-insensitive filenames. From what I've researched, network shares behave like the filesystem they use underneath, which is normal. Unfortunately I cannot fix the software myself. Is there any way to turn case-sensitivity into case-insensitivity for a Windows network drive on the client side? I found two approaches: first, something like icasefile (http://wnd.katei.fi/icasefile/) that wraps around the program and intercepts the file I/O, but this is for UNIX only. Secondly, a proxy virtual filesystem (e.g. something using Dokan); unfortunately I couldn't find any suitable filesystem. The only remaining possibility would be to put a case-insensitive filesystem on an image file and put that on the share using, for example, ImDisk (http://www.ltr-data.se/opencode.html/#ImDisk). Do you have any better ideas?
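
    For completeness: if the Linux box turns out to be running Samba and its administrator can ever be persuaded to change it, case handling is a per-share smb.conf setting. This is only the server-side form of the fix (not possible here, share name and path are made up):

        # smb.conf on the server (sketch, for reference only)
        [legacyshare]
           path = /srv/legacyshare
           case sensitive = no
           preserve case = yes
           short preserve case = yes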

    Read the article

  • Real benefits of tcp TIME-WAIT and implications in production environment

    - by user64204
    SOME THEORY: I've been doing some reading on TCP TIME-WAIT (here and there) and what I read is that it's a value set to 2 x MSL (maximum segment lifetime) which keeps a connection in the "connection table" for a while to guarantee that, "before you're allowed to create a connection with the same tuple, all the packets belonging to previous incarnations of that tuple will be dead". Since segments received (apart from SYN under specific circumstances) while a connection is either in TIME-WAIT or no longer existing would be discarded, why not close the connection right away? Q1: Is it because there is less processing involved in dealing with segments from old connections, and less processing to create a new connection on the same tuple when in TIME-WAIT (i.e. are there performance benefits)? If the above explanation doesn't stand, the only reason I see TIME-WAIT being useful would be if a client sends a SYN for a new connection before it sends the remaining segments for an old connection on the same tuple, in which case the receiver would re-open the connection but then get bad segments and would have to terminate it. Q2: Is this analysis correct? Q3: Are there other benefits to using TIME-WAIT? SOME PRACTICE: I've been looking at the munin graphs on a production server that I administer, and they show more connections in TIME-WAIT than ESTABLISHED, around twice as many most of the time, on some occasions four times as many. Q4: Does this have an impact on performance? Q5: If so, is it wise/recommended to reduce the TIME-WAIT value (and to what)? Q6: Is this ratio of TIME-WAIT to ESTABLISHED connections normal? Could it be related to malicious connection attempts?
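
    For Q4-Q6, a few Linux commands can quantify the situation, and these are the knobs most commonly discussed (values shown are examples, not recommendations):

        # Count sockets per TCP state
        ss -tan | awk 'NR>1 {print $1}' | sort | uniq -c
        # Allow reuse of TIME-WAIT sockets for new outgoing connections
        sysctl -w net.ipv4.tcp_tw_reuse=1
        # Cap the total number of TIME-WAIT sockets the kernel keeps around
        sysctl -w net.ipv4.tcp_max_tw_buckets=180000
        # Note: net.ipv4.tcp_tw_recycle is known to break clients behind NAT; avoid it.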

    Read the article

  • openSSL tutorial not fully working - Can sign but cannot restore original file

    - by djechelon
    I'm writing, and testing, a little tutorial for my groupmates involved in an OpenSSL homework assignment. We have a bunch of PDF files, I'm the CA, and each group should send me a signed PDF for me to verify. I've told them to do the following (and have tried it myself): 1) Request and obtain a certificate (I'll skip this part). 2) Create a MIME message with the PDF file in it: makemime -c "text/pdf" -a "Content-Disposition: attachment; filename=Elaborato.pdf" Elaborato.pdf > Elaborato.pdf.msg 3) Sign with OpenSSL: openssl smime -sign -in Elaborato.pdf.msg -out Elaborato.pdf.p7m -certfile ca.pem -certfile nomegruppo.crt -inkey nomegruppo.key -signer nomegruppo.crt 4) Verify with: openssl smime -verify -in Elaborato.pdf.p7m -out Elaborato-verified.msg -CAfile ca.pem -signer nomegruppo.crt 5) Extract the attachment with: munpack Elaborato-verified.msg 6) View with Acrobat Reader. The problem is that even though I get a file that (from its binary content) resembles a PDF, my current Ubuntu PDF viewer doesn't read it. The XXXElaborato.pdf extracted by munpack is a little bit smaller than the original. What's the problem with this procedure? In theory, they should send me the signed S/MIME message and I should be able to read the PDF within it. Why can't I restore the original content of the PDF file?
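
    One thing worth trying (a guess at the cause, not a confirmed fix): skip makemime entirely and let openssl do the MIME encoding, with -binary so the PDF bytes are not run through line-ending canonicalisation. A sketch using the same filenames as above:

        # Sign the PDF directly (opaque/attached signature, binary-safe)
        openssl smime -sign -binary -nodetach -in Elaborato.pdf \
            -signer nomegruppo.crt -inkey nomegruppo.key -certfile ca.pem \
            -out Elaborato.pdf.p7m
        # Verify and recover the original PDF in one step
        openssl smime -verify -in Elaborato.pdf.p7m -CAfile ca.pem \
            -out Elaborato-verified.pdf

    If the verified output then matches the original byte-for-byte (compare with cmp or md5sum), the size difference seen with makemime/munpack was most likely an encoding artifact rather than a signing problem.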

    Read the article

  • DAS vs SAN storage for serving 2 to 4 nodes

    - by Luke404
    We currently have 4 Linux nodes with local storage, arranged in two active/passive pairs with storage mirrored using DRBD, running virtual machines (actually using Xen Hypervisor) for typical hosting workloads (mail, web, a couple VPS, etc.). We're approaching the (presumed) maximum IOPS of those servers, and we're planning to migrate to an external storage solution with two active nodes, with capacity for up to four active nodes. Since we're an all-Dell shop I've done some research and found the MD3200 / MD3200i products should be the ones we're looking for. We are pretty sure we won't be attaching more than 4 hosts on a single storage and I'm wondering if there is any clear advantage for one or the other. In theory I should be able to attach 4 SAS hosts to a single MD3200 (single links on a single controller MD3200, or dual redundant SAS links from each host to a dual-controller MD3200), or 4 iSCSI hosts to a single MD3200i (directly on its 4 GigE ports without any switch, again with dual links for the dual controller option). Both setups should let us implement live VM migration since all hosts can access all the LUNs at the same time, and also some shared filesystem like GFS2 or OCFS2. Also, both setups should allow full redundancy of the whole system (assuming dual controllers in the storage). One difference I can see is that the DAS solution is actually limited to 4 hosts while the iSCSI one should be able to grow to more hosts (adding two GigE switches to the mix). One point for the iSCSI solution is that it would allow us to start out with our current nodes and upgrade them at a later time (we can't add other SAS controllers, but they already have 4 GigE ports each). With the right (iSCSI|SAS) controllers I should be able to connect diskless nodes and boot them off the external storage which I think is a good thing (get rid of any local storage). On the other hand, I would have thought the SAS one to be cheaper but it seems like an MD3200 actually costs a little less than an MD3200i (?) (please note: I've used Dell gear in my examples since that's what we're looking for but I assume the same goes with other vendors) I would like to know if my assumptions above are correct, and if I'm missing any important difference between the two setups.

    Read the article

  • Mouse receiver stopped working after pairing mouse to a Unifying receiver

    - by mp19uy
    I bought this mouse, a Logitech M510, which came with a nano receiver (non-Unifying). Yes, I know the mouse is supposed to come with a Unifying receiver, but I bought it knowing that it wouldn't come in its original package and would come with a non-Unifying receiver. When I received it, everything worked fine. But after connecting the mouse to another computer using a Unifying receiver (which also worked fine, by the way), I could no longer connect the mouse back to my computer using the non-Unifying receiver. I tried everything from removing the batteries and reinstalling the drivers to restarting the computer and trying different computers, but I couldn't pair them. What I think happened: the documentation for the Logitech M510 says it works with Unifying receivers only, and there is an article explaining it: http://logitech-en-amr.custhelp.com/app/answers/detail/a_id/18001/~/using-my-m510-with-a-different-usb-receiver So my theory is that the problem was pairing it to a Unifying receiver, and now it isn't recognized by the other receiver. The (non-Unifying) receiver itself is recognized by Windows, and if I connect the mouse using a Unifying receiver, it works. I would like to know if there is any known solution for this, or if I can try something else to solve it.

    Read the article

  • Why is my apache2, mod_fcgid, php configuration causing 100% cpu usage?

    - by Scott Lundgren
    Page load makes a quick initial connection, then hangs about 10 seconds before the page renders. When the server load goes up I start watching top and I see that both CPUs get pegged at times to 100% by between 4 and 8 php-cgi processes. My theory is that since RAM usage never goes above 50%, Apache is able to handle the requests coming in but is queueing them for PHP to process. What is wrong with my mod_fcgid/PHP configuration?

    Hardware and OS: RHEL 5.4, 2 Xeon E5420s @ 2.50 GHz, 4 GB RAM

    Apache 2.2.3:

        Timeout 30
        KeepAlive On
        MaxKeepAliveRequests 0
        KeepAliveTimeout 5
        <IfModule worker.c>
          StartServers 2
          MaxClients 300
          MinSpareThreads 25
          MaxSpareThreads 75
          ThreadsPerChild 25
          MaxRequestsPerChild 0
        </IfModule>

    mod_fcgid 2.2.10:

        LoadModule fcgid_module modules/mod_fcgid.so
        <IfModule !mod_fastcgi.c>
          AddHandler fcgid-script fcg fcgi fpl php
        </IfModule>
        SocketPath run/mod_fcgid
        SharememPath run/mod_fcgid/fcgid_shm
        DefaultInitEnv PHPRC "/etc/"
        FCGIWrapper /usr/bin/php-cgi .php
        MaxRequestsPerProcess 1500
        MaxProcessCount 20
        IPCCommTimeout 240
        IdleTimeout 240

    APC 3.0.19:

        extension = apc.so
        apc.enabled=1
        apc.shm_segments=1
        apc.optimization=0
        apc.shm_size=32
        apc.ttl=7200

    The APC cache is 43% used with a 99% hit rate.
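
    One mismatch worth checking (an assumption, not a confirmed diagnosis): php-cgi's own PHP_FCGI_MAX_REQUESTS defaults to 500, which is lower than MaxRequestsPerProcess 1500 above, so php-cgi can exit mid-stream and force mod_fcgid to keep respawning workers. Keeping the two in sync would look roughly like this:

        # mod_fcgid sketch: make php-cgi's request limit match mod_fcgid's
        DefaultInitEnv PHP_FCGI_MAX_REQUESTS 1500
        MaxRequestsPerProcess 1500
        # PHP_FCGI_CHILDREN is usually left at 0 with mod_fcgid, which manages processes itself
        DefaultInitEnv PHP_FCGI_CHILDREN 0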

    Read the article

  • IIS / Virtual Directory authentication.

    - by Chris L
    I have an IIS (v6) / Windows 2003 / .NET 3.5 (app code, libraries, etc.) server hosting a website at www.mywebsite.com mapped to E:\Inetpub\wwwroot\mywebsite. We also have a virtual directory (VirtDir) mapped to E:\Inetpub\wwwroot\mywebsite\files (although in theory this could be a different directory or a separate machine) where we store a customer's files (a bunch of .pdf and .xls files). Currently, to access a file you can enter a URL like www.mywebsite.com/VirtDir/Customer/myFile.pdf and get the file. The problem is that the user doesn't have to log into www.mywebsite.com to get access to the file; we would prefer them to log in first. We would like the user to log in via mywebsite and, if valid, let them download files from the virtual directory. The www.mywebsite.com site and VirtDir are separate sites on the same farm. Anonymous access and Integrated Windows Authentication are both enabled. I'm more of a developer and less of a sysadmin, but hopefully I'm in the right spot; any help would be appreciated.
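
    A sketch of the ASP.NET side of this, assuming the goal is simply "authenticated users only" for the virtual directory. Note that on IIS 6, static .pdf/.xls requests only pass through ASP.NET authorization if a wildcard mapping to aspnet_isapi.dll is added for that directory; otherwise IIS serves them directly.

        <!-- web.config inside the VirtDir virtual directory (sketch) -->
        <configuration>
          <system.web>
            <authorization>
              <deny users="?" />   <!-- deny anonymous users -->
              <allow users="*" />
            </authorization>
          </system.web>
        </configuration>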

    Read the article

  • Is VGA port hot-pluggable?

    - by Martin Bøgelund
    In meetings, I often see people detaching the VGA connector from one running laptop and connecting it to another while the projector is still on. Is this 100% risk-free and OK by design of the VGA standard? If there's a risk involved in hot-plugging VGA, can it be removed by turning off or suspending either laptop, display, or both? I see this being done all the time without causing disaster, so clearly I'm not interested in answers stating "we do it all the time, so it should be OK!". I want to know if there's a risk - real or theoretical - that something breaks when doing this. EDIT: I did an internet search on the topic and never found a clear statement as to why it is safe or unsafe to hot-swap VGA devices. The typical result is a forum question asking basically the same thing I did, followed by these types of statements: Yes, it's hot-swappable, I do it all the time! It involves some kind of risk, so don't do it! You're some kind of moron if you think there's a risk, so just do it! But no explanation as to why it is safe or not. Joe Taylor's answer below contains a link to a forum post and answers that basically give me the same statements as mentioned above, but again, no good explanation why. So I looked for an actual manual for a projector and found the "Lenovo C500 Projector User’s Guide". It states on page 3-1: "Connecting devices. Computers and video devices can be connected to the projector at the same time. Check the user’s manual of the connecting device to confirm that it has the appropriate output connector. [image] Attention: As a safety precaution, disconnect all power to the projector and devices before making connections." But again, no good explanation.

    Read the article

  • Apache runs in console but not as a service?

    - by danspants
    I have an Apache 2.2 server running Django. We have a network drive T: which we need constant access to within our Django app. When running Apache as a service we cannot access this drive; as far as any Django code is concerned the drive does not exist. If I add the following to the httpd.conf file, the service no longer runs, but I can start Apache from a console and it works fine - Django can find the network drive and all is well:

        <Directory "t:/">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>

    Why is there a difference between the console and the service? Should there be a difference? The service uses my own logon, so in theory it should have the same access as I do. I'm keen to keep it running as a service as it's far less obtrusive when I'm working on the server (unless there's a way to hide the console?). Any help would be most appreciated.
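
    One likely explanation for the console/service difference (offered as a guess, since the exact setup isn't visible here): drive letters like T: are mapped per logon session, and a service logon generally does not inherit them even when it runs under the same account, whereas a UNC path works in both contexts. A sketch with a made-up server and share name:

        # httpd.conf sketch: reference the share by UNC path instead of the mapped letter
        Alias /shared "//fileserver/share"
        <Directory "//fileserver/share">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    The service account still needs NTFS/share permissions on the remote path for this to work.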

    Read the article

  • Best practices for thin-provisioning Linux servers (on VMware)

    - by nbr
    I have a setup of about 20 Linux machines, each with about 30-150 gigabytes of customer data. Probably the size of data will grow significantly faster on some machines than others. These are virtual machines on a VMware vSphere cluster. The disk images are stored on a SAN system. I'm trying to find a solution that would use disk space sparingly, while still allowing for easy growing of individual machines. In theory, I would just create big disks for each machine and use thin provisioning. Each disk would grow as needed. However, it seems that a 500 GB ext3 filesystem with only 50 GB of data and quite a low number of writes still easily grows the disk image to eg. 250 GB over time. Or maybe I'm doing something wrong here? (I was surprised how little I found on the subject with Google. BTW, there's even no thin-provisioning tag on serverfault.com.) Currently I'm planning to create big, thin-provisioned disks - but with a small LVM volume on them. For example: a 100 GB volume on a 500 GB disk. That way I could more easily grow the LVM volume and the filesystem size as needed, even online. Now for the actual question: Are there better ways to do this? (that is, to grow data size as needed without downtime.) Possible solutions include: Using a thin-provisioning friendly filesystem that tries to occupy the same spots over and over again, thus not growing the image size. Finding an easy method of reclaiming free space on the partition (re-thinning?) Something else? A bonus question: If I go with my current plan, would you recommend creating partitions on the disks (pvcreate /dev/sdX1 vs pvcreate /dev/sdX)? I think it's against conventions to use raw disks without partitions, but it would make it a bit easier to grow the disks, if that is ever needed. This is all just a matter of taste, right?
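
    For the "grow as needed, online" part, the small-LVM-volume-on-a-big-thin-disk plan boils down to something like this (volume group and logical volume names are made up; ext3 can be grown while mounted with resize2fs):

        # Grow the logical volume by 50 GB and then grow ext3 online to fill it
        lvextend -L +50G /dev/vg_data/lv_data
        resize2fs /dev/vg_data/lv_data
        # If the underlying virtual disk itself is ever enlarged, pick that up first with:
        pvresize /dev/sdb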

    Read the article

  • Outlook Signature Broken in Entourage

    - by Eric J.
    Some of our company uses Windows with Outlook 2010, and the rest use Macs with Entourage. When our standard signature line is included in an email that goes to Entourage, the result does not display correctly. It appears that Entourage is mangling the HTML. My working theory is that Entourage encounters inline CSS styles it does not know about and stops processing styles, but I'm really not sure. Question: How can I enter a signature into Outlook 2010 that will render correctly in Entourage? For example, can I specify somehow the exact HTML to use? Here's an example of how the HTML is being changed.

    Original on Outlook, as received by another Outlook client:

        <span style='font-size:9.0pt;font-family:"Century Gothic","sans-serif"; color:#1785C5'>My Company<br>
        </span></b><span class=apple-style-span><span style='font-size:9.0pt; font-family:"Century Gothic","sans-serif";color:#666666'>123 Main St.</span></span>
        <span class=apple-style-span><span style='font-size:9.0pt;font-family:"Century Gothic","sans-serif"; color:#AFAFAF'>&nbsp;</span></span>
        <span class=apple-style-span><span style='font-size:9.0pt;font-family:"Century Gothic","sans-serif";color:#666666'>Suite 100</span></span>

    Note the use of spans, color #1785C5 and color #666666.

    Same original email, as displayed in an Entourage client:

        <span style='font-size:9.0pt;font-family:"Century Gothic","sans-serif"; mso-fareast-font-family:"Times New Roman"'><br>
        <span style='color:#656565'>My Company<br>
        123 Main St Suite 100<br>
        </span>

    Note the use of br tags rather than spans, and the color #656565.

    Read the article

  • WRT54GL Tomato Router in Client wireless mode to an iPhone Personal Hotspot

    - by Gordo
    I am trying to connect a router running Tomato firmware to an iPhone 4. The goal is to connect to the iPhone's Personal Hotspot in Wireless Client mode, which should allow wired and wireless users to connect to the router rather than to the iPhone. In theory this should be possible, but I am having difficulty.

    Router: Linksys WRT54GL, Tomato 1.28.1816 firmware
    iPhone: iPhone 4, iOS 5.1 (9B176), carrier: Rogers Wireless
    Personal Hotspot works with other devices (wifi/bluetooth/USB)
    iPhone Personal Hotspot settings: Mode: B/G, Security: WPA or WPA2 Personal, Encryption: AES
    Router IP: 172.20.10.1, Subnet: 172.20.10.0, Min IP: 172.20.10.2, Max IP: 172.20.10.14
    Maximum number of wireless tethered hosts is 5

    I have followed the directions here: http://www.wi-fiplanet.com/tutorials/article.php/3810281 and ensured that the router subnet does not 'collide' with the iPhone subnet. Here is the configuration of the Tomato 'Basic - Network - Wireless' section: http://i.stack.imgur.com/pbmTB.png I have tried several variations of this configuration, but nothing seems to work. NOTE: I have successfully connected to my own wifi network in Wireless Client mode, so I am confident that there are no bad cables or other hardware issues. I would prefer to use Tomato, but DD-WRT may be my only other option. Thanks!

    Read the article

  • PC power supply & normal range for voltages reported in BIOS hardware monitor?

    - by Chris W. Rea
    I'm trying to diagnose whether my computer has an ample power supply. Sometimes when I play a video-intensive game, both monitors lose the video signal, even though the computer remains on and sound keeps playing. One theory I have is that the video card isn't getting sufficient power. I can't imagine it's overheating because the machine is well ventilated and the video card isn't hot to the touch when this happens. Anyway, in my PC's BIOS there's a Hardware Monitor page, and among other voltages reported (such as CPU, DRAM, South Bridge, etc.) I can see the following values:

        3.3V rail: 3.152V
        5V rail:   4.944V
        12V rail:  11.872V

    Are those the voltages used by peripherals? What voltage should I be referencing if I want to know what my video card (PCI Express) is consuming? What is the normal range of values for those? My values above appear to be under by approximately 4.5%, 1.1%, and 1.1% respectively; is that cause for concern? How else should I determine whether my power supply is "right-sized" for my PC and video card, or am I perhaps barking up the wrong tree?
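
    For reference, the deviations quoted above work out as follows; the ATX specification is commonly cited as allowing roughly ±5% on these rails, so all three readings are within tolerance, the 3.3 V rail only just:

        3.3 V rail:  3.152 / 3.3  = 0.955  ->  about 4.5% low
        5 V rail:    4.944 / 5.0  = 0.989  ->  about 1.1% low
        12 V rail:  11.872 / 12.0 = 0.989  ->  about 1.1% low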

    Read the article

  • How ZFS handles online replacement in a RAID-Z (theoretical)

    - by Kevin
    This is a somewhat theoretical question about ZFS and RAID-Z. I'll use a three disk single-parity array as an example for clarity, but the problem can be extended to any number of disks and any parity. Suppose we have disks A, B, and C in the pool, and that it is clean. Suppose now that we physically add disk D with the intention of replacing disk C, and that disk C is still functioning correctly and is only being replaced out of preventive maintenance. Some admins might just yank C and install D, which is a little more organized as devices need not change IDs - however this does leave the array degraded temporarily and so for this example suppose we install D without offlining or removing C. Solaris docs indicate that we can replace a disk without first offlining it, using a command such as: zpool replace pool C D This should cause a resilvering onto D. Let us say that resilvering proceeds "downwards" along a "cursor." (I don't know the actual terminology used in the internal implementation.) Suppose now that midways through the resilvering, disk A fails. In theory, this should be recoverable, as above the cursor B and D contain sufficient parity and below the cursor B and C contain sufficient parity. However, whether or not this is actually recoverable depnds upon internal design decisions in ZFS which I am not aware of (and which the manual doesn't say in certain terms). If ZFS continues to send writes to C below the cursor, then we are fine. If, however, ZFS internally treats C as though it were gone, resilvering D only from parity between A and B and only writing A and B below the cursor, then we're toast. Some experimenting could answer this question but I was hoping maybe someone on here already knows which way ZFS handles this situation. Thank you in advance for any insight!

    Read the article

  • Broadband LAN connection through existing 6-wire phone line (no phone connected)

    - by Paul Taylor
    I have an (up to, but never achieved) 10 Mb broadband signal coming into my house along the telephone line. The modem then distributes the broadband signal via a LAN connection and Wi-Fi to my computers and iPad. My workshop desk is 54 yards (approx. 50 m) downhill from the modem and is part of a separate building – too far to get a good direct signal. The inside corner of the workshop (by a window) receives a weak signal. I have a disconnected telephone cable consisting of 6 wires going from the house to the workshop. The telephone is no longer used in the workshop - we use our mobile number for business calls - so a broadband signal over that cable would not have to share it with a phone. What would be the most economical and efficient way to get the broadband signal to the workshop? I am writing in the hope that, since the telephone didn't need all six wires, an Ethernet connection might similarly not need all eight wires for a LAN connection. In theory I could use the telephone wire to pull an Ethernet cable through the underground pipe, but I doubt the cable would survive the strain. The broadband connection in the workshop does not have to provide full network facilities. My skill level is high enough to do house wiring, plumbing and gas fitting, so given sound advice and a wiring diagram or instructions I can probably work out the rest myself.
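
    On the wiring question: 10BASE-T and 100BASE-TX use only two of the four pairs (pins 1, 2, 3 and 6), so four of the six existing conductors would in principle be enough for a 10/100 Mb link; only gigabit needs all eight. A possible mapping onto the T568B colours:

        Pin 1  TX+   white/orange
        Pin 2  TX-   orange
        Pin 3  RX+   white/green
        Pin 6  RX-   green
        (pins 4, 5, 7 and 8 are unused at 10/100 Mb)

    Whether untwisted telephone wire will carry 100 Mb reliably over roughly 50 m is another matter; it may only negotiate 10 Mb, in which case a pair of Ethernet-over-phone-line bridges would be the fallback.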

    Read the article

  • How do I prevent my ASP .NET site from continually prompting for user credentials?

    - by gilles27
    I'm trying to get an ASP .NET website up and running on IIS6. The site will run in its own application pool, and uses Windows authentication, with anonymous access turned off. When I run the app pool under NETWORK SERVICE, everything works fine. However we need the app pool to run under a different account, because this account needs some extra privileges (we are printing Word documents). This new account is a member of the local users group, and the IIS_WPG group. It has also been granted the "Log on as a service right". When I browse to the site I am prompted for credentials, not once, but several times. When the page finally loads it looks wrong because the style sheets have not been applied. My suspicion is that I am being prompted once for each file (e.g. all the images, styles and script files) the browser requests, and that for some reason the website is unable to validate those credentials in order to serve the files back. If I allow anonymous access the page loads fine - we don't want to allow it but I mention it in case it offers any further clues. My theory is that perhaps the account the app pool runs under needs permissions to validate domain credentials? If that is so, how do I enable this?
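
    One common cause matching these symptoms (an assumption, not a diagnosis): with Integrated Windows Authentication the browser may negotiate Kerberos, and when the app pool runs under a custom domain account the HTTP service principal name must be registered on that account, otherwise every request fails authentication and re-prompts. A sketch with placeholder names:

        rem Run as a domain admin; server and account names are placeholders
        setspn -A HTTP/webserver MYDOMAIN\AppPoolAccount
        setspn -A HTTP/webserver.mydomain.local MYDOMAIN\AppPoolAccount
        rem Alternatively, forcing NTLM instead of Negotiate in the IIS 6 metabase sidesteps Kerberos entirely:
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs set w3svc/NTAuthenticationProviders "NTLM"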

    Read the article

  • Relaying to tech "support" that computer is actually broken.

    - by Sion
    First, some background: I have a Dell Inspiron 15R M050, and it is still under the Dell limited warranty and the Best Buy extended warranty. I am currently dual-booting Debian Squeeze and Windows 7; the only reason I go into Windows is to play video games, specifically Steam games. Issue: when I play games in Windows I can play for anywhere from 5 minutes to 2 hours before I suffer a hard lock. I cannot Alt-Tab, Ctrl-Alt-Delete, or Ctrl-Shift-Escape, or do anything else, for 2-3 minutes. After this hard-lock period everything runs fine; I can continue the game for probably at least another hour before I suffer another lock. Games: Borderlands, Splinter Cell: Chaos Theory, StarCraft 2, Garry's Mod. What I have tried: running the diagnostic suite in the Dell BIOS, restoring the OEM Windows recovery partition on the hard drive, fresh-installing Windows 7 Professional, updating the BIOS, and calling tech support and having them run a software hardware-diagnostics suite. The question: from the research I have done, I think it might be a lack of thermal paste on the CPU. Could I go to Best Buy and have them run a hardware-level diagnostic, so they could then tell Dell that there is a hardware issue? Or is this likely a different problem?

    Read the article

  • CentOS 6.3 Virtual under OpenVZ cannot ping, host lookups, outbound connections while postfix running

    - by Paul Cravey
    My best theory is that some kernel limit is being hit that prevents outbound connections. We have tried basically everything from tcpdumps to provisioning an entirely new virtual server (we do not have this problem on any other virtuals), yet the problem somehow carried over, even with a new (working) postfix build. Emails work, and outbound connections work, as long as postfix does not have too much going on. /proc/user_beancounters shows no limits being hit (shown below). Nevertheless, pings fail, even to IP addresses. The TCP stack appears healthy, load is low, there is no iowait, and iptables has already been flushed. Has anyone experienced anything like this?

        uid  resource      held       maxheld    barrier              limit                failcnt
        3:   kmemsize      166216365  170262528  9223372036854775807  9223372036854775807  0
             lockedpages   0          0          9223372036854775807  9223372036854775807  0
             privvmpages   285727     351885     9223372036854775807  9223372036854775807  0
             shmpages      16933      17605      9223372036854775807  9223372036854775807  0
             dummy         0          0          0                    0                    0
             numproc       150        303        9223372036854775807  9223372036854775807  0
             physpages     314156     326191     0                    1280000              0
             vmguarpages   0          0          9223372036854775807  9223372036854775807  0
             oomguarpages  165355     165355     9223372036854775807  9223372036854775807  0
             numtcpsock    89         172        9223372036854775807  9223372036854775807  0
             numflock      22         76         9223372036854775807  9223372036854775807  0
             numpty        1          2          9223372036854775807  9223372036854775807  0
             numsiginfo    0          75         9223372036854775807  9223372036854775807  0
             tcpsndbuf     2733472    4371752    9223372036854775807  9223372036854775807  0
             tcprcvbuf     1798336    5427296    9223372036854775807  9223372036854775807  0
             othersockbuf  491120     1000760    9223372036854775807  9223372036854775807  0
             dgramrcvbuf   0          238728     9223372036854775807  9223372036854775807  0
             numothersock  361        505        9223372036854775807  9223372036854775807  0
             dcachesize    135941831  136114679  9223372036854775807  9223372036854775807  0
             numfile       2905       4990       9223372036854775807  9223372036854775807  0
             dummy         0          0          0                    0                    0
             dummy         0          0          0                    0                    0
             dummy         0          0          0                    0                    0
             numiptent     8          9          9223372036854775807  9223372036854775807  0

        [root@bni /]# ping 4.2.2.1
        PING 4.2.2.1 (4.2.2.1) 56(84) bytes of data.
        --- 4.2.2.1 ping statistics ---
        9 packets transmitted, 0 received, 100% packet loss, time 8493ms

        [root@bni /]# service postfix stop
        [root@bni /]# ping 4.2.2.1
        PING 4.2.2.1 (4.2.2.1) 56(84) bytes of data.
        64 bytes from 4.2.2.1: icmp_seq=1 ttl=53 time=8.63 ms
        64 bytes from 4.2.2.1: icmp_seq=2 ttl=53 time=8.62 ms
        64 bytes from 4.2.2.1: icmp_seq=3 ttl=53 time=8.63 ms
        64 bytes from 4.2.2.1: icmp_seq=4 ttl=53 time=8.66 ms

    Outbound connections of all sorts fail when postfix is running.

    Read the article

< Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >