Search Results

Search found 4079 results on 164 pages for 'parts'.

Page 109/164 | < Previous Page | 105 106 107 108 109 110 111 112 113 114 115 116  | Next Page >

  • UACCEEventLog 301 Filling Event Logs

    - by rjt
    After pushing out the Microsoft Application Compatibility Toolkit clients to our domain via GPO, UACCEEventLog 301 entries appear in the event log a few times per second -- several thousand per hour. One test I still need to do is log on as Administrator and see whether the events go away while I'm an admin, but of course that is not a fix. Below is only part of the event log entry, but it is the most readable part and clearly points to yet another problem with antivirus software. Still no fix. Originally I posted this in words and bytes, but then edited it to make it much easier to read.

    LocalMachine\Users do have Read access to this key. As a test I added "Domain Users", but there are many more events for other parts of the registry and for Administrators.

        <XML>
          <TYPE>UacceRegistryVirtualization</TYPE>
          <EXENAME>smcgui.exe</EXENAME>
          <EXEPATH>c:\program files\symantec\symantec endpoint protection</EXEPATH>
          <APINAME>RegOpenKeyA</APINAME>
          <REGKEYNAME>HKEY_LOCAL_MACHINE\SOFTWARE\Symantec\Symantec Endpoint Protection\AV\Storages\SymHeurProcessProtection\RealTimeScan\0</REGKEYNAME>
          <RESTRICTEDBYACL>FALSE</RESTRICTEDBYACL>
          <DESIREDACCESS>MAXIMUM_ALLOWED</DESIREDACCESS>
          <REGVALUENAME></REGVALUENAME>
          <REGVALUETYPE>0x00000000</REGVALUETYPE>
          <REGVALUEDATA></REGVALUEDATA>
          <CURRENTGROUP>Users</CURRENTGROUP>
        </XML>

    Read the article

  • Mod_rewrite to eliminate query strings

    - by Greg Frommer
    Hi everyone, I have been working on this for a while but I'm not finding exactly what I am looking for.

    I am writing a webapp to let my users create and publish pieces of HTML content in a domain and URL folder structure of their choosing. All of the content and requested URL structures are stored in a database. All of the code in my index.php (in the root folder) accesses the database content and, based on the server name (and hopefully the folder structure), picks out the proper content from the DB and displays it to the end-user's browser.

    So my situation looks like this:

        www.test.com/index.php?id=123234345

    ...will display the proper page, but I want my users to be able to define a unique "page name" instead of using the numeric index (and I also want to hide the /index.php part). What I would like the end-user to see is:

        www.test.com/arbitrary-unique-keyword/keyword2/keyword3

    which should invoke the index.php page in the root folder. Then I will use the PHP $_SERVER['PATH_INFO'] variable to match the requested folder structure up with the proper content in my database and display that.

    All the material I have found so far expects me to hard-code parts of the folder structure into the rules, but I think I want something simpler (perhaps). So the question in a nutshell: how do I use mod_rewrite to pass all "non-existent" folder paths through to a main index.php residing in the root folder? (For all paths that DO exist, like calls to images, I want those requests to succeed and not be directed to index.php, obviously.) A minimal sketch of what I mean is included below.

    Thanks everyone, please let me know if I can clear anything up.
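    A minimal .htaccess sketch along those lines (assuming mod_rewrite is enabled and AllowOverride permits it; the flags and the PATH_INFO handling may need tweaking for this particular app):

        RewriteEngine On
        # Anything that maps to a real file or directory (images, css, ...) is served as-is
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # Everything else goes to index.php, with the original path appended
        # so PHP can read it from $_SERVER['PATH_INFO']
        RewriteRule ^(.*)$ /index.php/$1 [L,QSA]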

    Read the article

  • VPN on an Ubuntu server limited to certain IPs

    - by Hultner
    I have a server running Ubuntu Server 9.10, and I sometimes need access to it and to other parts of my network when I'm not at home. There are two places I need to access the VPN from: one has a static IP, and the other has a dynamic IP but with DynDNS set up, so I can always look up its current address if I want to.

    When it comes to servers people call me somewhat paranoid, but security is always my number one priority and I never like to allow access to the server from outside the network. So there are two things this VPN has to have: one, it should not be reachable from any IP other than those two; and two, it has to use a very strong key, so that it is virtually impossible to brute-force even from the allowed IPs.

    I have no experience whatsoever in setting up VPNs; I have used SSH tunnelling but never an actual VPN. So what would be the best, most stable, safest and most performance-efficient way to set this up on an Ubuntu server? Is it possible, or should I just set up some kind of SSH tunnel instead? Thanks in advance for any answers.
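    Whatever VPN daemon ends up being used, the "only these two sources" part can be enforced in the firewall. A sketch with iptables, assuming OpenVPN on its default UDP port 1194, a made-up static address of 203.0.113.10 and a made-up DynDNS name:

        # allow the static site
        iptables -A INPUT -p udp --dport 1194 -s 203.0.113.10 -j ACCEPT
        # allow whatever the dynamic site currently resolves to
        iptables -A INPUT -p udp --dport 1194 -s "$(dig +short myhome.dyndns.example)" -j ACCEPT
        # drop the port for everyone else
        iptables -A INPUT -p udp --dport 1194 -j DROP

    The dynamic rule would need to be refreshed (e.g. from cron) whenever the DynDNS address changes.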

    Read the article

  • Cisco 3560+ipservices -- IGMP snooping issue with TTL=1

    - by Jander
    I've got a C3560 with the Enhanced (IPSERVICES) image, routing multicast between its VLANs with no external multicast router. It serves a test environment where developers may generate multicast traffic on arbitrary addresses. Everything is working fine except when someone sends out multicast traffic with TTL=1, in which case the multicast packet suppression fails and the traffic is broadcast to all members of the VLAN.

    It looks to me like, because the TTL is 1, the multicast routing subsystem never sees the packets, so it doesn't create an mroute table entry. If I send out packets with TTL=2 briefly and then switch to TTL=1 packets, they are filtered correctly until the mroute entry expires.

    My question: is there some trick to getting the switch to filter the TTL=1 packets, or am I out of luck?

    Below are the relevant parts of the config, with a representative VLAN interface. I can provide more info as needed.

        #show run
        ...
        ip routing
        ip multicast-routing distributed
        no ip igmp snooping report-suppression
        !
        interface Vlan44
         ip address 172.23.44.1 255.255.255.0
         no ip proxy-arp
         ip pim passive
        ...

        #show ip igmp snooping vlan 44
        Global IGMP Snooping configuration:
        -------------------------------------------
        IGMP snooping                  : Enabled
        IGMPv3 snooping (minimal)      : Enabled
        Report suppression             : Disabled
        TCN solicit query              : Disabled
        TCN flood query count          : 2
        Robustness variable            : 2
        Last member query count        : 2
        Last member query interval     : 1000

        Vlan 44:
        --------
        IGMP snooping                  : Enabled
        IGMPv2 immediate leave         : Disabled
        Multicast router learning mode : pim-dvmrp
        CGMP interoperability mode     : IGMP_ONLY
        Robustness variable            : 2
        Last member query count        : 2
        Last member query interval     : 1000
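    For anyone reproducing this from a Linux host on the VLAN, a quick sketch using iputils ping (the group address is arbitrary; -t sets the TTL):

        # TTL=2: the switch builds an mroute entry, and TTL=1 traffic sent
        # shortly afterwards is filtered until that entry expires
        ping -t 2 -c 5 239.1.2.3

        # TTL=1 on its own: never reaches the multicast routing subsystem,
        # so it gets flooded to every port in the VLAN
        ping -t 1 -c 5 239.1.2.3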

    Read the article

  • Should tripwire be entering /proc?

    - by dsadinoff
    When initializing the database with tripwire --init, it spat out a bunch of errors pertaining to /proc:

        ### Warning: File system error.
        ### Filename: /proc/16982/fd/4
        ### No such file or directory
        ### Continuing...
        ### Warning: File system error.
        ### Filename: /proc/16982/fdinfo/4
        ### No such file or directory
        ### Continuing...
        ### Warning: File system error.
        ### Filename: /proc/16982/task/16982/fd/4
        ### No such file or directory
        ### Continuing...
        ### Warning: File system error.
        ### Filename: /proc/16982/task/16982/fdinfo/4
        ### No such file or directory
        ### Continuing...
        ### Warning: Duplicate object encountered.
        ### /proc/sys/net/ipv6/neigh

    This feels like noise. The twpol.txt file has the following clause:

        #
        # Critical devices
        #
        (
          rulename = "Devices & Kernel information",
          severity = $(SIG_HI),
        )
        {
          /dev   -> $(Device) ;
          /proc  -> $(Device) ;
        }

    Which, if I understand it right, causes tripwire to care deeply about the entire contents of /proc. Shouldn't it just care about the static parts of /proc, like the drivers and such, and not the per-PID stuff? Why does it ship like this?
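    One common tweak, for comparison, is to drop the blanket /proc rule and list only static entries in twpol.txt. A sketch (which /proc files are actually worth watching is a judgement call, so treat the list below as examples):

        {
          /dev               -> $(Device) ;
          # static parts of /proc only, not the per-PID directories
          /proc/devices      -> $(Device) ;
          /proc/modules      -> $(Device) ;
          /proc/cmdline      -> $(Device) ;
          /proc/filesystems  -> $(Device) ;
        }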

    Read the article

  • PowerShell 3.0 x64 bit broken after installing KB2506143

    - by Dave Parker
    I have searched using all kinds of variations on the relevant terms and I cannot find a single other instance of someone else having this exact same problem, so I am hoping someone here may have a clue.

    Problem

    I installed Windows Management Framework 3.0 (KB2506143) by downloading and running Windows6.1-KB2506143-x64.msu from Microsoft.com. Once it completed, I rebooted my machine as requested. After rebooting and logging in, I try to run the 64-bit PowerShell command shell and it comes up for a second then goes away. The 32-bit shell seems to work fine; it is just the 64-bit one that fails. Looking in the Fusion logs, I found:

        *** Assembly Binder Log Entry  (10/4/2012 @ 1:51:48 PM) ***

        The operation failed.
        Bind result: hr = 0x80070002. The system cannot find the file specified.

        Assembly manager loaded from:  C:\Windows\Microsoft.NET\Framework64\v2.0.50727\mscorwks.dll
        Running under executable  C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe
        --- A detailed error log follows.

        === Pre-bind state information ===
        LOG: User = ********\*****
        LOG: DisplayName = Microsoft.PowerShell.ConsoleHost, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL
        <remainder omitted>

    GacUtil reveals that there is a Microsoft.PowerShell.ConsoleHost, Version=1.0.0.0, but not 3.0.0.0. I tried uninstalling KB2506143 (which removed MSVCRT90.dll and caused Windows Live Messenger to fail on load after rebooting again, so I ran a repair install on Windows Live Essentials and that fixed the Messenger problem) and then re-installing it, but nothing changed.

    If it helps, here are what I think may be the relevant parts of my hardware/software environment.

    Environment
    - Dell Latitude E6510, 8GB RAM
    - Windows 7 Professional 64-bit with SP1
    - Visual Studio 2010 Professional installed (includes .NET 4.0)
    - Visual Studio 2012 Professional installed
    - Microsoft Forefront Client Security

    Any clues out there? Thanks, Dave

    Read the article

  • Will these instructions work when turning off journaling on an ext4 SSD?

    - by snowlord
    I have an Acer Aspire One with an SSD for storage. I recently installed Ubuntu on it and chose ext4 for my filesystem. Then I read that journaling on an SSD isn't the best idea, so I will try to disable journaling, and I have found these instructions (from http://fenidik.blogspot.com/2010/03/ext4-disable-journal.html):

        # Create ext4 fs on /dev/sda10 disk
        mkfs.ext4 /dev/sda10
        # Enable writeback mode. This mode will typically provide the best ext4 performance.
        tune2fs -o journal_data_writeback /dev/sda10
        # Delete has_journal option
        tune2fs -O ^has_journal /dev/sda10
        # Required fsck
        e2fsck -f /dev/sda10
        # Check fs options
        dumpe2fs /dev/sda10 | more

    For more performance, add the fstab options data=writeback,noatime,nodiratime, i.e.:

        /dev/sda10 /opt ext4 defaults,data=writeback,noatime,nodiratime 0 0

    I will use them on my boot partition. Are there any particularly bad parts here, or any missing steps? Will my boot partition be fit for being on an SSD after this? Or should I consider switching to ext2, or even reinstalling it all and choosing ext2 at partitioning time? (I'd rather not, though, since I've configured quite a lot of stuff already.)

    Read the article

  • Method to integrate Powershell scripts with non-Windows workflow?

    - by Matt Simmons
    I love the smell of new machines in the morning. I'm automating a machine-creation workflow that involves several separate systems across my infrastructure, some of which involve 15-year-old Perl scripts on Solaris hosts, PXE-booting Linux systems, and PowerShell on Windows Server 2008. I can script each of the individual parts, and integrating the Linux and Unix automation is fairly straightforward, but I'm at a loss as to how to reliably tie the PowerShell scripts into the rest of the process.

    I would prefer the process to begin on a Linux host, since I imagine it will end up as a web application living on an Apache server, but if it needs to begin on Windows I am hesitantly okay with that. I would ideally like something along the lines of psexec for Linux to run against Windows, but the answer in that direction appears to be Cygwin, and as much as I appreciate all of the hard work they put in, it has never felt right, if you know what I mean. It's great for a desktop and gives a lot of functionality, but I feel like Windows servers should be treated like Windows servers and not bastardized Unix machines (which, incidentally, is my argument against OS X servers too, and they're actually Unix). Anyway, I don't want to go with Cygwin unless that's the last and only option.

    So I guess what I'm asking is whether there is a way to execute jobs on Windows machines from Linux. Without Cygwin. I'm open to ideas and suggestions, including "Look idiot, everyone uses Cygwin, so suck it up and deal with it". Thanks in advance!
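    For what it's worth, one Cygwin-free route is winexe, a psexec-like client that runs on the Linux side. A sketch, with the host name, account and script path invented for illustration:

        # run a PowerShell script on the Windows box from the Linux workflow host
        winexe -U 'MYDOMAIN/provision%secret' //winhost01 \
            'powershell.exe -ExecutionPolicy Bypass -File C:\scripts\new-machine.ps1'

    Since it is just another command on the Linux side, the step can be wired into the same workflow as the Perl and PXE pieces.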

    Read the article

  • Switched from DVI to HDMI, possible audio artifacts?

    - by I take Drukqs
    I'm using an ASUS VH236H monitor and an EVGA GeForce GTX 570, both of which are brand new. My monitor has an audio-out port for speakers/headphones, so I plugged in my headphones and played a random selection from my library, and noticed two things:

    1. There are static-like artifacts during "louder" parts of songs.
    2. There seems to be a volume cap in place: when I push the volume past 100% in VLC, the actual level does not increase, but the amount of static does.

    The cable is not new; I yanked it off my PS3 when my DVI cable broke. It has been used a good amount on my HDTV and PS3, so I doubt it's a matter of burn-in. I like the way the setup works with an HDMI cable as opposed to DVI, because my headphones barely reach my rig, whereas I have plenty of slack when they're plugged into my monitor. Thanks in advance for any support.

    Note: I'm using a high-quality HDMI cable from Monoprice, AKG K702 headphones, and the VLC media player.

    Read the article

  • RTNETLINK answers: File exists... maybe because I assigned a new MAC address

    - by steven
    I get "RTNETLINK answers: File exists -- Failed to bring up eth0:1" on "ifup eth0:1". I suspect it happens because I assigned a new MAC address to my VM's network adapter. Can you tell me how to fix the issue? My configuration looks like this:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        allow-hotplug eth0
        iface eth0 inet static
            address 192.168.1.80
            netmask 255.255.255.0
            gateway 192.168.1.1
            dns-nameservers 192.168.1.1

        # Alias being connected to 192.168.10.x network
        auto eth0:1
        allow-hotplug eth0:1
        iface eth0:1 inet static
            address 192.168.10.83
            netmask 255.255.255.0
            gateway 192.168.10.10
            dns-nameservers 192.168.10.1

    Why do I suddenly get "RTNETLINK answers: File exists"? I worked with this configuration before without problems; all I did in the past was renew the adapter's MAC address. At the moment I am connected to the 192.168.10.x network, and if I do

        /etc/init.d/networking stop
        /etc/init.d/networking start

    then I get "RTNETLINK [...] failed to bring up eth0:1". The strange thing is that I am still able to connect to 192.168.10.83 via SSH from my host machine, but I cannot reach the internet from the Debian client. I hope it is clear what my problem is now.

    Update: if I change my /etc/network/interfaces like this, then "ifup eth0" fails too, with the same error:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        allow-hotplug eth0
        iface eth0 inet static
            address 192.168.10.83
            netmask 255.255.255.0
            gateway 192.168.10.10
            dns-nameservers 192.168.10.1

    With the verbose option enabled I get:

        Configuring interface eth0=eth0 (inet)
        run-parts --verbose /etc/network/if-pre-up.d
        ip addr add 192.168.10.83/255.255.255.0 broadcast 192.168.10.255 dev eth0 label eth0
        RTNETLINK answers: File exists
        Failed to bring up eth0.

    The same happens if I type this manually:

        ip addr add 192.168.10.83/255.255.255.0 broadcast 192.168.10.255 dev eth0 label eth0
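    "RTNETLINK answers: File exists" from "ip addr add" generally means the address (or a conflicting route) is already configured on the interface from an earlier, partially successful ifup. A sketch for clearing that state before retrying (assuming it is acceptable to briefly drop connectivity on the box):

        # check what is already there
        ip addr show dev eth0
        ip route show

        # flush the leftover addresses/routes and try again
        ip addr flush dev eth0
        ip route flush dev eth0
        ifup eth0 && ifup eth0:1

    Note also that the configuration defines a gateway on both eth0 and eth0:1; only one default gateway can be installed, and the second attempt to add one is another common source of this exact error.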

    Read the article

  • Power supply switch light stays off, motherboard light turns on

    - by Sion
    I bought a computer at the thrift store yesterday. It powered on without any error beeps. After getting it back to the house, I determined that the CD drive and the hard drive needed to be changed. I put in a populated hard drive to check, and the computer turned on and seemed to function. I then put in a new CD drive and, just now, a new hard drive. When I plugged it in to check, I noticed that the light for the power supply switch did not come on, but the light on the motherboard is lit, and I could not turn the computer on. To help troubleshoot, I unplugged the CD and hard drives, then re-plugged the power supply and switched it on and off. Nothing changed.

    Parts:
    Motherboard: Digital Home PSW DH deluxe
    Power supply: FSP Group FX700-GLN

    Did I accidentally unplug something while installing the hard drive? Is the power supply fried somehow?

    Read the article

  • Advice on Computer Specs for overall development/general use machine

    - by Ender
    At the moment I am restricted to a laptop with 512MB of RAM, a 120GB HDD and a 1.5GHz Intel processor for all my development and general browsing needs, and as you can probably tell, using it for anything modern is a painful experience. As a result I've decided to buy myself a new desktop computer, one that will stand the test of time and can be upgraded easily. Rather than build the machine myself I've decided to go through Dell, as I've had good experiences with them when purchasing computers for my family. I've had my eye on this model, as it has a good amount of RAM, a decently rated processor and isn't priced too badly:

    http://www1.euro.dell.com/uk/en/home/Desktops/inspiron-580/pd.aspx?refid=inspiron-580&s=dhs&cs=ukepp1&~oid=uk~en~20211~inspiron-580_d005827~~

        Intel® Core™ i5 Processor 750 (2.66GHz, 8MB)
        Genuine Windows® 7 Home Premium 64bit - English
        Display Not Included
        ATI Radeon™ HD 5450 1GB DDR3 graphics
        6144MB Dual Channel DDR3 [3x2048] Memory
        1TB (7200rpm) SATA Hard Drive
        DVD +/- RW Drive (read/write CD & DVD) with DVD Burn software
        1 year of coverage included with your PC
        McAfee® Security Centre - 15 Month Protection - English

    After the pain of using a slow laptop for all this time, the main thing I want is speed. I may look to play a couple of basic games on it, nothing too powerful. Obviously I'll be doing some development on it too, so it'll have to be able to handle the latest IDEs and database tools like SQL Server pretty quickly.

    Finally, should I ever need to improve it, I'd like to be able to add more RAM and change some of the parts. I wouldn't have thought this would be a problem, but a few people I've spoken to have said that the amount of RAM the motherboard can handle isn't that great. Is this true? How long can I expect to be using this computer before it's too slow? Thanks in advance for the help.

    Read the article

  • Why am I seeing Zero errors in non-ECC RAM?

    - by Alexander Shcheblikin
    According to various sources, memory errors are a very probable event: some say the probability of a DRAM error is 95% in just three days of operation of a computer with just 4 GB of RAM; others say 32% of servers experience at least one error in a month, with 8% of DIMMs being at fault.

    Contrary to those horrors, in my more than 10 years of personal computer use I have seen exactly none of these memory errors. I admit I never paid special attention to the subject. However, I have ventured multi-hour memtest86 runs a couple of times and never seen an error either.

    Some of the factors that IMO should aggravate the memory problems:

    - I build my computers out of the most "bulk commodity" parts: mainstream budget motherboards and the next-to-cheapest memory.
    - I usually max out the technology available, e.g. in the times of 32-bit OSes I used 4 GB of RAM, and with the current desktop CPUs and the newer 64-bit OSes I use 32 GB of RAM.
    - Memory usage is moderately heavy, with lots of virtual machines up and running small and big tasks 24/7/365.

    But nevertheless, no memory-related problems ever found! How's that?
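    Just to put the quoted figures into perspective: if the "32% of servers see at least one error per month" rate applied to a desktop, and months were independent, the chance of ten error-free years would be roughly (a back-of-the-envelope sketch, not a claim that these assumptions hold for consumer hardware):

        P(no observed error in 10 years) ≈ (1 - 0.32)^120 = 0.68^120 ≈ 8 × 10^-21

    which is why "zero errors ever" looks so surprising on paper -- unless most errors simply go undetected without ECC.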

    Read the article

  • Synchronize two directories on linux pc

    - by Gab
    I need a distributed filesystem (or a synchronization tool) that is capable of keeping a directory synchronized across 4 PCs. My requirements are:

    - offline access (data must be available offline on each PC)
    - preserve execution rights: some files are marked executable on a Linux partition. This flag should be replicated.
    - efficient sync strategy: some of my files are 20GB and are changed quite often, but in very small parts (VirtualBox images). Delta transmissions are welcome.
    - efficient handling of space: no history for files, and files shouldn't be copied to temp directories "just in case you break it".
    - it must propagate deletions of files
    - modification can happen on any of the 4 PCs and should be propagated when the other PCs are connected.

    Other specs of my situation: sync is over a LAN; the total amount of data to be synced is around 180GB, in some ten thousand files; changes are small, but can happen in big files. At the moment I'm interested in a Linux-only solution. Conflicts either don't happen or are solved with "last one wins".

    I haven't found any good solution. I've been trying:

    - unison: the only one working at the moment, but during the hashing phase it hangs my PC for some minutes, disk light steady on.
    - SparkleShare: doesn't handle large files nicely. It keeps a history of all your changes that grows indefinitely. They promise it will be fixed in future releases, but at the moment it still doesn't fit my needs.
    - ownCloud: keeps a history of each file I change.
    - Coda? (help! I couldn't set it up correctly!)
    - git-annex assistant: transforms all your files into symlinks and marks the original files as read-only ("just in case you make a mistake while you modify it"!). Before you edit a file you have to issue a special command, "git-annex unlock", which creates a local copy of the file, and you have to remember to lock it again if you want it synchronized.

    What should I try next?

    Read the article

  • Why would e-mail from our own domain not be forwarded to Gmail

    - by netboffin
    To solve a problem with spam on our server, we tried to forward e-mail from our dedicated server's mail server (the Matrix SMTP service) to Gmail. While most e-mails got through, e-mail from our own domain all went missing: it wasn't in the inbox, spam, or anywhere else. We've had to go back to using the old system, which means my boss gets a huge amount of spam.

    We have a Windows 2003 server with IIS 6 and the Matrix SMTP service installed. I've toyed with the idea of installing a mail proxy like ASSP, but it looks pretty complicated. We're hosting 20 domains on the server as well as our own, which has an online shop whose payment system depends on e-mail. I can't start playing around with complicated solutions when it could have disastrous consequences and I don't know enough to implement them safely.

    So my question has two parts:

    Part one: Why can't we forward e-mails from people using the same domain? If our domain was foobar.com, then [email protected] can't receive from [email protected], but he can receive from everyone else.

    Part two: Is there a really simple server-side solution to spam that would work with Matrix? For instance, POPFile?

    Read the article

  • How long will a "safely stored" Solid-State-Drive (SSD) keep its data? (e.g. bank safety-deposit box)

    - by user31575
    Here's my use case:

    - copy photos/videos, once and only once, to an internal SATA solid-state drive (SSD)
    - put this drive in a well-ventilated, air-conditioned bank safety-deposit box for safe keeping

    The question: How long can I safely store a solid-state drive in such an environment, i.e. 0% bit rot and 100% success when it is eventually plugged in? Are some SSD drives more reliable than others for this use case (e.g. smaller vs. larger capacity, SLC vs. MLC, different brands, etc.)?

    More fodder:

    - I have read that solid-state memory cards (e.g. CompactFlash or SD cards) have much longer durability than other media (DVDs, CDs, hard drives) for this use case -- guaranteed against bit rot and other dysfunction on the order of a decade rather than a year. I don't know if this applies to SSD hard drives.
    - Copying to one 500GB SSD is easier than to 8 64GB flash drives.
    - SSD SATA hard drives have no moving parts, but they have more "visible electronics" than a CompactFlash card. I don't know if this "visible electronics" can fail, i.e. in contr
    - I know many will point to Carbonite and other cloud backup services, but I like the simplicity of having physical copies and wanted to understand the risks/implications.

    Thanks,

    Read the article

  • Notebook display problem (multiplication)

    - by SubniC
    Hi, I'm having some trouble with the display of an LG E500 notebook. The thing is that, for no known reason, the display starts to show the screen divided into eight parts, each part showing the display image, as if I had a matrix of eight displays. :) I thought it could be some kind of refresh-rate or driver-related problem, but it is happening at boot-up as well, and in the BIOS.

    I got the computer completely disassembled yesterday and checked all the wires and connectors looking for something broken or disconnected, but without luck... You can see a picture of the problem here (sorry for the low quality, but I think it illustrates the issue).

    EDIT 1: I uploaded a new picture; here you can see the problem better. :) There are three horizontal lines that you can see just between the windows. You can see the grey line at the first moment you turn on the computer, and after that the duplicated screens show up, just as if they were arranged over a grid (over the horizontal lines...). I hope it makes some sense.

    Do you know what could be happening, or can you tell me what you would do? Thank you very much for your help.

    Read the article

  • Why does my microwave kill the Wi-Fi?

    - by Ohlin
    Every time I start the microwave in the kitchen, our home Wi-Fi stops working and all devices lose their connection to the router! The kitchen and the Wi-Fi router are in opposite ends of the apartment, but devices are used a little here and there. We had been annoyed by the instability of the Wi-Fi for some time, and it wasn't until recently that we realized it was correlated with microwave usage. After some testing with the microwave on and off, we could narrow the problem down to occurring only when the router is in b/g/n mode and uses a fixed channel. If I change to b/g mode, or set the channel to auto, then there is no problem any more... but still!

    The router is a Zyxel P-661HNU ("802.11n Wireless ADSL2+ 4-port Security Gateway", with the latest firmware) and the microwave is made by Neff and rated at 1000 W (if this information might be useful to anyone). There is an "internet connection" light on the router and it doesn't go out when the interruption occurs, so I think this is purely an internal Wi-Fi issue.

    Now to my questions:

    - What parts of the Wi-Fi can possibly be affected by microwave usage? The frequency? Disturbances in the electrical system?
    - How can setting the channel to Auto make a difference? I thought the different channels were just some kind of separation system within the same frequency spectrum.
    - Could this be a sign that the microwave is malfunctioning and slowly roasting us all at home? Is there any need to be worried?

    Since we were able to find router settings that cooperate well with our microwave's demand for attention, this question is mainly out of curiosity. But like most people out there... I just can't help the fact that I need to know how it's possible. :-)

    Read the article

  • Blocking of certain file downloads

    - by Philip Fourie
    I have a problem where I cannot completely download a certain file from a server. The file is 1.9MB in size, but only 68% is downloaded and then the transfer hangs.

    These cases all failed:

    - Downloading the file over HTTP
    - Downloading the file over FTP
    - Moving the file to different FTP and web servers behind the ISA firewall
    - Trying with IIS 6.0 and IIS 7.0
    - Multiple download clients, including Firefox, FileZilla (on Windows) and wget (on Linux)

    This worked:

    - Downloading other files from the same location on the server, both bigger and smaller in size than the original, over both FTP and HTTP
    - An earlier version of this file (a .DLL) works

    It is as if the content of this file has an influence on the file being served.

    Network architecture: Client machine - Internet (ISP) - ISA Server - IIS 7.0. The only constants are the ISP, the Cisco router and the ISA server. Is it possible that something is rejecting the download because of the contents of the file? I am hoping ISA is the culprit... I am not an ISA expert; is there somewhere I can look to establish whether it is indeed ISA causing this?

    Update: Splitting the file into two parts with a hex editor results in one half of the file being served correctly and the other part not. Zipping the file results in the file being downloaded successfully; however, this is not an option for this particular scenario. Renaming the file and its extension also doesn't work.

    Update 2009/10/22: It does NOT seem to be ISA that is causing this problem. We connected a laptop (running IIS) to an available public IP and the file still downloaded to 68% before it hanged. The two remaining components are the ISP and the Cisco 800 series router. Does anyone know about an issue on the router, perhaps?
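    Since one half of the split file downloads fine and the other doesn't, HTTP range requests can narrow down the offending bytes without a hex editor. A sketch (the URL is a placeholder for wherever the file is published; this assumes the server honours Range requests, which IIS does for static files):

        # fetch the first and second halves separately and see which one stalls
        curl -r 0-999999        -o part1.bin http://server.example.com/files/library.dll
        curl -r 1000000-1999999 -o part2.bin http://server.example.com/files/library.dll

    Repeating this with smaller and smaller ranges on the failing half pins down the byte range that whatever device in the path objects to.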

    Read the article

  • Nginx won't send POST to fastcgi backend, but GET works fine?

    - by xyld
    Not sure why, but nginx is happy to send a GET to the fastcgi backend (Mercurial hgwebdir in this case), yet simply falls back to the filesystem if the request is a POST. Relevant parts of nginx.conf:

        location / {
            root /var/www/htdocs/;
            index index.html;
            autoindex on;
        }

        location /hg {
            fastcgi_pass unix:/var/run/hg-fastcgi.socket;
            include fastcgi_params;

            if ($request_uri ~ ^/hg([^?#]*)) {
                set $rewritten_uri $1;
            }

            limit_except GET {
                allow all;
                deny all;
                auth_basic "hg secured repos";
                auth_basic_user_file /var/trac.htpasswd;
            }

            fastcgi_param SCRIPT_NAME "/hg";
            fastcgi_param PATH_INFO $rewritten_uri;

            # for authentication
            fastcgi_param AUTH_USER $remote_user;
            fastcgi_param REMOTE_USER $remote_user;

            #fastcgi_pass_header Authorization;
            #fastcgi_intercept_errors on;
        }

    GETs work fine, but a POST produces this in the error_log:

        2010/05/17 14:12:27 [error] 18736#0: *1601 open() "/usr/html/hg/test" failed (2: No such file or directory),
        client: XX.XX.XX.XX, server: domain.com, request: "POST /hg/test HTTP/1.1", host: "domain.com"

    What could possibly be the issue? I'm trying to allow read-only access via GETs to the page, but require authorization when using hg push to the same URL, which sends a POST request.
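    For comparison, here is a sketch of the same location that drops the "if" block (a frequent source of surprising request handling in nginx) and derives PATH_INFO with fastcgi_split_path_info instead; it is untested against this exact hgwebdir setup:

        location /hg {
            # require auth only for non-GET requests (i.e. hg push)
            limit_except GET {
                auth_basic           "hg secured repos";
                auth_basic_user_file /var/trac.htpasswd;
            }

            fastcgi_split_path_info ^(/hg)(.*)$;

            fastcgi_pass  unix:/var/run/hg-fastcgi.socket;
            include       fastcgi_params;
            fastcgi_param SCRIPT_NAME "/hg";
            fastcgi_param PATH_INFO   $fastcgi_path_info;
            fastcgi_param REMOTE_USER $remote_user;
        }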

    Read the article

  • What is the oldest hardware still in production use? How is it kept running?

    - by sleske
    In the spirit of the question "What is your oldest hardware that still works?", I'd like to ask: What is the oldest hardware you know of that is still in production use? And what challenges did you (or someone else) face in keeping it running (scarce documentation, no support, no spare parts available...)?

    Most organizations will retire or upgrade software and hardware after 5-10 years, but sometimes old software is kept running on old boxes because it "just works". I once worked at a client site that was running a critical piece of (in-house developed) business software on a single server running HP-UX. The server was old (ca. 12-13 years), but fortunately still running without problems; however, getting spares would have been very difficult, and since the software installation was undocumented, any significant system changes or even new hardware might have caused significant downtime and data loss. We eventually managed to replace it, but this is not always possible.

    I also read that many organizations still run decade-old mainframe hardware, particularly for highly customized systems controlling industrial machines or power plants.

    Which old hardware have you encountered? How did you manage these challenges?

    Related question: http://serverfault.com/questions/82467/should-old-servers-be-retired

    Read the article

  • Flash Backed Write Cache (FBWC) without capacitor pack

    - by Martyn
    I bought an HP Smart Array P410 controller and it is installed and working fine in an HP ProLiant MicroServer with 4 drives in two RAID 1 arrays. I didn't realise, however, that it came without any cache, so it would only work by writing straight through to the disks, and the performance was horrible. So I then bought the 512MB Flash Backed Write Cache (FBWC) memory module, as I was under the impression that with FBWC I would not need a battery. I got this idea from a forum post:

        "What do you guys think of the choice between 'BBWC' (battery backed write cache) and 'FBWC' (flash backed write cache)?
        The flash-based ones use non-volatile memory so need no battery."

    After installing the cache module, however, the server pretty much won't boot. The P410 has a flashing amber light on it, and from the manual that doesn't sound good. I've managed to get to the onboard BIOS once, and even managed to boot to the HP Array Configuration Utility (ACU) CD once, but every other time the server continually reboots once it gets to the POST screen and reads "ARRAY INITIALIZING %%%". The one time I reached the ACU, it reported a problem with the cache module.

    To me it seems like the cache module is faulty; however, the supplier tells me: "Do you have an FBWC battery pack, p/n 587324-001? Because that is required for the cache to work. If you have it, please complete an RMA form and we'll send a replacement / credit."

    Does this sound right to you? I've been ordering the parts from the US and I don't want to spend $77 + $40 p&p on a battery and wait a week for the shipping only to find the card is faulty, but I also don't want to send back a working card.

    Read the article

  • Faster, secure protocol/code required for long-distance transfer

    - by Chopper3
    I've run into a problem and I'm looking for a new secure protocol/client/server that's faster over a 1Gb/s fibre link -- let me tell you the story...

    I have a pair of redundant, diversely-routed, 1Gb/s links over a distance of around 250 miles or so (not dark fibre, but a dedicated point-to-point link, not a mesh). At the 'client' end I have an HP DL380 G5 (2 x dual-core 2.66GHz Xeons, 4GB, Windows 2003 EE 32-bit); at the 'server' end I have an HP BL460c G6 (2 x quad-core 2.53GHz Xeons, 48GB, Oracle Linux 5.3 64-bit). I need to transfer around 500 x 2GB files per week from the client to the server machine -- but the transfer NEEDS to be secure.

    Using either iperf or regular FTP I can get ~80MB/s of transfer pretty consistently, which is great. Using WinSCP or Windows SFTP I can't seem to get more than ~3-4MB/s; at this point the server's CPU is 3% busy while CPU0 of the client goes to ~30% utilised. We've tried editing various TCP window sizes with little success. Both ends are connected to quite low-usage Cisco Cat6509s with Sup720s. I can replace the client machine with a newer machine and/or move it to Linux -- but this will take time. Clearly these single-threaded secure Windows clients are introducing too much latency doing their encryption.

    So a few questions/thoughts:

    - Are there any higher-performing secure protocols or client software for Windows that I could try? I'm pretty protocol-agnostic so long as it'll work between Windows and Linux.
    - Should I be using hardware to do the encryption, either in the client or in the network? If so, what would you recommend?
    - I'm not convinced that just swapping the server would be that much faster; the CPU was only at 30%, but then again that's higher than I'd have expected given the load. Moving to Linux at the client end may be a better idea, but would be quite disruptive.
    - Am I missing a trick?

    Thanks in advance.
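    As a data point on whether the cipher really is the bottleneck, it is easy to compare transfer rates with a cheaper cipher and compression disabled from any OpenSSH-capable client (host and file names are placeholders; arcfour trades some security margin for speed, so treat this as a test rather than a recommendation):

        # cheap cipher, no compression
        scp -c arcfour -o Compression=no bigfile.dat user@server.example.com:/data/

        # default cipher for comparison
        scp bigfile.dat user@server.example.com:/data/

    If the cheap cipher gets close to the FTP numbers, the problem is CPU-bound crypto in the client rather than the link or TCP tuning.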

    Read the article

  • Steps to deploy a custom routing protocol

    - by user134589
    I'm a Ph.D. student and I'm researching a service-centric networking architecture with resource allocation on a large scale. What I'm looking to do is extend an existing routing protocol like OSPF with extra fields and some new message types that I need for communication between nodes. I want to manipulate the cost of a network link, and I want paths to be calculated as in OSPF v2/v3, but using the cost that my algorithms have calculated.

    What I have: the source code of OSPF from Quagga. I am assuming I can edit this code however I want, including packet structures and creating new types. Yes, I am aware it won't be easy, but this is a 6-year research project and I am eager to develop something new, to move forward.

    What I need: I would like to know how I can deploy the edited OSPF source files I have (written in C) on any type of server. I have a large testbed environment available with hundreds of virtual nodes and pretty much any OS out there. So if I want to test my extended protocol, how do I make all the nodes in a network use it to communicate? I do not understand what parts of the kernel I need to edit here. I have been searching for days now and I am unable to find out how to deploy a non-existing routing protocol without the use of an application-level framework. If somebody could push me in the right direction that'd be awesome.

    Note: I need this to be a routing protocol and not an application, since I want this to work on top of the network layer for performance reasons. Thanks!
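    For what it's worth, with Quagga no kernel changes should be needed: ospfd runs entirely in user space and hands its computed routes to zebra, which installs them into the kernel forwarding table via netlink. A deployment sketch for one testbed node (paths and options follow the stock Quagga defaults; adjust for the testbed images):

        # build and install the modified Quagga tree
        ./configure --prefix=/usr --sysconfdir=/etc/quagga
        make && sudo make install

        # the kernel only needs forwarding enabled
        sudo sysctl -w net.ipv4.ip_forward=1

        # start the routing daemons with per-node configs
        sudo zebra -d -f /etc/quagga/zebra.conf
        sudo ospfd -d -f /etc/quagga/ospfd.conf

    Rolling the same build and a per-node config out to all the virtual nodes (via a shared image or a configuration management tool) is then enough for them to speak the extended protocol to each other.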

    Read the article

  • WAMP Installation - Multiple PHP Version Issues

    - by Pete171
    I have installed WAMP because I am attempting to modify an application which uses Zend Optimizer (I cannot use Z.O. with PHP 5.3+, which is why I decided to install WAMP). I downloaded the latest version -- WampServer 2.1 -- which comes bundled with PHP 5.3.5. I then downloaded a PHP version that would be compatible with Z.O. -- 5.2.9 -- and a compatible Apache version, 2.0.63.

    My problem: PHP scripts run fine, but anything with MySQL does not work. Running the testmysql.php script returns the fatal error:

        Fatal error: Call to undefined function mysql_connect() in C:\wamp\www\testmysql.php on line 2

    I have looked in the php.ini files inside both PHP versions, and I'm fairly sure the relevant information is there. At least, there are parts inside that mention MySQL! Perhaps somebody could clarify exactly what information should be present?

    Also, when visiting a page that called phpinfo(), I noticed that the 'Loaded Configuration File' was pointing to C:\wamp\bin\php\php5.3.5\php.ini, even though I have enabled the older PHP version. I've stopped and started Apache, too, and that hasn't made a difference.

    Is anybody able to offer any assistance? Anything at all would be great; I'm not very good at messing around with Apache/
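    Since phpinfo() shows the 5.3.5 php.ini being loaded, the things worth double-checking are which ini Apache is told to use and whether the MySQL extensions are enabled in that ini. A sketch of the relevant lines (the paths assume WAMP's default layout for the 5.2.9 install and are otherwise guesses):

        ; in the php.ini that Apache actually loads
        extension_dir = "c:/wamp/bin/php/php5.2.9/ext"
        extension = php_mysql.dll
        extension = php_mysqli.dll

    and, in the httpd.conf of the Apache 2.0.63 instance, an explicit pointer to that ini's directory:

        PHPIniDir "c:/wamp/bin/php/php5.2.9"

    After changing either, Apache needs a restart before phpinfo()'s 'Loaded Configuration File' will reflect the new path.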

    Read the article
