Search Results

Search found 4290 results on 172 pages for 'y low'.


  • Redundant OpenVPN connections with advanced Linux routing over an unreliable network

    - by konrad
    I am currently living in a country that blocks many websites and has unreliable network connections to the outside world. I have two OpenVPN endpoints (say: vpn1 and vpn2) on Linux servers that I use to circumvent the firewall. I have full access to these servers. This works quite well, except for the high packet loss on my VPN connections. This packet loss varies between 1% and 30% depending on the time and seems to have low correlation; most of the time it seems random. I am thinking about setting up a home router (also on Linux) that maintains OpenVPN connections to both endpoints and sends all packets twice, to both endpoints. vpn2 would forward all packets from home on to vpn1. Return traffic would be sent both directly from vpn1 to home, and also through vpn2. +------------+ | home | +------------+ | | | OpenVPN | | links | | | ~~~~~~~~~~~~~~~~~~ unreliable connection | | +----------+ +----------+ | vpn1 |---| vpn2 | +----------+ +----------+ | +------------+ | HTTP proxy | +------------+ | (internet) For clarity: all packets between home and the HTTP proxy will be duplicated and sent over different paths, to increase the chances that one of them will arrive. If both arrive, the second one can be silently discarded. Bandwidth usage is not an issue, on either the home side or the endpoint side. vpn1 and vpn2 are close to each other (3ms ping) and have a reliable connection. Any pointers on how this could be achieved using the advanced routing policies available in Linux?
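
    A minimal sketch of the duplication half, assuming the two tunnels come up as tun0 (to vpn1) and tun1 (to vpn2) and that 10.9.0.2 is vpn2's address inside tun1 - the device names and addresses here are hypothetical, as is the proxy address. The iptables TEE target clones each packet leaving via the primary tunnel and hands the copy to the second path; discarding whichever duplicate arrives second on the receiving side still has to be solved separately (for example with a small userspace relay on each end):

        # send proxy-bound traffic out the primary tunnel as usual
        ip route add 192.0.2.10/32 dev tun0        # 192.0.2.10 = HTTP proxy (example address)
        # clone every packet that is about to leave via tun0 and push the copy
        # towards vpn2 through the second tunnel (xt_TEE, kernel >= 2.6.35)
        iptables -t mangle -A POSTROUTING -o tun0 -d 192.0.2.10 -j TEE --gateway 10.9.0.2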

    Read the article

  • Setting up subdomain to respond on :443 with apache2

    - by compucuke
    I read through some guides on this and I believe it is possible to have apache respond to a subdomain through ssl. I have domain.com responding on 80 and I do not need domain.com responding on 443. Rather, the only use I have for ssl is for the subdomain sub.domain.com. So my site should be http://domain.com http://www.domain.com https://sub.domain.com https://www.sub.domain.com My CNAME records are as follows sub.domain.com xxx.xx.xx.xxx *.sub.domain.com xxx.xx.xx.xxx The A record exists but should not matter for the example. I set up a separate config file in sites-enabled for sub.domain.com NameVirtualHost xxx.xx.xx.xxx:443 <VirtualHost xxx.xx.xx.xxx:443> SSLEngine on SSLStrictSNIVHostCheck on SSLProtocol -ALL +SSLv3 +TLSv1 SSLCipherSuite ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:-MEDIUM ServerAlias sub.domain.com DocumentRoot /usr/local/www/ssl/documents/ SSLCertificateFile /root/sub.domain.com.crt SSLCertificateKeyFile /root/sub.domain.com.key Alias /robots.txt /usr/local/www/ssl/documents/robots.txt Alias /favicon.ico /usr/local/www/ssl/documents/favicon.ico Alias /js/libs /usr/local/www/ssl/documents/js/libs Alias /media/ /usr/local/www/documents/media/ Alias /img/ /usr/local/www/ssl/documents/img/ Alias /css/ /usr/local/www/ssl/documents/css/ <Directory /usr/local/www/ssl/documents/> Order allow,deny Allow from all </Directory> WSGIDaemonProcess sub.domain.com processes=2 threads=7 display-name=%{GROUP} WSGIProcessGroup sub.domain.com WSGIScriptAlias / /usr/local/www/wsgi-scripts/script.wsgi <Directory /usr/local/www/wsgi-scripts> Order allow,deny Allow from all </Directory> </VirtualHost> Now, it is important to mention that https://domain.com responds with what I have running from script.wsgi above instead of on https://sub.domain.com. It does not respond to sub.domain.com. checking https://sub.domain.com causes a 105 error. This is a DNS error but I am convinced the DNS does not have a problem with the CNAME records, they just point to my IP. Am I doing something that Apache can not do?
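
    Two hedged checks that usually narrow this down: confirm which vhost Apache actually selects for *:443 (note that the vhost above defines only a ServerAlias and no ServerName, which often lets the first 443 vhost win), and confirm whether the served certificate changes when the client presents the subdomain via SNI. Hostnames below are placeholders:

        # how Apache resolved the name-based vhosts; look at the *:443 section
        apachectl -S        # apache2ctl -S on Debian/Ubuntu
        # certificate served when the SNI name is sent ...
        openssl s_client -connect sub.domain.com:443 -servername sub.domain.com </dev/null | openssl x509 -noout -subject
        # ... versus when no SNI name is sent
        openssl s_client -connect sub.domain.com:443 </dev/null | openssl x509 -noout -subject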

    Read the article

  • Defeating the RAID5 write hole with ZFS (but not RAID-Z) [closed]

    - by Michael Shick
    I'm setting up a long-term storage system for keeping personal backups and archives. I plan to have RAID5 starting with a relatively small array and adding devices over time to expand storage. I may also want to convert to RAID6 down the road when the array gets large. Linux md is a perfect fit for this use case since it allows both of the changes I want on a live array and performance isn't at all important. Low cost is also great. Now, I also want to defend against file corruption, so it looked like a RAID-Z1 would be a good fit, but evidently I would only be able to add additional RAID5 (RAID-Z1) sets at a time rather than individual drives. I want to be able to add drives one at a time, and I don't want to have to give up another device for parity with every expansion. So at this point, it looks like I'll be using a plain ZFS filesystem on top of an md RAID5 array. That brings me to my primary question: Will ZFS be able to correct or at least detect corruption resulting from the RAID5 write hole? Additionally, any other caveats or advice for such a set up is welcome. I'll probably be using Debian, but I'll definitely be using Linux since I'm familiar with it, so that means only as new a version of ZFS as is available for Linux (via ZFS-FUSE or so).
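
    On the primary question: a single-vdev pool always detects corruption through block checksums, but it can only repair blocks it holds a second copy of, so on top of md the usual trade-off is copies=2 (self-healing at the cost of half the usable space) versus the default copies=1 (detection only). A minimal sketch, assuming the array appears as /dev/md0; the pool name tank is just a placeholder:

        zpool create tank /dev/md0      # checksumming and corruption detection from here on
        zfs set copies=2 tank           # optional: keep two copies so scrubs can also repair
        zpool scrub tank                # periodic integrity pass
        zpool status -v tank            # lists any files with unrecoverable errors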

    Read the article

  • Format & Fresh Install Mac os x snow leopard in mac mini.

    - by sagar
    Hello everyone. I purchased a DVD of Snow Leopard (Mac OS X 10.6.2), but my Mac mini originally came with Leopard (10.5.7). I tried to install Snow Leopard 10.6.2 and everything went perfectly; the system was installed successfully. But the problem I faced is as follows. The system was installed, but my older data remained as it was (meaning the installation didn't format everything - it was done as an upgrade). Now my system runs very slowly; the previous performance of the Mac mini was roughly double that of the current upgraded version. Now, my questions are as follows. Does an upgrade installation cause performance problems, especially in OS X? (Has anyone else faced this kind of problem?) Or is Snow Leopard 10.6.2 too heavyweight for a Mac mini? (2 GHz Intel Core 2 Duo, 1 GB RAM - is this configuration OK for Snow Leopard 10.6.2?) Does a fresh install work better than an upgrade in OS X?

    Read the article

  • Why can't I install Java?

    - by Patrick
    I've been searching high and low for someone who can help me. I've been trying for a month - a whole month - to install Java on my new PC, to no avail. No tech support forum seems to be able to help me. It all started while playing Tekkit one day. I kept running out of memory (using the 32-bit JRE 7 u45), so I decided to install the 64-bit version. I uninstalled the 32-bit version first, for some reason, and downloaded the 64-bit runtime. In the installer, I go through all the normal screens until the installation progress bar appears. Then, it just sits there. No progress is made. No CPU is used by the installer, or any of its dependencies. The installer will stay like this for hours, days, and in one case a whole week without doing anything at all. I've tried installing older versions, the 32-bit version, even Java 6, and none of them will install. UAC is disabled, and I've run regedit, CCleaner, and every other "fix-it" program there is. It's getting to the point where I may just have to wipe my hard drives and start over. I have several applications that require Java, so this is an absolute necessity. Please, please, someone have the answer. Here are my system specs: -Intel i7-3770k -AsRock z77 Extreme3 -Samsung 840 pro SSD -WD Caviar Black 1tb

    Read the article

  • what web based tool, to allow a non-technical user to manage authorized keys files on a Linux (fedora/centos/ubuntu/debian) server

    - by Tom H
    (Edit: clarification below) We have a number of groups of developers that change frequently, and a security policy requiring individual logins to servers using RSA or DSA public keys, which is achieved via the standard method of adding id_dsa.pub to their authorized_keys file. I am using Chef to sync the user accounts across machines; however, our previous method of using Webmin to manage the user passwords is not designed for key-based auth, and hence is not easy to use for non-technical users. The developers log in from the WAN using ssh; they can either provide their own key, or an administrator will send them a private key. The development machines are located in the cloud and we have a single server available to host the master set of accounts. Obviously I could deploy LDAP or another centralised authentication system, but that seems a bit overblown when Webmin worked well for the simple case. It is easy to achieve synchronised users, groups and passwords across a bunch of low-security development boxes using Webmin clustered users and groups. However, with the currently installed Webmin it is not as easy to manage authorized keys as it is to create user accounts and passwords. (It's possible, but it's not easy - some functionality is in the Usermin module, or would require some tedious steps.) Ideally I'd like a web interface that is pretty much dedicated to creating users and groups, can generate key pairs on the fly, and can accept pasted-in public keys to add to a user's authorized_keys file. If the tool synced the users and keys as well, that would be great, but I can use Chef to do that part if the accounts are created correctly on the "master" server.
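
    Even without a ready-made web UI, the per-user plumbing such a tool needs is small, so a thin web wrapper around something like the following is one option. A rough sketch, assuming user-private groups and that the account name and the pasted public key arrive as arguments (the script name and layout are illustrative):

        #!/bin/bash
        # add-key.sh <username> "<ssh-rsa AAAA... comment>"
        user="$1"; key="$2"
        home=$(getent passwd "$user" | cut -d: -f6)
        install -d -m 700 -o "$user" -g "$user" "$home/.ssh"
        touch "$home/.ssh/authorized_keys"
        # append only if the key is not already present
        grep -qxF "$key" "$home/.ssh/authorized_keys" || echo "$key" >> "$home/.ssh/authorized_keys"
        chown "$user:$user" "$home/.ssh/authorized_keys"
        chmod 600 "$home/.ssh/authorized_keys"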

    Read the article

  • Java Swing over Remote Desktop - Strange, weird GUI squashing

    - by ADTC
    I thought this question fits SuperUser more than StackOverflow because it's not about actual Java programming, though programmers might be more likely to encounter the problem. Anyway, let me start of with some stats before I ask the actual question: Laptop: Windows 7 x32 Screen resolution 1024 x 768; Nvidia GeForce Go 6200 Connected to desktop via ad-hoc wireless network Access internet via desktop Desktop: Windows 7 x64 Screen resolution 1920 x 1080 Connected to laptop via ad-hoc wireless network Access internet via cable modem I'm connecting to my laptop via Remote Desktop from my desktop to take advantage of the large screen. I'm doing programming on my laptop (for portability reasons). Everything else runs smooth and fast over Remote Desktop as both computers are connected directly over the ad-hoc wireless. The only problem is this: Java Swing apps don't display the GUI properly. I acquired a Java Swing application and I'm debugging it in Eclipse. Here's what I got when I ran the app: Apparently there doesn't seem to be anything wrong with the GUI application I'm debugging, because the Java Control Panel exhibits the same problem. I've searched high and low in Google about this; the closest I came to a solution is this. But sadly, the use of -Dsun.java2d.nodraw=true has no effect at all. This only happens over Remote Desktop. I have tried locally and the GUI apps display properly. This isn't a dealbreaker for me as I can stop using Remote Desktop when developing Java Swing apps. However, I would like to know if anyone has encountered this and found any solution. PS: All software involved (Eclipse, Java JRE, etc.) are latest versions.
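
    One detail worth re-checking: the switch as quoted above is missing a "d". The Java 2D flags Sun's JRE recognises are sun.java2d.noddraw and sun.java2d.d3d, and disabling both is the combination usually suggested for rendering glitches over Remote Desktop. A hedged example (the jar name is a placeholder; the same -D options can also be set in the Eclipse run configuration or via the _JAVA_OPTIONS environment variable):

        # fall back from DirectDraw/Direct3D to plain GDI rendering
        java -Dsun.java2d.noddraw=true -Dsun.java2d.d3d=false -jar MySwingApp.jar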

    Read the article

  • LSI MegaRAID LINUX got Optimal after degradation but strange POST message

    - by kesrut
    A Linux server box with an LSI MegaRAID controller got degraded, but after some time the RAID status changed back to Optimal. Adapter 0 -- Virtual Drive Information: Virtual Drive: 0 (Target Id: 0) Name : RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0 Size : 2.727 TB Mirror Data : 2.727 TB State : Optimal Strip Size : 256 KB Number Of Drives per span:2 Span Depth : 3 Default Cache Policy: WriteBack, ReadAdaptive, Cached, No Write Cache if Bad BBU Current Cache Policy: WriteThrough, ReadAdaptive, Cached, No Write Cache if Bad BBU Default Access Policy: Read/Write Current Access Policy: Read/Write Disk Cache Policy : Disk's Default Encryption Type : None Is VD Cached: No But now I'm getting this RAID BIOS POST message: Your battery is either charging, bad or missing, and you have VDs configured for write-back mode. Because the battery is not currently usable, these VDs will actually run in write-through mode until the battery is fully charged or replaced if it is bad or missing. (Image: http://cl.ly/image/1h1O093b1i2d) So could a battery issue have caused the problem? I get this information about the battery: BatteryType: iBBU Voltage: 4001 mV Current: 0 mA Temperature: 22 C Battery State : Operational BBU Firmware Status: Charging Status : None Voltage : OK Temperature : OK Learn Cycle Requested : No Learn Cycle Active : No Learn Cycle Status : OK Learn Cycle Timeout : No I2c Errors Detected : No Battery Pack Missing : No Battery Replacement required : No Remaining Capacity Low : No Periodic Learn Required : No Transparent Learn : No No space to cache offload : No Pack is about to fail & should be replaced : No Cache Offload premium feature required : No Module microcode update required : No Where could the problem be? I have disabled the alarms, but I do get them if they are enabled, and I don't know how to find the root of the problem.
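
    The POST message is consistent with the "No Write Cache if Bad BBU" policy shown above: while the battery reports "Charging", the controller drops the virtual drives to WriteThrough, which is expected behaviour rather than a fault in itself. Assuming the MegaCli utility is installed (often as /opt/MegaRAID/MegaCli/MegaCli64), a few hedged checks for the battery and for what actually degraded the array:

        MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL            # charge state, voltage, learn-cycle state
        MegaCli64 -AdpBbuCmd -GetBbuCapacityInfo -aALL      # remaining vs. design capacity
        MegaCli64 -AdpBbuCmd -BbuLearn -aALL                # start a manual relearn cycle
        MegaCli64 -AdpEventLog -GetEvents -f events.log -aALL   # controller events around the time of the degradation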

    Read the article

  • How use DNS server to create simple HA (High availability) of my website?

    - by marc22
    Welcome. How can I use a DNS server to create simple HA (high availability) for a website? For example, if my web server at 192.168.0.120:80 is offline, traffic should go to 192.168.0.130:80 (for easier understanding I use internal IPs; in reality these will be at different hosting companies). You are right, "high availability" is the wrong term - of course I was thinking about failover. Using several IPs in A records is good for simple load balancing, but not when I want to notify the user about the failure (for example, display a page saying "Oops, something is wrong with our server, we are working on it") rather than a "can't establish connection" error. I was thinking about setting up something like this: two DNS servers, one of them installed on the www server itself, both with a low TTL on my domain, and two NS records - the first pointing to the DNS server on my Apache machine, the second to the other DNS server. When a user tries to connect, he will get the IP of the www server from the first DNS server; if that DNS server is offline (in which case the www server is probably also down), the resolver will try the second NS record, which points to the other DNS server, and that one will point to a "backup" page. That's what I would like to do. If you have another idea, please share. A reverse proxy is not an option, because the IP of the server can change, or I may use a server in another country for backup.

    Read the article

  • Slow performance on VMWare Linux server after Tomcat install

    - by Loftx
    We have a VMware ESXi 4.1 server hosting a number of Linux and Windows guests. Recently a new Linux guest was added to this server and seemed to be performing well. Tomcat and some other applications were then installed on this guest, which seems to have caused the server to run really slowly without any obvious resource issues. Symptoms of the slow performance include: the time taken to bring up the password prompt over ssh is now a few seconds when it was previously instantaneous; the time taken to unzip a zip file, which was previously a few seconds, is now around 30 seconds; and the time taken to compile VMware Tools has increased by a similar factor. Neither the VMware console nor the monitoring commands report any issues with high CPU or memory usage, but something is obviously slowing the server down somehow. Does anyone have any ideas what may be causing this issue and how it can be resolved? Thanks, Tom Edit: As per your questions, I've looked at some of the performance indicators you mentioned on both the VM host and the VM guest. Firstly I tried reserving the full amount of memory (3gb) for this VM - no other machines on this server have any memory reservation. The swap-in rate and swap-out rate for the VM host and guest are now both zero. Balloon memory on the guest is zero and on the host is 3.5gb (total memory on the host is 12gb). The swap rate for the guest is also zero. Swap used by the host is 200mb on average. Compression and decompression rates for the host and guest are zero. Command aborts for the host are zero. Read latency is very low - maximum 10ms, average 0.8ms. Write latency is higher - a few spikes to 170ms but mostly around 25ms - is this bad? Queue command latency is zero. Physical disk read latency averages 5ms but is often 10ms. Physical disk write latency averages 15ms but is often 20ms. I hope this helps - let me know if you need any more information.
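
    A few in-guest checks (hedged; they assume the sysstat and procps packages are available) that help separate hypervisor CPU steal, IO wait and swapping when the host-side graphs look clean:

        vmstat 5                 # watch the "st" (stolen) and "wa" (IO wait) CPU columns
        iostat -x 5              # per-device await/%util, to see if the virtual disk is the choke point
        free -m                  # swapping or memory pressure inside the guest
        top -b -n 1 | head -20   # load average versus the actual CPU consumers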

    Read the article

  • Websites down EC2 inaccessible via SSH CPU utilisation 100% last few hours - what should I do?

    - by fuzzybee
    I have multiple websites hosted on a single EC2 instance. One website, "abc", was down for a few hours; it sometimes threw database connection errors and sometimes just took too long to respond. Another website, "def", was incredibly slow but still up and running, and the rest of the websites had the same symptoms as "abc". I can afford 15 minutes or less of downtime for "def". Should I then (in the AWS console) reboot my instance, or create an AMI image from my instance, launch it and associate my Elastic IP with the new instance, or "launch more like this"? Background on what may have happened to my EC2: the last changes I made were 21 hours ago. A cronjob to create snapshots ran around 19 hours ago and it has been running for a long time. Google Analytics shows traffic to my websites, such as kidlander.sg, has been nothing exceptional. Are there any other actions I should take or better options I could have? (I have already contacted AWS support but their turnaround is 12 hours, so I appreciate all the help I can get.) Update: I got everything back up and running and CPU utilisation is back to normal, around 30%. There is one difference between "def" and "abc" as well as my other websites: "def"'s database is hosted on RDS, while "abc"'s database is hosted on an EC2 instance (different from my web server instance) configured by myself. Nevertheless, I checked the EC2 instance I'm using as the MySQL server yesterday and it was absolutely fine during the incident: low CPU utilisation, and I could log in using the Linux command line.

    Read the article

  • SATA HDD stuttering on reads (LED on)

    - by jdehaan
    I hope I can get some new ideas to solve the issue. What I have: AMD Phenom II X4 905e Box, Socket AM3 Gigabyte GA-MA790XTA-UD4, AMD 790X, AM3 ATX WD Caviar GreenPower 1.5TB SATA II 4GB Kit OCZ DDR3 PC3-10666 Platinum Low-Voltage CL7 HIS HD 4670 iSilence4 DDR3 1024MB Native HDMI Dual-DVI, PCI-Express Behavior: Everything works normally except for HDD accesses. The drive seems to get stuck reading data (the HDD LED driven by the mainboard stays on for a while), then recovers and goes on. There is sometimes a delay of more than 30s; often it is a matter of 5-10s. This does not seem related to the workload of the disk. What I have tried so far: Used a different SATA port: I switched from the SATA3 to SATA2, then to the SATA ports driven by the CPU - no difference. Turned AHCI off and ran the SATA in IDE mode - same behavior; at this point I suspected the HDD itself had problems. Deactivated energy-saving settings (BeQuiet, ...) and disabled 3 CPUs (msconfig), suspecting some race condition or trouble with changing frequencies - same trouble. On Windows 7 Professional 64-bit, I installed the hotfix http://support.microsoft.com/?scid=kb%3Ben-us%3B976418&x=8&y=9 with no success. The problem description matches what I experienced, so I felt lucky, but was quite disappointed when it didn't help. Burned the DLG 5.04f DataLifeGuard CD from WD to check the drive. All tests passed, to my surprise! Installed Ubuntu (i386 flavour) on the same drive, with the drive still in IDE mode - exactly the same behavior, so the OS does not matter. What I have not tried yet: http://www.gigabyte.eu/Support/Motherboard/BIOS_Model.aspx?ProductID=3263 I have the F2 BIOS and there is an F3a beta - has anyone had similar issues that were solved by this update? It is a beta, so I am a bit wary of upgrading, and I could find no release notes at all on the net - not really serious work; maybe they had issues translating from Chinese? Trying another HDD. Does anyone have other hints or a similar issue? Maybe it is neither the HDD nor the mainboard/BIOS?
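
    One avenue not on the list yet (hedged): desktop "Green" drives can spend tens of seconds in deep error recovery on a marginal sector while still passing the vendor's tests, so it is worth looking at the SMART counters and at the drive's error-recovery (SCT ERC / TLER) setting with smartmontools. The device name below is an assumption:

        smartctl -a /dev/sda | egrep -i 'Reallocated|Pending|Offline_Uncorrectable|UDMA_CRC'
        smartctl -l scterc /dev/sda          # current error-recovery timeouts (may report "not supported")
        smartctl -l scterc,70,70 /dev/sda    # cap recovery at 7 s for read/write, if the drive accepts it
        smartctl -t long /dev/sda            # long self-test; read the result later with: smartctl -l selftest /dev/sda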

    Read the article

  • Page pool memory

    - by legiwei
    I'm currently using Windows XP SP3 32-bit on a C2D E6320 with 2GB RAM. When I am playing StarCraft 2, I encounter an error saying my system is running low on paged pool memory. StarCraft's graphics settings suggested high settings for me, and I do not think this has to do with my graphics card but rather with my RAM. I then searched for a way to rectify the problem; apparently it has something to do with my virtual memory. I proceeded to try the suggested solution, which is to edit the registry and set the paged pool memory to 384MB. However, having done so, I still could not achieve it. I've seen screenshots of Windows XP settings with 2GB RAM showing 384MB of paged pool memory. My default setting puts it at 195MB, whereas when I try to increase the pool limit it can only go to a max of 229MB. I tried increasing my RAM to 3GB, but the pool limit remains the same. I'd like to know how to increase my paged pool memory. I've searched for a solution but to no avail, other than the one I've mentioned above (which didn't solve my problem completely).

    Read the article

  • Difference in performance: local machine VS amazon medium instance

    - by user644745
    I see a drastic difference in the performance metrics when I run Apache Benchmark (ab) against my local machine vs production hosted on an Amazon medium instance. The same concurrency (5) and the same total number of requests (111) were run against both. The Amazon instance has more memory than my local machine, but my local machine has 2 CPUs vs 1 CPU in the m1.medium. My internet speed is very low at the moment; I am getting a transfer rate of 25.29 KB/s. How can I improve the performance? I also do not know how to interpret Connect, Processing, Waiting and Total in the ab output. Here is localhost: Server Hostname: localhost Server Port: 9999 Document Path: / Document Length: 7631 bytes Concurrency Level: 5 Time taken for tests: 1.424 seconds Complete requests: 111 Failed requests: 102 (Connect: 0, Receive: 0, Length: 102, Exceptions: 0) Write errors: 0 Total transferred: 860808 bytes HTML transferred: 847155 bytes Requests per second: 77.95 [#/sec] (mean) Time per request: 64.148 [ms] (mean) Time per request: 12.830 [ms] (mean, across all concurrent requests) Transfer rate: 590.30 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 0 0.5 0 1 Processing: 14 63 99.9 43 562 Waiting: 14 60 96.7 39 560 Total: 14 63 99.9 43 563 And this is production: Document Path: / Document Length: 7783 bytes Concurrency Level: 5 Time taken for tests: 33.883 seconds Complete requests: 111 Failed requests: 0 Write errors: 0 Total transferred: 877566 bytes HTML transferred: 863913 bytes Requests per second: 3.28 [#/sec] (mean) Time per request: 1526.258 [ms] (mean) Time per request: 305.252 [ms] (mean, across all concurrent requests) Transfer rate: 25.29 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 290 297 14.0 293 413 Processing: 897 1178 63.4 1176 1391 Waiting: 296 606 135.6 588 1171 Total: 1191 1475 66.0 1471 1684
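
    On the interpretation question, a hedged summary of the ab columns: Connect is the time to set up the TCP connection, Processing is the time from sending the request until the full response has been read, Waiting is the part of Processing spent waiting for the first byte of the response (roughly server think time), and Total is Connect plus Processing. With a ~290 ms Connect time and a 25 KB/s downlink, the production numbers largely measure the client's WAN link rather than the server, so a fairer comparison is to run ab from another machine close to the server (the hostname below is a placeholder):

        # keep-alive enabled, same request count and concurrency, run from inside the same region
        ab -n 111 -c 5 -k http://ec2-host.example.com/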

    Read the article

  • How much does a computer actually cost?

    - by Cawas
    OK, so when you buy a new notebook, you spend about $1,000 or $2,000 with the OS included. When you build your own desktop machine you can pay as little as $100 for a good one today, with not a single piece of software included. It can't go much lower, but it can go a lot higher. People, including myself, tend to believe that's the price of a computer, but then there's the software. I just stumbled upon a nice little application I could use myself, but it's very specific, very tiny, and most people would never bother with it. And it costs "just $12". That is a lot for something I may use just once or twice! OS upgrades, hardware failures, and your custom set of software actually raise the price of a computer quite a lot, hence this question: how much do we pay in the end for our personal computers? I'd like to see some statistics on that. Maybe divided into three categories or something, but some data with average, minimum and "maximum" costs would be very nice. Maybe a "cost per year" figure would also be nice. Just wondering.

    Read the article

  • Format & Fresh Install Mac OS X Snow Leopard on Mac mini.

    - by sagar
    Hello Every one. I have purchased a DVD of Snow Leopard (Mac OS X 10.6.2) I purchased a Mac mini with Leopard (Mac OS X 10.5.7) I tried to install Mac OS X 10.6.2 Everything went perfectly. System was installed successfully. But the problem that I faced is as follows. System was installed but my older data remained as it is. (means installation didn't format every thing - means installation was done on upgrade basis.) Now, my system works with very low speed. Previous performance of Mac mini was double as compare to current upgrade version. Now - my question are as follows. Does an upgrade installation causes the performance issues in Mac OS X? Or is Snow Leopard too demanding for the Mac mini? ( 2 Ghz Intel Core 2 Duo, 1GB RAM - is this configuration OK for Snow Leopard? ) Does a fresh install work better than an upgrade?

    Read the article

  • Can one really fry a monitor by setting the wrong HorizSync and VertRefresh?

    - by rumtscho
    I've encountered this problem on several different systems with several different monitors: a monitor functions perfectly under Windows. I install a Linux and the max resolution is at some impossibly low value, mostly 640x480, changing it in Xorg.conf doesn't work. The X.org log file then shows that the driver cannot determine the correct refresh rate for the monitor, so it ignores everything in Xorg.conf and just loads in some default minimalistic mode. Googling the problem leads to an easy solution: set the HorizSync and VertRefresh in Xorg.conf, and everything works. The problem seems to be a common one, and I've seen dozens of results recommending the solution. Each of them contains the warning that you should use the value ranges provided with the monitor. Because if you don't, and your video card sends a signal with the wrong refresh rate, this can damage your monitor. Of course, you don't have a user manual for your monitor any more. If you are lucky to find one on the attic or on the net, it doesn't contain any information about the supported refresh rate. So you just type in the value suggested in the solution description, which varies wildly depending on your source, and cross your fingers. You restart, and... ... you've set the wrong values. So the monitor shows a short message like "input signal out of range", and you do a hard restart, repair your Xorg.conf in recovery mode, and everything is fine, including your monitor. So does this warning reflect a real possibility, or is it just a geeky urban legend? Or is it something which used to happen in the past, before manufacturers started protecting the monitors against it? Is it technically possible with every monitor technology, or is it maybe something which can only happen to a CRT? If you think that it's true, why? Have you ever witnessed a monitor die from the wrong refresh config, or have you read of it in a reputable source?
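
    As a side note on avoiding the guesswork entirely (a hedged sketch, assuming Debian/Ubuntu packaging): most monitors report their supported ranges in their EDID block, and the read-edid tools print a ready-made Monitor section, so the HorizSync/VertRefresh values can be read from the hardware instead of being copied from a random forum post:

        sudo apt-get install read-edid
        sudo get-edid | parse-edid    # prints HorizSync / VertRefresh lines suitable for xorg.conf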

    Read the article

  • User cannot access a system DSN on Windows Server 2008

    - by Ra Osolage
    We run our SQL Server services using a low privileged domain account. That account is NOT a local admin on the OS. Only access I give the user account is assigned during install of SQL: full control over its mount points and then everything else is granted by the SQL Server 2005/2008 installer. I need to create a linked server in SQL Server 2008 to an ODBC data source. So I remoted into the computer using my domain account, which is part of a group that DOES have local admin privs to the OS. I created a system DSN and configured it to connect to another SQL Server. The DSN works perfectly when I test it. However, when I try to create the linked server, I get an error. It appears to me that the DSN is invisible to the domain account that SQL Server is running as. It seems that this problem is only happening to me on Windows 2008 servers. Does anybody know whether there's anything that you need to do after creating a DSN to make it visible for other users to access?

    Read the article

  • Where in the stack is Software Restriction Policies implemented?

    - by Knox
    I am a big fan of Software Restriction Policies for Microsoft Windows and was recently updating our settings for this. I became curious as to where Microsoft implemented this technology in the stack. I can imagine a very naive implementation being in Windows Explorer where when you double click on an exe or other blocked file type, that Explorer would check against the policy. I call this naive because obviously this wouldn't protect against someone typing something in a CMD window. Or worse, Adobe Reader running an external application. On the other hand, I can imagine that software restriction policies could be implemented deep in the stack almost at the metal. In this case, the low level loader would load into memory the questionable file, but mark the memory in the memory manager as non-executable data. I'm pretty sure that Microsoft did not do the most naive implementation, because if I block Java using a path block, Internet Explorer will crash if it attempts to load Java. Which is what I want. But I'm not sure how deep in the stack it's implemented and any insight would be appreciated.

    Read the article

  • Static noise in headphones

    - by John Murdoch
    I have an Asus P6T based system. I was using the on-board sound (plugging Logitech X-230 2.1 analog speakers into the green "front speakers" 3.5mm analog output, and then plugging my headphones into those). I was quite happy with the sound quality (I didn't hear any static noise if the volume was turned down to my normal listening level). Then about a week ago I started getting terrible static noise from the left channel, and no normal audio on that left channel. The right channel had more static noise than usual but did have a bit of sound. I tried using the AC'97 header on the front of my case but that seemed to have no signal. I decided my on-board sound card had gone bad and bought an internal sound card to replace it (Startech 7.1Ch PCI). This fixed the "no sound from the left channel" problem, but I had much more audible static noise. I decided the card was low quality and/or it had interference from all the other things happening inside the computer case, and bought a Sweex SC016 external USB sound card. But even with that I have static noise in the headphones. Positioning the USB sound card differently doesn't seem to help. Trying the other analog outputs (e.g., surround) doesn't help. The static noise in all cases is proportional to the volume. I have tried different headphones, but the situation is the same, though perhaps the flavour of the static noise changes slightly. So what are my options? a) Get another, more expensive, external USB sound card hoping the quality will improve? b) Get another, more expensive, internal sound card (PCIe 1x perhaps) hoping the quality will improve? c) Get a dedicated DAC box? d) Get some Hi-Fi earphones? Suggestions? tl;dr - Two different sound cards both still have static noise in headphones.

    Read the article

  • Apache2: How to split out the SSL configuration?

    - by Klaas van Schelven
    In Apache2, I'd like to define my SSL-related stuff once, in a separate file from the rest of the configuration. This is mostly a matter of taste, but it also allows me to include the rest of the configuration in my automatic deployment process. I.e.: current situation: # in file: 0000-ourdomain.com.conf (number needs to be low) <VirtualHost xx.xx.xx.xx:443> # SSL part SSLEngine on SSLCertificateFile ....crt SSLCACertificateFile ...pem SSLCertificateChainFile ...intermediate.pem SSLCertificateKeyFile ....wildcard.ourdomain.com.key SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown ServerName www.ourdomain.com ServerAlias ourdomain.com # the actual configuration, as found for xx.xx.xx.xx:80, repeated </VirtualHost> I'd like: # in file: 0000-ssl-stuff <VirtualHost xx.xx.xx.xx:443> # SSL part SSLEngine on SSLCertificateFile ....crt SSLCACertificateFile ...pem SSLCertificateChainFile ...intermediate.pem SSLCertificateKeyFile ....wildcard.ourdomain.com.key SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown ServerName www.ourdomain.com ServerAlias ourdomain.com </VirtualHost> # in file: ourdomain.com.conf <VirtualHost xx.xx.xx.xx:443> # the actual configuration, as found for xx.xx.xx.xx:80, repeated </VirtualHost> Unfortunately, this does not seem to work. Apache SSL fails, though it does not give an error message at reload or syntax check. The best workaround I have found is to use an Include directive from the 0000-ssl file. Many thanks!
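
    A hedged explanation: Apache does not merge directives across sibling vhosts, so the second <VirtualHost xx.xx.xx.xx:443> block that carries the actual site configuration has no SSLEngine or certificate of its own, which is why the split fails silently. The Include workaround mentioned above is the usual way to get this separation; a sketch with illustrative paths:

        # shared TLS directives live in one file ...
        cat > /etc/apache2/ssl-shared.conf <<'EOF'
        SSLEngine on
        SSLCertificateFile      /etc/ssl/certs/wildcard.ourdomain.com.crt
        SSLCertificateKeyFile   /etc/ssl/private/wildcard.ourdomain.com.key
        SSLCertificateChainFile /etc/ssl/certs/intermediate.pem
        EOF
        # ... and each 443 vhost pulls them in with a single line:
        #     Include /etc/apache2/ssl-shared.conf
        apachectl configtest && apachectl graceful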

    Read the article

  • Server load high, CPU idle. NFS the cause?

    - by Mech Software
    I am running into a scenario where I'm seeing a high server load (sometimes upwards of 20 or 30) and a very low CPU usage (98% idle). I'm wondering if these wait states are coming as part of an NFS filesystem connection. Here is what I see in VMStat procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------ r b swpd free buff cache si so bi bo in cs us sy id wa st 2 1 0 1298784 0 0 0 0 16 5 0 9 1 1 97 2 0 0 1 0 1308016 0 0 0 0 0 0 0 3882 4 3 80 13 0 0 1 0 1307960 0 0 0 0 120 0 0 2960 0 0 88 12 0 0 1 0 1295868 0 0 0 0 4 0 0 4235 1 2 84 13 0 6 0 0 1292740 0 0 0 0 0 0 0 5003 1 1 98 0 0 4 0 0 1300860 0 0 0 0 0 120 0 11194 4 3 93 0 0 4 1 0 1304576 0 0 0 0 240 0 0 11259 4 3 88 6 0 3 1 0 1298952 0 0 0 0 0 0 0 9268 7 5 70 19 0 3 1 0 1303740 0 0 0 0 88 8 0 8088 4 3 81 13 0 5 0 0 1304052 0 0 0 0 0 0 0 6348 4 4 93 0 0 0 0 0 1307952 0 0 0 0 0 0 0 7366 5 4 91 0 0 0 0 0 1307744 0 0 0 0 0 0 0 3201 0 0 100 0 0 4 0 0 1294644 0 0 0 0 0 0 0 5514 1 2 97 0 0 3 0 0 1301272 0 0 0 0 0 0 0 11508 4 3 93 0 0 3 0 0 1307788 0 0 0 0 0 0 0 11822 5 3 92 0 0 From what I can tell when the IO goes up the waits go up. Could NFS be the cause here or should I be worried about something else? This is a VPS box on a fiber channel SAN. I'd think the bottleneck wouldn't be the SAN. Comments?
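
    A hedged note on the mechanics: the Linux load average counts processes in uninterruptible sleep (state D) as well as runnable ones, and NFS (or SAN) waits are a classic source of D-state processes, so a load of 20-30 with a 98% idle CPU fits stuck IO rather than CPU contention. A few checks, assuming the nfs-utils/sysstat tools are installed:

        ps -eo state,pid,wchan:32,cmd | awk '$1=="D"'   # who is in uninterruptible sleep, and waiting on what
        nfsiostat 5                                     # per-mount NFS rtt/exe latency
        nfsstat -c                                      # client-side op counts and retransmissions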

    Read the article

  • Running a VM off a USB 2.0 Flash Drive - Mac/Parallels/XP

    - by geerlingguy
    I use a MacBook Air as my primary machine, and the 128GB SSD means space is precious. To save about 10 GB, I've been running Parallels with a Windows XP VM off an external USB hard drive, which performs as well in everyday use as running the VM off the internal SSD. So, I bought a tiny 32GB USB 2.0 flash drive, plugged it into the MacBook Air, formatted it first as ExFAT (which was slow), then as Mac OS Extended (Journaled) (which was also slow), and copied over my VM file, and ran Parallels off it. My full experience is documented here: http://www.midwesternmac.com/blogs/jeff-geerling/running-windows-xp-vm Straight file copies are really fast — 30 MB/sec read (solid the whole time), and 10-11 MB/sec write (solid the whole time). But I noticed that once XP started running, the disk access rates were in the low KB ranges. Are USB flash drives really that poor at random access, or could I possibly be missing something (the format of the flash drive, etc.?)? Of note, I've tried the following, to no great effect: Formatting the drive as either ExFAT or Mac OS Extended (Journaled) Unplugging all other USB devices and turning off Bluetooth (which runs on the right-side-port USB bus). Plugging in the flash drive either direct in the right side port, or the left side port, or into a USB 2.0 hub
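
    Cheap flash sticks frequently manage only a handful of 4K random writes per second even when their sequential figures look healthy, and a running guest hits its disk image with almost pure random IO, which would explain the gap. A hedged way to quantify it (this assumes fio is installed, e.g. via Homebrew, and that the stick is mounted at /Volumes/FLASH - both are assumptions):

        # sequential pass, should roughly match the 30 / 10 MB/s file-copy numbers
        fio --name=seq  --rw=rw     --bs=1m --size=512m --filename=/Volumes/FLASH/fio.tmp
        # 4K random read/write, which is closer to what the guest OS actually does to its disk image
        fio --name=rand --rw=randrw --bs=4k --size=512m --filename=/Volumes/FLASH/fio.tmp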

    Read the article

  • Ubuntu Device-mapper seems to be invincible!

    - by Andrew Bolster
    I'm working on a hopefully unrelated question and I've got into a strange situation. First: I know very little about the very low-level hardware/kernel storage driver magic, so I'm hoping a) someone can help and b) someone can explain it to me better. I've been trying a dozen different configurations of my 2x500GB SATA drives over the past few hours, involving switching between AHCI/IDE/RAID in my BIOS. After each attempt I've reset the BIOS option, booted into a live CD, and deleted and rewritten the partition tables left on the drives. Now, however, I've been sitting with a /dev/mapper/nvidia_XXXXXXX1 that seems to be impossible to kill! It's the only 'partition' that I see in the Ubuntu installer (but I can see the others in parted), yet it is only the size of one of the drives, and I know I did not set any RAID levels other than RAID0. Does anyone have any ideas how I can kill this and get back to just two independent IDE drives? Or can anyone convince me of a reason to go the AHCI route? Many thanks in advance.
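
    A hedged explanation and sketch: the nvidia_XXXXXXX1 mapping is almost certainly leftover NVIDIA fakeRAID metadata, which lives at the end of each disk, survives repartitioning, and gets reassembled by dmraid every time the live CD or installer starts. Erasing that metadata (or at least tearing the mapping down for the session) normally brings the two plain drives back; double-check the device names before running the destructive step:

        sudo dmraid -r               # list disks carrying fakeRAID metadata
        sudo dmsetup remove_all      # tear down the active mappings for this session
        sudo dmraid -r -E /dev/sda   # permanently erase the RAID signature (repeat for /dev/sdb)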

    Read the article

  • LDAP authentication issue with Kerio Connect

    - by djk
    Hi, we have Kerio Connect (mail server) running on a Windows Server 2003 server on a domain. In the webmail client, users are able to change their domain password. This functionality used to work fine until a user tried to change their password a few days ago, when every password they'd try would result in the webmail client claiming their password was "invalid". I spoke to Kerio about this and they claim that this error is returned by the domain controller, which supports my initial investigations. The error that the DC is logging when an attempt is made to change the password is this: "80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 52e, vece" The "data 52e" part indicates that this is an "invalid credentials" error. I don't see how this can be, as I've tried (in the Kerio Connect configuration) various accounts that have privileges to modify accounts, including my own as I am a domain admin. I have run 'dcdiag' (all tests) on the DC and it came back passing every single one of them. I've searched high and low for an answer to this and came up empty. Does anyone have any idea why this may have suddenly started happening? Thanks! Edit: I should mention that the passwords we are changing to do comply with the complexity policy.
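
    A hedged avenue to test: "data 52e" usually means the bind that Kerio performs is being rejected before any password modification is attempted, and Active Directory additionally refuses password changes over an unencrypted LDAP channel, so it is worth reproducing the exact bind outside Kerio over both ldap:// and ldaps:// with the same account. The hostnames, bind account and base DN below are placeholders (OpenLDAP client tools assumed):

        ldapsearch -x -H ldap://dc1.example.local -D "svc-kerio@example.local" -W -b "dc=example,dc=local" "(sAMAccountName=testuser)" dn
        ldapsearch -x -H ldaps://dc1.example.local:636 -D "svc-kerio@example.local" -W -b "dc=example,dc=local" "(sAMAccountName=testuser)" dn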

    Read the article
