Search Results

Search found 12330 results on 494 pages for 'old retired dude'.

Page 387/494

  • "Access is Denied" when executing application from Command Prompt

    - by xpda
    Today when I tried to run an old DOS utility from the XP command prompt, I got the message "Access is Denied." Then I found that most of the DOS utilities would not run, even though I have "full control" over them. They worked just fine a few weeks ago, and I have not made any OS changes other than Windows Updates. Then I tried running edlin.exe and edit.com from the Windows\system32 folder. Same result: "Access is Denied." I tried running these applications from Windows Explorer and got the message "Windows cannot access the specified device, path, or file. You may not have the appropriate permissions to access the item." I am logged in as a member of Administrators and have full control over these files. I tried logging in as THE Administrator, with no change. I checked the security settings on the files and have full control over all of them. I have tried copying the files to different drives, booting in safe mode, and running without antivirus and firewall, all with no change. Does anybody know what could cause this?
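
    For reference, a quick way to double-check the ACLs described above using only tools that ship with XP (a diagnostic sketch, not a fix):

        rem Dump the ACL of one of the failing executables:
        cacls %SystemRoot%\system32\edit.com
        rem Compare with a 32-bit binary that still runs:
        cacls %SystemRoot%\system32\notepad.exe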

  • Replacement for public folder workflow: I'm confused as to how SharePoint does it

    - by RodH257
    For years Microsoft has been slowly phasing out public folders; perhaps Exchange 2010 really is the LAST TIME they'll be shipped... I've heard SharePoint is the replacement, but I don't fully understand, so can someone give me an idea of how to replace this workflow? In our office we have projects, and each has a project number, e.g. 10353. Each job has a public folder, organized in a hierarchy like:

        Projects
          Year
            Folder
              Subfolders

    The main subfolder we use is for general correspondence. When an email is received that relates to a project, it is dragged and dropped (or moved via right-click) into a public folder. Adding public folder favourites for each user helps with this. When an email is sent, we have a custom email form, which is the default email form but with a project number field next to the subject line. When you enter the job number in there, it carbon-copies our filing system in, which reads the job number and puts the email in the public folder for you. If you need to refer to emails, you go to the public folder and find them there. This isn't the best with large jobs, but it works OK. Now, I have limited experience with SharePoint (well, WSS); we've used it to do some neat discussion boards/polls etc. as an intranet site, but I haven't seen much of its integration with Outlook. The great thing about our solution is how tightly it integrates with Outlook, which is exactly where the emails are. If you want to forward an old email, you go to the public folder and forward it; simple. Any solution that replaces it should be at least as easy as this. Improvements we would like are better searching of emails, better support in Exchange (i.e. future versions), and, since custom forms in Outlook (the VBA kind) are being phased out, avoiding those would be good. Does SharePoint do this? Or what solutions do this kind of thing?

  • TortoiseSVN: find all non-added files?

    - by John Isaacks
    I am using TortoiseSVN on Windows. I have a deep directory structure (using CakePHP). After creating several files, I went through and added them. New files show a ? overlay next to them, and a + once they are set to add, so it is easy to tell which files still need to be added; the same goes for folders. However, if a new file is inside an old folder, that folder still shows the green checkmark: there is no indicator telling you there is a new file inside. So I forgot to add one before a commit. This wouldn't be so bad, as I could just add it and recommit. However, I had also made this a tag, so I had to recommit to both the tag and the trunk. Anyway, this wouldn't have been an issue if there were an indicator on existing folders that contain new files. Is there a way to set this up?
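
    For a pre-commit audit, the command-line client (assuming it is installed alongside TortoiseSVN) lists unversioned files no matter how deep they sit; a minimal sketch from a Windows command prompt:

        rem Lines beginning with ? are files Subversion does not yet track:
        svn status | findstr /b /c:"?"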

  • SSD suddenly full

    - by Daniel
    Today the hard drive of our server was suddenly full. The disk usage always stayed around 50% in the weeks and months before (old data is regularly expunged from the server). I deleted 10 GB of files in /tmp, which strangely freed 51 GB. Here is what I did:

        root@***:~# df -h
        Dateisystem    Size  Used Avail Use% Eingehängt auf
        /dev/sda3      139G  137G     0 100% /
        tmpfs          3,9G     0  3,9G   0% /lib/init/rw
        udev           3,9G  116K  3,9G   1% /dev
        tmpfs          3,9G     0  3,9G   0% /dev/shm
        /dev/sda1      985M   25M  910M   3% /boot

        root@***:/var# du -hs *
        3,3M  backups
        438M  cache
        9,4G  lib
        4,0K  local
        12K   lock
        76M   log
        24K   mail
        4,0K  opt
        88K   run
        184K  spool
        10G   tmp
        12K   www

        root@***:/var/tmp# find -type f -print0 | xargs -0 rm

        root@***:/var/tmp# df -h
        Dateisystem    Size  Used Avail Use% Eingehängt auf
        /dev/sda3      139G   81G   51G  62% /
        tmpfs          3,9G     0  3,9G   0% /lib/init/rw
        udev           3,9G  116K  3,9G   1% /dev
        tmpfs          3,9G     0  3,9G   0% /dev/shm
        /dev/sda1      985M   25M  910M   3% /boot

    Any explanation as to why deleting 10 GB in /tmp gave me back 51 GB on the disk? Could this point to an SSD failure? Are there any tools for Debian to test SSD health? I have already checked syslog. The first entry relating to this incident is a MySQL message:

        1:22:02 [ERROR] /usr/sbin/mysqld: Disk is full writing...

    So I have absolutely no idea what caused this.
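
    On the SSD-health question, one option is the smartmontools package (an assumption; it is not mentioned above) to read the drive's SMART self-assessment:

        apt-get install smartmontools
        smartctl -H /dev/sda    # overall health verdict
        smartctl -a /dev/sda    # full attribute dump, incl. wear/reallocation counters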

  • How to make a Windows Vista boot / recovery CD from a running system without using an original CD / DVD

    - by Giorgio
    I have just repaired a friend's computer (replaced the motherboard) and now I am trying to repair the Windows (Windows Vista) partitions. Unfortunately, probably because he tried to start it several times after the old motherboard had stopped working (no signal on video), the partition table or the file systems (or both?) now appear to be damaged. I managed to boot Windows a couple of times but could not complete the boot. I tried to repair the partition table and file systems using Linux RIP (booting from a USB stick), but the Linux utilities say that the file system is damaged and that I should run chkdsk /f from Windows. So I now need a Windows boot CD from which I can boot and run chkdsk or any other Windows utilities that can repair the file system. Is there an easy way to create such a CD? Or can I download it for free somewhere? All the links to Windows Vista boot / repair CDs I have found on the internet refer to non-free stuff. Any hint? EDIT: I have a working laptop with Windows Vista installed, so one solution would be to make a bootable CD or USB from it so that I can boot the desktop and run the repair utilities. However, I do not have the Windows Vista installation DVD, because both computers were bought with Windows pre-installed.
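
    Once any bootable repair environment with a command prompt is available, the repair step itself is short; a sketch (the drive letter is an assumption, since a recovery environment may map the system volume differently):

        rem Fix file-system errors and scan for bad sectors:
        chkdsk C: /f /r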

  • Upgrading memory in a laptop

    - by ulidtko
    I'm a bit confused about all the memory types and various bus frequencies of modern consumer PCs, so I'm requesting expert help on the subject. So far I'm confident of this: I have an Asus X51L laptop with an unknown set of configuration options. The CPU in there supports PAE, so I still have a chance to extend the memory beyond 3 GiB, and the upper limit of the system is 8 GiB (?). The laptop has two SODIMM slots, one of which is occupied by a 2 GiB bank, and the other one is empty. The dmidecode and lshw tools consistently state a 533 MHz frequency for the bank. That last one confuses me the most. I failed to find the characteristics of the northbridge in this laptop, and still can't figure out what DDR2 to seek. Is it DDR2-1066? Or, rather, PC2-8500/PC2-8600? Wouldn't a DDR2-800 bank harm the system's performance? Which kind of modules should I look for in stores? Update: I have bought a 2 GiB DDR2-800 SODIMM, and it seems that the system can't handle 4 GiB of memory. When installed by itself in either slot, both the new and the old bank (which btw happens to be marked GDDR2-677) work just perfectly; i.e. any configuration resulting in 2 GiB works. When both banks are installed though (totalling 4 GiB), the memtest86 tool produces horrible artifacts and crashes, and the system reboots; an Ubuntu system can be started and even logged into a Unity session, but in this case too the system reboots from even a minor RAM load. So it's pretty obvious to me now that this laptop doesn't support 4 GiB of RAM or more.
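
    For anyone comparing notes, the two tools already mentioned above will print what the firmware reports per slot (maximum capacity, per-slot type and speed), which is usually enough to pick a module; e.g.:

        sudo dmidecode --type memory    # "Maximum Capacity", per-slot Type/Speed
        sudo lshw -class memory         # the same data in lshw's layout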

  • How to set up the Mac OS X Terminal so it's *just peachy*?

    - by kch
    Hi all, my Terminal is awesome and has every detail just right (for me anyway), and now I'm setting up a few new Macs around here and have no idea whatsoever how to get their terminals to a pretty state. My user account is rather old and has been migrated over many OS X releases and machines, so my Terminal setup has grown rather organically over the years. What I need is a recipe to start from scratch, so 1) I know what I've done, and 2) I can reproduce it anywhere. Things I'm looking for:

        - Full UTF-8 support: setting LC_*, displaying characters correctly, accepting input… I hear this got much easier in 10.5; maybe it all works out of the box now?
        - Setup of OS X-style keyboard text navigation (option-arrows, etc.)
        - How do you handle meta-key support? (other than ESC'ing your way around)
        - Other things to help our n00bs get around in the shell, such as: a list of useful default key bindings (^A, ^D, etc…), Mac-specific .profile and .inputrc goodness, Mac-specific tools such as pbpaste & pbcopy, Open Terminal Here, etc.
        - If at all possible, a list of files to copy over to another machine that encompasses all the changes made to tune the Terminal (dotrc files, plists, etc.)

    And, well, anything else really. Just keep the scope on the Mac OS X Terminal application, rather than general unix setup and tools. I think a collection of incomplete answers would be a good start. Post one or two things you remember having done, we'll vote them up, and after a few days I'll try to compile it all into a summary answer.
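
    As one concrete piece of that recipe, a hypothetical ~/.profile fragment for the UTF-8 item (the locale values are assumptions, not a verified setup):

        export LANG=en_US.UTF-8
        export LC_ALL=en_US.UTF-8
        # pbcopy/pbpaste glue, e.g. put the working directory on the clipboard:
        alias cpwd='pwd | pbcopy'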

  • Serving images from another hostname vs Apache overload for the rewrites

    - by luison
    We are trying to further improve the speed of some sites with older HTML, partly to obtain better SEO results. We have already applied some minification measures, combined HTML, CSS, etc. We use a small virtualized infrastructure, and we've always wanted a light + standard HTTP server configuration, so that one server can serve images and static content while the other handles PHP, rewrites, etc. We can easily do that now with a VM using the same files and vhost configuration (bind mounts) as Apache, but with hardly any modules loaded. This means the light httpd will have a smaller footprint, which would allow us to serve more and faster, keep a higher MinSpareServers count, etc. So, as browsers also benefit from loading static content from different hostnames, we've thought about building a rewrite rule on our main server (main.com) to "redirect" all images and CSS (*.jpg, *.gif, *.css, etc.) to the same files at, say, cdn.main.com, thus letting the browser open more connections. The question is, assuming we already have a very complex rewrite ruleset (we manually manipulate many old URLs for SEO), will it be worth it? I mean, will the additional load on main's Apache, having to redirect main.com/image.jpg (I understand we'll have to do a 301) to cdn.main.com/image.jpg, plus cdn.main.com then having to serve it, be larger than the gain we would be achieving in the browser? Could the excess of 301s for all images on a page be penalized by Google? How do large companies work this out? Does the original code already include images linked from the CDN with absolute paths?
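
    For scale, the rule under discussion would look roughly like this in main.com's vhost (a sketch; cdn.main.com and the extension list are the question's hypotheticals, and mod_rewrite is assumed to be enabled):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^main\.com$ [NC]
        RewriteRule ^/(.+\.(jpg|gif|css))$ http://cdn.main.com/$1 [R=301,L]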

  • Problem running "Central Administration" website after Windows update on Windows 2003 Server Standard

    - by Magdy Roshdy
    I had WSS 2.0 and then upgraded to WSS 3.0; the old installation database was SQL 2000, and now I have another SQL Server instance called server_name\MICROSOFT##SSEE. After the upgrade everything worked fine: our team started to use the portal, and we put a lot of documents and activity on it. The problem started after installing Windows updates: the website suddenly stopped, giving me the error "Cannot connect to the configuration database". If I try to open the SharePoint Products and Technologies Configuration Wizard, it gives me a strange error: "An exception of type Microsoft.SharePoint.PostSetupConfiguration.PostSetupConfigurationTaskException was thrown. Additional exception information: SharePoint Products and Technologies cannot be configured. The current installation mode does not support SKU to SKU upgrades because there exists an older version of Windows SharePoint Services that must be upgraded first." At this post: http://stackoverflow.com/questions/114398/iis-error-cannot-connect-to-the-configuration-database/249494#249494 the guy in the second answer has the same problem and suggests a solution, but I don't understand it well. I tried, as he suggested, to set the identity of the SharePoint web site's app pool to "IWAM_server_name"; after that the error changed as he said, and the web site now gives me "Server Application Unavailable". When I checked the Event Viewer on the server, I found that ASP.NET 2.0 throws this exception: "Could not load file or assembly 'System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. Access is denied." I don't know how to solve this problem. I really want to get the web site working, because our team really needs these documents. I hope someone can help me.

  • EC2 AMI won't boot after edit

    - by Eric Lars0n
    I did something stupid: I got a new laptop, copied everything over to the new one, then wiped the old one clean. Then I realized that I had forgotten to copy out of .ssh the private key that I use to connect to my AWS EBS-backed instance, so I can't log in to my custom AMI. So I created a new volume from the snapshot of the AMI, started up a public instance, attached the volume to it, and edited sshd_config to allow password login. I then unmounted the volume, detached it, made a snapshot of it, and made a new AMI from the snapshot. The new AMI launches, but never passes the status checks and is not reachable. What am I doing wrong? Or, alternatively, how can I fix my problem? Edit: Adding some of the console output:

        Linux version 2.6.16-xenU ([email protected]) (gcc version 4.0.1 20050727 (Red Hat 4.0.1-5)) #1 SMP Mon May 28 03:41:49 SAST 2007
        BIOS-provided physical RAM map:
        Xen: 0000000000000000 - 000000006a400000 (usable)
        980MB HIGHMEM available.
        727MB LOWMEM available.
        NX (Execute Disable) protection: active
        IRQ lockup detection disabled
        RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize
        NET: Registered protocol family 2
        Registering block device major 8
        XENBUS: Timeout connecting to devices!
        Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(8,0)

  • Can I autoregister my servers' hostnames in my local DNS? [on hold]

    - by Christian Wattengård
    We have evaluated a W2k12 server as a domain controller at work. This has the extra benefit of registering every "subordinate" computer's name in its DNS, so that I don't have to go around remembering IPs all the time. (And it lets me easily run DHCP on my "pop-up" dev servers too.) We need to rework our work network for several odd reasons, and in the new scenario there was no money for an extra Windows 2012 license. We have at our disposal several old boxes that run Linux quite well. Is it possible to set up a DNS-server "appliance" where each machine somehow autoregisters its own hostname? Scenario:

        Router (N66u) on 172.20.20.1; runs DHCP on the 172.20.20.100-200 range
        Server [verdant] of a *nix flavor on 172.20.20.2
        Laptop [speedy] of W8 flavor, DHCP assigned
        Laptop [canary] of W8 flavor, DHCP assigned
        Desktop [lianyu] of Ubuntu flavor, DHCP assigned

    What I would like is for all of the above (except possibly the router) to be available as verdant.starling.lan, canary.starling.lan, and so on. This is how it works right now (except for the Ubuntu box... I haven't cracked that one yet), because Windows just does this for you. I would also like to do this without any manual labor on the server: when I tell my box its name is smoak, it should "immediately" be available as smoak.starling.lan without any extra configuration on my part. How can I do this in a Linux (Ubuntu) environment?
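
    One common way to get exactly this behaviour on a Linux box is dnsmasq, which answers DNS queries for the names its own DHCP clients register; a minimal /etc/dnsmasq.conf sketch (the lease time is an assumption, and dnsmasq would take over DHCP from the router):

        domain=starling.lan
        expand-hosts
        local=/starling.lan/
        dhcp-range=172.20.20.100,172.20.20.200,12h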

  • How do I get started with the M-Project, a mobile HTML5 JavaScript framework, on Windows?

    - by Bruce Whealton
    The website for this great tool, called the M-Project (it is a tool for creating native mobile apps with the PhoneGap/Cordova library, and it seems to be something that would be very helpful in this process), says that I will need to add a doskey like this:

        doskey espresso=node C:\Path\To\Espresso\bin\espresso.js $1 $2 $3 $4

    If I enter that at a command prompt in Windows 7 or 8, it's not going to stick around or persist. Is it an environment variable? Then the page at http://www.the-m-project.org/ says that it will work on Windows with some additional tools installed. The next line says that Node.js is needed, so I don't know if that is the additional tooling mentioned above. Also, in an old discussion I read that one could just install Cygwin. What would that do? It doesn't actually install any of the Linux distributions. I did install Ubuntu 12.04 Server with VirtualBox, because I thought it would be good to learn more about using Linux, as I manage websites that are on a dedicated host. Anyway, the suggestion to install Cygwin did not go into any detail... I guess it would allow one to create a bash profile, which would only work in a Cygwin command-line window. Is that right? Isn't there a similar file that one could use in Windows, or an environment variable one could set, to achieve the same result? Thanks, Bruce
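
    For what it's worth, cmd.exe does have a rough equivalent of a bash profile: an AutoRun registry value that runs at every prompt start. A sketch that reloads doskey macros from a file (the macro-file path is an assumption):

        rem Save the macros once, e.g.:  doskey /macros > %USERPROFILE%\macros.doskey
        reg add "HKCU\Software\Microsoft\Command Processor" /v AutoRun /t REG_SZ /d "doskey /macrofile=%USERPROFILE%\macros.doskey" /f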

  • Windows service fails to start with local user until password is entered again in logon tab

    - by Nick
    Basically, we have a service that uses a local account as its logon. It has all the proper permissions, and everything works fine: the service starts and runs, and all is good. Then one day, after a reboot, the service fails to start; the logs show an incorrect password. Our technicians resolve the issue by simply retyping the password into the "Log On" tab in services.msc. Unfortunately, we have not been able to find the root cause. I suspect that the password stored for the service is somehow lost. Does anyone know where the password hash might be stored, so we can check it? The only activity that seems possibly related is patching with Microsoft security patches, but we have multiple servers running the same service, we have never seen more than one fail at a time, and it's usually a different one each time this occurs. I believe this to be the same issue as this: Windows service fails to start with custom user until started once with local user. But I was unable to add comments there, and it's really old.

  • Change Outlook's default calendar to iCloud (meeting requests end up in wrong calendar)

    - by flohei
    Following scenario: I've got a main computer (Windows 7, Office 2010) which is used to manage contacts, meetings, etc. in Outlook. Now I've added an iPad and an iPhone that sync via iCloud, and I moved all appointments and contacts from the old PST file to the iCloud file. All the data syncs nicely. The email account I'm using in Outlook is an IMAP account, which opens up yet another data file, bringing us to a total of three data files in Outlook's side bar. The problem: when one of our clients sends us a meeting request via email, it shows up in the IMAP inbox, and when we open it, it automatically gets added to Outlook's default calendar (the one in the original PST). Is there any way to have such requests added to the iCloud calendar instead? Basically, we could get rid of the original PST completely, since we don't use it at all anymore, but the settings do not allow me to remove that PST file and set the iCloud one as default. Thanks!

  • DVD-burner disappears when using grub

    - by drblah
    This is a rather annoying problem I have with Linux on my laptop. Whenever I install any Linux distro that relies on GRUB (which is about 99% of them, apparently), it messes up my DVD drive, and the drive is not recognized after booting, no matter the OS (Ubuntu, Arch, Windows Vista or 7). I know the problem has nothing to do with the DVD drive itself, because I bought a new one a few months ago. I am also sure that GRUB is messing the drive up, because I have tried LILO, which worked fine (except for horrible boot times). This problem has persisted for about 3-4 years, and it is a pretty bad showstopper whenever I want to work with Linux. I had hoped it would get fixed over time, but no one seems to have the exact problem I have. The drive is connected with an IDE connection. Update: the old drive was a Toshiba Samsung SN-S082. I don't know the model number of the new one, but it is HP (I think). Things I have tried to fix it: messing about with some BIOS settings, like enabling AHCI and changing some IDE settings. Installing a different boot loader DOES fix the problem, but I would like to use GRUB.

  • IP to IP forwarding with iptables [centos]

    - by FunkyChicken
    I have two servers: server 1 with IP 1.1.1.1 and server 2 with IP 2.2.2.2. My domain example.com points to 1.1.1.1 at the moment, but very soon I'm going to switch to IP 2.2.2.2. I have already set a low TTL for example.com, but some people will still hit the old IP for a while after I change the domain's address. Both machines run CentOS 5.8 with iptables and nginx as the webserver. I want to forward all traffic that still hits server 1.1.1.1 to 2.2.2.2, so there won't be any downtime. Now, I found this tutorial: http://www.debuntu.org/how-to-redirecting-network-traffic-a-new-ip-using-iptables but I cannot seem to get it working. I have enabled IP forwarding:

        echo "1" > /proc/sys/net/ipv4/ip_forward

    After that I ran these two commands:

        /sbin/iptables -t nat -A PREROUTING -s 1.1.1.1 -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2:80
        /sbin/iptables -t nat -A POSTROUTING -j MASQUERADE

    But when I load http://1.1.1.1 in my browser, I still get the pages hosted on 1.1.1.1 and not the content from 2.2.2.2. What am I doing wrong?
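
    For comparison, the variant of the PREROUTING rule that most DNAT write-ups use matches on the destination address (the packets are addressed to 1.1.1.1), not the source; a sketch to test, not a confirmed fix:

        /sbin/iptables -t nat -A PREROUTING -d 1.1.1.1 -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2:80
        /sbin/iptables -t nat -A POSTROUTING -j MASQUERADE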

  • How to make Quicksilver remember a custom trigger

    - by corroded
    I am trying to make a custom trigger for my shell/AppleScript file to run, so I can launch my dev environment at the push of a button. So basically: I have a shell script (with some AppleScript included) in ~ named start_server.sh, which does three things: start up the Solr server, start up memcached, and start up script/server. I have a saved Quicksilver command (.qs) that opens start_server.sh (so start_server.sh, then the action is "Run in Terminal"). I created a custom trigger that calls this saved QS command. I did that, tested it, and it works. I then tried to double-check it, so I quit Quicksilver, and when I checked the triggers it just said "Open (null)" as the action. I set the trigger again, and when I restarted QS the same thing happened again. I don't know why; my old custom trigger to open Terminal has worked since forever, so why doesn't this one work? Here's a screenie of the triggers after I restart QS: http://grab.by/4XWW If you have any other suggestion on how to make a "push button" start for my server, then please do so :) Thanks! As an added note, I have already tried the steps in this thread, but to no avail: http://groups.google.com/group/blacktree-quicksilver/browse_thread/thread/7b65ecf6625f8989

  • Transfer disk contents *without* cloning tools

    - by Chris Cummins
    Is it possible to "clone" a disk which contains programs by copying all the disk contents (preserving file attributes) from source to destination disk, then unplugging the source disk and changing the drive letter of the destination disk to match that of the source? Context: I have a two-disk Windows 8 system with a system drive and a data drive. Recently, the data drive developed a number of bad sectors leading to IO errors. I have been sent a replacement drive, so I simply need to clone the contents of the failing data drive onto the replacement. The drive contents include documents & media, user folders (My Documents and related), and some programs (games etc.). Problem: the bad sectors on the source disk cause most disk-cloning tools to fail with read errors. Attempted approaches include:

        - Disk clone from a live boot environment with Acronis True Image: fails due to read errors.
        - Disk clone from a live boot environment with Clonezilla: fails due to read errors.
        - Disk clone using Roadkil's Unstoppable Copier: fails due to hardware timeouts in the HDD (the application hangs indefinitely).
        - A straightforward copy from source to destination disk using FreeFileSync (preserving file attributes and metadata): this succeeds.

    So at the moment I have a replacement disk which contains all of the data from the original disk. Now all I need to do is somehow get Windows to replace all references to the old disk with the new one. Is this possible by simply swapping the assigned drive letters? Any help would be greatly appreciated, thanks!
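
    For reference, the copy step that succeeded could also be done with a tool that ships with Windows 8; a sketch (drive letters are assumptions; /R:1 /W:1 keeps retries short when a bad sector is hit):

        robocopy D:\ E:\ /MIR /COPYALL /R:1 /W:1 /LOG:C:\datacopy.log

    Swapping the drive letters afterwards can be done in Disk Management (diskmgmt.msc), though whether installed programs survive the swap depends on how they store their paths.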

  • Use a generic driver on clients for a printer shared by CUPS

    - by Daniel
    I know this worked really easily with CUPS versions of a few years ago; unfortunately, I fail to do the following with a current CUPS. How do I share a printer via CUPS without needing printer-specific drivers on the clients? The printer is a Samsung ML-2010. The important fact is that it needs a quite specific library for printing, called splix. That is installed on the server and prints well. What causes trouble is using the printer over the network. I found out how to use Avahi under Linux to make use of DNS-SD to advertise and discover printers. But as far as I understand, the new CUPS offers the internal driver interface on the network. This has two major issues: anybody can fiddle with the driver, and I don't trust any uncommon printer library to be "network secure"; and anyone who wants to use my printer, including guests, needs to install the specific drivers first. I remember the old days when I could enable "Share this printer" on the CUPS server and all clients would magically detect the printer and just send their job data to the server, which would do the driver work. After everything I have read, I guess this is related to the changes Apple introduced with CUPS, including dropping the integrated network-discovery protocol. If it helps: Server: Ubuntu LTS 12.04 with CUPS 1.5.3. Client: Arch Linux with current CUPS 1.6.1. On another box with Ubuntu, the printer was at least set up automatically, but the mechanism used the splix library for that.
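
    For the server side, old-style sharing is still a couple of cupsd.conf directives on the 1.5 box; a sketch (whether the 1.6 client then sees the queue without Avahi or the newer cups-browsed helper is exactly the open question here):

        # /etc/cups/cupsd.conf on the Ubuntu 12.04 server
        Browsing On
        BrowseAddress @LOCAL
        <Location />
          Order allow,deny
          Allow @LOCAL
        </Location>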

  • How to autorun wpa_supplicant on Debian startup

    - by The Electric Muffin
    I'd like to run

        wpa_supplicant -D wext -i wlan0 -c /etc/wpa_supplicant.conf

    on Debian startup (runlevels 2-5). I found some vague instructions from a related question that said to put a script in /etc/init.d/ and then symlink to it from the appropriate /etc/rcRUNLEVEL.d/ directories. However, I noticed that there are already some files named "wpasupplicant" that probably run at startup:

        /etc/network/if-down.d/wpasupplicant
        /etc/network/if-post-down.d/wpasupplicant
        /etc/network/if-pre-up.d/wpasupplicant
        /etc/network/if-up.d/wpasupplicant

    They are all symlinks to the same script, /etc/wpa_supplicant/ifupdown.sh. It has a comment at the beginning saying it "[...] allows ifup(8), and ifdown(8) to manage wpa_supplicant(8) and wpa_cli(8) processes running in daemon mode." However, the closest it gets to calling wpa_supplicant itself is (in functions.sh):

        WPA_SUP_BIN="/sbin/wpa_supplicant"
        [snip]
        start-stop-daemon --start --oknodo $DAEMON_VERBOSITY \
            --name $WPA_SUP_PNAME --startas $WPA_SUP_BIN --pidfile $WPA_SUP_PIDFILE \
            -- $WPA_SUP_OPTIONS $WPA_SUP_CONF
        [snip]
        start-stop-daemon --stop --oknodo $DAEMON_VERBOSITY \
            --exec $WPA_SUP_BIN --pidfile $WPA_SUP_PIDFILE

    Does that mean it's safe to make an init.d script for wpa_supplicant, and if so, what would it look like? General info: Debian Squeeze (5.0), official wpasupplicant package (v0.6.10-2.1). The full contents of my system's functions.sh and ifupdown.sh are here (dependent, of course, on my system's uptime; it's a five-year-old laptop that greatly enjoys overheating): functions.sh ifupdown.sh
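
    Since the question asks what such a script would look like, here is a minimal sketch (NOT the packaged script; the service name, pidfile path, and the -B/-P flags to daemonize and write a pidfile are the only additions to the command line quoted above):

        #!/bin/sh
        # /etc/init.d/wpa-custom: minimal init script sketch
        PIDFILE=/var/run/wpa_supplicant.wlan0.pid
        case "$1" in
          start)
            /sbin/wpa_supplicant -B -P "$PIDFILE" -D wext -i wlan0 -c /etc/wpa_supplicant.conf
            ;;
          stop)
            start-stop-daemon --stop --oknodo --pidfile "$PIDFILE"
            ;;
          *)
            echo "Usage: $0 {start|stop}" >&2; exit 1
            ;;
        esac

    Registering it for the default runlevels would then be:  update-rc.d wpa-custom defaults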

  • How do I lower the hardware volume? (volume too high)

    - by Zom-B
    I have a 4-year-old Dell laptop with Windows XP Pro (modern ones unfortunately don't have a physical volume knob), and lately I'm using my Apple earphones, because they have much better low-frequency response than my $10 earphones. They also have the side effect of being much louder. To give an indication of my agony: for most tasks (movies, music, games) I have my main volume at 3 ticks (drag to 0 with the mouse and press the up key 3 times; the handle does not even rise 1 pixel) and my wave volume at 50%. I notice that when I do this, I get a lot of digital noise, because I'm using just a tiny fraction of the 16-bit space. If I drag the Wave slider down until I barely hear the audio, it becomes really distorted and noisy, indicating that this is digital volume (in the DirectSound driver or something) and not hardware volume. I experimented in Audition. When I make a tone of 1000 Hz at -50 dB (all Windows volumes at max), the volume is just below my pain threshold. When I zoom in to see how high the sample values reach, I see that just 8 of the 16 bits are used (about -100 to 100). When I generate such a tone at -80 dB (the minimum I can specify), I can still clearly hear the tone, although really noisy. When I zoom in, I see that just 3 out of 16 bits are used. I created a square-wave tone that is just 1 bit high, and I can still hear it! For most uses this is not a big problem (audiophiles will disagree!), as I just have more noise than usual (about the same as old 8-bit hardware), but I'm also in the process of programming a hearing-test program, in which case this problem is a death blow, as the test subjects will hear tones even at the bottom of the theoretical range (lowering the Windows volume is futile, see above). (I cannot update drivers, as Dell has discontinued XP support for my model.)

  • Would an array of SSD drives be able to successfully substitute for the system memory?

    - by Florin Mircea
    I watched a few videos trying to answer this. This video (youtube.com/watch?v=eULFf6F5Ri8) shows a bunch of guys stacking 24 SSDs and reaching a peak of around 2 GBps read/write. That's below the worst DDR3 in this list (memorybenchmark.net/write_ddr3_amd.html), which shows DDR3 memory performance varying from 2.78 to 6.55 Gb per second; but that video is over 3 years old. This video (youtube.com/watch?v=27GmBzQWwP0) shows a more optimistic situation, but for PCI-E SSD drives: 5 drives peaking at around 4 Gb. And this other video shows that stacking up more than 3 SSDs doesn't realistically offer a substantial added performance. This, plus the fact that in all benchmarks the drives perform quite poorly with small files (5k-file read/write averaging from 10 MB to around 30-40 MBps), as opposed to how native memory handles such files, seems to indicate a definite NO to this question. Also, the write life cycle is indeed limited, and the drives might wear out quickly, as kindly pointed out by paddy. However, I wanted to get more opinions on this. Would it be possible to at least obtain current memory performance with SSDs in RAID 0? And if so, in what circumstances? I am assuming this configuration would be used with a Windows OS whose memory pagefile resides on that stack of SSDs, thus making it very fast to work with.

  • How Do I Use Multiple Versions of OpenSSL ... One for Apache and one for PHP

    - by Ken S.
    I have an Apache 2.2 server (self-compiled) that is getting dinged during a PCI scan because it does not support TLS 1.1 or 1.2 ciphers. After some digging I found that the installed version of OpenSSL (0.9.8e) does not contain the newest TLS ciphers, so I downloaded and compiled the latest version of OpenSSL (1.0.1c) and installed it in an alternate location within /opt so it wouldn't interfere with the installed version. What I would like to do is compile Apache against the 1.0.1 libraries and keep the system-installed libraries for use with PHP, cURL, OpenSSH, etc. I'm hoping that doing it this way will let Apache use the newest TLS without breaking any other programs that require the old libraries. I thought I could do this by adding an entry to /etc/ld.so.conf that points to the new libraries, but I think this would conflict with the existing ones; i.e. two references to libcrypto could cause issues everywhere. The main reason for doing this is issues with PHP cURLing to external servers: PHP has problems with the latest OpenSSL libs, which would require edits to our PHP code. Would love some guidance on how best to accomplish this.
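
    A sketch of the build step (--with-ssl is the stock Apache 2.2 configure option for pointing at an alternate OpenSSL; the exact /opt path is an assumption):

        cd httpd-2.2.x
        ./configure --prefix=/usr/local/apache2 --enable-ssl --with-ssl=/opt/openssl-1.0.1c
        make && make install

    Linking with an rpath (e.g. LDFLAGS="-Wl,-rpath,/opt/openssl-1.0.1c/lib" before configure) lets httpd find the /opt libraries at runtime without touching /etc/ld.so.conf, which avoids the libcrypto conflict described above.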

  • Having two IP Routes/Gateways of last Resort on an HP Switch

    - by SteadH
    We have an HP layer-3 switch that is doing IP routing between VLANs. The general setup is that the switch has an IP address on each VLAN and IP routing is enabled. On our server VLAN, we have a firewall that has a connection to the outside world. To set an IP route on the HP switch, we use the IOS-style command ip route 0.0.0.0 0.0.0.0 192.168.2.1, where 192.168.2.1 is the address of our firewall and the zeros essentially mean "route all traffic the switch doesn't otherwise know what to do with out through the firewall as the gateway". We're in the middle of an ISP and firewall change. I set up the new firewall and ran the command ip route 0.0.0.0 0.0.0.0 192.168.2.254 (the address of the new firewall). Things started working nicely. When I reviewed the configuration of the switch, though, I noticed that it did not replace the previous ip route command, but just added another route. Now, I know how to remove the old firewall route (no ip route 0.0.0.0 0.0.0.0 192.168.2.1), but what is the effect of having these two 0.0.0.0 routes? Is it switch implosion? Will a server just respond back over the route the request came in on? I've read elsewhere that having two default gateways is an impossibility by definition, but I'm curious about this situation that our switch allowed. Thanks!

  • Emacs 24.1: How do I restore i-search Ctrl-Y behavior from older versions?

    - by Eric
    In Emacs 24.1, when you press Ctrl-Y in an interactive search, it yanks the kill ring into the search string ("it pastes the clipboard contents," in any-other-app's language) and tries to match it. In the last 20 versions or so before that, pressing Ctrl-Y matched the rest of the current line. I have two very common use cases: match this line, revert the buffer, and search for the line; and (less often) where else is this text in the buffer? I tried modifying lisp/isearch.el, switching the bindings for isearch-yank-line (which I want) and isearch-yank-kill (which I'm fine binding to the ridiculous \M-s\C-e key sequence). But I don't think this file even gets loaded; if I explicitly load it, I still get the 24.1 behavior. Here's my change:

        (add-hook 'isearch-mode-hook
                  (lambda ()
                    (define-key isearch-mode-map "\C-y" 'isearch-yank-line)
                    (define-key isearch-mode-map "\M-s\C-e" 'isearch-yank-kill)))

    No change in the behavior. I even tried hacking isearch.el itself; still no change. This is on Windows, by the way, but I suspect it doesn't matter. Could someone tell me how I can restore the old binding?
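
    A sketch of one likely fix: rebind in the map directly from the init file rather than inside a hook or by editing isearch.el (this assumes nothing else rebinds these keys later in startup):

        ;; isearch.el is preloaded, so its keymap already exists at init time:
        (define-key isearch-mode-map (kbd "C-y") 'isearch-yank-line)
        (define-key isearch-mode-map (kbd "M-s C-e") 'isearch-yank-kill)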
