Search Results

Search found 22653 results on 907 pages for 'case insensitive'.


  • Booting Windows from different partition than system

    - by szamil
    I have bought an SSD, but my laptop (Dell Precision M6300) refuses to use it as a target disk for Windows (AHCI on/off, BIOS up to date). Unfortunately I can't exchange the disk... But I did manage to install Windows onto it using a USB disk enclosure. The problem is that when I put the disk back in as my internal drive, it can't boot (disk read error, three-finger salute...). So I tried Linux (openSUSE); I managed to install it as well, but when I tried to boot GRUB from the internal drive I got errors again. (Should I try GRUB2?) I figured out that I can boot into the internal drive's openSUSE system using a small USB drive with GRUB, a kernel and an image on it: I run GRUB from the USB drive, it loads the necessary files from the USB drive, and then it continues from the internal drive. I want to do the same with Windows, but GRUB (rootnoverify and chainloader +1) does not boot the Windows on the internal drive. The question: is there any way to copy the critical Windows boot files onto the USB drive, so that booting starts from the USB drive but continues from the internal (or, in general, a different) drive? The USB drive would become a system hardware key! ;-) Disk: Plextor M5S 128GB SATA III; the laptop has SATA II, but that's compatible anyway, right?
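
    One direction worth trying (an editorial sketch, not from the original post): a GRUB legacy menu.lst stanza on the USB stick that remaps the BIOS drive order before chainloading, since the Windows boot code generally insists on living on the first BIOS disk. The device names are assumptions (USB stick seen as hd0, internal disk as hd1); adjust them to whatever GRUB's tab completion reports.

        title Windows (internal disk, chainloaded from USB GRUB)
        # swap BIOS drive numbers so the internal disk appears to be the first disk
        map (hd0) (hd1)
        map (hd1) (hd0)
        rootnoverify (hd1,0)
        makeactive
        chainloader +1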

  • Domain-joining debate for Outlook 2010 with Exchange 2007 on Windows SBS 2008, for a user on a laptop that will travel a fair amount of the time

    - by user71195
    I'm basically debating whether or not to join the domain on a laptop, and was wondering if anyone has had a similar experience. If the computer were staying in the office, it's a no-brainer: join the domain. In this case I have a user who will come into the office a few days a week and work remotely the rest of the time. There is a working VPN using an OpenVPN client/server, but it's not site-to-site. My knee-jerk reaction is to not join the domain, so that the user can have one profile that they always use. In this configuration, should Outlook work properly with the user's domain account, and should the shared calendar still work (at least once inside the VPN)? My concern with joining the domain would be the inability to log in to it when elsewhere. Is there maybe a way around this with caching or something? Would creating a second local login make sense for a user like this in any way? If so, why not just skip the domain join to begin with? Any thoughts on or experiences with this would be appreciated. Laptop OS: Windows 7 (not purchased yet; Pro if the domain join is needed). Server: SBS 2008, Exchange 2007. Outlook version: 2010. Thanks for any help, Mike

  • How do I find the Serial Number of a USB Drive?

    - by jamuraa
    I'm trying to re-enable USB Autoplay in a secure way, by installing a program on each of the computers that I use so that I can run my launcher (PStart in this case) whenever I plug in my specific USB drive. The tool that I'm using to enable this - AutoRunGuard - needs the serial number of the USB drive that I am using. I can't figure out where to find this in Windows. Ideally I would not need to install and run a separate program for this (seemingly) simple task. Since this is a pretty easy question, bonus points if you also tell me how to discover it in Linux. What steps do I need to take to retrieve a USB drive's serial number? UPDATE: Just in case people come here looking for the answer for AutoRunGuard, I discovered that it doesn't want the USB device serial number, but the volume serial number. The volume serial number can be found by going to the command line, navigating to the drive, and executing dir; it appears in the top two lines of the output - use it without the dash.
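
    For reference, a couple of hedged one-liners (the drive letter and device name below are examples, not taken from the post):

        REM Windows: show the volume serial number of drive E:
        vol E:

        # Linux: show the USB device serial as udev sees it (assuming the stick is /dev/sdb)
        udevadm info --query=property --name=/dev/sdb | grep -i serial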

  • Identifying test machines in analytics logs

    - by RTigger
    We're just beginning to add analytics to our SaaS application, in order to begin (among other things) billing clients based on usage. The problem we're running into is that in a few circumstances our support team will simulate a log-in to production to try to reproduce issues reported with a client's configuration. When they log in, an entry is made in our analytics logs saying that the client's account has logged in, which we use to calculate billing. A few ideas we had to solve this: 1) We log IP addresses as well as machine keys for each PC that logs in, so we could filter out known IP addresses and/or machine keys belonging to support. The drawback is that we have to maintain the list of keys/addresses manually. 2) If support (or anyone else internal) runs our application in debug mode (as opposed to release), it will not report analytics. This is fine, as long as support (or anyone else) remembers to switch to debug mode. 3) Include some sort of registry key or similar setting that must be set when configuring a production system in order to send analytics. Again, fine, as long as our infrastructure team remembers to set the key or setting. All of these approaches require some sort of human involvement, which we all know can be iffy at best. Has anyone run into a similar situation? Is there an automated approach to this problem? (PS: Of course we shouldn't be testing in production, but there are a few one-off instances of customer setup that we can't reproduce without logging in as them in production. That is the only time we do so, and it is the case I'm asking about in this question.)

  • Hosting options for data-enabled web application

    - by Hertfordian
    I am independently developing an ASP.NET business application with a MySQL database. I currently have a Windows web hosting account which includes MySQL and MS SQL as installed, supported options. I am not yet finally committed to using MySQL, and I want to keep my options open to evaluate MS SQL and possibly other options such as PostgreSQL later, when more of the business logic is in place - my data access layer will handle the database connectivity. The web hosting setup I have now is fine for development purposes, but if in future I want to use, say, PostgreSQL, with a level of usage of, say, 10,000 hits per day concentrated in business hours, I'm assuming I'll need a dedicated server. But in that case, should I just install PostgreSQL on the dedicated server, or is it best practice to have a separate database server - perhaps locked down so that it can only be accessed through the web server? And supposing it were only 2,000 hits a day - how would that change things? I'd appreciate it if anyone could point me in the direction of a useful guide to these sorts of issues. Naturally, if I start paying for separate servers, I would like to know exactly why I'm doing it and what the performance issues and thresholds are.
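
    A rough back-of-the-envelope load estimate (my arithmetic, not from the original post): 10,000 hits per day concentrated into roughly 8 business hours is 10,000 / (8 x 3,600) = about 0.35 requests per second on average, perhaps a few per second at peak; 2,000 hits per day is about 0.07 requests per second. Either figure is well within what a single modest server can serve unless individual requests are unusually heavy, so the decision is more about isolation, security and growth than raw throughput.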

  • HDD dead forever???

    - by Roberto
    Yesterday I turned on my computer and it couldn't boot. I found out the HDD (a 320GB SATA Seagate Momentus 7200.3 for notebooks) was broken; it couldn't be recognized by the BIOS. I have another drive of the same model, so I swapped the controller boards. I found out that there is a problem on the broken drive's board, since my good drive didn't work with it. But the broken drive doesn't work with the good board either: it can be recognized, but when I insert a Windows installation DVD it says the drive is 0GB. I put it in an enclosure and used it on another computer via USB, but it doesn't show up in My Computer. I used file-recovery software called "GetDataBack for NTFS"; it recognized the drive, but with the wrong size (2TB). When I try to make it read the drive, it gets an I/O error reading sectors. It tries to read, the drive spins... So, since I'm using a good board on it, the problem seems to be internal. Is there anything someone could do to recover the files from it?
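
    If the drive is at least partially readable over USB, one commonly suggested first step (a sketch, not from the original post) is to image whatever still reads with GNU ddrescue before attempting any further repair; the device and file names here are examples.

        # pass 1: copy the easy areas, skipping the slow scraping of bad sectors
        ddrescue -n /dev/sdb /mnt/backup/sdb.img /mnt/backup/sdb.log
        # pass 2: retry the bad areas a few times, resuming from the log file
        ddrescue -r3 /dev/sdb /mnt/backup/sdb.img /mnt/backup/sdb.log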

  • Recovering a broken NTFS filesystem?

    - by OverTheRainbow
    A much-needed Windows Update broke a Vista laptop that was running fine until then: after booting up, Windows displays "Please wait..." but never goes anywhere. I waited a couple of hours; there is a bit of disk activity, but it never gets further. I booted with the Vista DVD and chose "Repair your computer", which said there was nothing wrong :-/ Next, I booted from a Linux USB key drive and ran GParted 0.8.1 (which includes ntfsresize v2011.4.12AR.4 and libntfs-3g), which displays a bunch of warnings for the NTFS partition where the Vista system is located, such as:

        ntfs_mst_post_read_fixup: magic: 0x00000000 size: 1024 usa_ofs: 0 usa_count: 65535: Invalid argument
        Record 16 has no FILE magic (0x0)

    Next, I ran ntfsfix /dev/sda2, which said:

        Mounting volume... OK
        Processing of $MFT and $MFTMirr completed successfully.
        NTFS volume version is 3.1.
        NTFS partition /dev/sda2 was processed successfully.

    Next, I rebooted Vista, which ran CHKDSK before rebooting. But I'm still getting nowhere with "Please wait..." Before I copy the user's data to another host and reinstall Vista from a DVD, does someone know what else I could try? Thank you. Edit: In case someone else has the same issue... After the BIOS, hit F8 and choose "Repair your computer", followed by "Toshiba HDD Recovery". In addition to a 1.5GB partition labelled "WinRE", the hard disk contains a second partition labelled "Data" from which the application fetches a system image and reinstalls it into the "Vista" partition. Make sure you copy your data out of the system partition before doing this.

  • How can I determine the IP addresses allocated by DHCP on a router that I'm connected to?

    - by user234831
    This "router" is not a typical situation. I'm using my phone as a hotspot and can only configure a select number of DHCP options. I can manage the limit on how many devices/clients can use my phone as a hotspot. I have to select from a radio-button list with the options: 2,3,4,5, or 8 I can specify the DHCP starting IP address. In this case, it begins at 192.168.6.106 When I'm connected via WIFI to my phone, an ipconfig /all command shows me that the default gateway is 192.168.6.1 and my IPv4 address is 192.168.1.148. I have the luxury of connecting another device to the phone and that device was assigned 192.168.1.121. I've tried connecting to 192.168.6.1, hoping for some sort of router setup page that I'm used to seeing, but there is no such thing or maybe it's just a matter of incompatable operating systems. In summary, the "router" (phone) has an IP address of 192.168.6.1 and a DHCP server that begins at 192.168.6.106 and allows up to 8 connections. Normally, I would assume a range of 192.168.6.106 - 192.168.6.113, but connected clients are showing otherwise. How can I figure out which IP addresses are set aside by DHCP for clients?

  • Create DFS replica from a NAS drive

    - by Mark
    We have two offices at two different locations. In one we have a NAS with some shares. We also have a Domain Controller running Windows 2003 R2. We have set up a second Domain Controller, also on Windows 2003 R2, to put in the second office. What we would also like is to replicate the NAS drive onto the second Domain Controller, so that the second office has a local copy and their changes are replicated back to the NAS. Is there a way to set up DFS Replication to do this? Or will it only work with local folders on each server? Update 1 Sept: Based on the answer below, I think I need to add some clarification. The real issue is that the NAS which hosts the shared folder we want to replicate is external to both servers, and we have a particular share mapped to, say, S:. The replication setup doesn't seem to accept network shares external to the server as candidates for replication. I can understand why; I just need confirmation that DFSR will only work with block devices that are local to at least one server. Is this the case?

  • Is there such a thing as a file hosted container which deduplicates data held within?

    - by Mallow
    Background: I have backups of a website which stores all of its data in a single file. This file is several gigs large, and I have many different backups of it. Most of the data inside is the same from backup to backup, plus whatever was added or changed. I want to keep all the backups I've made through the years, in case I find a horrible surprise of data corruption somewhere along the line. However, storing a 10-gig file every month gets expensive.

    Seeking solution: I've often thought about different ways of alleviating this problem. One thought that comes up very often combines the idea of a deduplicating file system with something that doesn't require its own partitioned volume on a hard drive - something like TrueCrypt's "file-hosted containers", which the TrueCrypt program lets you mount and dismount as a regular hard drive.

    Question: Is there a virtual hard drive mounter that uses a file-based container backed by a data-deduplicating file system? (This question is a little awkward to put into words; if you have a better idea of how to ask it, please feel free to help out.)

  • When using software RAID and LVM on Linux, which IO scheduler and readahead settings are honored?

    - by andrew311
    In the case of multiple layers (physical drives - md - dm - LVM), how do the schedulers, readahead settings, and other disk settings interact? Imagine you have several disks (/dev/sda - /dev/sdd), all part of a software RAID device (/dev/md0) created with mdadm. Each device (including the physical disks and /dev/md0) has its own setting for the IO scheduler (changed like so) and readahead (changed using blockdev). When you throw in things like dm (crypto) and LVM, you add even more layers with their own settings. For example, if the physical device has a readahead of 128 blocks and the RAID has a readahead of 64 blocks, which is honored when I do a read from /dev/md0? Does the md driver attempt a 64-block read, which the physical device driver then translates to a read of 128 blocks? Or does the RAID readahead "pass through" to the underlying device, resulting in a 64-block read? The same kind of question holds for schedulers: do I have to worry about multiple layers of IO schedulers and how they interact, or does /dev/md0 effectively override the underlying schedulers? In my attempts to answer this question, I've dug up some interesting data on schedulers and tools which might help figure this out: "Linux Disk Scheduler Benchmarking" from Google; blktrace, which generates traces of the I/O traffic on block devices; and a relevant Linux kernel mailing list thread.
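
    A quick way to see what each layer is actually configured with (a sketch; the sd* and md0 names are from the question, the LVM device name is an assumption):

        # readahead (RA column, in 512-byte sectors) for every layer at once
        blockdev --report /dev/sd[a-d] /dev/md0 /dev/mapper/vg0-lv0
        # elevator in use for a physical member vs. the md device
        cat /sys/block/sda/queue/scheduler /sys/block/md0/queue/scheduler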

  • GPG - why am I encrypting with subkey instead of primary key?

    - by khedron
    When encrypting a file to send to a collaborator, I see this message:

        gpg: using subkey XXXX instead of primary key YYYY

    Why would that be? I've noticed that when they send me an encrypted file, it also appears to be encrypted to my subkey instead of my primary key. For me, this doesn't appear to be a problem; gpg (1.4.x, Mac OS X) just handles it and moves on. But for them, with their automated tool setup, this seems to be an issue, and they've requested that I be sure to use their primary key. I've tried to do some reading, and I have Michael Lucas's "GPG & PGP" book on order, but I'm not seeing why there's this distinction. I have read that the key used for signing and the key used for encryption would be different, but I assumed at first that was about public vs. private keys. In case it was a trust/validation issue, I went through the process of comparing fingerprints and verifying that, yes, I trust this key. While I was doing that, I noticed the primary key and subkey had different "usage" notes:

        primary: usage: SCA
        subkey:  usage: E

    "E" seems likely to mean "Encryption", but I haven't been able to find any documentation on this. Moreover, my collaborator has been using these tools and techniques for some years now, so why would this only be a problem for me?
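
    As a hedged aside (not from the original post): in GnuPG's key listings the usage letters stand for C = certify, S = sign, E = encrypt and A = authenticate, so "SCA" marks the signing/certification primary key and "E" the encryption subkey, which is why GnuPG directs encryption at the subkey. The flags can be inspected with gpg itself (the key ID below is a placeholder):

        gpg --edit-key 0xYOURKEYID
        gpg> list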

  • How to make lighttpd respect X-Forwarded-Proto when constructing redirects for directories?

    - by Tim Landscheidt
    We have an nginx proxy at tools.wmflabs.org that receives requests by http and https and passes them on by http to lighttpds on a grid (one lighttpd per top-level path). Requests that reach the proxy by https are received by the lighttpds like this:

        HEAD /lighttpd-test/test HTTP/1.1
        Connection: close
        Host: tools.wmflabs.org
        X-Forwarded-Proto: https
        X-Original-URI: /lighttpd-test/test
        User-Agent: curl/7.29.0
        Accept: */*

    This works great except in the case where the URL references a physical directory and misses the trailing slash ("/"), as lighttpd then generates a redirect to the http URL:

        HTTP/1.1 301 Moved Permanently
        Location: http://tools.wmflabs.org/lighttpd-test/test/
        Connection: close
        Date: Fri, 06 Jun 2014 14:50:29 GMT
        Server: lighttpd/1.4.28

    The relevant parts of our lighttpd configurations are:

        server.modules = (
          "mod_setenv",
          "mod_access",
          "mod_accesslog",
          "mod_alias",
          "mod_compress",
          "mod_redirect",
          "mod_rewrite",
          "mod_fastcgi",
          "mod_cgi",
        )
        server.port = $port
        [...]
        server.document-root = "$home/public_html"
        [...]
        server.follow-symlink = "enable"
        [...]
        server.stat-cache-engine = "fam"
        ssl.engine = "disable"
        alias.url = ( "/$tool" => "$home/public_html/" )
        index-file.names = ( "index.php", "index.html", "index.htm" )
        dir-listing.encoding = "utf-8"
        server.dir-listing = "disable"
        url.access-deny = ( "~", ".inc" )
        [...]

    How can I make lighttpd respect X-Forwarded-Proto and use it when constructing redirects for directories? I'm aware that I could try to tackle this in nginx, but I'd prefer to fix it in lighttpd if I can.
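
    One direction that may be worth testing (a sketch, not a verified fix: whether mod_extforward makes redirect generation honor X-Forwarded-Proto depends on the lighttpd version, and 1.4.28 is quite old): load mod_extforward and trust the nginx proxy's address, so lighttpd treats the forwarded headers as authoritative. The proxy IP below is a placeholder.

        server.modules += ( "mod_extforward" )
        extforward.forwarder = ( "10.0.0.1" => "trust" )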

  • How do I completely delete Ask.com from my computer?

    - by celyn
    I have used Final Uninstaller (unregistered version) to remove it. It removed the toolbar and the things in its folder under C:\Program Files\Ask.com, except for one thing: what remains is the "Ask.com" folder > "Updater" folder > "Updater.exe". I have not checked my registry yet, but if there is anything left in there, I want it gone! As for why I can't delete that updater: my laptop asks me for permission (says I need to be an admin) whenever I try to delete anything from the Ask.com folder, or the folder itself. I have googled, and I found and followed the instructions from "Scott McClenning" in this post. It does not really work; when I say "not really", I mean this error message pops up every time I try:

        An error occurred applying attributes to the file:
        C:\Program Files\Ask.com
        Access is denied.

    How can I gain access? I AM the admin on this computer. And... don't ask me to download too many things for my computer; it adds to my frustration. Just in case you are wondering, I got this from FormatFactory when I updated it to 2.70. I should not have done so. Update: After I restarted my computer, I got the "EVERYONE" group added, and it has Full Control with every box ticked except for the last one (Special). When I try to delete the folder and the .exe file, the error keeps popping up as I click "try again"; it only goes away when I click "cancel".
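
    Since the blocker is often the Updater.exe process itself being loaded, one sequence sometimes suggested on XP (a sketch; run from an administrator command prompt, and only if you are sure you want the folder gone) is:

        taskkill /f /im Updater.exe
        cacls "C:\Program Files\Ask.com" /T /E /G Administrators:F
        rd /s /q "C:\Program Files\Ask.com"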

  • Microsoft Word 2008 on the Mac sometimes "Disappears" documents, really.

    - by Ross Charette
    This happens in a computer lab environment and has happened at least 3 times. We are running Microsoft Office 2008 for Mac on Leopard; everything is updated. Our users' home directories are on a network drive, but the /Library/Cache folder is local. Typically a student will have a Word file that they have been working on; it was saved before they even logged onto the computer that day. They log on, open the document, click the save icon (not File > Save), sometimes even save multiple times, then close Word. The document is now gone. It's not hidden; there are no autosaves or anything in the Cache folder. It's definitely not in the Trash or Trashes folder. Word can't find it when you click on it in 'recent documents'. Searching meticulously through every folder in their home drive turns up nothing. They look using Finder; I look ssh'd in as root into their home using ls -la. I look for similar files in case they renamed it by mistake. It's gone. Disappeared. Vaporized. It's happened to at least 3 different users in the past year. Much whining. Any idea?

  • MSSQL 2008 login failed for windows authentication

    - by Force Flow
    I'm running Microsoft SQL Server 2008 on a Windows 2008 server. The server authentication mode is set to SQL Server and Windows Authentication. I have created an Active Directory security group "xyz app users". I have added a normal user (without any Active Directory admin privileges) and a user with domain admin privileges to the "xyz app users" group. I have added the group to the SQL Server Management console as a login. This group is a member of the public server role and is mapped to two databases. On a workstation, when the normal user is logged in, I configure a DSN ODBC connection, and I'm able to successfully create the DSN and test the SQL connection. However, when I'm logged in as the user with domain admin privileges and attempt to configure the DSN ODBC connection, I can't get past the login ID configuration screen. If I select "Windows authentication" and click "next", I get an error:

        Connection failed:
        SQLState: '28000'
        SQL Server Error: 18456
        [Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user 'mydomain\myuser'

    In the server's application event log, this error appears:

        Login failed for user 'mydomain\myuser'. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors. [CLIENT: 172.x.x.x]

    And in SQL Server's error log:

        Error: 18456, Severity: 14, State: 11

    Solutions that I've seen so far do not seem to fit this situation (some are only applicable when BUILTIN\Administrators is being used locally on the server, which is not the case here).
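
    One hedged diagnostic step (not from the original post): error 18456 state 11 usually means the Windows token was built but the server-level permission check failed; a UAC-filtered admin token or a DENY on another login that also matches the account are common culprits. On the server you can check which path is supposed to admit the account (the names below are the ones from the question):

        -- which login or group membership admits this user?
        EXEC xp_logininfo 'mydomain\myuser', 'all';
        -- confirm the group login exists and is enabled
        SELECT name, is_disabled FROM sys.server_principals WHERE name = 'MYDOMAIN\xyz app users';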

  • How many bootable partitions is it possible to have on one hard drive?

    - by draiden
    This may not be the correct place to post this; if that's the case, just let me know and point me in the right direction, please! I'm thinking of building a box that needs to be lightweight and portable and would need to boot multiple installations of Windows. I need multiple installations so that I can, for example, plug the box into the network at one location, boot into that location's partition, and have full access to everything I would normally need on a computer that has already been set up on that network. Then, when I go to the next client, I would be able to do the same thing with the new location's partition and have all of those network settings, drive mappings, etc. available there. Obviously I'd need to go through and set them all up on the different locations/networks; I'm not expecting it to magically know where I am and what I'm doing. It would be like carrying around, in one little box, a computer that is configured for each place I need to go, instead of having multiple computers or having to reconfigure all the settings every time I go to another client. Or is there an easier way to do this that I haven't learned of?

  • Asterisk relay between multiple subnets

    - by immoune
    I wonder what's the best way to go when you have phones on multiple networks which are not directly reachable. I have 3 networks: 10.3.x.x, 10.6.x.x and 10.17.x.x. My Asterisk server resides at 10.3.0.5. The machines on the 10.6 and 10.17 networks are routed here through VPN tunnels. At this point we are not talking about NAT anywhere on the network, just pure routing. Since the 10.3.0.5 PBX has routes back to all the subnets, it has no problem communicating with softphones/hardphones in these ranges. The problem is that Asterisk (as far as I understand) is only responsible for the SIP signalling, not the audio/video transmission, which is done peer-to-peer between the devices. So although a client using Sipdroid on 10.6.x.x is able to connect to the PBX (10.3.0.5) and dial a Bria client on the 10.17.x.x network, once the phone rings and the call is established no audio is transmitted, simply because the endpoints have no way to connect to each other directly. There are multiple solutions for this described in this text: http://msdn.microsoft.com/en-us/library/ee480411%28v=winembedded.60%29.aspx What I would prefer is to keep these networks segregated as they are now. What would be the best solution? Is it possible to actually relay all the audio/video through the Asterisk server? That would be best in my case; I'm using AstLinux there, which has a lot of other parts. Thanks
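
    For chan_sip, the usual way to force media through the PBX (a sketch; on Asterisk 1.4/1.6 the option is spelled canreinvite rather than directmedia) is:

        ; sip.conf - keep RTP flowing through Asterisk instead of peer-to-peer
        [general]
        directmedia=no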

  • mod_wsgi / Apache configuration file

    - by Kevin
    Guys, sorry, I'm a newbie to this, but I've been following the mod_wsgi configuration tutorial and it's very spotty. In my httpd.conf file I add the following under the 'Main' server configuration section (the comment block that says these directives set up the values used by the 'main' server, which responds to any requests not handled by a virtual host definition, and also provide defaults for any virtual host containers defined later in the file):

        ServerName wsgihost
        DocumentRoot "/Library/WebServer/Documents"
        <Directory "/Library/WebServer/Documents">
            Order allow,deny
            Allow from all
        </Directory>

        WSGIScriptAlias /myapp /Users/KL/modwsgi/env/myapp.wsgi
        <Directory "/Users/KL/modwsgi/env">
            <Files myapp.wsgi>
                Order allow,deny
                Allow from all
            </Files>
        </Directory>

    I also added the following to my hosts file:

        127.0.1.1 wsgihost

    but I can't seem to connect. Am I doing something terribly wrong?
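
    For comparison, the shape the mod_wsgi documentation generally suggests is a dedicated virtual host (a sketch using the Apache 2.2-style access directives from the post; the paths and ServerName are taken from the question, the rest is an assumption about this setup):

        <VirtualHost *:80>
            ServerName wsgihost
            DocumentRoot "/Library/WebServer/Documents"

            WSGIScriptAlias /myapp /Users/KL/modwsgi/env/myapp.wsgi
            <Directory "/Users/KL/modwsgi/env">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>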

  • Cannot install SQL Server CE 4

    - by Manos Dilaverakis
    I'm trying to install SQL Server CE 4 on a Windows XP Pro SP3 machine. I double-click on the file and absolutely nothing happens. There is nothing in the event viewer, and the only effect I can see is the addition of an empty, randomly named folder in C:\, which looks something like C:\7c59aaeb5e43f6bdcb2430e923. I've tried this with both SQL Server CE 4 and the SP1 version. I've tried disabling the AV (NOD32) file protection, but it didn't make a difference. I've checked the installed program list in case it's already installed, but I don't see it anywhere. I checked in C:\Program Files\Microsoft SQL Server Compact Edition\ and there's only the \3.5 folder in there, from the already installed 3.5 version. Does anyone know what's going on or how I can further diagnose the problem? Edit in response to Ramhound: I have .NET 4 installed. Why, does it need a particular version? Edit in response to leinad13: I tried Process Explorer and filtered by the name of the temporary folder created. I see the following, but can't make much sense of it.

  • upload process times out for different time zones

    - by shilezi
    I have an in-house (NY) app that clients can upload files to. Usually uploads go pretty quickly for most clients, but one particular client in the UK always has problems with uploads. I'm not sure whether they see any errors, but since we log all exceptions, we see this:

        Error: System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.Net.WebException: The operation has timed out
           at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request)
           at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request)
           at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
           at ...

    This shouldn't even be an issue, because we noticed they had had this problem before and bumped these up to maxRequestLength="2097151" and executionTimeout="14400". Investigating the error further, I read that it could be a thread timeout, since the default is 20 minutes:

        A worker process with process id of serving application pool was shutdown due to inactivity. Application Pool timeout configuration was set to 20 minutes. A new worker process will be started when needed.

    The problem is I am not entirely sure that's really the case: all the other clients, mostly North American, have no issues, but then their uploads don't seem to go beyond 80MB, and the UK client has done a 700MB upload before that I know of. We have tested 750MB before, and the whole process took about 15 minutes for upload and processing. Any help on what the real issue here might be? Thanks.
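
    For reference, these are the places such limits typically live in an ASP.NET web.config (a sketch echoing the values from the question; the requestLimits element applies only on IIS7+ and is an assumption about this setup - note that maxRequestLength is in KB while maxAllowedContentLength is in bytes):

        <system.web>
          <httpRuntime maxRequestLength="2097151" executionTimeout="14400" />
        </system.web>
        <system.webServer>
          <security>
            <requestFiltering>
              <requestLimits maxAllowedContentLength="2147482624" />
            </requestFiltering>
          </security>
        </system.webServer>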

  • AT&T Upload Filtering?

    - by xpda
    Using an AT&T DSL, I cannot ftp upload or ftp download a few files of a large 1500 set. The problem is the file name. I can change a few characters of the file name, and they upload fine. I can change the filenames from upper to lower case and they upload fine. If I change back to the original filename, it will not upload again. When it doesn't upload, it starts, transfers about 5% of a 5-10 meg file, and then times out. I have uploaded one of the files under a different name, changed the name back to the original, and it will not download via ftp. It will download onto a browser, and it will ftp download just fine with a different name. It just will not download with ftp. I have reproduced this uploading to three different servers on 1and1 and Amazon EC2. When I try it on a non-AT&T ISP client, it works OK. Here is a file that did not upload until I had renamed it. (I have changed it back to the original name): "http://xpda.com/nautnew/11302 STOVER POINT TO PORT BROWNSVILLE SIDE A.png" This problem is unrelated to connection, speed, and file content. Only things I can see that makes a difference are the file name and ATT DSL. Does ATT have some kind of ftp file filtering? Is there anything else that could cause this behavior?

  • Why is /dev/urandom only readable by root since Ubuntu 12.04 and how can I "fix" it?

    - by Joe Hopfgartner
    I used to work with Ubuntu 10.04 templates on a lot of servers. Since changing to 12.04 I have problems that I've now isolated: the /dev/urandom device is only accessible to root. This caused SSL engines, at least in PHP, to fail - for example file_get_contents('https://...'). It also broke Redmine. After a chmod 644 it works fine, but that doesn't survive a reboot. So my question: why is this? I see no security risk, because... I mean... want to steal some random data? And how can I "fix" it? The servers are isolated and used by only one application; that's why I use OpenVZ. I'm thinking about something like a runlevel script... but how do I do it efficiently? Maybe with dpkg or apt? The same goes for /dev/shm; in that case I totally understand why it's not accessible, but I assume I can "fix" it the same way as /dev/urandom.
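
    One way this is commonly made persistent (a sketch, assuming udev is actually managing /dev inside the container - in some OpenVZ setups it is not, in which case an rc.local or init-script chmod is the fallback); the rule file name below is an example:

        # /etc/udev/rules.d/40-urandom-world-readable.rules
        KERNEL=="urandom", MODE="0666"
        KERNEL=="random",  MODE="0666"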

  • Hard Drive problem: is it the SATA controller or the HDD itself?

    - by Drooling_Sheep
    I have a Samsung 1.5TB hard drive hooked up to an ECS H55H-I mini-ITX motherboard. I have XBMC 10 (a modified Ubuntu 10.04) installed for use as an HTPC. The hard drive encounters occasional errors during normal use which cause it to be remounted read-only. I have updated the BIOS on the motherboard, changed the SATA cable and moved it to different ports on the motherboard, and installed and re-installed the OS (including different versions of XBMC and generic Ubuntu), all to no avail. I recently ran tests with both badblocks -sv and smartctl -t long; both reported no errors. This makes me think the motherboard or SATA controller is probably the issue. Does anyone know of any further tests I can do to help narrow this down? The processor is a Core i3; I forget the model number, but it's one of the 32nm ones with on-package graphics. There's no discrete video card or optical drive. The power supply is a 150W Rosewill (pretty sure) that came with the case.

  • File Access/Rename Issue

    - by Moon .
    Guys, I am having a serious problem with media files (only video files). When I try to rename or delete files, or the folders that contain media files, I get an error: "Access Denied... Some other program might be using this file..." and so on. Previously, for this kind of problem, I had a freeware tool called "Unlocker" that used to work. What Unlocker does is simply kill the process that is using the file. But in my case the process using the files is now "explorer.exe". W.T.F... A strange behavior: when I copy-paste the same file, I can rename the copy, I can delete it, I can do whatever I want with it... but what is wrong with the original file? Check the following list; it will give you a better idea of my situation. I am using Win XP SP2. I have Div Player installed (latest). The media files are both from my hard drive, created under many of my previous Windows installations, and newly downloaded files, so file ownership or privacy is not an issue. I may not be able to rename/delete those files one moment, and the next moment, on the next try, I might succeed. The problem is a major one and I can't figure it out. Please F1! It's getting on my nerves.
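
    One workaround often suggested for exactly this symptom on XP (a hedged aside, not from the original post): the lock on video files frequently comes from the shell's built-in video metadata/preview handler, which can be unregistered; the second command re-registers it if you change your mind.

        regsvr32 /u shmedia.dll
        regsvr32 shmedia.dll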
