Search Results

Search found 6992 results on 280 pages for 'exist'.

Page 196 of 280

  • Redmine gives 404 error after installation

    - by Sankaranand
    I am using Debian Squeeze with nginx and MySQL. After running the Redmine rake tasks to migrate the database and load the default data, visiting Redmine in a browser at http://ipaddress:8080/redmine gives a 404 error: "Page not found. The page you were trying to access doesn't exist or has been removed." My nginx configuration file is below:

        server {
            listen      8080;
            server_name localhost;
            server_name_in_redirect off;
            #charset koi8-r;
            #access_log logs/host.access.log main;

            location / {
                root  html;
                index index.html index.htm index.php;
            }

            location ^~ /phpmyadmin/ {
                root  /usr/share/phpmyadmin;
                index index.php;
                include fastcgi_params;
                fastcgi_pass  127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /usr/share/$fastcgi_script_name;
            }

            location /redmine/ {
                root       /usr/local/lib/redmine-1.2/public;
                access_log /usr/local/lib/redmine-1.2/log/access.log;
                error_log  /usr/local/lib/redmine-1.2/log/error.log;
                passenger_enabled on;
                allow all;
            }

            location ~ \.php$ {
                fastcgi_pass  127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name;
                include fastcgi_params;
            }

            location /phpMyadmin {
                rewrite ^/* /phpmyadmin last;
            }
        }

    I don't know what the problem is; this is my second attempt to install Redmine on Debian with nginx.
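    One thing stands out in the configuration above: with a root directive, nginx appends the request URI to the path, so a request for /redmine/ is looked up at /usr/local/lib/redmine-1.2/public/redmine/, which does not exist, and Passenger is never told the application is mounted under a sub-URI. A minimal sketch of the sub-URI arrangement from Passenger's own docs, assuming this nginx build was compiled with Passenger support (the /var/www site root is a placeholder):

        # Symlink the app's public/ dir into the site root, per Passenger docs:
        #   ln -s /usr/local/lib/redmine-1.2/public /var/www/redmine
        server {
            listen      8080;
            server_name localhost;
            root /var/www;                 # contains the "redmine" symlink
            passenger_enabled on;
            passenger_base_uri /redmine;   # app is mounted under /redmine
        }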

    Read the article

  • Virtualbox Headless Server on Ubuntu missing VRDP Options

    - by The Daemons Advocate
    I'm running a headless VirtualBox server on a 64-bit Ubuntu host, and I want to use it remotely. However, I'm having problems connecting via RDP. The DNS names on my network show the host as 'server' and the guest as 'ubuntu-vm'. From the official documentation, I gather that I am to connect to the host on the default RDP port in order to see the guest machine. I start the virtual machine like so:

        vboxheadless -startvm My_VM

    Then I connect from my laptop and get:

        rdesktop -a 16 server
        ERROR: server: unable to connect

    So next I consulted the documentation further and found that there are RDP flags that can be turned on (but which should be on implicitly for a headless server). Pulling up the VM's information with 'vboxmanage showvminfo My_VM', I find the VRDP property is off:

        VRDP Connection: not active

    To make things even weirder, the RDP flag seems to be missing from vboxmanage entirely. I installed straight from the Ubuntu repos using the virtualbox-ose package, and I'm not sure how that measures up against the official docs. For instance, this command doesn't exist:

        VBoxManage modifyvm My_VM --vrdp on

    In the GUI, the 'Remote Display' section of the VM's Display settings is greyed out. What I'm looking for is advice :). I'm open to suggestions that don't involve starting again with something like VMware. Thanks in advance!
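    The missing flag is expected with that package: VRDP is part of VirtualBox's closed-source edition, not the OSE build, so virtualbox-ose simply has no RDP server to switch on. A sketch of the usual way out, assuming Oracle's apt repository carries a build for this Ubuntu release (the 'lucid' release name and package version below are assumptions; substitute your own):

        # /etc/apt/sources.list.d/virtualbox.list  (repo line is an assumption)
        deb http://download.virtualbox.org/virtualbox/debian lucid non-free

        sudo apt-get update
        sudo apt-get install virtualbox-3.2      # PUEL edition, includes VRDP
        VBoxManage modifyvm My_VM --vrdp on      # the flag from the official docs
        VBoxManage modifyvm My_VM --vrdpport 3389

    An alternative that stays fully open-source is to serve the guest display over VNC instead of RDP, if the OSE build on this host offers a VNC option to VBoxHeadless.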

    Read the article

  • "File" exists or not

    - by SnailTang
    Running ls -il gives:

        ls: cannot access éaj/p+st.ó·e: No such file or directory
        ls: cannot access éaj/p+st.ó·e: No such file or directory
        ls: cannot access é@j/p¦ft.¦·N: No such file or directory
        ls: cannot access é@j/p¦ft.¦·N: No such file or directory
        total 55456
        ? -????????? ? ? ? ? ?            éaj/p+st.ó·e
        ? -????????? ? ? ? ? ?            éaj/p+st.ó·e
        ? -????????? ? ? ? ? ?            é@j/p¦ft.¦·N
        ? -????????? ? ? ? ? ?            é@j/p¦ft.¦·N

    When I try to display the files themselves, all I get is:

        p+st.ó·e
        p¦ft.¦·N

    Where do these files (or whatever they are) actually exist, and what makes them show up here?
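    Entries like these, where even stat() fails and the names contain bytes that are invalid in the current locale, usually mean either filesystem corruption or names written under a different character encoding. Since the names themselves are unreliable, the safe way to inspect or remove such entries is by inode number; a sketch (123456 stands in for whatever inode number ls reports):

        ls -lib                                  # -b escapes non-printable bytes
        find . -maxdepth 1 -inum 123456 -ls             # confirm the right entry
        find . -maxdepth 1 -inum 123456 -exec rm -i {} \;

    If ls shows ? even in the inode column, as above, stat() is failing outright, and an fsck of the filesystem is a more likely fix than any rm.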

    Read the article

  • Retail Windows XP Professional SP2: lost the key but have the genuine CD, what to do?

    - by AdityaGameProgrammer
    I recently formatted my system, only to find out that I have lost the CD key to my original CD; I had used the option to enter the product key later. Yes, I know it was a stupid thing to do, but I bought the CD in 2008 from a retail store and I lost the original packaging. The actual label on the CD reads "Includes Service Pack, Version 2002, © 2004 Microsoft Corporation", and there are some numbers on the back side of the CD in the inner ring.

    I can't for the life of me figure out what use the genuine CD is to me when I can't seem to activate it. What exactly is the advantage of having the original CD in your possession in situations like this? I have tried unattend.txt, and it doesn't contain the correct key, and there is no winnt.sif file on the CD. Where on (or in) the CD can I find the product ID information?

    I live in India, and my attempts at the Microsoft support site keep redirecting me to a page which says they stopped support for Windows XP in 2011. Let's say that by some miracle I do contact Microsoft: what information would I have to provide them, and would they give me the product key for my CD from their database, or a new key?

    Read the article

  • VMware Workstation executes a nonexistent, outdated file

    - by RED SOFT ADAIR-StefanWoe
    I execute a command-line program in a VM (VMware 7.1.1) running Windows XP. The executable file is located on the host machine. If I open a command prompt in the VM and use a drive mounted via .host\SharedFolders, I see the following:

        D:\projects\myProgram\WinRel>dir myProgram.exe
        02.09.2010  21:15    245.760 myProgram.exe

        D:\projects\myProgram\WinRel>myProgram.exe
        Processing Build: Feb 26 2009

    This is wrong! The whole execution of the program behaves like the version that is more than a year out of date. I have triple-checked that there is no mix-up. If I start the program on the host, or even from the VM using a UNC path, it shows the latest build date and executes as expected:

        C:\>dir \\myMachine\drive_d\projects\myProgram\WinRel\myProgram.exe
        02.09.2010  21:15    245.760 myProgram.exe

        C:\>\\myMachine\drive_d\projects\myProgram\WinRel\myProgram.exe
        Processing Build: Sep 2 2010

    Can this behavior somehow be explained? There must be a cache for the host-mounted drive, because the program it executes no longer exists: if I remove it from the host, the VM cannot execute it any more, and if I restore it, the stale behavior returns.

    Read the article

  • How to fix a Postfix/MySQL/Dovecot Unknown Host Issue?

    - by thiesdiggity
    I am having an issue with one of my Postfix/Dovecot mail servers and I'm unsure how to fix the problem. I will try to explain it in detail.

    I have an Ubuntu server set up for virtual hosting with Postfix, Dovecot and MySQL. We have one domain set up as a virtual domain; for this example I am going to use mail.example.com. Under that domain we have one email address. I have another server (MS Exchange) set up on another of my sub-domains, ex.example.com.

    The problem is that when I connect via SMTP to the account on mail.example.com and try to send an email to an account on ex.example.com, the email is returned with an "unknown host" error. Now, I know that the mail.example.com server can resolve the ex.example.com domain, because I can ping and dig it while SSH'd in. I can also log into Postfix via telnet and send an email to an ex.example.com mailbox.

    I'm guessing that it has something to do with Postfix/Dovecot looking locally for the domain in the virtual domain list because of the parent domain (example.com). If that's the case, how do I get Postfix/Dovecot to look locally only for the full host name (mail.example.com) and, if it doesn't find it, send the message to the correct server by looking up the MX/A records (which I know exist and are set up correctly)? I have been working on this all day and any guidance would be GREATLY appreciated! Thanks for your time!
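    That hypothesis is easy to test: if ex.example.com (or a catch-all entry for example.com) appears in Postfix's local or virtual domain lists, Postfix delivers locally and never consults the MX record. A sketch of the checks (the MySQL map file name is a placeholder for whatever main.cf actually references):

        postconf mydestination relay_domains virtual_mailbox_domains virtual_alias_domains
        # If the virtual domains come from MySQL, query the map directly:
        postmap -q ex.example.com mysql:/etc/postfix/mysql-virtual-domains.cf

    If the Exchange sub-domain shows up in any of these, removing it from the local/virtual domain lists makes Postfix fall back to ordinary DNS-based remote delivery for it.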

    Read the article

  • Debian's WordPress with broken plugin path?

    - by Vinícius Ferrão
    I've installed WordPress from the Debian Wheezy package system, and the plugins folder appears to be broken. As stated in the Apache2 error log:

        [error] File does not exist: /var/lib/wordpress/wp-content/plugins/var

    The plugins are looking for a URL based on the full filesystem path, not the relative path. I can "temporarily fix" the problem by making a symbolic link to /var inside the plugins folder, but I know that this is wrong and dirty. I don't know where to start debugging this, so any help is welcome. Additional information:

    /etc/wordpress/htaccess:

        # Multisites generated htaccess
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        # add a trailing slash to /wp-admin
        RewriteRule ^([_0-9a-zA-Z-]+/)?wp-admin$ $1wp-admin/ [R=301,L]
        RewriteCond %{REQUEST_FILENAME} -f [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^ - [L]
        RewriteRule ^([_0-9a-zA-Z-]+/)?(wp-(content|admin|includes).*) $2 [L]
        RewriteRule ^([_0-9a-zA-Z-]+/)?(.*\.php)$ $2 [L]
        RewriteRule . index.php [L]

    Apache2 configuration file:

        <VirtualHost *:80>
            Alias /wp-content /var/lib/wordpress/wp-content
            DocumentRoot /usr/share/wordpress
            ServerAdmin [email protected]

            <Directory /usr/share/wordpress>
                Options FollowSymLinks
                AllowOverride Limit Options FileInfo
                DirectoryIndex index.php
                Order allow,deny
                Allow from all
            </Directory>

            <Directory /var/lib/wordpress/wp-content>
                Options FollowSymLinks
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Thanks in advance,
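    Debian splits WordPress between /usr/share/wordpress (code) and /var/lib/wordpress/wp-content (data), and relies on the per-site file under /etc/wordpress/ to tell WordPress where wp-content lives and what its public URL is. If those constants are missing, plugins fall back to computing URLs from the filesystem path, which fits the .../plugins/var error above. A sketch of the relevant lines, assuming Debian's usual config-<hostname>.php convention (the host name is a placeholder):

        <?php
        // /etc/wordpress/config-www.example.com.php
        define('WP_CONTENT_DIR', '/var/lib/wordpress/wp-content');      // filesystem path
        define('WP_CONTENT_URL', 'http://www.example.com/wp-content');  // public URL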

    Read the article

  • Windows 7 disk backup and clone for deployment to multiple systems

    - by gregmac
    I'm in the process of deploying some new PCs (there are only 8), all with identical hardware. What I'd like to do is install Windows 7 (64-bit), join the domain, install a bunch of other software, and then clone that drive to the other machines. I'd also like to be able to use the clone as a backup image, so a machine can be restored to it at some future date.

    I understand this involves at least sysprep, but I am confused after reading tutorials that talk about using the Windows Automated Installation Kit, or hacks with the registry and custom-built batch files. This process seems overly complex to me: I did something similar 10+ years ago, and I don't remember it being this bad. Surely things have improved in a decade? There are also products that involve network servers running deployment software, network boot, and so on; that is way more than I want to set up.

    My systems all have identical hardware. Is there a simplified way to clone PCs? Preferably (since I'm a lazy developer, and not an IT admin) I'd like some off-the-shelf product that I can run after I get the machine set up, which will spit out a bootable DVD I can run on all the other systems, and which will boot up, ask for a computer name, join it to the domain, and be done. Does such a product exist?
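    For a handful of identical machines, the core loop is just sysprep plus an imaging tool; the WAIK answer files and deployment servers are optional refinements. A minimal sketch, assuming ImageX from the free WAIK and a WinPE boot disc, and noting that the domain join has to happen after cloning, because sysprep /generalize strips the machine identity:

        rem On the reference PC, once everything is installed:
        c:\windows\system32\sysprep\sysprep.exe /generalize /oobe /shutdown

        rem Booted into WinPE on the reference PC, capture the image:
        imagex /compress fast /capture c: d:\win7-base.wim "Win7 base"

        rem Booted into WinPE on each target PC (partition prepared beforehand):
        imagex /apply d:\win7-base.wim 1 c:

    At first boot each clone asks for a computer name and can be joined to the domain by hand, which for 8 machines is arguably simpler than automating it with an unattend.xml. The same .wim doubles as the restore image.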

    Read the article

  • Exporting Client Data from GroupWise 6.5 to Outlook 2010 without Crashing

    - by Adam Doherty
    My employer has recently moved from Novell GroupWise 6.5 to Exchange 2010. We've imposed mailbox limits on staff, but we still need to move their old messages, contacts, calendars, etc. over to Outlook 2010.

    Our problem is this: using the Novell MAPI client inside Outlook 2010 is slow, and exporting messages to a PST file (for later re-attachment, and for offline backup purposes) crashes the GroupWise server. Connecting to the server from Outlook via IMAP to export messages to PST is faster and apparently more stable, but it also crashes the server.

    We'll be keeping our GroupWise server online internally until the end of the year, but I have staff with mailboxes approaching 12 gigabytes, which is fine if we can move the data to offline storage (DVD sets); however, if I crash the server every time I try to get the data, I'll just be spinning my wheels. In my first attempts, I tried to move mail for a staff member with 3 GB of data; the transfer ran roughly 8 hours before crashing.

    I'm wondering if there is an open-source solution to my problem. Paid solutions exist, but we're a not-for-profit organization and have too many staff to justify per-seat licenses just to migrate mail.
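    Since GroupWise 6.5 already speaks IMAP, one open-source candidate is imapsync, which copies mailboxes server-to-server over IMAP and re-runs incrementally, so a crash partway through costs only the current folder rather than the whole 8-hour transfer. A sketch for one user (host names and credentials are placeholders; this covers mail only, not contacts or calendars):

        imapsync \
          --host1 groupwise.example.org --user1 jdoe --password1 'secret1' \
          --host2 exchange.example.org  --user2 jdoe --password2 'secret2' \
          --subscribe --syncinternaldates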

    Read the article

  • Authenticate users with Zimbra LDAP Server from other CentOS clients

    - by efesaid
    I'm wondering how I can integrate my database, web, backup, etc. CentOS servers with the Zimbra LDAP server. Does it require more advanced configuration than standard LDAP authentication?

    My Zimbra server version is:

        [zimbra@zimbra ~]$ zmcontrol -v
        Release 8.0.5_GA_5839.RHEL6_64_20130910123908 RHEL6_64 FOSS edition.

    My LDAP server status is:

        [zimbra@ldap ~]$ zmcontrol status
        Host ldap.domain.com
                ldap                    Running
                snmp                    Running
                stats                   Running
                zmconfigd               Running

    I have already installed the nss-pam-ldapd packages on my servers:

        [root@www]# rpm -qa | grep ldap
        nss-pam-ldapd-0.7.5-18.2.el6_4.x86_64
        apr-util-ldap-1.3.9-3.el6_0.1.x86_64
        pam_ldap-185-11.el6.x86_64
        openldap-2.4.23-32.el6_4.1.x86_64

    My /etc/nslcd.conf is:

        [root@www]# tail -n 7 /etc/nslcd.conf
        uid nslcd
        gid ldap
        # This comment prevents repeated auto-migration of settings.
        uri ldap://ldap.domain.com
        base dc=domain,dc=com
        binddn uid=zimbra,cn=admins,cn=zimbra
        bindpw **pass**
        ssl no
        tls_cacertdir /etc/openldap/cacerts

    But when I run id, the user is not found:

        [root@www ~]# id username
        id: username: No such user

    I am sure that the user 'username' exists on the LDAP server.

    EDIT: When I run ldapsearch, I get the full result, with credentials and DNs:

        [root@www ~]# ldapsearch -H ldap://ldap.domain.com:389 -w **pass** -D uid=zimbra,cn=admins,cn=zimbra -x 'objectclass=*'
        # extended LDIF
        #
        # LDAPv3
        # base <dc=domain,dc=com> (default) with scope subtree
        # filter: objectclass=*
        # requesting: ALL
        #

        # domain.com
        dn: dc=domain,dc=com
        zimbraDomainType: local
        zimbraDomainStatus: active
        . . .
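    Since the bind clearly works (ldapsearch returns the tree), the usual culprits are that NSS was never told to consult LDAP, that nslcd isn't running, or that the Zimbra accounts lack the posixAccount attributes (uidNumber, gidNumber, homeDirectory) that id needs. A sketch of the client-side checks, assuming stock nss-pam-ldapd on CentOS 6:

        # /etc/nsswitch.conf must route lookups through ldap:
        #   passwd: files ldap
        #   shadow: files ldap
        #   group:  files ldap

        service nslcd start && chkconfig nslcd on
        getent passwd username     # goes through nslcd, unlike a raw ldapsearch

        # Verify the entry actually carries POSIX attributes:
        ldapsearch -H ldap://ldap.domain.com:389 -w **pass** \
          -D uid=zimbra,cn=admins,cn=zimbra -x '(uid=username)' uidNumber gidNumber

    Note that out of the box Zimbra accounts are not posixAccounts; if that last query returns no uidNumber, the attributes have to be added (Zimbra offers an optional POSIX-account extension for this) before system logins can resolve.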

    Read the article

  • Missing Home Folder XP Clients 2008R2 Domain

    - by minamhere
    We just completed a migration from Server 2003 to Server 2008 R2. Everything seems to have gone well, except that many of our desktops have stopped mapping the Home Folder as set in Active Directory. Other mappings that are defined on individual clients map just fine, and those mappings are on the same file servers as the failing Home Folders. Half of the users are on one file server and half on another, and users from both servers are having this problem.

    I have enabled the Group Policy setting to "Wait for network before logging in" and the policy to "Run logon scripts synchronously". There are no errors on the Domain Controller or on either file server. When I enabled Group Policy Preferences as an attempted workaround, I get this error:

        The user 'V:' preference item in the '<Policy Name>' Group Policy object did not apply because it failed with error code '0x800708CA This network connection does not exist.' This error was suppressed.

    This seems to indicate that the network connection is not ready by the time Group Policy is processed. But isn't that the point of the "Wait before logging in" and "Run logon scripts synchronously" settings?

    Some other background facts: the new Server 2008 R2 installation is a virtual machine. It is on a new subnet, in a different building from the old server. DNS and DHCP were also migrated from the old DC to this new DC. These Home Folders all worked properly before the migration.

    Are there new security restrictions or policies in Server 2008 R2 that might be causing this? Is there a way to check whether I have an underlying network connectivity issue? Maybe moving the server to the new building is causing a delay or timeout? Any thoughts or ideas on what could be causing this, or how I can resolve it? Thanks.
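    Two details worth checking first: on XP, Group Policy Preferences only work once the client-side extensions (KB943729) are installed, and the policy XP actually honors for this timing problem is "Always wait for the network at computer startup and logon" under Computer Configuration > Administrative Templates > System > Logon. While that is being chased down, a retrying logon script is a common stopgap; a sketch (server, share, and drive letter are placeholders):

        @echo off
        rem Retry the home-drive mapping until the network stack is ready.
        set tries=0
        :retry
        net use V: \\fileserver\users\%USERNAME% /persistent:no && goto done
        set /a tries+=1
        if %tries% lss 5 (
            ping -n 3 127.0.0.1 >nul
            goto retry
        )
        :done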

    Read the article

  • Slow upload speeds with pfsense virtual appliance

    - by Justin Shin
    I have a pfSense virtual appliance set up in front of a Windows server. The appliance has been configured with two site-to-site IPsec VPNs and not much else. It has two vNICs, which both exist on the same VLAN, but one is "WAN" and the other is "LAN".

    When I run speedtest.net on the Windows server configured with a static WAN address and gateway, I get great speeds, maybe around 50 Mbps down and 15 up. However, when I configure it with a private IP address behind the appliance, I get similar download speeds but terrible upload speeds, around 2 or 3 Mbps consistently. I used Wireshark to see what gives, but there didn't appear to be much helpful information there, or I just could not find it.

    Besides the site-to-site VPNs, the other configuration includes:

        Automatic outbound NAT
        Virtual proxy-ARP IP for the Windows server
        WAN firewall rule to allow * to * on RDP
        WAN firewall rule to allow * to * (enabled just for testing; didn't help!)
        No DHCP or any other services besides IPsec VPN
        No errors on LAN or WAN
        No collisions on LAN or WAN

    I would be happy to post the full config file if it would help. I've been scratching my head at this one all day!

    Read the article

  • Conditionally changing MIME type in nginx

    - by Peter
    I'm using nginx as a frontend to Rails. All pages are cached as .html files on disk, and nginx serves these files if they exist. I want to send the correct MIME type for feeds (application/rss+xml), but the way I have it so far is quite ugly, and I'm wondering if there is a cleaner way. Here is my config:

        location ~ /feed/$ {
            types {}
            default_type application/rss+xml;
            root /var/www/cache/;

            if (-f $request_filename/index.html) {
                rewrite (.*) $1/index.html break;
            }
            if (-f $request_filename.html) {
                rewrite (.*) $1.html break;
            }
            if (-f $request_filename) {
                break;
            }
            if (!-f $request_filename) {
                proxy_pass http://mongrel;
                break;
            }
        }

        location / {
            root /var/www/cache/;

            if (-f $request_filename/index.html) {
                rewrite (.*) $1/index.html break;
            }
            if (-f $request_filename.html) {
                rewrite (.*) $1.html break;
            }
            if (-f $request_filename) {
                break;
            }
            if (!-f $request_filename) {
                proxy_pass http://mongrel;
                break;
            }
        }

    My questions:

    1. Is there a better way to change the MIME type? All cached files have .html extensions, and I cannot change this.
    2. Is there a way to factor out the if conditions shared by /feed/$ and /? I understand that I can use include, but I'm hoping for a better way; putting part of the config in a different file hurts readability.
    3. Can you spot any bugs in the if conditions?

    I'm using nginx 0.6.32 (Debian Lenny) and prefer to stick with the version in APT. Thanks.
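    On nginx 0.7.27 and later, try_files expresses this whole if-chain in one line per location, which answers the factoring question as well; a sketch (it assumes upgrading past the Lenny package, since 0.6.32 predates try_files):

        location ~ /feed/$ {
            root /var/www/cache/;
            types {}
            default_type application/rss+xml;
            try_files $uri/index.html $uri.html $uri @mongrel;
        }

        location / {
            root /var/www/cache/;
            try_files $uri/index.html $uri.html $uri @mongrel;
        }

        location @mongrel {
            proxy_pass http://mongrel;
        }

    The named @mongrel location replaces the duplicated proxy_pass fallback, and the per-location types {} / default_type pair is still what flips the MIME type for the feed URLs.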

    Read the article

  • Ubuntu 12.04 cloud edition on Amazon - Apache2 - /etc

    - by jdog
    I have set up a web server on Amazon with 3 virtual hosts. For some reason I can't get any of the sites working; they all show a 404 error, and /var/log/apache2/error.log shows:

        File does not exist: /etc/apache2/htdocs

    I have checked:

        a2ensite for all my virtual hosts; the softlinks in sites-enabled actually exist
        access rights in /var/www (set to 777, in case the user is not www-data)
        grep -r htdocs /etc/apache2 (returns nothing)
        ports.conf has a NameVirtualHost directive exactly matching the virtual hosts

    What else could this be?

    ports.conf:

        # If you just change the port or add more ports here, you will likely also
        # have to change the VirtualHost statement in
        # /etc/apache2/sites-enabled/000-default
        # This is also true if you have upgraded from before 2.2.9-3 (i.e. from
        # Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and
        # README.Debian.gz

        NameVirtualHost 107.20.169.163:80
        Listen 80

        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            Listen 443
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>

    sites-available/www.seleconlight.com:

        <VirtualHost 107.20.169.163:80>
            ServerName www.seleconlight.com
            DocumentRoot /var/www/www.seleconlight.com
            CustomLog /var/log/apache2/www.seleconlight.com-access.log combined
            ErrorLog /var/log/apache2/www.seleconlight.com-error.log
        </VirtualHost>
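    /etc/apache2/htdocs is Apache's compiled-in default DocumentRoot, and it only shows up when a request matches no VirtualHost at all. On EC2 that is the classic symptom of binding vhosts to the elastic/public IP: the instance's network interface carries only the private address, so a VirtualHost on 107.20.169.163 never matches. A sketch of the name-based form that sidesteps the problem:

        # ports.conf
        NameVirtualHost *:80
        Listen 80

        # sites-available/www.seleconlight.com
        <VirtualHost *:80>
            ServerName www.seleconlight.com
            DocumentRoot /var/www/www.seleconlight.com
        </VirtualHost>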

    Read the article

  • Alternatives to Splunk?

    - by MichaelGG
    I'm pretty impressed with Splunk, especially version 4: pretty graphs, alerting (Enterprise only), and fast, accurate searching. It's a great product, but the cost is just way too high to consider for full production use at our company.

    All we really need is to index different logs in a central place, with reasonable searching on top. Alerts based on a saved search would also be really nice. We don't really go beyond that. In fact, our biggest usage has been in deploying new applications. Everything gets logged via log4net, either to the event log on Windows or to a text file on Linux. Splunk makes it easy to quickly search across those to make sure all the parts of the app are working; that has saved us tons of time versus hunting down individual logging sources.

    What alternatives exist in this market? I have a sinking feeling Splunk's pricing is so high because they have the best product by far, and they know it. We want the server to run on Windows. I'd be open to a split model: one product for general logs (collected via syslog/Snare) and a dedicated product for our custom apps (like Log4Net Dashboard). Would a simple syslog server such as Kiwi, feeding into SQL Server (perhaps with full-text search enabled), work? I'd hope the cost would be well under 5 figures, USD. (And yes, I know, we're cheap. We're a startup with little money, and BizSpark takes care of all our MS licensing.)

    Edit: I should add, we have about 10 physical servers, 20 VMs, and a couple of firewalls and switches. 90% is Windows.

    Read the article

  • htaccess not properly rewriting URLs

    - by Cameron Ball
    This is a bit of a weird one. I'm doing some work on a server, and I need rewrite rules for directories that actually exist (in some cases, more than one level deep). At the moment my .htaccess looks like this:

        RewriteEngine on
        RewriteRule ^simfiles/([-\ a-zA-Z0-9:/]+)$ http://mydomain.com/?portal=simfiles&folder=$1 [L]

    And this is working OK. For example, a URL like mydomain.com/simfiles/my-files will get redirected to mydomain.com/?portal=simfiles&folder=my-files, and in the case of a directory structure deeper than one level, mydomain.com/simfiles/my-files/more-of-my-files will get redirected to mydomain.com/?portal=simfiles&folder=my-files/more-of-my-files.

    I wrote the regex so that it won't match things with a . in the path, because there are CSS and JS files residing in simfiles/somedirectory, and if I redirect everything then these cannot be loaded. I tried a configuration like this:

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^simfiles/([-\ a-zA-Z0-9:/\.]+)$ http://mydomain.com/?portal=simfiles&folder=$1 [L]

    But that doesn't work; things still don't load properly. So my first question is: how can I achieve this "properly"? I don't like my solution, because it means redirects won't occur if the folder has a . in its name.

    My second problem is that while the redirection happens properly, the URL becomes http://mydomain.com/?portal=simfiles&folder=my-files. I want the URL to remain clean, like http://mydomain.com/simfiles/my-files. How can I achieve this?
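    Both symptoms trace back to the absolute URL in the substitution: when a RewriteRule target starts with http://, mod_rewrite issues an external redirect, so the browser's address bar changes, and every asset request goes back through the redirect as well. Rewriting to a local path keeps everything internal; a sketch (the /index.php entry point is an assumption; substitute whatever / actually serves):

        RewriteEngine on
        # Leave real files (css/js) untouched, rewrite everything else internally:
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^simfiles/(.+)$ /index.php?portal=simfiles&folder=$1 [L,QSA]

    With an internal rewrite the dot no longer needs excluding from the pattern, because the -f condition already exempts existing asset files, and the visible URL stays http://mydomain.com/simfiles/my-files.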

    Read the article

  • How do I use Postfix aliases in Cyrus?

    - by Nick
    I have a Cyrus mailbox called user/nrahl. If I use the 'mail' command from the server itself and type:

        mail nrahl

    the message magically shows up in my Thunderbird IMAP inbox. But I need to get messages from a POP3 account into Cyrus for delivery, and the messages coming in are addressed to "[email protected]". I have fetchmail set up and running; it downloads messages from the POP3 account and passes them to Postfix. Postfix (now that I've set up aliases in /etc/aliases) accepts each message and passes it to the Cyrus socket. But here's the problem: Cyrus rejects the message with a "550 - mailbox unknown" error. The actual messages in /var/log/mail.log are:

        Apr 17 16:56:57 IMAP cyrus/lmtpunix[5640]: verify_user(user.fetchmail) failed: Mailbox does not exist
        Apr 17 16:56:57 IMAP postfix/lmtp[5561]: CFFD61556BD: to=, relay=localhost[/var/run/cyrus/socket/lmtp], delay=0.08, delays=0.07/0/0/0.01, dsn=5.1.1, status=bounced (host localhost[/var/run/cyrus/socket/lmtp] said: 550-Mailbox unknown. Either there is no mailbox associated with this 550-name or you do not have authorization to see it. 550 5.1.1 User unknown (in reply to RCPT TO command))

    It looks like it's trying to deliver all of nrahl's mail to fetchmail@localhost instead of nrahl@localhost, and I don't know why. I need it to deliver mail addressed to [email protected] into Cyrus's "nrahl" mailbox.
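    The verify_user(user.fetchmail) line is the giveaway: when a fetchmailrc entry doesn't say which local user a remote mailbox belongs to, fetchmail delivers to the local account it runs as, here 'fetchmail'. The mapping is declared with the "is ... here" clause; a sketch (host and credentials are placeholders):

        # ~/.fetchmailrc  (must be chmod 600)
        poll pop.example.com protocol pop3
            user "remote-login" password "secret"
            is nrahl here      # hand messages to Postfix addressed to nrahl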

    Read the article

  • Are there any Microsoft Exchange Clients for iOS and Android that store their local data in an encrypted manner?

    - by Zac B
    I don't feel like this is a product recommendation question, more of a "does this tech even exist and is it feasible" question, but if I'm wrong, feel free to give this question the boot.

    Context: Our company has a bunch of traveling employees who access the company's Exchange server via their iDevices or Android phones. Because of the data protection laws in the state where our company is based (and the nature of the data our company works with), a recent security audit found that all mobile devices operated by our company (laptops, phones, etc.) need to have all company correspondence and related data encrypted all the time. For laptops, that was easy: BitLocker or TrueCrypt, problem solved. For phones and tablets, however, I'm stumped. Sure, you can put lock screens and passwords on the phones, but the data is still accessible via external extraction, as law enforcement authorities already know.

    Question: Are there any clients for Microsoft Exchange that run on iOS or Android and store their local data encrypted? The people using our mobile devices do a lot of their work while offline, so just giving them OWA access over SSL isn't enough. Are there apps or technologies that present an additional login credential prompt to decrypt locally stored data in the app's storage area on the phone?

    My gut reaction when I started looking into this was "that doesn't sound like something Apple would allow into the App Store", but I've been wrong before...

    Read the article

  • IDE/PATA high-speed hard drive dock

    - by wfaulk
    I frequently need to access bare drives for backups and want a quick, high-speed way to deal with them. There is a multitude of SATA hard drive docks (for example), but I have a lot of IDE/PATA (hereafter "IDE") drives that I would like to use similarly. IDE-to-SATA adapters exist, so you can plug an IDE hard drive into a SATA port, and I don't see any reason the same technology couldn't go into a native dock; yet none seems to exist.

    Now, I'm aware that 3.5" IDE drives do not have a specification for the layout of the connector, and therefore can't be slapped into a dock the same way a SATA drive can, but 2.5" PATA drives do. In fact, I'm not terribly interested in supporting 3.5" drives: it would be nice, but I deal with them far less often than 2.5" drives. I'd also very much like the connection to the computer to be faster than USB, preferably eSATA; I don't want to spend time mounting drives inside enclosures, I don't want bare drives lying around with cables hanging off them, and I'd prefer a single dock rather than two.

    What seems like the ideal solution to me is a regular SATA-to-eSATA dock plus some sort of screwless adapter for IDE drives, but I'm open to any suggestions, regardless of my stated preferences, which are, in rough order:

        high-speed (faster than USB, at least)
        a holder for the drive (not just a cable)
        no complicated enclosure
        support for 3.5" IDE drives
        a single dock

    Updates: Here's a 3.5" IDE to 3.5" SATA docking adapter that could be part of the solution. Weird; I figured that would be the impossible part. I was hoping to find something like this 2.5" to 3.5" SATA chassis that would take a 44-pin IDE drive internally. It looks like the Vantec EZ Swap EX comes awfully close: it has its own bay dock, but the SATA ports on the back look properly spaced, even if not quite properly aligned. Unfortunately, the proper position is at the very edge of the drive, which puts the dock's connectors at the very edge of their recesses, which means there's no way to fit it in there.

    Read the article

  • SQL Server 2005 - Linked Visual Foxpro Authorization

    - by John
    Here's the scenario: we have an existing SQL Server 2000 instance with a linked server pointing at a shared directory (on another server) containing Visual FoxPro tables; all connections work correctly. Porting the SQL 2000 server to a new SQL Server 2005 instance results in questionable behavior: if you connect to the server remotely using Windows Authentication, you receive this error when running a query against the linked server:

        OLE DB provider "MSDASQL" for linked server "[linked server name]" returned message "[Microsoft][ODBC Visual FoxPro Driver]File 'MyTable.dbf' does not exist.".
        Msg 7350, Level 16, State 2, Line 2
        Cannot get the column information from OLE DB provider "MSDASQL" for linked server "[linked server name]".

    However, logged in locally, the query works fine. The query also works correctly when connecting remotely with a SQL login. The only scenario in which I receive the error is when connected remotely using Windows Authentication.

    As I mentioned before, this works on the SQL 2000 server, and both the old and new servers run under the same network account (which has access to the folder the FoxPro files are in). Searching the internet, it looks like others have run into this situation, but I haven't found a resolution. Has anyone run into this before?
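    Local Windows logins working, SQL logins working, and only remote Windows logins failing is the classic double-hop signature: SQL Server cannot forward a remote Windows token to the file share unless Kerberos delegation is configured, so the share access fails and the driver reports the file as missing. Short of setting up delegation, one common workaround is to stop impersonating the caller for this linked server, so file access happens under the SQL Server service account (which already has rights to the share); a sketch (the linked server name is a placeholder):

        -- Map all local logins to "no security context" for this linked server:
        EXEC sp_addlinkedsrvlogin
            @rmtsrvname  = N'VFP_LINK',
            @useself     = 'FALSE',    -- do not pass the caller's Windows token
            @locallogin  = NULL,       -- applies to every local login
            @rmtuser     = NULL,
            @rmtpassword = NULL;       -- FoxPro tables have no login of their own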

    Read the article

  • Looking for a comprehensive/"expert" guide to BCD parameters

    - by Stilez
    I'm interested in educating myself about BCD on Windows 8. There are many, many walkthrough guides and howtos, but I can't find any guide at a typical "enthusiast" level covering what each option or argument in a BCD /ENUM dump might mean, and the principles governing how they all work together. Imagine trying to rebuild or debug BCD (including the EFI/BIOS variants and the recovery/hibernate/memtest sections, perhaps with multiple Windows/WinPE/WinRE boots) from scratch using just BCDEdit and DiskPart, while trying to understand rather than just copy/paste commands. That's roughly the knowledge I'm after. Example questions might be:

    1. How is a BCD /ENUM dump to be read, item by item, and how do its sections work together? (A lot of guides only show a specific example rather than explaining all the common arguments that can appear and what they mean; they don't explain how the sections interact, or they assume MBR/BIOS/Vista/7 and omit what's needed for EFI/GPT/dynamic disks/Windows 8.)
    2. Partitions are specified by volume letter or as \Device\HarddiskVolumeNNN. Why are these items sometimes shown as a letter and sometimes as a GUID, and what are the practical differences, if any?
    3. What exactly is syntax like "ramdisk=[C:]\Images\winpe.wim,{ramdiskoptions}" saying, and how will the drive letter "C" be interpreted at runtime in a line like this? Is the drive in such a line always "C:" (most examples assume so), and if not, when wouldn't it be?
    4. Many websites state that an sdi device and path may be needed in some sections of BCD, but what is SDI, and what are these arguments doing when they appear?
    5. How does the GUID-to-volume/partition mapping work under EFI/GPT, so that if disks or partitions change, one can confirm from first principles whether the data shown by BCD /ENUM ALL is still correct?

    Does anyone know of a suitable reference source for this kind of raw BCD data and structures? Thanks!

    Read the article

  • How to fix locale settings in Debian squeeze

    - by blogjunkie
    I occasionally get locale errors, and I've tried running dpkg-reconfigure locales to fix the problem. Here's the output:

        :~$ sudo dpkg-reconfigure locales
        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
                LANGUAGE = (unset),
                LC_ALL = (unset),
                LC_CTYPE = "UTF-8",
                LANG = "en_US.UTF-8"
            are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: No such file or directory
        /usr/bin/locale: Cannot set LC_CTYPE to default locale: No such file or directory
        /usr/bin/locale: Cannot set LC_ALL to default locale: No such file or directory
        Generating locales (this might take a while)...
          en_US.UTF-8... done
        Generation complete.
        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
                LANGUAGE = (unset),
                LC_ALL = (unset),
                LC_CTYPE = "UTF-8",
                LANG = "C"
            are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").
        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
                LANGUAGE = (unset),
                LC_ALL = (unset),
                LC_CTYPE = "UTF-8",
                LANG = "C"
            are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").

    I looked for /usr/bin/locale, but it doesn't exist on my system. Do I need to create it? What would I put in it?

    Also, I found a related question which says the cause of that person's problem was in the sshd_config file, which had the following entry:

        AcceptEnv LANG LC_*

    I'm mainly concerned that this may cause problems for my VPS; otherwise, if it's nothing major, I'll be happy to ignore the problem. What should I do? Thanks!
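    The tell-tale line is LC_CTYPE = "UTF-8": that isn't a valid locale name on Debian, and it typically arrives over SSH from a Mac whose Terminal sends LC_CTYPE via SendEnv, which the AcceptEnv LANG LC_* line in sshd_config then accepts. A sketch of the usual fix, assuming en_US.UTF-8 is the locale you want:

        sudo locale-gen en_US.UTF-8
        sudo update-locale LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8

        # Then either stop accepting the client's locale variables server-side
        # (comment out "AcceptEnv LANG LC_*" in /etc/ssh/sshd_config and restart
        # sshd), or stop sending them from the connecting machine (remove
        # "SendEnv LANG LC_*" from the client's ssh_config).

    The /usr/bin/locale binary itself ships in the libc-bin package; it isn't a file to create by hand, and reinstalling that package restores it if it is genuinely missing.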

    Read the article

  • Completely automated DVD insert-rip-compress-eject workflow

    - by Kevin L.
    (Partially inspired by this question.)

    Background: I have a PC hidden away behind an HD LCD in a custom-built entertainment center. The only visible part of the PC is an external DVD drive, mounted above the Wii. The PC happens to have Windows XP on it; Hackintoshing and Linux might be possible, but I've had issues with drivers for the sound card before. Let's just assume that OS X and Linux are a no-go unless they provide a truly awesome and simple solution to this particular problem.

    Goal: I would like a completely automated workflow for ripping DVDs, something like this:

    1. Push the eject button on the DVD drive, insert the DVD.
    2. PC recognizes that this is a video DVD (as opposed to data).
    3. PC rips the DVD to the hard drive.
    4. PC finishes ripping and ejects the DVD tray.
    5. PC compresses the DVD image into some format an Xbox 360 can read.
    6. PC copies the finished compressed video file to a particular folder, so it can be read into a WMP11 library and seamlessly played by the Xbox 360.
    7. PC cleans up all temporary files. Done.

    The impetus for complete automation is never needing to switch the TV to the PC's input and fiddle with the wireless keyboard; that's just needless user intervention. The UI doesn't have to be pretty, and I don't care about speed. I can probably bridge several of the gaps with some creative Perl use, but it seems likely that many (or all) of the parts already exist. Any thoughts?
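    The rip-and-compress middle of the chain collapses into a single step if HandBrake's command-line interface does the work, since it reads a video DVD and writes an Xbox-readable MP4 directly; a wrapper script then only has to poll for disc insertion, call it, and eject. A sketch of the per-disc step (paths and preset are assumptions, and eject.exe stands in for any command-line eject utility, since Windows has none built in):

        rem Rip title 1 of the disc in D: straight to an MP4 for the 360 library:
        HandBrakeCLI -i D:\ -t 1 -o "C:\Library\disc.mp4" --preset "Normal"
        eject.exe D:

    Detecting that an inserted disc is a video DVD can be as cheap as checking for a VIDEO_TS folder on the drive before invoking the rip.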

    Read the article

  • Fedora 17 - Dropping into debug shell after attempted partitioning

    - by i.h4d35
    So I tried creating a new partition on Fedora 17 using fdisk, as follows:

        Command (m for help): n
        Command action
           e   extended
           p   primary partition (1-4)
        p
        Partition number (1-4): 1
        First cylinder (2048-823215039, default 2048):
        Using default value 2048
        Last cylinder or +size or +sizeM or +sizeK (1-9039, default 9039): +15G

    Once this was done, instead of formatting the partition I had created, I ran the partprobe command to write the changes to the partition table. On rebooting, the computer drops into the debug shell with the following error:

        dracut warning: unable to process initqueue
        dracut warning: /dev/disk/by-uuid/vg_mymachine does not exist
        dropping to debug shell
        dracut:/#

    When I try to run fsck on the said partition from the debug shell, it says "etc/fstab not found", and inside /etc I see an fstab.empty file. Is it still possible to retrieve what I have on the computer? Any help would be appreciated. Thanks in advance.

    Edit: I've also tried the following troubleshooting steps:

    1. I tried to boot from the Fedora disk in rescue mode; it says no Linux partition was detected.
    2. I tried to create an fstab file by combining the entries from blkid and the /etc/mtab file, using the UUIDs from the mtab file. It didn't work: as soon as I rebooted, the machine promptly dropped me into the debug shell, and the fstab file I had created wasn't there any more in /etc (this was part of this solution).
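    Since the initramfs can't find the root volume and rescue mode sees no Linux partition, the first job from a live environment is to check whether the LVM metadata and root filesystem survived the repartitioning; fstab can be rebuilt afterwards from blkid output. A sketch from a Fedora live CD (the volume names are placeholders taken from the error message):

        vgscan && vgchange -ay               # look for and activate LVM volume groups
        lvs                                  # list LVs, e.g. vg_mymachine/lv_root
        fsck -y /dev/vg_mymachine/lv_root    # repair the root filesystem
        mount /dev/vg_mymachine/lv_root /mnt
        blkid                                # UUIDs for rebuilding /mnt/etc/fstab

    If vgscan finds nothing, the new partition was likely created over the LVM physical volume, and recovery shifts to restoring the partition table (testdisk) or the LVM metadata backup in /etc/lvm/archive, rather than writing an fstab.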

    Read the article

  • Changing order of Thunderbird email address autocomplete?

    - by Brooks Moses
    I recently did a system wipe and installed Thunderbird 3.0, importing all of my email setup from a previous Thunderbird 2.0 installation. Almost everything is working fine, but I'm having a problem with the address autocomplete when composing messages.

    The relevant behavior is this: in the old 2.0 installation, the autocomplete appeared to know which email addresses I used most frequently, so when I typed "m" in the address line, it would pick as the default selection the "m" person I frequently write to. (It's possible this is an illusion, and it simply picked people in the order I added them to my address book.) I have thus become used to typing "m" then Enter in the address field and getting this person.

    In the current 3.0 installation, however, the autocomplete order has changed: it's not the same as it was, and it's not alphabetical. The result is that I'm spending extra time looking at the address bar, and, more annoyingly, half the time the old muscle memory kicks in and I find myself with an email addressed to a couple of customers rather than to my boss and coworker.

    Thus, two questions:

    1. How does Thunderbird determine this autocomplete order, among a set of addresses that are all in the same address book?
    2. How can I change this ordering to be what I want?

    (I have tried Google-searching and found a number of incomplete answers, nearly all for version 1.0 or thereabouts, referencing settings dialog boxes that no longer exist.)

    Read the article
