Search Results

Search found 10695 results on 428 pages for 'some none'.

  • Configure New Server for .htaccess

    - by Phil T
    I have a new LAMP CentOS 5 server I am setting up, trying to copy the configuration from another web server I have. I am stuck with what I think is a mod_rewrite problem. If I go to http://old-server.com/any_page_name.php it correctly routes through some handling code in index.php and shows me a graceful "Page Cannot Be Displayed" message. But if I go to http://new-server.com/any_page_name.php I get an ugly Apache 404 Not Found error message.

    I looked in both httpd.conf files and they both have only one reference to mod_rewrite:

        LoadModule rewrite_module modules/mod_rewrite.so

    So it seems like that should be fine. At the bottom of httpd.conf I have the code:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/html
            ServerName new-server.com
            ErrorLog logs/new-server.com-error_log
            CustomLog logs/new-server.com-access_log common
        </VirtualHost>

    Then in the root of /var/www/html I have the exact same .htaccess file, which looks like this:

        RewriteEngine on
        Options +FollowSymlinks
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . index.php [L]
        ErrorDocument 404 /page-unavailable/
        <files ~ "\.tpl$">
            order deny,allow
            allow from none
            deny from all
        </files>

    So I don't see why the page load at old-server.com works fine while new-server.com doesn't route through index.php like I want it to. Thanks.
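    One thing worth checking for exactly this symptom (a sketch, assuming the stock CentOS httpd.conf): the default <Directory "/var/www/html"> block ships with AllowOverride None, which makes Apache ignore the .htaccess file entirely, so the rewrite rules never run on the new server.

        # hypothetical fix in httpd.conf - let .htaccess supply mod_rewrite rules
        <Directory "/var/www/html">
            Options +FollowSymLinks
            AllowOverride All
        </Directory>
        # then reload Apache, e.g.: service httpd restart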

  • Software for RAID Failure Alerts?

    - by QF_Developer
    I have two 256 GB Samsung 840 Pro SSD disks in a RAID 1 array. I would like to receive a notification if one of the disks in the array fails. Can anybody recommend an application I can install on the server to fire an email if such an event occurs? Here are some additional specs:

        Supermicro X9SCM-IIF motherboard utilising the hardware RAID controller.
        OS = Windows 2012 Standard

    Also, is it possible to simulate a disk failure by pulling it out of the bay? SSDs appear to fail close together when in a mirrored config, so I'd like to know ASAP if one goes down so I can swap them out with minimum delay.

    UPDATE 26th June 2013: None of the software that ships with the Supermicro X9SCM-* motherboards offers support for RAID monitoring. As has been pointed out here, these boards are built on an Intel chipset for RAID, so I installed Intel Rapid Storage Technology, which supports automated email notifications on RAID failure: http://www.intel.com/support/chipsets/imsm/sb/cs-020784.htm One small issue: the software only allows you to send email notifications without SMTP authentication. There's a bunch of different workarounds here: http://communities.intel.com/thread/30771

  • How can I see what processes are making my server slow?

    - by Steven
    All my websites on my server are extremely slow or not loading at all. Even the server admin interface (Plesk) will not load some times. There have been no changes to the sites for the last couple of months. How can I see what processes are making my server slow? My environment looks like this:

        Server: VPS running Linux 2.8.x
        OS: CentOS 5
        Management interface: Plesk 9.x
        Memory: 1024MB
        CPU: 2.2GHz

    My websites run on PHP and MySQL. I finally managed to get into my server (PuTTY + SSH). Running top did not show any processes using more than max 2% CPU and none were using excessive memory. I also got a friend to install a program that checks the core files, and all seemed fine. So I'm leaning towards network issues or some other server malfunction, but I'm not able to find out what can be wrong.

    Here are some answers to Sean Kimball:

        I don't run mail services on my server yet.
        There are no specific bandwidth peaks.
        Prefork looks like this:

            <IfModule prefork.c>
                StartServers 8
                MinSpareServers 5
                MaxSpareServers 20
                ServerLimit 256
                MaxClients 256
                MaxRequestsPerChild 4000
            </IfModule>

        Not sure what you mean with the DNS question, but I think it's up and running.
        There are no processes running wild.
        Where can I find average load?
        Telnet is disabled and I have to log in using SSH :)
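    For the "where can I find average load" part, and for telling whether the box is CPU-, memory- or I/O-bound, a short sketch of standard commands (the sysstat package may need installing on CentOS 5 for iostat/sar):

        # 1-, 5- and 15-minute load averages
        uptime
        cat /proc/loadavg

        # processes sorted by CPU and by memory
        ps aux --sort=-%cpu | head -n 15
        ps aux --sort=-%mem | head -n 15

        # disk and network activity, sampled every 5 seconds, 3 times
        iostat -x 5 3
        sar -n DEV 5 3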

  • Apache, Permissions, and Convenience

    - by Mike
    I'm on Mac OS X and I have apache2 installed via MacPorts, running as the _www user. I have some files I want to serve in the /Users/Me/Documents/abc folder. Right now, though, the permissions of /Users/Me/Documents are 700. So _www can't get in, even if abc is chmod 777. I recognize the following options:

        Allow _www access to my Documents folder.
        Put the files I want to share outside of my Documents folder.
        Hard-link the files outside of my Documents folder, and point apache to the hard links.

    None of these solutions is acceptable to me, however. I don't feel safe allowing _www access to my entire Documents folder. I really want to keep the files in my Documents folder for other reasons. The files are changing all the time, so hard-linking would not always reflect the right file structure, and, as I understand it, you can't hard-link a directory (though, if you could, that would solve it).

    Any ideas for a solution? Is there a way to run a few httpd processes as my user account so it can get in there? Or, is there some way to hard-link a directory, or some way to get httpd to follow a symlink past a directory that is 700 and not owned by _www? Thanks!
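    One middle ground the list above doesn't mention: on a Unix filesystem the execute bit on a directory grants traverse-only access, so _www could pass through Documents to reach abc without being able to list or read anything else in it. A sketch, using the paths from the post:

        # traverse-only: others may pass through Documents but not list it
        chmod o+x /Users/Me/Documents

        # or, scoped to the _www user only, via a macOS ACL
        chmod +a "_www allow search" /Users/Me/Documents

        # the served folder itself stays readable
        chmod -R o+rX /Users/Me/Documents/abc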

  • Lenovo System Update Breaks Windows Live

    - by wolfvilleian
    Hey everyone, I've been racking my brain (and fingers, from typing) trying to solve this issue, to no avail. I have a Lenovo computer and I installed their System Update tool to install all my missing drivers. However, after this tool is installed, Windows Live 2011 breaks: it will no longer sign in, giving error number 8e5e0247, and all the solutions online haven't helped. It appears that a language setting somewhere gets set to en_ms, and I'm en_ca. My computer is running Windows 7 x64.

    When I try to sign onto Messenger it gives an error that (with some research) means your locale or language is not supported. I've searched my computer for any reference to en_ms but find none. A few other things seem to have broken too: when a UAC box comes up it is no longer able to identify the publisher of anything, and the indexing service does not work (I'm not sure if the indexing issue is related, but the UAC issue happened right after installation). I had this issue before but I don't remember how I fixed it; I believe it had something to do with environment variables. When it goes to sign in it gets as far as "Loading contacts", then stops and goes back to the sign-in screen. Has anyone seen this before? Thanks

  • No outbound internet connection after restarting CentOS 6.3

    - by wnstnsmth
    After restarting a headless CentOS 6.3 machine, it lost outbound internet connectivity; i.e., I can still connect to the server via SSH (ssh root@**.126.18.56), but stuff such as ping google.com gives "google.com: unknown host", and yum list some_package gives a lot of network errors. This is what ifconfig gives:

        eth0      Link encap:Ethernet  HWaddr 00:25:90:78:2D:5D
                  inet addr:**.126.18.56  Bcast:**.126.18.255  Mask:255.255.255.0
                  inet6 addr: fe80::225:90ff:fe78:2d5d/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:75594 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:787 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:7074741 (6.7 MiB)  TX bytes:144391 (141.0 KiB)
                  Interrupt:20 Memory:f7a00000-f7a20000

        eth1      Link encap:Ethernet  HWaddr 00:25:90:78:2D:5C
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
                  Interrupt:16 Memory:f7900000-f7920000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:6 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:504 (504.0 b)  TX bytes:504 (504.0 b)

    I have absolutely no clue how to debug this, and I find it very strange since I can still connect via SSH.

    EDIT: Weirdly, /etc/resolv.conf does not contain any entries, or none that I can make sense of:

        # Generated by NetworkManager
        search sui-inter.net
        # No nameservers found; try putting DNS servers into your
        # ifcfg files in /etc/sysconfig/network-scripts like so:
        #
        # DNS1=xxx.xxx.xxx.xxx
        # DNS2=xxx.xxx.xxx.xxx
        # DOMAIN=lab.foo.com bar.foo.com

    So is it possible that rebooting the server erased that file? It worked before at least! And how do I solve this? By the way, pinging an IP address works.
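    Since pinging an IP works but name lookups fail, this looks like a pure DNS resolver problem, which is exactly what the generated resolv.conf comment is pointing at. A minimal sketch (the nameserver addresses are placeholders; the provider's resolvers would normally go here):

        # quick test: does name resolution work once a resolver is configured?
        echo "nameserver 8.8.8.8" >> /etc/resolv.conf
        ping -c1 google.com

        # persistent version: put the servers in the interface config so
        # NetworkManager regenerates resolv.conf with them on every boot
        # /etc/sysconfig/network-scripts/ifcfg-eth0
        #   DNS1=8.8.8.8
        #   DNS2=8.8.4.4
        service network restart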

  • XFS: No space left on device

    - by beketa
    I am using XFS on a small HDD (/dev/sdb1, less than 1TB) and storing many small files (~32KB). df -h and -i show that it has available space.

        # df -hv
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda3             127G   19G  102G  16% /
        tmpfs                  16G     0   16G   0% /lib/init/rw
        udev                   16G  168K   16G   1% /dev
        tmpfs                  16G     0   16G   0% /dev/shm
        /dev/sda1              99M   20M   75M  21% /boot
        /dev/sdb1             136G  123G   14G  91% /mnt/sdb1

        # df -iv
        Filesystem            Inodes   IUsed    IFree IUse% Mounted on
        /dev/sda3            8421376   36199  8385177    1% /
        tmpfs                4126158       5  4126153    1% /lib/init/rw
        udev                 4124934     671  4124263    1% /dev
        tmpfs                4126158       1  4126157    1% /dev/shm
        /dev/sda1              26112     222    25890    1% /boot
        /dev/sdb1           24905120 11076608 13828512   45% /mnt/sdb1

    However I got a "No space left on device" error.

        # touch /mnt/sdb1/test
        touch: cannot touch `/mnt/sdb1/test': No space left on device

    I think the inode64 issue is not related to this case because the drive is less than 1TB and df -i shows that there are free inodes. I unmounted and mounted with -o inode64 but got the same error. xfs_repair does not report any problem. xfs_info shows drive information as follows.

        # xfs_info /dev/sdb1
        meta-data=/dev/sdb1            isize=1024   agcount=16, agsize=2227764 blks
                 =                     sectsz=512   attr=2
        data     =                     bsize=4096   blocks=35644210, imaxpct=25
                 =                     sunit=0      swidth=0 blks
        naming   =version 2            bsize=4096   ascii-ci=0
        log      =internal             bsize=4096   blocks=17404, version=2
                 =                     sectsz=512   sunit=0 blks, lazy-count=1
        realtime =none                 extsz=4096   blocks=0, rtextents=0

    Any ideas? Thanks!
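    One thing worth checking (a diagnostic sketch, not a confirmed diagnosis): XFS allocates inodes in contiguous chunks of 64, and with isize=1024 that means each new chunk needs 64KB of contiguous free space in an allocation group. On a 91%-full filesystem holding millions of tiny files, the free space can be too fragmented for that even though df -i still shows free inodes. xfs_db can show the free-space histogram:

        # unmount first; -r opens the device read-only
        umount /mnt/sdb1
        xfs_db -r -c "freesp -s" /dev/sdb1

        # plenty of free blocks but almost none in extents of 16 blocks (64KB)
        # or larger would point at free-space fragmentation as the cause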

  • DNS redirecting to Apache

    - by leo
    I have CentOS installed on LVM, that is on Debian. There are BIND and Apache on CentOS. I need to access sites from a browser on Debian with names like 1.domain, 2.domain, etc. So I set up Apache and I can access these sites, but using /etc/hosts on Debian. And now I'm trying to configure BIND.

    named.conf:

        zone "domain" IN {
            type master;
            file "/var/named/domain.zone";
            allow-update { none; };
        };

    192.168.100.1 is the DNS server's IP; 192.168.100.139 is the Apache IP.

    domain.zone:

        $TTL 86400
        @       IN  SOA  domain. root.domain. (
                         100
                         1H
                         1M
                         1W
                         1D )
        @       IN  NS   ns1.domain.
        @       IN  A    192.168.100.139
        ns1     IN  A    192.168.100.1
        WWW     IN  A    192.168.100.139
        1       IN  A    192.168.100.139
        2       IN  A    192.168.100.139
        www.1   IN  A    192.168.100.139
        www.2   IN  A    192.168.100.139

    Also, is it necessary to configure 100.168.192.in-addr.arpa? Please explain where I'm wrong.
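    Assuming the zone itself loads, the usual missing pieces are making BIND willing to answer the LAN and pointing the Debian client at 192.168.100.1 as its resolver. A sketch (the option names are standard BIND; the exact values are assumptions to merge into the existing options block):

        # on the CentOS/BIND box: named.conf options block
        options {
            listen-on port 53 { 127.0.0.1; 192.168.100.1; };
            allow-query { localhost; 192.168.100.0/24; };
        };

        # on the Debian client: /etc/resolv.conf
        nameserver 192.168.100.1

        # test from the client
        dig @192.168.100.1 1.domain +short

    The reverse zone (100.168.192.in-addr.arpa) is not required just for browsing the sites by name.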

  • Copied a file with winscp; only winscp can see it

    - by nilbus
    I recently copied a 25.5GB file from another machine using WinSCP. I copied it to C:\beth.tar.gz, and WinSCP can still see the file. However, no other app (including Explorer) can see the file. What might cause this, and how can I fix it?

    The details that might or might not matter:

        WinSCP shows the size of the file (C:\beth.tar.gz) correctly as 27,460,124,080 bytes, which matches the filesize on the remote host.
        Neither Explorer, cmd (command line prompt with dir C:\), the 7Zip archive program, nor any other File Open dialog can see the beth.tar.gz file under C:\.
        I have configured Explorer to show hidden files.
        I can move the file to other directories using WinSCP.
        If I try to move the file to Users\, UAC prompts me for administrative rights, which I grant, and I get this error: "Could not find this item. The item is no longer located in C:\."
        When I try to transfer the file back to the remote host in a new directory, the transfer starts successfully and transfers data. The transfer had about 30 minutes remaining when I left it for the night.
        The morning after the file transfer, I was greeted with a message saying that the connection to the server had been lost. I don't think this is relevant, since I did not tell it to disconnect after the file was done transferring, and it likely disconnected after the file transfer finished.
        I'm using an old version of WinSCP - v4.1.8 from 2008.
        I can view the file properties in WinSCP: Type of file: 7zip (.gz); Location: C:\; Attributes: none (Read-only, Hidden, Archive, or Ready for indexing); Security: SYSTEM, my user, and the Administrators group have full permissions - everything other than "special permissions" is checked under Allow for all 3 users/groups (my user, Administrators, SYSTEM).

    What's going on?!
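    A hedged guess worth ruling out: on Windows 7, a non-elevated program writing to a protected location can have its writes silently redirected into the per-user VirtualStore by UAC file virtualization, in which case only that program "sees" the file at the original path. A couple of quick checks (paths are the standard defaults; the filename is from the post):

        rem list with all attributes, ideally from an elevated command prompt
        dir /a C:\beth.tar.gz

        rem was the file redirected into the per-user VirtualStore?
        dir /a "%LOCALAPPDATA%\VirtualStore\beth.tar.gz"
        dir /a /s "%LOCALAPPDATA%\VirtualStore" | findstr beth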

  • Reread partition table without rebooting?

    - by Teddy
    Sometimes, when resizing or otherwise mucking about with partitions on a disk, cfdisk will say:

        Wrote partition table, but re-read table failed. Reboot to update table.

    (This also happens with other partitioning tools, so I'm thinking this is a Linux issue rather than a cfdisk issue.) Why is this, why does it only happen sometimes, and what can I do to avoid it?

    Note: Please assume that none of the partitions I am actually editing are opened, mounted or otherwise in use.

    Update: cfdisk uses ioctl(fd, BLKRRPART, NULL) to tell Linux to reread the partition table. Two of the other tools recommended so far (hdparm -z DEVICE, sfdisk -R DEVICE) do exactly the same thing. The partprobe DEVICE command, on the other hand, seems to use a newer ioctl called BLKPG, which might be better; I don't know. (It also falls back on BLKRRPART if BLKPG fails.) BLKPG seems to be a "this partition has changed; here is the new size" operation, and it looked like partprobe called it individually on all the partitions on the device passed, so it should work if the individual partitions are unused. However, I have not had the opportunity to try it.
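    For reference, a sketch of the usual commands that ask the kernel to re-read the table without a reboot (the device name is a placeholder):

        # classic BLKRRPART-based re-read: fails if any partition on the disk is busy
        blockdev --rereadpt /dev/sdX
        sfdisk -R /dev/sdX
        hdparm -z /dev/sdX

        # per-partition BLKPG notifications, so unused partitions can be updated
        # even while another partition on the same disk is mounted
        partprobe /dev/sdX

        # on newer util-linux, update individual partition entries
        partx -u /dev/sdX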

  • postfix email gateway

    - by k-h
    I am setting up a postfix email gateway. It will not hold any mail but will accept email for my domain and forward it to another internal mailserver, and relay mail out from the internal server. One of the main problems is that I am working on a live running system, and this will be an upgrade, so I am using a test domain which I will change at some point to the real domain.

    I tried various methods but found the simplest way (that worked) was to use a script to create an aliases file (from LDAP entries). There are various problems with this method. The main one being that the entries can't be of the simple form [email protected], because the gateway doesn't know where to send them. They have to be of the form: [email protected].

    What I would like doesn't seem hard, but I can't get my head around the postfix documentation. There seem to be various ways, but none of them seem to work. Most of the examples I have found on the web assume the mail is going to end up on the server. I want a list of users somewhere, preferably of the form user1, user2, etc rather than [email protected] (I can easily generate this list), and I would like postfix to forward all email for example.com to a particular server, i.e. realmailserver.example.com. Can anyone suggest clues as to how I might do this?
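    A sketch of the standard "relay domain" setup in Postfix that matches this description: the gateway accepts mail for example.com, validates recipients against a generated list, and hands everything to the internal server (the main.cf parameters are real Postfix settings; the file paths and hostnames are assumptions based on the post):

        # /etc/postfix/main.cf
        # example.com must NOT also appear in mydestination
        relay_domains = example.com
        transport_maps = hash:/etc/postfix/transport
        relay_recipient_maps = hash:/etc/postfix/relay_recipients

        # /etc/postfix/transport
        example.com    smtp:[realmailserver.example.com]

        # /etc/postfix/relay_recipients (generate from the user1, user2, ... list)
        user1@example.com    OK
        user2@example.com    OK

        # then rebuild the maps and reload
        postmap /etc/postfix/transport /etc/postfix/relay_recipients
        postfix reload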

  • Good Hosting Providers With Zend Framework Support [closed]

    - by manyxcxi
    I currently use ixwebhosting for my hosting services. They're cheap and work (most of the time). The databases are horribly slow, the servers are horribly slow, and their support (though usually prompt) is tough to deal with. That being said, they're cheap, I've got like 20 domains hosted in my account, none of them are high volume, and they work JUST good enough- until today. This isn't meant to be a condemnation of ixwh though. Their prices are very low for what they do offer and most things work just fine, most of the time. I need to be able to host web apps written with Zend Framework in a fairly easy fashion. The server performance can't be worse than what I've already had (a pretty low hurdle to clear), and I don't want to spend $30/mo. These are not money making websites- they're projects. My requirements are PHP 5.3, ZF support, MySQL databases, multiple domains- not much. Who should I look at, and who should I look out for?

  • Mysterious swap usage on EC2

    - by rusty
    We're in the middle of a project to move our infrastructure from a co-lo situation into Amazon EC2, and we've noticed some weird memory characteristics of the processes in our setup. Without going into too much detail about the specifics of our processes, we've noticed that on our EC2 instances "top" will show processes using a lot of swap space -- in fact, much greater than the amount of available swap or (if you add it all up) more than the available disk. Here's a sample top output:

        Mem:   7136868k total,  5272300k used,  1864568k free,   256876k buffers
        Swap:  1048572k total,        0k used,  1048572k free,  2526504k cached

          PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  SWAP COMMAND
         4121 jboss  20   0 5913m 603m  14m S  0.7  8.7  3:59.90  5.2g java
        22730 root   20   0 2394m 4012 1976 S  2.0  0.1  4:20.57  2.3g PassengerHelper
        20564 rails  20   0 2539m 220m 9828 S  0.3  3.2  0:23.58  2.3g java
         1423 nscd   20   0  877m 1464  972 S  0.0  0.0  0:03.89  876m nscd

    You can see, for instance, that jboss is reportedly using 5.2 gigs of swap space, which is definitely impossible since there's only 1G allocated and none is being used (probably because there's still 1.8G of RAM free). And here are the results of uname -a:

        Linux xxx.yyy.zzz 2.6.35.14-106.53.amzn1.x86_64 #1 SMP Fri Jan 6 16:20:10 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    We're running an AMI based off of the default Amazon Linux AMI (Amazon Linux AMI release 2011.09, so some RHEL5 and RHEL 6) with not too many customizations and definitely no kernel-level customizations. Something here tells me that on this particular kernel/distribution, the reporting of swap or maybe even total memory usage isn't what it appears to be... Any help would be appreciated!
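    For what it's worth, top's SWAP column is not read from the kernel at all: procps computes it as VIRT - RES, i.e. everything mapped but not resident (shared libraries, untouched heap, mmapped files), which is why it can exceed the configured swap. A sketch of how to see real per-process swap usage on a kernel new enough to expose VmSwap (the jboss PID is taken from the sample output above):

        # real swap use for one process (0 kB expected here, since Swap used is 0)
        grep VmSwap /proc/4121/status

        # or summed across all processes
        awk '/VmSwap/ {sum += $2} END {print sum " kB"}' /proc/[0-9]*/status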

  • Virtual Host Configuration and mod_rewrite - Removing PHP Extension and Adding Forward Slash

    - by nicorellius
    On my production server, things are fine: PHP extension removal and trailing slash rules are in place in my .htaccess file. But locally, this isn't working (well, partially, anyway). I'm running Apache2 with a virtual host for the site in question. I decided to not use the .htaccess file in this case and just add the rules to the httpd-vhosts.conf file instead, which, I've heard, if possible on your server, is a better way to go. The virtual host is working and the URL I use for my site is like this: devserver:9090

    Here is my httpd-vhosts.conf file:

        NameVirtualHost *:9090

        # for stuff other than this site
        <VirtualHost *:9090>
            ServerAdmin admin@localhost
            DocumentRoot "/opt/lampstack/apache2/htdocs"
            ServerName localhost
        </VirtualHost>

        # for site in question
        <VirtualHost *:9090>
            ServerAdmin admin@localhost
            DocumentRoot "/opt/lampstack/apache2/htdocs/devserver"
            ServerName devserver
            <Directory "/opt/lampstack/apache2/htdocs/devserver">
                Options Indexes FollowSymLinks Includes
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            <IfModule rewrite_module>
                RewriteEngine ON
                # remove PHP extension and add trailing slash
                # note - this doesn't work for directories, and throws 404
                # TODO - fix so directories use index.php
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteCond %{THE_REQUEST} ^GET\ /[^?\s]+\.php
                RewriteRule (.*)\.php$ /$1/ [R=302,L]
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule (.*)/$ /$1.php [L]
                RewriteCond %{REQUEST_FILENAME}.php -f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule .*[^/]$ /$0/ [R=302,L]
            </IfModule>
            # error docs
            ErrorDocument 404 /errors/404.php
        </VirtualHost>

    The problem I'm facing is that when I go to directories on the site, I get a 404 error. So for example, devserver:9090/page.php goes to devserver:9090/page/, but going to a directory (that has an index.php), devserver:9090/dir/, throws a 404 error page. If I type in devserver:9090/dir/index.php I get devserver:9090/dir/index/ and the contents I want appear... Can anyone help me with my rewrite rules?
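    One hedged observation: when these rules live directly in the <VirtualHost> (rather than in .htaccess or a <Directory> block), %{REQUEST_FILENAME} has not yet been mapped to a file on disk, so the !-d test never matches a real directory and /dir/ gets rewritten to /dir.php, hence the 404. A sketch of the same rules with the filesystem tests anchored to the document root (paths as in the post; worth verifying before relying on it):

        RewriteEngine On

        # let real directories fall through so DirectoryIndex can serve index.php
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -d
        RewriteRule ^ - [L]

        # /page.php -> /page/ (external redirect)
        RewriteCond %{THE_REQUEST} ^GET\ /[^?\s]+\.php
        RewriteRule (.*)\.php$ $1/ [R=302,L]

        # /page/ -> page.php (internal rewrite), only when that file exists
        RewriteCond %{DOCUMENT_ROOT}$1.php -f
        RewriteRule ^(.*)/$ $1.php [L]

        # /page -> /page/ when page.php exists
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI}.php -f
        RewriteRule .*[^/]$ $0/ [R=302,L]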

  • Despeckle line art

    - by Dour High Arch
    We have a number of line-art charts unfortunately saved as JPEGs. They are now riddled with distracting compression artifacts, or "speckles". Is there any way of removing these? I do not have the original files, and it will be very difficult to recreate them.

    I am running Windows 7 and tried Paint.NET; none of the filters help. Posterize washed out all the colors and leaves the speckles. Blur makes text unreadable. Noise Reduction wrecks the antialiasing of curved lines and perversely enhances the speckles, making them look like checkerboards.

    Yes, I have Googled for software to do this; there are many programs that advertise despeckling, but after my experience with Paint.NET I do not want to experiment with applications that show no before-and-after images. The only example I have seen that does what I want is from a Photoshop tutorial. I have dozens of files and the tutorial requires considerable manual fine-tuning. I would prefer to automate or batch-process this task. Commercial apps are fine, but I do not want to spend over $600 and learn a complex program for a single task.
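    If experimenting on copies is acceptable, ImageMagick can be scripted from the command line for batch work; a sketch (the operator choice and median radius are guesses to tune per chart, and a heavy median filter may soften fine lines):

        rem process copies with ImageMagick's convert (called "magick" in newer
        rem releases); use its full install path so Windows' own convert.exe
        rem is not picked up by mistake
        mkdir cleaned
        for %f in (*.jpg) do convert "%f" -despeckle -median 3 "cleaned\%~nf.png"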

  • Arduino IDE "launch 4j" error

    - by John
    I have a computer running Windows XP. I am trying to run the Arduino IDE 0022. I double-click on arduino.exe, it waits about 30 seconds on the load up title screen, and then it gives me this error:

        Launch 4j: an error occurred while starting the application

    My only choice is to click "OK"; the error goes away, and the Arduino IDE closes. If I try to delete the Arduino files (to try overwriting with some different files), I get an error that doesn't allow me to do so:

        Cannot delete awt.dll: Access denied
        Make sure the disk is not full or write protected and that the file is not currently in use.

    The only way to delete the file is by restarting the computer, so something must still be trying to run after that first error. I have noticed in Task Manager that some Java programs are still running: javaw.exe (3 processes). I think this is a problem with Java, but I checked and updated all of my Java software and it is all up to date. I have looked on other forums for this issue and none of them seemed to help. From the forums I have tried:

        Different Arduino IDE versions
        Updating Java
        Opening arduino.exe as Administrator

    Nothing has worked. Anyone have any suggestions?

  • How can I disable flashing icons on Windows 7 taskbar?

    - by Jebego
    I set my Windows 7 taskbar to auto-hide. However, sometimes when a program changes or something new happens in a program, the taskbar will show itself, and its respective taskbar icon will begin flashing orange. To make the taskbar hide again, I have to click on the program before I can go back to what I was doing. I personally find this very annoying, and would love to find a way to either:

        Prevent the taskbar from having such alerts.
        Prevent the taskbar from showing itself when it has such alerts.

    I've searched around quite a bit, and really only found answers to this for XP. I've also found another Stack Exchange question looking for the same thing for Windows 7. However, none of the answers to the question were really what I'm looking for. I'm not looking to hide the taskbar or control the number of flashes. However, this answer seems to be what I'm looking for, so I downloaded and tried out the program. It works perfectly, other than the fact that the Start menu icon is always shown, regardless of the taskbar being set to auto-hide. So, any ideas on how to fix this problem?

  • Are there any benchmarks showing difference between hardware virtualisation enabled/disabled?

    - by Wil
    I have a 13" sub-laptop/large-netbook with an AMD Athlon Neo X2 L335; I chose this one because it supports hardware virtualisation. In the end, I hardly do any virtualisation on it; however, when I do... it is fast. To my shock, I went into the BIOS and saw that virtualisation was disabled! I turned this on and I see no speed difference... or at least none that I can tell. I do not have time to do a full set of benchmarks - and I run quite a bit of software on the host, so it wouldn't be scientific. I have searched quite a few places and I just cannot find any benchmarks showing the difference with the virtualisation bit enabled/disabled on the same hardware. Does anyone have any benchmarks they have seen that they can share?

    In addition, I know there was an uproar a while ago as Sony disabled hardware virtualisation on some models and only offered it in their higher models as a premium feature; however, apart from forcing an up-sell, are there any benefits to having it disabled, e.g. battery/heat? I just can't find any information and can't work out why it would be disabled by default.

    Edit: To add, the only thing I can find is that without it, you cannot perform x64 virtualisation as fast. This is the only downside I can find. However, if this is the only difference, then I am still interested in the second part of the question - why offer the option to disable it?

  • Can I set up arbitrary filesystem redirection in Windows?

    - by Jon
    I am sitting in front of a Windows 7 machine that has no drive Q:. Is it possible to arrange for accesses to Q:\somedir to be redirected to an arbitrary location on the existing filesystems (for example, C:\Windows)? I would especially like a "set it and forget it" option, if one exists. I am assuming (although I have not tried it) that it is possible to use SUBST to mount an existing (empty, created for this purpose) folder as drive Q: and then MKLINK /J to create a directory symbolic link from Q:\somedir to wherever I want. However, this approach has a couple of drawbacks that I would like to avoid if possible: The drive Q: will be visible in the system. It is not as clean as I would like (removing the mounted folder will break it; a batch script needs to be manually added to the system startup). Is there a better option? If there is none and I am forced to make compromises, what is the closest I could get to the ideal solution? Assume anything is up for discussion.
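    One persistent variant of the SUBST idea described above: the same drive mapping can be defined once in the registry (the DOS Devices key), so it survives reboots without a startup script. A sketch, using the Q: letter and example paths from the question (C:\QRoot is an assumed empty folder):

        rem run once from an elevated prompt; takes effect after a reboot
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\DOS Devices" ^
            /v Q: /t REG_SZ /d "\??\C:\QRoot"

        rem then junction the directory inside it to the real target
        mklink /J Q:\somedir C:\Windows

    The drive letter is still visible in Explorer, so this only addresses the "set it and forget it" part, not the visibility concern.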

  • Huh? JDK not found? (on Windows 7 64-bit)

    - by Android Eve
    I am setting up a development environment for the latest Android 2.3 on a fresh install of Windows 7 64-bit. I first installed the 64-bit JDK 6 (jdk-6u23-windows-x64.exe). Then I installed 64-bit Eclipse Classic 3.6 (eclipse-SDK-3.6.1-win32-x86_64.zip). Then I proceeded to install the Android SDK Starter Package: installer_r08-windows.exe. But... upon start it says: "Java SE Development Kit (JDK) not found." Why? I just installed it. Is this a mismatch between 32-bit and 64-bit? How do I solve this?

    Update (1): I tried setting the %JAVA_HOME% environment variable, as well as setting the Installed JREs in Eclipse, as suggested below. Neither solved the problem. It appears that I am not the only one experiencing the problem, as this thread suggests: http://stackoverflow.com/questions/1919340/android-sdk-setup-under-windows-7-pro-64-bit I wonder whether there is a 64-bit version of the Android SDK.

    Update (2): I used the zip version instead (android-sdk_r08-windows.zip), ran android.bat, updated all SDK packages, and installed the ADT plugin (8.0.1), not before having to check 'Contact all update sites during install to find required software'. We'll see how this goes...

    Update (3): It worked! (going to accept @bubu's answer shortly) -- but why doesn't the emulator include the HelloAndroid app when I run it (Ctrl+F11) from Eclipse?

  • Backing up 80G hard drive 1G per day

    - by barrycarter
    I want to securely back up my 80G HD, but doing a complete backup takes forever and slows down my machine, so I want to back up just 1G per day. Details:

        First hurdle: on the first day, I want to back up the "first" 1G of the hard drive. Of course, there really is no "first" 1G on a hard drive.
        After 80 days, I'll have my whole HD backed up... assuming none of my files ever change, which of course they do. So the backup plan/program must also catch file creation/changes as they come along.
        The backups must be consistent, in that I can restore my system by restoring the backups sequentially. In other words, "dd if=/harddrive" probably won't work.
        The backups should encrypt file contents AND names, but I don't see this as a major hurdle.
        Once the backup has backed up everything (even changed files), it can re-backup the first 1G on my hard drive. Even though this backup is redundant, that's OK, because I always want to be backing up something (e.g. if I'm backing up to optical media, the older media might start going corrupt).

    Is there a magic backup plan/program that does this? In reality, I want to do this for multiple machines with multiple drives each, but I think that solving the above will solve the general case.

  • Network bandwidth usage dashboard?

    - by SkippyFlipjack
    I have a couple of wifi access points hooked up to my home network, one of which I keep unsecured for some development I do; there are only a couple other homes within range and they've got their own wifi so it's not a big concern. I also have a Sonos system, Tivo, Roku, a couple laptops, a couple phones, an iPad and a desktop machine, all of which are internet-smart. So when my internet bandwidth tanks and it takes five minutes to load a YouTube video, I want to know what's going on, and there are many potential culprits. I'd like to be able to plug my MacBook into the primary router and see a nice little dashboard of the units on the network and what kind of bandwidth each is using at that moment. I could figure this out from WireShark or tcpdump but figure there has to be an easier way. I've tried a few different commercial products but none really presented the right info. Suggestions? (This may be a question for superuser since my Apple Time Capsule's SNMP capabilities are limited, but I figure admins of small business networks would have dealt w/ the same issue..)

  • Remote Desktop connection to vista vs. xp

    - by CMP
    I am trying to log into my work computer remotely. I am using Windows 7 on my laptop. I have created a vpn connection to the network, and I am doing a remote desktop connection directly to the ip of my box (192.168.xxx.yyy). If I do a remote connection to a different box, running xp, it goes into remote desktop mode immediately and I see the windows login dialog as I am used to seeing. If I try remoting to my box, which is running vista, I do not see the remote desktop mode, but an additional dialog on my local machine asking for my credentials. It defaults in my local username. It allows me to log in as a different user, but the domain it has is still my local domain, not my work domain, so none of my usernames or passwords work. There doesn't appear to be a way to change the domain. Trying to hit several more boxes, it appears to act differently on xp and vista target machines. I feel like this must be a configuration issue, but I am not sure what the problem is. Any idea on how I can connect?
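    A hedged guess at the difference: Vista and later default to Network Level Authentication, so the credential prompt appears on the local machine (with the local domain pre-filled) before the remote desktop is ever shown, while XP presents the remote logon screen directly. Entering the username as WORKDOMAIN\username (or username@workdomain) in that prompt usually works, since the domain cannot be changed separately. Alternatively, the pre-connection prompt can be turned off per connection by adding one line to the saved .rdp file:

        enablecredsspsupport:i:0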

  • Cron job checking for changes in Git repository

    - by HNygard
    We have just moved our server configs to a Git repository, therefore there should not be any changes in any of the repository folders. I was thinking about how I could set up a cron job to check for any uncommitted changes. How could a cron job be set up to check for changes in a Git repository? Grepping the output of the git status command might just do it, but grep and cron jobs are not my strong side.

    Here are some sample outputs from git status. Standing in the folder containing the git repository (e.g. /path/gitrepo/) with changed files:

        $ git status
        # On branch master
        # Changes not staged for commit:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   apache2/sites-enabled/000-default
        #
        # Untracked files:
        #   (use "git add <file>..." to include in what will be committed)
        #
        #       apache2/conf.d/test
        no changes added to commit (use "git add" and/or "git commit -a")

    Standing in the folder when there are no changes:

        $ git status
        # On branch master
        nothing to commit (working directory clean)

    Update: Being synced up with origin is not important. There should be no local changes. Local files that must be in place go into the .gitignore file. In addition to the server configs there are also git repos for content (static web sites, web apps, wordpress, etc). None of the repositories should have local changes. We might use Puppet in the long run since it's being used for development of one of the web apps.
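    A minimal sketch of the kind of check described above, using git status --porcelain (which prints nothing when the working tree is clean, so there is no grepping of human-readable output). The repository paths and alert address are placeholders, and a working local mail setup is assumed:

        #!/bin/sh
        # /usr/local/bin/check-git-clean.sh - mail a report if any repo is dirty
        REPOS="/path/gitrepo /var/www/site1 /var/www/site2"
        for repo in $REPOS; do
            changes=$(cd "$repo" && git status --porcelain)
            if [ -n "$changes" ]; then
                printf 'Uncommitted changes in %s:\n%s\n' "$repo" "$changes" \
                    | mail -s "git: local changes on $(hostname)" root@example.com
            fi
        done

        # crontab entry to run it hourly:
        # 0 * * * * /usr/local/bin/check-git-clean.sh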

  • How to fix UNMOUNTABLE_BOOT_VOLUME (0x000000ED) on my Windows XP DELL laptop?

    - by Neil
    I have a Dell Latitude D410 running Windows XP. I am receiving the STOP: 0x000000ED (0X899CF030,0XC0000185,0X00000000,0X00000000) blue screen.

    Initially, I tried everything specified in the Microsoft KB articles. At this time, I was able to boot into the general safe mode. I pulled the hard drive and was able to run chkdsk on it - it noted that it had fixed some errors, but I was still unable to boot. I put a brand new hard drive in the laptop. Windows XP installation worked up until the reboot, at which time the exact same error message came back up.

    What I have tried (all since the new hard drive was installed):

        chkdsk /R
        All suggested solutions in Microsoft KB articles
        Reseating RAM
        Opened laptop, reseated all connectors, looked for signs of damage (saw none)
        Reset BIOS options to default
        Ran the basic Dell diagnostics
        Ran MEMTEST86+ V4.10 for 15 passes (overnight). 0 errors.

    I have looked at the current entry "How can I boot XP after receiving stop error 0x000000ED". I am currently in the process of downloading the Ultimate Boot CD to use as a test, but I am not holding out a lot of hope, as I really doubt this brand new hard drive is bad. Can anyone think of other areas I am missing?
