Search Results

Search found 21640 results on 866 pages for 'local storage'.


  • reverse proxy http to tomcat

    - by John Q
    I've configured an Apache server with SSL as a reverse proxy to a Tomcat instance:

        <VirtualHost domain.com:1443>
        [...]
        ProxyRequests Off
        ProxyPreserveHost On
        ProxyPass / http://local.com:8080/
        ProxyPassReverse / http://local.com:8080
        SSLEngine on
        [...]
        </VirtualHost>

    Tomcat is listening on 8080. The issue is that the app on Tomcat redirects the request (HTTP 302 Moved Temporarily). For example, if I use the URL https://domain.com:1443/folder, the reverse proxy launches the request http://local.com:8080/folder, then the app redirects to "/subfolder", so the final request is http://domain.com:1443/folder/subfolder. The result is a 400 Bad Request error code, as the request is plain HTTP on my SSL port. Do you know how I can fix this issue? Thanks in advance.
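
    A possible direction, sketched below and not a confirmed fix: with ProxyPreserveHost On, the backend builds its redirect Location headers from the front-end host name but with the http scheme, and the existing ProxyPassReverse line only rewrites Locations that point at local.com:8080. Adding a second ProxyPassReverse for that front-end form lets Apache map such redirects back onto the SSL site (the extra line is an assumption about what the Location headers actually contain):

        ProxyPass        / http://local.com:8080/
        ProxyPassReverse / http://local.com:8080/
        # also rewrite redirects that the backend builds from the preserved Host header
        ProxyPassReverse / http://domain.com:1443/

    An alternative often used with Tomcat is to mark the HTTP connector in server.xml as proxied (for example scheme="https", secure="true", proxyPort="1443") so that the application generates https URLs itself.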

    Read the article

  • How to find video in Firefox cache?

    - by Alegro
    I'm trying to find a YouTube video (just watched) in the Firefox cache folder, but I can't find the folder. Windows XP SP3, Firefox 16.1. I tried:

        C:\Documents and Settings\eDIN\Local Settings\Application Data\Mozilla\Firefox\Profiles\xp44aixq.default\Cache
        C:\Documents and Settings\eDIN\Local Settings\Application Data\Mozilla\Firefox\Profiles\49mvq84u.default\Cache

    In the latter folder I found the PNG thumbnail of the visited YouTube page under:

        C:\Documents and Settings\eDIN\Local Settings\Application Data\Mozilla\Firefox\Profiles\49mvq84u.default\thumbnails

    But there is no video file. I also searched all files and folders around it (Default User, All Users, etc.). There is only one Windows user.

    Read the article

  • Installing php(suexec) for Apache

    - by John
    I've got Apache installed and running, but how do I install and run PHP as FastCGI so it runs as its own user? Here is my Apache configure line:

        ./configure --prefix=/usr/local/apache2 --enable-rewrite=shared --enable-so --enable-suexec --disable-asis --disable-autoindex --enable-cache --enable-deflate --enable-disk-cache --enable-expires --enable-file-cache --enable-mem-cache --enable-ssl --enable-vhost-alias --with-mpm=prefork --with-port=8080

    Here is my PHP configure line:

        ./configure prefix=/usr/local/php --without-pear --enable-safe-mode --enable-magic-quotes --with-apxs2=/usr/local/apache2/bin/apxs --disable-cli --disable-cgi --enable-force-cgi-redirect --enable-fastcgi --with-mysql --with-gd --with-jpeg-dir=/usr/lib --with-png-dir=/usr/lib --with-freetype-dir=/usr/lib --enable-calendar --with-curl --enable-mbstring --with-mcrypt
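
    One detail that may matter here (an observation on the flags above, with a sketch that is not a verified build line): --with-apxs2 builds PHP as an Apache module and --disable-cgi suppresses the php-cgi binary, so that configure line cannot produce the FastCGI executable a suexec setup needs. Rebuilding PHP without those two switches, keeping the other flags, should yield a php-cgi that can then be run under its own user:

        ./configure --prefix=/usr/local/php --without-pear --enable-safe-mode --enable-magic-quotes \
            --disable-cli --enable-force-cgi-redirect --enable-fastcgi --with-mysql --with-gd \
            --with-jpeg-dir=/usr/lib --with-png-dir=/usr/lib --with-freetype-dir=/usr/lib \
            --enable-calendar --with-curl --enable-mbstring --with-mcrypt

    How the resulting php-cgi is then tied to a per-user account (suexec plus mod_fastcgi, or mod_fcgid with SuexecUserGroup and a small wrapper script) is a separate piece of Apache configuration.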

    Read the article

  • IIS: redirect everything to another URL, except for one Directory

    - by DrStalker
    I have an IIS server (IIS 6, Win 2003) that hosts the site http://www.foo.com. I want any request to http://foo.com (no matter what path/filename is used) to redirect to http://www.bar.org/AwesomePage.html UNLESS the request is for http://www.foo.com/specialdir, in which case the HTML files in the local directory specialdir should be used. The problem I have is once the redirect is set it also affects /specialdir - even if I right click on that directory and select "content should come from ... local directory" that change does not take effect, and the directory still shows as redirecting to http://www.bar.org/AwesomePage.html. The same thing happens if I try to set individual files to load from the local system instead of redirecting - IIS gives no error, but the change does not take effect and the files still show as being redirected. How can I set specialdir to override the redirection to the new URL?

    Read the article

  • Bind is updating DNS with wrong resolver info after rndc flush expires

    - by RussH
    I'm running BIND 9 on a small office server (CentOS 5) with a local mail server. There's a domain we need to email: the local BIND shows wrong DNS info for its MX servers, and an 'rndc flush' corrects this, but then, after some time, it reverts back to the wrong records. I would have thought that 'rndc flush' just clears the local cache so the authoritative info is pulled down again, so why is it being overwritten (by what seems like the next update)? Where do I need to look? Presumably there's some named logging(?) that would show where it's getting the [new or cached] answers from?
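
    A quick way to narrow this down (a diagnostic sketch; the domain name is a placeholder) is to compare the answer from the local BIND, from a public resolver, and from the zone's own authoritative servers:

        # what the local BIND is currently answering
        dig @127.0.0.1 example.com MX +noall +answer
        # what a public resolver answers
        dig @8.8.8.8 example.com MX +noall +answer
        # follow the delegation and ask the authoritative servers directly, bypassing caches
        dig example.com MX +trace

    If the bad answer only comes back through the local server, it is worth checking named.conf for forwarders or a stale secondary copy of the zone, since either would keep reintroducing the old data after a flush; query logging can be toggled temporarily with 'rndc querylog'.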

    Read the article

  • PCI Encryption Key Management

    - by Unicorn Bob
    (Full disclosure: I'm already an active participant here and at StackOverflow, but for reasons that should hopefully be obvious, I'm choosing to ask this particular question anonymously.)

    I currently work for a small software shop that produces software that's sold commercially to manage small- to mid-size businesses in a couple of fairly specialized industries. Because these industries are customer-facing, a large portion of the software is related to storing and managing customer information, in particular the storage (and securing) of customer credit card information. With that, of course, comes PCI compliance.

    To make a long story short, I'm left with a couple of questions about why certain things were done the way they were, and I'm unfortunately without much of a resource at the moment. This is a very small shop (I report directly to the owner, as does the only other full-time employee), the owner doesn't have an answer to these questions, and the previous developer is...err...unavailable.

    Issue 1: Periodic Re-encryption. As of now, the software prompts the user to do a wholesale re-encryption of all of the sensitive information in the database (basically credit card numbers and user passwords) if either of these conditions is true:

    - There are any non-encrypted pieces of sensitive information in the database (added through a manual database statement instead of through the business object, for example). This should not happen during the ordinary use of the software.
    - The current key has been in use for more than a particular period of time. I believe it's 12 months, but I'm not certain of that. The point here is that the key "expires".

    This is my first foray into commercial solution development that deals with PCI, so I am unfortunately uneducated on the practices involved. Is there some aspect of PCI compliance that mandates (or even just strongly recommends) periodic key updating? This isn't a huge issue for me other than I don't currently have a good explanation to give to end users if they ask why they are being prompted to run it.

    Question 1: Is the concept of key expiration standard, and, if so, is that simply industry-standard or an element of PCI?

    Issue 2: Key Storage. Here's my real issue...the encryption key is stored in the database, just obfuscated. The key is padded on the left and right with a few garbage bytes and some bits are twiddled, but fundamentally there's nothing stopping an enterprising person from examining our (dotfuscated) code, determining the pattern used to turn the stored key into the real key, then using that key to run amok. This seems like a horrible practice to me, but I want to make sure that this isn't just one of those "grin and bear it" practices that people in this industry have taken to. I have developed an alternative approach that would prevent such an attack, but I'm just looking for a sanity check here.

    Question 2: Is this method of key storage--namely storing the key in the database using an obfuscation method that exists in client code--normal or crazy?

    Believe me, I know that free advice is worth every penny that I've paid for it, nobody here is an attorney (or at least isn't offering legal advice), caveat emptor, etc. etc., but I'm looking for any input that you all can provide. Thank you in advance!

    Read the article

  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about “the cloud”, and how that affects people’s data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra. Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it – the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data, and have the confidence in it. Data should be a foundation upon which a business is built. In the past, data had been stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don’t necessarily scale particularly well. It’s easy to ‘lose’ data in a filing cabinet, when you have people who need to make sure that the sheets of paper are in the right spot, and that you know how things are stored. Databases help solve that problem, but still the idea of a large filing cabinet continues, it just doesn’t involve paper. If something happens to the physical ‘filing cabinet’, then the problems are larger still. Then the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they’re required. But still they’re maintaining filing cabinets. You see, people like filing cabinets. There’s something to be said for having your data ‘close’. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is ‘in the building’ is comforting to many people. They simply don’t want to move their data anywhere else. The cloud offers an alternative to this, and the human element is an obstacle. By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don’t like the idea of this, partly because the administrators of the data, those people who could potentially log in with escalated rights and see more than they should be allowed to, who need to be trusted to respond if there’s a problem, are now a faceless entity in the cloud. But this doesn’t mean that the cloud is bad – this is simply a concern that some people may have. In new functionality that’s on its way, we see other hybrid mechanisms that mean that people can leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example, backups that have been encrypted and are therefore not able to be read by anyone (including administrators) who don’t have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept. Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of the benefits of the cloud (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close. @rob_farley

    Read the article

  • Cannot connect to IIS 7 from localhost

    - by Wout
    I cannot connect to the local IIS 7 using "http://localhost" in IE. "http://127.0.0.1" doesn't work either. The strange thing is that if I add a binding on, e.g., port 81, then I can reach "http://localhost:81". Turning off the firewall on the local machine doesn't help either. The site is reachable from the internet. The local requests don't seem to hit IIS (no entries in the IIS log files). IIS is hosted on Windows Server 2008 R2 behind a hardware firewall device. Note that I'm a programmer, not a network administrator, so I'm having a hard time troubleshooting this.
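
    A few diagnostic commands that can help narrow this down (run in a command prompt on the server itself; replace the example PID with whatever netstat reports):

        netstat -ano | findstr ":80"
        rem map the owning PID to a process or service (replace 4 with the PID from netstat)
        tasklist /svc /fi "PID eq 4"
        rem check whether anything answers on the loopback address at all
        telnet localhost 80

    If port 80 is listening but loopback requests never reach the IIS logs, the site bindings are worth checking for a specific host header or IP restriction, since a binding tied to one host name will not answer a bare http://localhost request.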

    Read the article

  • Upgrading to 12.10 on an external hard drive

    - by Tom Childers
    I did some googling on this and didn't find anything specific for my situation. I currently have 12.04 installed on an external USB hard drive. It's working great. I want to upgrade it to 12.10. My bandwidth is very limited, so I have a friend who will download 12.10 for me and put it on a flash stick. Then I can upgrade without having to do the download myself. Which particular version of the 12.10 download file(s) should I get? Are there alternate 12.10 downloads that have all the packages? How do I set it up so that when I upgrade 12.04 I can specify that it look in some local repository for the 12.10 files? Can I just dump the 12.10 files in some local directory? Or do I have to go through some complex commands to create a local repository? I'm pretty new to Linux, so a long process of complex terminal commands will probably be a show stopper for me. Remember that my 12.04 install resides on an external hard drive, and I have a laptop with multiple USB ports. Thanks! Advait
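
    One way to make a downloaded image usable as a local package source (a sketch only: the ISO filename is an example, and whether a given 12.10 image carries an APT tree with enough packages for a full release upgrade depends on which flavour is downloaded) is to mount it and list it in sources.list:

        sudo mkdir -p /media/quantal
        sudo mount -o loop ubuntu-12.10-desktop-i386.iso /media/quantal
        # if the image contains dists/ and pool/ directories, add it as an APT source,
        # e.g. this line in /etc/apt/sources.list:
        #   deb file:/media/quantal quantal main restricted
        sudo apt-get update

    The upgrade tooling itself (update-manager or do-release-upgrade) may still want network sources, so treat this purely as a way of making packages visible locally; booting the 12.10 installer medium, which has historically offered an in-place upgrade of an existing install, is another option to weigh.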

    Read the article

  • Do you have to recreate workspaces after upgrading a TFS 2008 server to TFS 2010?

    - by Clara Oscura
    I am just reposting this thread from an MSDN forum since it seems to be unavailable. It was very useful when I was having trouble with my folder mappings after migrating to TFS 2010.

    Question: I opened VS2008 and connected it to the upgraded 2010 TFS server. Upon clicking any of our Team Projects in Source Control Explorer I get "Team Foundation Error - The workspace MYWORKSPACE;DOMAIN\MYUsername already exists on computer MYPCNAME."

    Answer: The same local paths on your machine are mapped to 2 different workspaces, one on the pre-upgrade server and one on the post-upgrade server. It's not safe to have multiple workspaces on different servers mapped to the same local paths, because you could pend some changes while connected to one server, and the other server would have no idea what you did. You should either delete your conflicting workspaces from one of the servers (if you don't need them on both), or test the new TFS instance from a new workspace (on a different machine). If you want to test an existing production workspace on both servers, then yes, you will have to mess around with the workspace cache. You don't have to delete the entire cache; you just need to run "tf workspaces /remove:* /server:<serverurl>" to clear the cached workspaces from a server (the command won't delete the workspaces), and possibly "tf workspaces /server:<server>" to refresh the workspace cache for a given server. You will also have to back up and restore the workspace before switching servers, or your local files could be inconsistent.

    From the "Microsoft Visual Studio Team Foundation Server 2010 Beta 1" forum (not available anymore?)

    Technorati Tags: TFS 2010, TFS Workspaces, Team System, Team Foundation Server 2010

    Read the article

  • Password not working for sudo ("Authentication failure")

    - by Souta
    Before I mention anything further, DO NOT give me a response saying that terminal won't show password input. I'm AWARE of that. I'm typing my user password in (not a capslock issue), and for some reason it still says 'Authentication Failure'. Is there some other password (one I'm not aware of) I'm supposed to be using other than my user password? I've had this ubuntu before, on another hard drive and I didn't have this problem. (And it was the same ubuntu, ubuntu 12.04 LTS) ai@AiNekoYokai:~$ groups ai adm cdrom sudo dip plugdev lpadmin sambashare ai@AiNekoYokai:~$ lsb_release -rd Description: Ubuntu 12.04 LTS Release: 12.04 ai@AiNekoYokai:~$ pkexec cat /etc/sudoers # # This file MUST be edited with the 'visudo' command as root. # # Please consider adding local content in /etc/sudoers.d/ instead of # directly modifying this file. # # See the man page for details on how to write a sudoers file. # Defaults env_reset Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" # Host alias specification # User alias specification # Cmnd alias specification # User privilege specification root ALL=(ALL:ALL) ALL # Members of the admin group may gain root privileges %admin ALL=(ALL) ALL # Allow members of group sudo to execute any command %sudo ALL=(ALL:ALL) ALL # See sudoers(5) for more information on "#include" directives: #includedir /etc/sudoers.d I can log in with my password, but it's not accepted as valid for authentication <-- That is pretty much my issue. (Although, I haven't gone into recovery mode.) I've ran: ai@AiNekoYokai:~$ ls /etc/sudoers.d README And also reinstalled sudo with: pkexec apt-get update pkexec apt-get --purge --reinstall install sudo pkexec usermod -a -G admin $USER <- Says admin does not exist su $USER <- worked for me, however, my password still does not do much (in sense of not working for other things) I changed my password with pkexec passwd $USER. I was able to change it no problem. gksudo xclock was something I was able to get into, no problem. (Clock showed) ai@AiNekoYokai:~$ gksudo xclock
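
    A couple of hedged checks (the username "ai" is taken from the transcript above; these are diagnostics, not a confirmed cause):

        # see exactly which PAM module is rejecting the sudo attempt
        pkexec tail -n 20 /var/log/auth.log
        # show the password status for the account (P = usable password, L = locked, NP = none)
        pkexec passwd -S ai

    If auth.log shows pam_unix authentication failures for sudo while console logins with the same password succeed, comparing /etc/pam.d/sudo and /etc/pam.d/common-auth against a stock 12.04 install is a reasonable next step.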

    Read the article

  • Where to place php libraries/extensions?

    - by gdaniel
    I am new to a lot of server configuration and options. I want to add extra PHP libraries/extensions to my server. Where do I add them? (I am on a CentOS 6.5 VPS.) For example, I want to add the phpseclib PHP library. Their website instructions are minimal: "Usage: This library is written using the same conventions that libraries in the PHP Extension and Application Repository (PEAR) used to be written in (current requirements break PHP4 compatibility). In particular, this library needs to be in your include_path:"

        <?php
        set_include_path(get_include_path() . PATH_SEPARATOR . 'phpseclib');
        include('Net/SSH2.php');
        ?>

    It tells me how to use it, but it doesn't tell me where to add the actual library files. Should I add it under /usr/local/lib, /usr/local/lib/php, or /usr/local/lib/php/pear? Or can I add it under public_html? Also, my VPS has several users under /home/... Is there a way to make the library available to only one user?
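
    A sketch of one common arrangement (the paths and archive name below are only examples, not CentOS requirements): unpack the library somewhere outside the web root and point PHP's include_path at it.

        # unpack the library outside the web root (example location; adjust to how the download is packaged)
        mkdir -p /usr/local/lib/php/phpseclib
        tar -xzf phpseclib.tar.gz -C /usr/local/lib/php/phpseclib --strip-components=1
        # then add that directory to include_path, either globally in php.ini, e.g.
        #   include_path = ".:/usr/local/lib/php:/usr/local/lib/php/phpseclib"
        # or per script with set_include_path(), as in the usage snippet above

    To limit it to a single account, the same set_include_path() call can instead point at a copy kept under that user's home directory (for example /home/someuser/php-lib/phpseclib), which keeps it out of every other user's include path.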

    Read the article

  • Let CGI-PHP load a non-default shared library.

    - by ralle
    In Apache2 I configured PHP as CGI in a virtual host:

        SetEnv PHPRC "/usr/local/php5.3"
        ScriptAlias /php5.3 "/usr/local/php5.3/bin"
        Action application/php5.3 /php5.3/php-cgi
        AddType application/php5.3 .php

    Everything works fine. Now I have some issues with the standard version of GD, because it restricts several hinting and anti-aliasing settings for fonts. Therefore I want to modify the GD source and create a new shared library. Since I don't want a modified library installed system-wide, I want only PHP to use that library. My question now: how can I change the Apache configuration so that PHP uses this new version of the library? Something like this does not work:

        ScriptAlias /php5.3 "LD_LIBRARY_PATH=/path/to/my/lib:$LD_LIBRARY_PATH /usr/local/php5.3/bin"
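
    One pattern commonly used for this (a sketch; the wrapper filename is an example) is to point the Action at a small wrapper script that sets LD_LIBRARY_PATH and then execs php-cgi:

        #!/bin/sh
        # /usr/local/php5.3/bin/php-wrapper (example path) -- make it executable
        LD_LIBRARY_PATH=/path/to/my/lib:$LD_LIBRARY_PATH
        export LD_LIBRARY_PATH
        exec /usr/local/php5.3/bin/php-cgi "$@"

    The Action line would then reference /php5.3/php-wrapper instead of /php5.3/php-cgi. Environment assignments inside a ScriptAlias path are not interpreted by Apache, which is why the one-line attempt has no effect.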

    Read the article

  • I'm a Subversion geek, why should I consider or not consider Mercurial or Git or any other DVCS?

    - by user2567
    I'm trying to understand the benefits of a distributed version control system (DVCS). I found Subversion Re-education and this article by Martin Fowler very useful.

    Mercurial and other DVCSs promote a new way of working on code, with changesets and local commits; it prevents merge hell and other collaboration issues. We are not affected by this, as I practice continuous integration, and working alone in a private branch is not an option unless we are experimenting. We use a branch for every major version, in which we fix bugs merged from the trunk.

    Mercurial allows you to have lieutenants. I understand this can be useful for very large projects like Linux, but I don't see the value for small, highly collaborative teams (5 to 7 people).

    Mercurial is faster, takes less disk space, and a full local copy allows faster log and diff operations. I'm not concerned by this either, as I haven't noticed speed or space problems with SVN, even with the very large projects I'm working on.

    I'm seeking your personal experiences and/or opinions from former SVN geeks, especially regarding the changesets concept and the overall performance boost you measured.

    UPDATE (12th Jan): I'm now convinced that it's worth a try.

    UPDATE (12th Jun): I kissed Mercurial and I liked it. The taste of his cherry local commits. I kissed Mercurial just to try it. I hope my SVN Server don't mind it. It felt so wrong. It felt so right. Don't mean I'm in love tonight.

    FINAL UPDATE (29th Jul): I had the privilege of reviewing Eric Sink's next book, called Version Control by Example. He finished convincing me. I'll go for Mercurial.

    Read the article

  • Cannot access folder locally, but can remotely

    - by Cylindric
    I'm having a peculiar problem on one of my servers at the moment, which seems to be related to authentication in some way, but I have no idea how to find the root of the problem. I have a folder on the server, D:\Somefolder\Logs. If I am connected to the server via an RDP terminal, so essentially "local", I cannot access that folder - I get access denied. If I am connecting from my machine to it using the share \\server\d$\Somefolder\Logs, I can access it. I'm logging in to both machines as the same user. Permissions on the folder seem quite simple: CREATOR OWNER, SYSTEM and Administrators. I am a Domain Admin, and Domain Admins are in the local "Administrators" group. It is also affecting things like access to SQL Server, so I don't think it's a simple folder-permissions thing. For example, I cannot connect SQL Management Studio to the local SQL instances using a domain account, but I can if I connect to them remotely.

    Read the article

  • Windows Xp, Svchost.exe connecting to different ips with remote port 445

    - by Coll911
    I'm using Windows XP Professional SP2. Whenever I start Windows, svchost.exe starts connecting to all the possible IPs on the LAN, from 192.168.1.2 to 192.168.1.200. The local port ranges from 1000-1099 and the remote port is 445. After it's done with the local IPs, it starts connecting to other random IPs. I tried blocking connections to port 445 using the local security policies, but it didn't work. Is there any way I could prevent svchost from connecting to these IPs without installing a firewall, since my PC slows down due to the load? I'd be thankful for any advice.
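
    Before trying to block anything, it helps to identify which service inside that svchost instance owns the connections (diagnostic commands only; replace 1234 with the PID that netstat reports):

        netstat -ano | findstr ":445"
        rem list the services hosted inside that svchost process
        tasklist /svc /fi "PID eq 1234"

    A sequential sweep of the whole LAN on port 445 is also a classic sign of worm activity on the machine itself, so a scan with an up-to-date malware tool is worth running before assuming this is normal Windows browsing traffic.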

    Read the article

  • Getting some perl script errors on execution of svnnotify

    - by user2474633
    I installed svnnotify 2.84 and Perl module 5.10 for Subversion 1.7.11 on Red Hat release 6, and I am using this command in the post-commit hook to get notified:

        svnnotify --repos-path "$1" --revision "$2" --from [email protected] \
            --to-regex-map [email protected]="branches/Test_branch12" \
            --smtp xxxxxx.com HTML::ColorDiff >> /tmp/notify.txt 2>&1

    Once the commit is successful I can see the errors below in the output file:

        Use of uninitialized value $_[0] in exec at /usr/local/share/perl5/SVN/Notify.pm line 2332.
        Can't exec "": No such file or directory at /usr/local/share/perl5/SVN/Notify.pm line 2332.
        Use of uninitialized value $_[0] in concatenation (.) or string at /usr/local/share/perl5/SVN/Notify.pm line 2332.
        Cannot exec : No such file or directory
        Child process exited: 512

    Can anyone help with this?
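
    One thing worth checking (an assumption based on the command shown, not a verified fix): HTML::ColorDiff appears as a bare trailing argument, whereas svnnotify normally selects that output class with --handler and needs --with-diff to have a diff to colourize. A sketch of the hook with those options (the svnnotify path is illustrative):

        #!/bin/sh
        REPOS="$1"
        REV="$2"
        /usr/local/bin/svnnotify --repos-path "$REPOS" --revision "$REV" \
            --from [email protected] \
            --to-regex-map [email protected]="branches/Test_branch12" \
            --smtp xxxxxx.com \
            --handler HTML::ColorDiff --with-diff >> /tmp/notify.txt 2>&1

    If the exec errors persist, they can also mean svnnotify cannot find the svnlook binary; its location can be given explicitly with --svnlook /usr/bin/svnlook.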

    Read the article

  • Windows Vista Backups?

    - by skaz
    I am trying to configure a Windows Backup on Vista but don't see some capabilities I would expect to be there. For one, it looks like I can only select a local drive or a network share. I want to use a local drive, but I want to use a subfolder of one of the drives. Must I really pick the root? As a workaround, I made a network share to the local drive, thinking I could then pick the network share. However, when I do this, I am prompted for credentials to access the share, and none work. Yet the share works in Explorer, and it works from other computers, so the access is configured correctly. Is there any way to do what I am trying to do? Thanks.

    Read the article

  • How to configure DNS BIND to work locally on one computer?

    - by user619656
    I want to make some changes to the BIND source code. In order to test those changes I want to be able to send queries to my local BIND server and have it use only the local zone files. I know how to write the zone files and, more or less, the named.conf file, but what should I put in /etc/resolv.conf? Currently resolv.conf contains the line "nameserver 192.168.0.1", which I guess is my router's IP address, so queries go through the router to my ISP. I want those queries to go to the local BIND server and be answered from the zone files I provided. Is there a way to do this with the resolv.conf file, or should I do something else?
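
    A minimal sketch of the change being asked about, assuming named is listening on the loopback address:

        # /etc/resolv.conf -- send all lookups to the locally running BIND
        nameserver 127.0.0.1

    Note that a DHCP client or NetworkManager may rewrite resolv.conf on the next lease renewal, so the entry may need to be made persistent with whatever tool manages that file on the distribution in use. Test queries can also be aimed at the local server explicitly, e.g. "dig @127.0.0.1 example.test" (using whatever test zone the zone files define), without touching resolv.conf at all.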

    Read the article

  • Cannot get script to run at startup (tried all the simple answers)

    - by Carey Head
    I have Ubuntu Desktop 12.04 LTS running great on an older Acer desktop. I want to use this machine as an in-home server for hosting Minecraft. The command to start the Minecraft server is "java -Xmx1024M -Xms1024M -jar minecraft_server.jar nogui", and that works great when I cd into the correct directory and execute it. I created a script to do this:

        #!/bin/bash
        cd /home/myuser/minecraft-server1
        java -Xmx1024M -Xms1024M -jar minecraft_server.jar nogui &
        cd /home/myuser/minecraft-server2
        java -Xmx1024M -Xms1024M -jar minecraft_server.jar nogui &
        exit 0

    I made this .sh file executable, and it too runs great when I start it manually from the terminal. The problem I'm having is getting it to execute at startup. My user account on this machine is set to auto-login. I have tried the following:

    - Adding "sh /home/myuser/myscript.sh" to "Startup Applications" (nothing happens on reboot).
    - Adding the same to /etc/rc.local (nothing happens on reboot). I even tested this one by running /etc/rc.local from the terminal, and it executed great -- just not at boot/auto-login.
    - Adding the lines from the script directly to rc.local (nothing happens on reboot).

    I can't help but think there's something I'm missing. The script executes great when run manually, but will not run at boot/auto-login. Many thanks in advance.
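
    Two things that often explain exactly this symptom (offered as suggestions, not a confirmed diagnosis): rc.local and Startup Applications run with a very minimal environment, so "java" may not be on PATH there, and rc.local runs as root rather than as the login user. A hedged sketch of a cron-based alternative that sidesteps both:

        # in the user's crontab (edit with: crontab -e, as myuser); the log path is just an example
        @reboot /home/myuser/myscript.sh >> /home/myuser/minecraft-boot.log 2>&1

    Inside the script, using the full path to the java binary (the one "which java" reports in a normal shell) removes the PATH dependency as well.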

    Read the article

  • Mac OS X 10.6 executable not found without full path

    - by Danack
    I just installed Apache via MacPorts. It seems that my Mac was absolutely confused about which version of the Apache executables to run. After moving the Apache executables that ship with the Mac to a directory that is not listed in the PATH variable, trying to run the httpd built by MacPorts fails even though the correct directory (/opt/local/apache2/bin) is listed in the PATH variable. If I navigate to the directory /opt/local/apache2/bin and type the command httpd I still get the error message -bash: httpd: command not found If I type the command with the full path /opt/local/apache2/bin/httpd it works fine. I've run the command alias to see if something was clashing but the only thing listed is: alias wget='curl -O' How do I find what is intercepting the command and preventing the executable being found in the directory, even when I'm inside the same directory? By the way, the httpd file is executable: -rwxr-xr-x 1 root admin 442496 9 May 2012 httpd
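
    A few quick checks (diagnostic only) that separate a PATH problem from a stale shell lookup cache:

        # show every match the shell can see, in PATH order
        type -a httpd
        echo $PATH
        # clear bash's cached command locations after executables have been moved around
        hash -r

    If "type -a" prints nothing even though /opt/local/apache2/bin is in $PATH, it is worth checking that PATH is exported in the shell actually being used (e.g. ~/.profile vs ~/.bashrc) and that the httpd there is executable. Note that typing a bare "httpd" while sitting inside the directory still relies on PATH, because the shell does not search the current directory.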

    Read the article

  • I can not cd "LaunchAgents" on macbook

    - by why
    After installing mongodb on my MacBook Pro, it tells me:

        If this is your first install, automatically load on login with:
          cp /usr/local/Cellar/mongodb/1.6.3-x86_64/org.mongodb.mongod.plist ~/Library/LaunchAgents
          launchctl load -w ~/Library/LaunchAgents/org.mongodb.mongod.plist
        If this is an upgrade and you already have the org.mongodb.mongod.plist loaded:
          launchctl unload -w ~/Library/LaunchAgents/org.mongodb.mongod.plist
          cp /usr/local/Cellar/mongodb/1.6.3-x86_64/org.mongodb.mongod.plist ~/Library/LaunchAgents
          launchctl load -w ~/Library/LaunchAgents/org.mongodb.mongod.plist
        Or start it manually:
          mongod run --config /usr/local/Cellar/mongodb/1.6.3-x86_64/mongod.conf

    But after I copy org.mongodb.mongod.plist to ~/Library/LaunchAgents, it tells me:

        launchctl load -w ~/Library/LaunchAgents/org.mongodb.mongod.plist
        launchctl: Couldn't stat("/Users/liuqiang/Library/LaunchAgents/org.mongodb.mongod.plist"): Not a directory

    Also, I cannot cd into "~/Library/LaunchAgents", but I can ls it! "~/Library/LaunchAgents" is a strange directory on my Mac.
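
    A quick check that usually explains a "Not a directory" error on a path like this (diagnostic commands only):

        # see what ~/Library/LaunchAgents actually is -- a plain file here would explain the error
        ls -ld ~/Library/LaunchAgents
        file ~/Library/LaunchAgents

    If those show a regular file rather than a directory, moving it aside (e.g. "mv ~/Library/LaunchAgents ~/LaunchAgents.bak") and recreating the directory with mkdir should let the cp and launchctl steps from the install message work. This is a guess based on the error text, not something confirmed from the question.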

    Read the article

  • installer can't find partition, but fdisk can find them

    - by pxd
    I'm installing ubuntu 12.04, my system had install 2 system -- winxp and ubuntu 10.10. Now, I want to update ubuntu to 12.04. I use usb disk to install 12.04. But, the installer can't not find my partition in my harddisk. But, the fdisk can find them. Can you help me? How to do? ubuntu@ubuntu:~$ sudo lshw -short H/W path Device Class Description system HP 2230s (NN868PA#AB2) /0 bus 3037 /0/9 memory 64KiB BIOS /0/0 processor Intel(R) Core(TM)2 Duo CPU T6570 @ 2.10GHz /0/0/1 memory 2MiB L2 cache /0/0/3 memory 32KiB L1 cache /0/0/0.1 processor Logical CPU /0/0/0.2 processor Logical CPU /0/2 memory 32KiB L1 cache /0/4 memory 2GiB System Memory /0/4/0 memory SODIMM [empty] /0/4/1 memory 2GiB SODIMM DDR2 Synchronous 800 MHz (1.2 ns) /0/100 bridge Mobile 4 Series Chipset Memory Controller Hub /0/100/2 display Mobile 4 Series Chipset Integrated Graphics Controller /0/100/2.1 display Mobile 4 Series Chipset Integrated Graphics Controller /0/100/1a bus 82801I (ICH9 Family) USB UHCI Controller #4 /0/100/1a.1 bus 82801I (ICH9 Family) USB UHCI Controller #5 /0/100/1a.2 bus 82801I (ICH9 Family) USB UHCI Controller #6 /0/100/1a.7 bus 82801I (ICH9 Family) USB2 EHCI Controller #2 /0/100/1b multimedia 82801I (ICH9 Family) HD Audio Controller /0/100/1c bridge 82801I (ICH9 Family) PCI Express Port 1 /0/100/1c.1 bridge 82801I (ICH9 Family) PCI Express Port 2 /0/100/1c.1/0 wlan1 network PRO/Wireless 5100 AGN [Shiloh] Network Connection /0/100/1c.2 bridge 82801I (ICH9 Family) PCI Express Port 3 /0/100/1c.4 bridge 82801I (ICH9 Family) PCI Express Port 5 /0/100/1c.5 bridge 82801I (ICH9 Family) PCI Express Port 6 /0/100/1c.5/0 eth1 network 88E8072 PCI-E Gigabit Ethernet Controller /0/100/1d bus 82801I (ICH9 Family) USB UHCI Controller #1 /0/100/1d.1 bus 82801I (ICH9 Family) USB UHCI Controller #2 /0/100/1d.2 bus 82801I (ICH9 Family) USB UHCI Controller #3 /0/100/1d.7 bus 82801I (ICH9 Family) USB2 EHCI Controller #1 /0/100/1e bridge 82801 Mobile PCI Bridge /0/100/1f bridge ICH9M LPC Interface Controller /0/100/1f.2 scsi0 storage 82801IBM/IEM (ICH9M/ICH9M-E) 4 port SATA Controller [AHCI mode] /0/100/1f.2/0 /dev/sda disk 500GB WDC WD5000BEVT-0 /0/100/1f.2/0/1 /dev/sda1 volume 48GiB Windows NTFS volume /0/100/1f.2/0/2 /dev/sda2 volume 416GiB Extended partition /0/100/1f.2/0/2/5 /dev/sda5 volume 97GiB HPFS/NTFS partition /0/100/1f.2/0/2/6 /dev/sda6 volume 198GiB HPFS/NTFS partition /0/100/1f.2/0/2/7 /dev/sda7 volume 27GiB Linux filesystem partition /0/100/1f.2/0/2/8 /dev/sda8 volume 93GiB Linux filesystem partition /0/100/1f.2/1 /dev/cdrom disk CDDVDW TS-L633M /0/1 scsi6 storage /0/1/0.0.0 /dev/sdb disk 15GB STORAGE DEVICE /0/1/0.0.0/0 /dev/sdb disk 15GB /0/1/0.0.0/0/1 /dev/sdb1 volume 14GiB Windows FAT volume /1 power HZ04037 ubuntu@ubuntu:~$ ubuntu@ubuntu:~$ sudo fdisk -l Disk /dev/sda: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x31263125 Device Boot Start End Blocks Id System /dev/sda1 * 63 102277727 51138832+ 7 HPFS/NTFS/exFAT /dev/sda2 102277728 976784129 437253201 f W95 Ext'd (LBA) /dev/sda5 102277791 307078127 102400168+ 7 HPFS/NTFS/exFAT /dev/sda6 307078191 724141151 208531480+ 7 HPFS/NTFS/exFAT /dev/sda7 724142080 781459455 28658688 83 Linux /dev/sda8 781461504 976771071 97654784 83 Linux Disk /dev/sdb: 15.9 GB, 15931539456 bytes 64 heads, 32 sectors/track, 15193 cylinders, total 31116288 
sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0009eb92 Device Boot Start End Blocks Id System /dev/sdb1 * 32 31115263 15557616 c W95 FAT32 (LBA) The Ubuntu 12.04 installer can't find the partitions on my hard disk; it only finds the device /dev/sda. (Sorry, I'm a new user, so I can't post an image.)

    Read the article

  • Setting up a LAMP VM server for Development and Testing?

    - by TdotThomas
    Info: I would like to set up a VM server on my local computer which will serve pages in exactly the same way as my current hosting (but only to me on my local computer). I currently pay a big web hosting company to host my website and web store, and they are doing a great job, but I would like to be able to work on my web site and its corresponding MySQL DB, HTML, and PHP code without being at risk of completely messing something up on the live servers. My current plan of action:

    1. Set up a VM web server with Debian, MySQL, PHP, Apache.
    2. Copy the web store (PHP/HTML) code to the VM server.
    3. Copy my current MySQL databases from my hosting provider and install them on the VM server.
    4. Modify and test new features on the VM server.
    5. Upload the MySQL DB and HTML/PHP code back to the web host's server, where it should work as before but with the new modifications.

    Questions: I'm pretty sure I have steps one and two down correctly, but I can't for the life of me figure out how to proceed next, so here are my questions. I have my /etc/hosts file set up so www.MySite.test resolves to the IP address of the local VM web server. Once I import my PHP/HTML files and MySQL dump, what's the best way to work around the fact that all of my files and DBs will reference www.MySite.com? I can export my MySQL DBs, but do I also have to export my MySQL users and passwords to access those DBs, or are those coded into my HTML/PHP code?
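
    On the database part of this, a minimal sketch (database and account names are placeholders): MySQL users and passwords live in the server's own mysql schema, not in a per-database dump, so they have to be recreated on the VM (or the credentials in the site's PHP config changed) in addition to importing the data.

        # on the live host: dump the application database
        mysqldump -u appuser -p appdb > appdb.sql
        # on the VM: recreate the database and the account the PHP code expects, then load the dump
        mysql -u root -p -e "CREATE DATABASE appdb"
        mysql -u root -p -e "CREATE USER 'appuser'@'localhost' IDENTIFIED BY 'secret'"
        mysql -u root -p -e "GRANT ALL PRIVILEGES ON appdb.* TO 'appuser'@'localhost'"
        mysql -u root -p appdb < appdb.sql

    If the /etc/hosts entry points www.MySite.com itself at the VM's IP (instead of inventing www.MySite.test), the copied code and DB references can usually be left untouched, at the cost of having to remove that hosts entry whenever the live site needs to be reached again.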

    Read the article

  • Can't connect nonlocally after 12.10 upgrade

    - by user101815
    I've just upgraded one of my systems from 12.04 to 12.10. Now I can't connect on that system beyond my local network. Connections within the local network seem to work fine, and I can make nonlocal connections from other machines (like the one I'm asking this question from). I suspect that some routing information has been messed up, but I don't know where to look for it. It's not a nameserver problem -- pinging outside sites by their IP addresses doesn't work either. I have another laptop next to this one, also running Kubuntu 12.10. On the one that can't connect, arp produces no output. On the other one, it produces 192.168.0.1 ether 00:23:69:fa:ce:ae C wlan0 On the working machine, the output of netstat starts with some tcp entries. On the nonworking one, those entries are absent. I asked this question on the Ubuntu forum but haven't gotten any answers there. One further complication: since the troublesome machine has no outside connection, it's extremely difficult to download anything to it. For what it's worth, "ping 8.8.8.8" produces "connect: Network is unreachable". Update: after a lot of fiddling, I have my external world back. I don't know what the key action was, but the first indication of progress was that "ping 8.8.8.8" worked. At that point I still didn't have a working nameserver, so external URLs didn't work. But I did this (based on an online post, of course): sudo dpkg-reconfigure resolvconf and answered Yes to all prompts. That did the trick!! Apparently my problem was unique, or close to it, since I couldn't find any online references to it: local net working, remote net not working, including explicit IP addresses. So I suppose that if no one else has this problem, no one cares about the solution!!

    Read the article
