Search Results

Search found 10280 results on 412 pages for 'remote shutdown'.


  • Win7 taskbar freezes on startup for about 1-2 mins

    - by Mike
    Running Win7 64-bit for about 4 months now; never had this problem, and I didn't install anything new recently. When I boot up I can't do anything in the taskbar: it's frozen for about 1-2 minutes, then everything is normal. I can right-click on my desktop and move my mouse around. This randomly started happening a couple of days ago after a reboot. I have a 3.2 GHz quad-core, an SSD, 4 GB of RAM, etc., and it usually starts up quickly.

    After some troubleshooting (including running antivirus and anti-malware scans), it doesn't appear to be software related, but rather services related. I can boot into safe mode and safe mode with networking just fine. I can also boot normally with all my regular software loading at startup, BUT with all my services turned off.

    Now the odd part. When I use msconfig to disable all the services at startup and then go through ticking them back on 5-10 at a time and rebooting, the results seem somewhat random. With everything ticked on from "Application Experience" halfway down to about "Quality Windows Audio Video Experience", I can boot without the 1-2 minute freeze. Then I start ticking the stuff below that, from a couple of Remote Access services to Smart Card, Task Scheduler, etc., and the weird part is that sometimes it will freeze and sometimes it won't; I can't narrow it down. If it freezes, I'll boot into safe mode, turn the services I just enabled back off, and reboot normally, but it will freeze again, which makes no sense because that exact configuration had just booted without freezing. I got frustrated enough that I backed up and wiped my hard drive (formatted and everything) and reinstalled Win7, but when I booted up, the freeze happened again. Any ideas? Thanks in advance.
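
    When the freeze is intermittent and service related, a boot trace is often quicker than bisecting services in msconfig. A hedged sketch, assuming the Windows Performance Toolkit is installed (the result path is a placeholder, not from the question):

        rem Reboots the machine and records service/driver init times during boot
        xbootmgr -trace boot -traceFlags BASE+CSWITCH -resultPath C:\boottrace
        rem Summarise per-service initialisation times from the captured trace
        xperf -i C:\boottrace\boot_BASE+CSWITCH_1.etl -o C:\boottrace\services.txt -a services

    The services summary usually names the one service (or its dependency) that sits there for the missing 1-2 minutes.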

  • Install KVM-based Windows 2008 remotely over SSH on a headless, no-graphics Ubuntu 10.04 server?

    - by taazaa
    Hi, I have a Dell server at a remote data center with Ubuntu 10.04 as the host. It is a minimal install with the necessary virtualization packages. There is no X and the machine is headless. I have the Win2008 DVD in the machine and want to install it remotely. I tried:

        virt-install --connect qemu:///system -n vmwin2k8 -r 1024 --disk path=server2k8.qcow2,size=50 --cdrom /dev/sr0 --vnc --noautoconsole --os-type windows --os-variant win2k8

    The qcow2 image gets created; however, I don't understand how to connect to see the install via VNC. This is my first time doing it, so it may be trivial or may not be possible. Remotely I have a Win 7 machine with PuTTY and RealVNC Viewer. Where is the graphic output of VNC going? Do I have to have a VNC server running on the host or some other machine and then connect to it from my VNC client? Please let me know or point me in the right direction. I have been searching the web for several days to figure out how this is supposed to work. Thanks!
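
    For what it's worth, with --vnc libvirt itself acts as the VNC server for the guest, usually bound to 127.0.0.1 on the host, so nothing extra needs installing. A minimal sketch of one common way to reach it (the display number and user name below are assumptions):

        # On the host: find which VNC display the guest was given
        virsh vncdisplay vmwin2k8        # prints e.g. :0, which means TCP port 5900
        # From the Windows side, tunnel that port over SSH. In PuTTY:
        # Connection > SSH > Tunnels, source port 5900, destination localhost:5900.
        # From any OpenSSH client the equivalent is:
        ssh -L 5900:localhost:5900 user@remote-host
        # Then point RealVNC Viewer at localhost:5900 to watch the Windows installer.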

  • IIS 7: launch unique site instance per host name

    - by OlduwanSteve
    Is it possible to configure IIS 7 so that a single site with multiple bindings (or wildcard bindings) will launch a unique instance for each unique host name?

    To explain why this is desirable: we have an application that retrieves its configuration from a remote system. The behaviour of the application is governed by this configuration and not by the 'web.config'. The application uses its host name as a key to retrieve the configuration. Currently it is a manual process to create an identical IIS site for each instance of the application, differing only by the bindings.

    My thought, if it were possible, is that it would be nice to have one IIS site that effectively works as a template for an arbitrary number of dynamic sites. Whenever it is accessed by a unique host name, a new instance of the site would be launched, and all further requests to that host name would go to that instance, just as though I had created the site by hand.

    I use IIS regularly, but only for fairly straightforward site hosting. I'd like to know if this could be configured with vanilla IIS 7, but would also welcome answers that require a plugin or 3rd-party product. Programming/architectural suggestions about changes to the app wouldn't really be appropriate for serverfault.
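
    Until a dynamic per-host-name mechanism turns up, the manual "clone the template site" step can at least be scripted. A hedged sketch using appcmd (site name, path and host name are placeholders, not from the question):

        REM Create an identical site for a new host name, differing only by its binding
        set HOST=app1.example.local
        %windir%\system32\inetsrv\appcmd add site /name:"tpl-%HOST%" /physicalPath:"D:\sites\template" /bindings:http/*:80:%HOST%

    Run once per new host name, this keeps the sites byte-for-byte identical apart from the binding, which is what the manual process is doing today.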

  • Cannot connect to a shared network drive

    - by dublintech
    I am using Windows 7 and I cannot connect to a shared network drive on another machine. I can ping the machine. I can connect to it with Remote Desktop. The machine is on the same subnet. My friend with the exact same laptop as me (on the same network and workgroup) can connect to the shared folder. The machine I am trying to connect to and my friend's machine can both see shared folders on my machine. I also cannot see shared folders on my friend's laptop. When I select diagnose, Windows tells me nothing useful. When I select "see details" on the error pop-up, I see: Error code: 0x80004005 (Google doesn't help much). I can nbtstat -a the machine that has the shared folder. The same happens with my firewall turned off. I have ensured my Windows 7 has all updates. I run Security Essentials to ensure my laptop is clean. I ran CCleaner to clean up my registry. Same error. I have tried with my laptop on both wireless and Ethernet. As you can imagine, I am banging my head against the wall on this one.
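
    A few hedged checks that sometimes narrow 0x80004005 down (host and share names below are placeholders):

        rem Does share enumeration itself fail, or only mapping?
        net view \\TARGETHOST
        rem An explicit mapping often returns a more specific error than Explorer does
        net use Z: \\TARGETHOST\share
        rem Is the SMB port reachable at all?
        telnet TARGETHOST 445

    If net view fails while port 445 is open, the cause is often client-side: stale entries in Control Panel > Credential Manager, or a mismatched "LAN Manager authentication level" (NTLMv2) policy between the two machines.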

  • Friendly Intranet Addresses

    - by Jmyster
    Relatively new to IIS. I'm attempting to set up multiple sites on one server in my intranet. The server already has SharePoint installed on it with a binding of *:80, so when I type //ServerName I get the SharePoint home page. I get how that works. I set up a new site in IIS and set its binding to *:30015. On a remote machine, if I type //ServerName:30015 in a web browser, I get the new site. Awesome, working as intended.

    My questions: Can I, and how do I, set it up so that I can type //DivisionAppName or //Division.AppName and have it resolve to //ServerName:30015? Is this something I have to register with my company's DNS server? I hope not; getting my corporate IT to assist is a nightmare.

    What I tried: I added bindings with the host name filled in as both DivisionAppName and Division.AppName on port 30015, but that doesn't seem to work.
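
    One hedged observation: a browser asked for http://Division.AppName always connects on port 80, so a host-name binding on port 30015 can never be hit without typing the port. The usual pattern is to bind the host name on port 80 alongside SharePoint (a specific host-header binding should win over the *:80 catch-all) and make the name resolve to the server, via DNS or a hosts entry for testing. Site and host names below are placeholders:

        REM Host-header binding on port 80; SharePoint's *:80 wildcard still serves other names
        %windir%\system32\inetsrv\appcmd set site /site.name:"NewSite" /+bindings.[protocol='http',bindingInformation='*:80:division.appname']

        REM Quick test from one client without touching DNS: add to
        REM C:\Windows\System32\drivers\etc\hosts a line like
        REM 10.0.0.5    division.appname

    For everyone on the intranet to use the name, a DNS CNAME/A record pointing at the server is unavoidable, but it is a one-off request.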

  • rsync via cron. How do I enable logging?

    - by tetranz
    Hi all. I'm backing up a remote server to another computer using rsync. In cron.daily I have a file with this:

        rsync -avz -e ssh [email protected]:/ /mybackup/

    It uses a public/private key pair to log in. This seems to work well most of the time; however, I've (foolishly) only ever really checked it by looking at the dates on some important files (MySQL dumps) that I know change every day. Obviously, an error could occur after that file. Sometimes it fails: when I run it manually, something like "client reset" sometimes happens. What is the best way to log it so that I can check with certainty whether it completed or not? The cron log doesn't indicate any errors. I haven't tried it, but the rsync man page on the oldish version of CentOS on the backup machine doesn't show the --log-file option. I guess I could redirect stdout to a file, but I don't really want to know about every file; I just want to know whether it all worked or not. Thanks
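
    Even without --log-file, rsync's exit status says whether the run completed. A minimal sketch for the cron.daily script (the log path and mail address are assumptions; -v is dropped so the log only records errors and the summary):

        #!/bin/sh
        # cron.daily backup: keep rsync's output in a log and record the overall result
        LOG=/var/log/backup-rsync.log
        date >> "$LOG"
        rsync -az -e ssh [email protected]:/ /mybackup/ >> "$LOG" 2>&1
        STATUS=$?
        if [ $STATUS -eq 0 ]; then
            echo "backup OK" >> "$LOG"
        else
            echo "backup FAILED, rsync exit code $STATUS" >> "$LOG"
            # Optional: mail yourself only on failure (address is a placeholder)
            tail -20 "$LOG" | mail -s "backup FAILED" admin@example.com
        fi

    A nonzero exit code covers the "client reset" case too, since rsync exits nonzero on any transfer error.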

  • Linux Has Become Very Slow Dealing With Large Data

    - by Kohjah Breese
    Last year I bought a computer for around $1,800, so it is relatively high-end. When I first got it I was particularly pleased at how quickly it dealt with large MySQL queries, imports and exports. But somewhere along the way something has gone wrong, and I am not sure how to diagnose the problem. Any job that involves processing large amounts of data, e.g. gzipping files of c. 1 GB+ or UPDATEs on large MySQL tables, has become very slow. I just performed an intensive ALTER statement on a 240,000,000-row table on a remote server, which is lower spec; this took about 10 minutes. However, performing the same query on a 167,000,000-row table on my computer went fine until it hit 860 MB; now it is only writing about 1 MB every 15 seconds. Does anyone have any advice on debugging the issue? I am using Linux Mint (based on Ubuntu 12.04). The home partition is encrypted, which really slows down gzip. I have noticed the swap is barely used, but am not sure if that is because there is more than enough RAM. The filesystem is ext4. The MySQL server is on a separate hard drive, but it was fine when I first installed it. Other than the above issues, there are no other problems with it. I am going to install a fresh Ubuntu on the 4th hard drive to see if that is any different.
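
    Hedged suggestion: "fast at first, then a crawl at a fixed point" often points at the disk rather than MySQL. Some first checks (device names are assumptions):

        # Watch utilisation and await while the slow ALTER runs;
        # ~100% util with a high await column points at the disk itself
        iostat -x 5
        # Check the drive's own health counters, especially
        # Reallocated_Sector_Ct and Current_Pending_Sector
        sudo smartctl -a /dev/sdb
        # Crude sequential-write throughput test, bypassing MySQL
        dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync

    A drive silently remapping failing sectors shows exactly this pattern of throughput collapsing after some amount of I/O, which the planned fresh install on another disk would also confirm or rule out.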

  • Website is not accessible from a server which is using a proxy

    - by Bhoot
    I hosted a website on a Windows 2008 R2 server that runs in a private domain. I set up bindings for ports 80 and 443 (HTTP and HTTPS respectively) and created inbound rules for ports 80 and 443 in Windows Firewall. After doing all this, I am still not able to access my website from a remote machine. IE says "Internet Explorer cannot display the webpage"; Chrome says "Oops! Google Chrome could not find xxxxxx". I tried accessing the website by IP address, but no luck. I tried to ping the server, but it says "TTL expired in transit". I then found some information suggesting I check whether the server is using some kind of proxy in between: www.getip.com shows one IP address for me, but ipconfig /all gives a different one. Is it really a problem if we use a proxy? I am not sure I have concluded this correctly, but is there any way to resolve the issue?

    Update: I figured it out. I have to call the website by its external IP address; because of the proxy settings, I was not able to reach the website by the server's own IP or machine name.

  • Need a solution for network/servers

    - by rehanplus
    Dear all, please help me. I just joined a new hospital and want some help managing my network. There are some requirements.

    Current network: There is a DSL connection terminated on a Linux proxy, which is connected to D-Link layer-2 switches, providing internet to more than 200 PCs (this would increase to 1,500 in a couple of months). The D-Link switches are not configured yet. There is also one database server, a report server and an application server. In the near future the application should be accessed by local users as well as remote users from the internet via our web server. We also have a file-sharing server, and all these servers, databases and PCs are on a single subnet.

    Required network: All I want is to secure my network from outside access, allowing only specific users in via the web application: they will submit their records for the patient card and appointment facility through the application, without touching our other network resources. Secondly, in-house users also need to access the same application, and the internet, but they must have some unique identity and rights (e.g. the finance and lab departments have only limited access to the application). A sketch of the kind of gateway rules involved follows below.

    Notes: Should I create VLANs or break up the subnet? Will having a firewall solve my issues? Is a router needed in this type of scenario? Currently all access is restricted at the Linux proxy. Thanks.
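
    For illustration only, here is a hedged sketch of the kind of rules the existing Linux proxy/gateway could enforce once the servers sit on their own subnet; interface names and addresses are assumptions, not the hospital's real ones:

        # eth0 = DSL/internet side, eth1 = LAN; web server assumed at 10.0.2.10
        iptables -P FORWARD DROP
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        # Outside users reach only the web application, nothing else
        iptables -A FORWARD -i eth0 -o eth1 -p tcp -d 10.0.2.10 --dport 443 -j ACCEPT
        # LAN users may go out to the internet
        iptables -A FORWARD -i eth1 -o eth0 -s 10.0.0.0/16 -j ACCEPT
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    Per-department rights (finance, lab, etc.) would then be a VLAN question for the D-Link switches plus matching per-subnet rules here, rather than something the proxy alone can solve.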

  • How to configure SVN server for my own project

    - by user1729952
    I work with a team on an Android project using the Eclipse IDE. We need to use version control, and we need to access the repo remotely. I have no experience using or installing servers, and only a little experience using SVN on Windows, and I still have problems connecting to it remotely. My IP address changes, so I need to use no-ip.com for a stable host name; however, I failed to make VisualSVN Server work with No-IP. What options do I have? The best thing would be to get it working on Windows. If not, I have another computer running Ubuntu 12.04.1; I have installed apache2svn on it and tried to get it to work. SVN is installed, and I went through tutorials to configure the access protocols, but I can't figure out how to access it remotely from another computer. Can someone tell me the steps I need to get this job done, so I can do my own research on each step? (Please explain each step, as there may be some keywords or phrases I'm not familiar with.)

    EDIT: Also worth noting that my company has a website hosted on a remote server (running Linux). Can we use it as a repo, and how?
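
    A minimal sketch of the usual Ubuntu setup with Apache and mod_dav_svn (repository path, user name and host name are placeholders):

        sudo apt-get install apache2 subversion libapache2-svn
        sudo svnadmin create /var/svn/myproject
        sudo chown -R www-data:www-data /var/svn/myproject
        # Create the first account; further users: same command without -c
        sudo htpasswd -cm /etc/apache2/dav_svn.passwd alice

        # /etc/apache2/mods-available/dav_svn.conf
        <Location /svn>
          DAV svn
          SVNPath /var/svn/myproject
          AuthType Basic
          AuthName "Subversion"
          AuthUserFile /etc/apache2/dav_svn.passwd
          Require valid-user
        </Location>

        sudo service apache2 restart

    With port 80 on the router forwarded to the Ubuntu box, teammates would then check out with something like svn co http://your-noip-hostname/svn myproject. The company web server could host a repo the same way, but only if you have root/SSH access there to install and configure Subversion.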

  • What is wrong with my home network? (Routing and connection issues)

    - by David
    I have a corporate laptop that was provided to me by a client, and I'm having some rather odd difficulties with it on my home network. When I first brought the machine home it behaved like any other laptop: once it was connected to the network it was assigned an IP address and I could remote into it just fine using the machine name. Lately, though, whenever I put this laptop on my network I am not able to ping or RDP into the machine, as the host name doesn't resolve properly. I can, however, see the device and its assigned IP address clearly in my router firmware. It gets even stranger: when I try to ping the IP address listed in my router, I see that it's actually pinging my own machine (screenshot of this very odd event below). This has driven me crazy to the point that I have actually replaced my router (it was behaving oddly in other ways), and I'm continuing to have these problems; the above ping capture is from the new router. I am now using a NetGear R7000 Nighthawk and I haven't customized any of the router's networking settings yet (installed yesterday). I would appreciate any advice and would be happy to provide further diagnostic information. Networking isn't my strong suit, so I'm not even sure where to begin unraveling this thing.
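
    A hedged guess: getting replies from your own machine usually means the name, not the address, was resolved locally from a stale cache or a hardcoded entry. Worth running on the machine doing the pinging (all standard Windows commands):

        rem Is there a cached record resolving the laptop's name to your own address?
        ipconfig /displaydns
        ipconfig /flushdns
        rem Purge and reload the NetBIOS name cache
        nbtstat -R
        rem Check for hardcoded (e.g. corporate-pushed) entries
        type C:\Windows\System32\drivers\etc\hosts

    Corporate laptops also sometimes carry VPN or endpoint software that rewrites name resolution, which would survive a router swap exactly as described.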

  • I'm having trouble getting my server to appear online.

    - by JMRboosties
    Total newb question, I'm sure. First I installed WAMP (http://www.wampserver.com), and I was able to access my pages from other computers on my router's network and from the virtual device used to debug Android programs (the purpose of my having a server). This functionality failed, however, at some point over the past few days: while my own browser displays the pages just fine, other computers, my Android phone (on our room's wifi) and my virtual device are no longer able to connect to my pages. I had not made any changes to the settings. I uninstalled WAMP and installed EasyPHP, but the problem was not resolved. I know this is rather vague, but does anyone have an idea of what may have happened? I forwarded both port 80 (I know it's the default HTTP port; I did it just to be safe) and now port 8888, which EasyPHP uses. I turned off the firewall on my hosting computer for good measure. I cannot access my pages from either remote computers or computers using my router. Any ideas you may have on how to resolve this would be awesome, thanks a lot. And if you need any more info please tell me.
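
    Two hedged checks that usually split this problem in half (the IP and port below are examples, not yours):

        rem On the hosting machine: is the server listening on all interfaces, or only 127.0.0.1?
        netstat -an | findstr :8888
        rem From another machine: does the TCP connection open at all?
        telnet 192.168.1.10 8888

    If netstat shows 127.0.0.1:8888 rather than 0.0.0.0:8888, the web server is bound to localhost only (in Apache that's the Listen directive in httpd.conf, and some EasyPHP versions default to it), which would explain why only the hosting machine can browse the pages.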

  • How do I change the default ftp folder in Mac OS X 10.6?

    - by Wild_Eep
    I'm running WordPress 2.9.1 from a Mac running 10.6.3. WordPress is installed to the /Library/WebServer/Documents folder. WordPress has a feature called Auto Update. Clicking an auto update button will download and install updated versions of the WordPress software, or third-party plugin tools. It's a convenient way to keep things up to date. WordPress uses FTP to download the files. I've enabled FTP and set up a user account and opened the requisite ports in my firewall for FTP traffic. This doesn't seem to be enough for my self-hosted installation, though. I'm sure this feature was originally designed for someone who has access to a remote shared webserver, and that it's merely a configuration challenge related to the FTP setup. I feel that if I can adjust the initial directory that the FTP service presents to the AutoUpdate feature, everything else will work properly. So, my question is, how do I adjust what folder is presented when a given user connects to a Mac running 10.6.3 via FTP?
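
    If it helps: on 10.6 the stock ftpd drops each user into their home directory, so one hedged approach is to point a dedicated FTP account's home at the WordPress folder (the account name is a placeholder):

        # Repoint the FTP account's home directory at the WordPress install
        sudo dscl . -create /Users/wpftp NFSHomeDirectory /Library/WebServer/Documents
        # Make sure that account can write where Auto Update expects to replace files
        sudo chown -R wpftp /Library/WebServer/Documents

    Alternatively, listing the account in /etc/ftpchroot confines it to its home directory; either way, WordPress's Auto Update should then land in the right place.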

  • Looking for a VCS wrapper that tracks system file changes across the whole *nix OS and sends diffs through email

    - by nextus
    I need some software that watches custom directories across the whole OS (e.g. /etc) and alerts me if someone edits any file inside. Additionally, this tool must automatically commit and push the changes to a backup server, so I can easily determine when a specific change to a specific file was made. I'm using cvsbackup right now, but I want to create or find something more modern. I think using git as the VCS is a great idea: I could have a local repository and easily revert changes to my configuration files, and pushing changes to a remote repository would help me recover my configuration files when the server fails.

    It doesn't seem difficult to write a wrapper around git, but there are a lot of problems. For example, I need to track custom directories such as /usr/local/nginx/ and /etc/, so the destination point for my git repository is /. I don't need to track the other directories, so I have to write an unwieldy whitelist-style .gitignore:

        *
        !.gitignore
        !/etc/
        !etc/*
        !/usr
        /usr/*
        !/usr/local
        /usr/local/*
        !/usr/local/nginx
        !/usr/local/nginx/*

    It's very daunting and prone to error, so maybe it's a good idea to create an intermediate file that the wrapper reads and converts to .gitignore format. Additionally, I don't want to keep my .git folder on the / partition, so I need to set appropriate GIT_DIR and GIT_WORK_TREE variables for git. Are there any ready-to-use tools for this task? I haven't found any, but I can't believe that no one needs this feature.
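
    For the wrapper itself, a minimal sketch of the cron-driven commit-push-and-mail cycle described above (the repository path, remote name and mail address are assumptions):

        #!/bin/sh
        # Periodic cron job: commit any config drift, mail the diff, push to backup
        export GIT_DIR=/var/backups/sysconfig.git
        export GIT_WORK_TREE=/
        cd / || exit 1
        # Anything to record? (the whitelist .gitignore limits what git sees)
        if [ -n "$(git status --porcelain)" ]; then
            git add -A
            git diff --cached | mail -s "config change on $(hostname)" admin@example.com
            git commit -q -m "auto-commit $(date -u '+%Y-%m-%dT%H:%M:%SZ')"
            git push -q backup master
        fi

    Worth noting too that etckeeper does almost exactly this for /etc (auto-commits, git support), though it does not cover arbitrary extra directories like /usr/local/nginx out of the box.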

  • How to ensure a local file is up to date or ahead (Dropbox sync) before TrueCrypt auto-mounts it?

    - by user620965
    There are a lot of tutorials out there stating that Dropbox's built-in encryption is not secure enough. Those tutorials recommend syncing a TrueCrypt container file, so that all files in it are securely encrypted. This setup is known to be limited: you can NOT have the TrueCrypt container mounted in more than one location at the same time. If you change the contents of the container in more than one location at a time, this setup produces a conflict on the container file in Dropbox, resulting in one container file per location. In my case that issue is not relevant; I do not use my data in more than one location at a time.

    I want to use TrueCrypt's auto-mount feature on startup of Windows 7 to have a zero-configuration environment and start working right away. But I want to ensure that the local TrueCrypt container file is up to date before TrueCrypt mounts it automatically. Imagine you updated the contents of the container at your primary location while your secondary location was off for a long time: it can then take "a long time" until the Dropbox sync is complete (depending on your internet connection and the size of the container file). There is an option in TrueCrypt that stops it updating the timestamp of the container file, which speeds up the sync, because the Dropbox client then does a differential sync instead of a time-consuming full sync. That is an improvement to the setup, but it does not fix my issue.

    The question is how to make the auto-mount function wait until the container file is up to date (updated by Dropbox). In contrast: if the file was changed locally but the remote file (in the Dropbox cloud) is still old (not yet updated by the sync process, or sync still in progress), TrueCrypt should not wait. Suggestions?
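
    One hedged workaround, in place of TrueCrypt's built-in auto-mount: a startup script that waits until Dropbox has stopped writing to the container (its size and timestamp have been stable for a while) and only then mounts it. The path, drive letter and stability window below are assumptions:

        @echo off
        rem Wait until the container has been unchanged for 60 seconds, then mount it
        set CONTAINER=C:\Users\me\Dropbox\vault.tc
        :wait
        for %%F in ("%CONTAINER%") do set BEFORE=%%~tF-%%~zF
        timeout /t 60 /nobreak >nul
        for %%F in ("%CONTAINER%") do set AFTER=%%~tF-%%~zF
        if not "%BEFORE%"=="%AFTER%" goto wait
        "C:\Program Files\TrueCrypt\TrueCrypt.exe" /v "%CONTAINER%" /lx /a /q

    This cannot tell "already up to date" from "Dropbox has not started downloading yet", so it is a heuristic rather than a guarantee; a stricter check would query the Dropbox client's sync status instead of watching the file.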

  • Cannot open /dev/rfcomm1: Host is down

    - by srj0408
    I am working on a Raspberry Pi and on Bluetooth. I am using an old Raspberry Pi kernel, as the new one has some unresolved bugs with respect to the bluez daemon; my kernel version is 3.6.11. I am using a USB Bluetooth dongle, and my sole purpose is to auto-connect the Bluetooth device whenever it is in range. For that, I think I have to run a script in the background on the RPi that keeps checking for the USB Bluetooth dongle's peer.

    I started from the very scratch. I installed the Bluetooth stack using apt-get install bluetooth bluez utils blueman, and hciconfig shows that my Bluetooth USB dongle is working fine. But when I did hcitool scan, it gave me no devices in range, even though my serial Bluetooth device was on; I wasn't able to find any device in the vicinity. When I unplugged and plugged the USB dongle back in, I was able to scan the serial device, but when I repeated the process I was back to finding no devices. I also found another useful link, but that needs the address of the Bluetooth device to be connected. I want to automate this using hcitool scan, storing the output in a file and then comparing it with already-paired devices and their names. For that I need to figure out why hcitool scan only works some of the time. Can someone help me figure out why this is happening? Is there a problem on the hardware side (i.e. a buggy Bluetooth dongle), or do I have a problem in the bluez utils?

    Edit 1: As of now, hcitool scan is giving me my remote device's address, but I am still getting the same "Host is down" issue for '/dev/rfcomm1'. I am really not getting any idea of what is to be done.
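
    For the auto-connect part, a hedged sketch of the kind of background loop described above (the target address and rfcomm device are placeholders):

        #!/bin/sh
        # Poll for a known device and bind rfcomm when it appears
        TARGET="00:11:22:33:44:55"
        while true; do
            if hcitool scan --flush | grep -qi "$TARGET"; then
                # Only (re)connect if rfcomm1 is not already connected
                rfcomm show /dev/rfcomm1 2>/dev/null | grep -q connected || \
                    rfcomm connect /dev/rfcomm1 "$TARGET" 1 &
            fi
            sleep 30
        done

    On the "Host is down" error itself: that usually surfaces when rfcomm fails at the Bluetooth connection stage (device out of range, unpowered, or no matching channel), so guarding the connect with a successful scan, as above, is a reasonable first filter; sdptool browse "$TARGET" can confirm the serial channel number.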

  • Problem with Google Web Toolkit Maven Plugin

    - by arreche
    I got an error following the "PetClinic GWT application in less than 30 minutes" tutorial. Any idea?

        C:\Users\user\Desktop\petclinic>mvn -e gwt:run
        + Error stacktraces are turned on.
        [INFO] Scanning for projects...
        [INFO] ------------------------------------------------------------------------
        [INFO] Building petclinic
        [INFO]    task-segment: [gwt:run]
        [INFO] ------------------------------------------------------------------------
        [INFO] Preparing gwt:run
        [INFO] [aspectj:compile {execution: default}]
        [INFO] [resources:resources {execution: default-resources}]
        [WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent!
        [INFO] Copying 4 resources
        [INFO] [compiler:compile {execution: default-compile}]
        [INFO] Nothing to compile - all classes are up to date
        Downloading: http://repository.springsource.com/maven/bundles/release/org/codehaus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom
        [INFO] Unable to find resource 'org.codehaus.plexus:plexus-components:pom:1.1.6' in repository com.springsource.repository.bundles.release (http://repository.springsource.com/maven/bundles/release)
        Downloading: http://repository.springsource.com/maven/bundles/external/org/codehaus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom
        [INFO] Unable to find resource 'org.codehaus.plexus:plexus-components:pom:1.1.6' in repository com.springsource.repository.bundles.external (http://repository.springsource.com/maven/bundles/external)
        Downloading: http://repository.springsource.com/maven/bundles/milestone/org/codehaus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom
        [INFO] Unable to find resource 'org.codehaus.plexus:plexus-components:pom:1.1.6' in repository com.springsource.repository.bundles.milestone (http://repository.springsource.com/maven/bundles/milestone)
        Downloading: http://maven.springframework.org/milestone/org/codehaus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom
        [INFO] Unable to find resource 'org.codehaus.plexus:plexus-components:pom:1.1.6' in repository spring-maven-milestone (http://maven.springframework.org/milestone)
        Downloading: http://google-web-toolkit.googlecode.com/svn/2.1.0.M1/gwt/maven/org/codehaus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom
        [INFO] Unable to find resource 'org.codehaus.plexus:plexus-components:pom:1.1.6' in repository gwt-repo (http://google-web-toolkit.googlecode.com/svn/2.1.0.M1/gwt/maven)
        Downloading: http://repo1.maven.org/maven2/org/codehaus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Error building POM (may not be this project's POM).

        Project ID: org.codehaus.plexus:plexus-components:pom:1.1.6
        Reason: Cannot find parent: org.codehaus.plexus:plexus for project: org.codehaus.plexus:plexus-components:pom:1.1.6 for project org.codehaus.plexus:plexus-components:pom:1.1.6

        [INFO] ------------------------------------------------------------------------
        [INFO] Trace
        org.apache.maven.lifecycle.LifecycleExecutionException: Unable to get dependency information: Unable to read the metadata file for artifact 'org.codehaus.plexus:plexus-compiler-api:jar': Cannot find parent: org.codehaus.plexus:plexus for project: org.codehaus.plexus:plexus-components:pom:1.1.6 for project org.codehaus.plexus:plexus-components:pom:1.1.6
          org.codehaus.plexus:plexus-compiler-api:jar:1.5.3
        from the specified remote repositories:
          com.springsource.repository.bundles.release (http://repository.springsource.com/maven/bundles/release),
          com.springsource.repository.bundles.milestone (http://repository.springsource.com/maven/bundles/milestone),
          spring-maven-snapshot (http://maven.springframework.org/snapshot),
          com.springsource.repository.bundles.external (http://repository.springsource.com/maven/bundles/external),
          spring-maven-milestone (http://maven.springframework.org/milestone),
          central (http://repo1.maven.org/maven2),
          gwt-repo (http://google-web-toolkit.googlecode.com/svn/2.1.0.M1/gwt/maven),
          codehaus.org (http://snapshots.repository.codehaus.org),
          JBoss Repo (http://repository.jboss.com/maven2)
        Path to dependency:
          1) org.codehaus.mojo:gwt-maven-plugin:maven-plugin:1.3.1.google
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:711)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180)
            at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328)
            at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
            at org.apache.maven.cli.MavenCli.main(MavenCli.java:362)
            at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
            at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
            at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
            at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
        Caused by: org.apache.maven.artifact.resolver.ArtifactResolutionException: Unable to get dependency information: Unable to read the metadata file for artifact 'org.codehaus.plexus:plexus-compiler-api:jar': Cannot find parent: org.codehaus.plexus:plexus for project: org.codehaus.plexus:plexus-components:pom:1.1.6 for project org.codehaus.plexus:plexus-components:pom:1.1.6
          org.codehaus.plexus:plexus-compiler-api:jar:1.5.3
        from the specified remote repositories: (same list as above)
        Path to dependency:
          1) org.codehaus.mojo:gwt-maven-plugin:maven-plugin:1.3.1.google
            at org.apache.maven.artifact.resolver.DefaultArtifactCollector.recurse(DefaultArtifactCollector.java:430)
            at org.apache.maven.artifact.resolver.DefaultArtifactCollector.collect(DefaultArtifactCollector.java:74)
            at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolveTransitively(DefaultArtifactResolver.java:316)
            at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolveTransitively(DefaultArtifactResolver.java:304)
            at org.apache.maven.plugin.DefaultPluginManager.ensurePluginContainerIsComplete(DefaultPluginManager.java:835)
            at org.apache.maven.plugin.DefaultPluginManager.getConfiguredMojo(DefaultPluginManager.java:647)
            at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:468)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694)
            ... 17 more
        Caused by: org.apache.maven.artifact.metadata.ArtifactMetadataRetrievalException: Unable to read the metadata file for artifact 'org.codehaus.plexus:plexus-compiler-api:jar': Cannot find parent: org.codehaus.plexus:plexus for project: org.codehaus.plexus:plexus-components:pom:1.1.6 for project org.codehaus.plexus:plexus-components:pom:1.1.6
            at org.apache.maven.project.artifact.MavenMetadataSource.retrieveRelocatedProject(MavenMetadataSource.java:200)
            at org.apache.maven.project.artifact.MavenMetadataSource.retrieveRelocatedArtifact(MavenMetadataSource.java:94)
            at org.apache.maven.artifact.resolver.DefaultArtifactCollector.recurse(DefaultArtifactCollector.java:387)
            ... 24 more
        Caused by: org.apache.maven.project.ProjectBuildingException: Cannot find parent: org.codehaus.plexus:plexus for project: org.codehaus.plexus:plexus-components:pom:1.1.6 for project org.codehaus.plexus:plexus-components:pom:1.1.6
            at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1396)
            at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1407)
            at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1407)
            at org.apache.maven.project.DefaultMavenProjectBuilder.buildInternal(DefaultMavenProjectBuilder.java:823)
            at org.apache.maven.project.DefaultMavenProjectBuilder.buildFromRepository(DefaultMavenProjectBuilder.java:255)
            at org.apache.maven.project.artifact.MavenMetadataSource.retrieveRelocatedProject(MavenMetadataSource.java:163)
            ... 26 more
        Caused by: org.apache.maven.project.InvalidProjectModelException: Parse error reading POM. Reason: expected START_TAG or END_TAG not TEXT (position: TEXT seen ...<role>Developer</role>\n 6878/?\r</... @163:16) for project org.codehaus.plexus:plexus at C:\Users\user\.m2\repository\org\codehaus\plexus\plexus\1.0.8\plexus-1.0.8.pom
            at org.apache.maven.project.DefaultMavenProjectBuilder.readModel(DefaultMavenProjectBuilder.java:1610)
            at org.apache.maven.project.DefaultMavenProjectBuilder.readModel(DefaultMavenProjectBuilder.java:1571)
            at org.apache.maven.project.DefaultMavenProjectBuilder.findModelFromRepository(DefaultMavenProjectBuilder.java:562)
            at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1392)
            ... 31 more
        Caused by: org.codehaus.plexus.util.xml.pull.XmlPullParserException: expected START_TAG or END_TAG not TEXT (position: TEXT seen ...<role>Developer</role>\n 6878/?\r</... @163:16)
            at hidden.org.codehaus.plexus.util.xml.pull.MXParser.nextTag(MXParser.java:1095)
            at org.apache.maven.model.io.xpp3.MavenXpp3Reader.parseDeveloper(MavenXpp3Reader.java:1389)
            at org.apache.maven.model.io.xpp3.MavenXpp3Reader.parseModel(MavenXpp3Reader.java:1944)
            at org.apache.maven.model.io.xpp3.MavenXpp3Reader.read(MavenXpp3Reader.java:3912)
            at org.apache.maven.project.DefaultMavenProjectBuilder.readModel(DefaultMavenProjectBuilder.java:1606)
            ... 34 more
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 11 seconds
        [INFO] Finished at: Fri May 21 20:28:23 BST 2010
        [INFO] Final Memory: 45M/205M
        [INFO] ------------------------------------------------------------------------
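
    The root cause is visible at the bottom of the trace: the copy of plexus-1.0.8.pom in the local repository is corrupt ("Parse error reading POM ... expected START_TAG or END_TAG not TEXT"), most likely a truncated or HTML-polluted download. A hedged fix is to delete that artifact from the local cache and let Maven fetch it again:

        rem Remove the corrupt POM from the local repository, then retry the build
        rmdir /s /q C:\Users\user\.m2\repository\org\codehaus\plexus\plexus\1.0.8
        mvn -e gwt:run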

  • Large Object Heap Fragmentation

    - by Paul Ruane
    The C#/.NET application I am working on is suffering from a slow memory leak. I have used CDB with SOS to try to determine what is happening, but the data does not seem to make any sense, so I was hoping one of you may have experienced this before. The application is running on the 64-bit framework. It is continuously calculating and serialising data to a remote host and is hitting the Large Object Heap (LOH) a fair bit. However, most of the LOH objects I expect to be transient: once the calculation is complete and has been sent to the remote host, the memory should be freed. What I am seeing, however, is a large number of (live) object arrays interleaved with free blocks of memory, e.g., taking a random segment from the LOH:

        0:000> !DumpHeap 000000005b5b1000 000000006351da10
                 Address               MT     Size
        ...
        000000005d4f92e0 0000064280c7c970 16147872
        000000005e45f880 00000000001661d0  1901752 Free
        000000005e62fd38 00000642788d8ba8     1056 <--
        000000005e630158 00000000001661d0  5988848 Free
        000000005ebe6348 00000642788d8ba8     1056
        000000005ebe6768 00000000001661d0  6481336 Free
        000000005f214d20 00000642788d8ba8     1056
        000000005f215140 00000000001661d0  7346016 Free
        000000005f9168a0 00000642788d8ba8     1056
        000000005f916cc0 00000000001661d0  7611648 Free
        00000000600591c0 00000642788d8ba8     1056
        00000000600595e0 00000000001661d0   264808 Free
        ...

    Obviously I would expect this to be the case if my application were creating long-lived, large objects during each calculation. (It does do this, and I accept there will be a degree of LOH fragmentation, but that is not the problem here.) The problem is the very small (1056-byte) object arrays you can see in the above dump, which I cannot see being created in code and which are remaining rooted somehow. Also note that CDB is not reporting the type when the heap segment is dumped: I am not sure if this is related or not. If I dump the marked (<--) object, CDB/SOS reports it fine:

        0:015> !DumpObj 000000005e62fd38
        Name: System.Object[]
        MethodTable: 00000642788d8ba8
        EEClass: 00000642789d7660
        Size: 1056(0x420) bytes
        Array: Rank 1, Number of elements 128, Type CLASS
        Element Type: System.Object
        Fields: None

    The elements of the object array are all strings, and the strings are recognisable as from our application code. Also, I am unable to find their GC roots, as the !GCRoot command hangs and never comes back (I have even tried leaving it overnight). So, I would very much appreciate it if anyone could shed any light on why these small (<85k) object arrays are ending up on the LOH: in what situations will .NET put a small object array there? Also, does anyone happen to know of an alternative way of ascertaining the roots of these objects? Thanks in advance.

    Update 1: Another theory I came up with late yesterday is that these object arrays started out large but have been shrunk, leaving the blocks of free memory that are evident in the memory dumps. What makes me suspicious is that the object arrays always appear to be 1056 bytes long (128 elements): 128 * 8 for the references plus 32 bytes of overhead. The idea is that perhaps some unsafe code in a library or in the CLR is corrupting the number-of-elements field in the array header. Bit of a long shot, I know...

    Update 2: Thanks to Brian Rasmussen (see accepted answer), the problem has been identified as fragmentation of the LOH caused by the string intern table! I wrote a quick test application to confirm this:

        static void Main()
        {
            const int ITERATIONS = 100000;

            for (int index = 0; index < ITERATIONS; ++index)
            {
                string str = "NonInterned" + index;
                Console.Out.WriteLine(str);
            }

            Console.Out.WriteLine("Continue.");
            Console.In.ReadLine();

            for (int index = 0; index < ITERATIONS; ++index)
            {
                string str = string.Intern("Interned" + index);
                Console.Out.WriteLine(str);
            }

            Console.Out.WriteLine("Continue?");
            Console.In.ReadLine();
        }

    The application first creates and dereferences unique strings in a loop. This is just to prove that the memory does not leak in this scenario; obviously it should not, and it does not. In the second loop, unique strings are created and interned. This action roots them in the intern table. What I did not realise is how the intern table is represented. It appears it consists of a set of pages (object arrays of 128 string elements) that are created in the LOH. This is more evident in CDB/SOS:

        0:000> .loadby sos mscorwks
        0:000> !EEHeap -gc
        Number of GC Heaps: 1
        generation 0 starts at 0x00f7a9b0
        generation 1 starts at 0x00e79c3c
        generation 2 starts at 0x00b21000
        ephemeral segment allocation context: none
         segment    begin allocated     size
        00b20000 00b21000  010029bc 0x004e19bc(5118396)
        Large object heap starts at 0x01b21000
         segment    begin allocated     size
        01b20000 01b21000  01b8ade0 0x00069de0(433632)
        Total Size  0x54b79c(5552028)
        ------------------------------
        GC Heap Size  0x54b79c(5552028)

    Taking a dump of the LOH segment reveals the pattern I saw in the leaking application:

        0:000> !DumpHeap 01b21000 01b8ade0
        ...
        01b8a120 793040bc      528
        01b8a330 00175e88       16 Free
        01b8a340 793040bc      528
        01b8a550 00175e88       16 Free
        01b8a560 793040bc      528
        01b8a770 00175e88       16 Free
        01b8a780 793040bc      528
        01b8a990 00175e88       16 Free
        01b8a9a0 793040bc      528
        01b8abb0 00175e88       16 Free
        01b8abc0 793040bc      528
        01b8add0 00175e88       16 Free
        total 1568 objects
        Statistics:
              MT    Count TotalSize Class Name
        00175e88      784     12544      Free
        793040bc      784    421088 System.Object[]
        Total 1568 objects

    Note that the object array size is 528 (rather than 1056) because my workstation is 32-bit and the application server is 64-bit. The object arrays are still 128 elements long.

    So the moral of this story is to be very careful interning. If the string you are interning is not known to be a member of a finite set, then your application will leak due to fragmentation of the LOH, at least in version 2 of the CLR. In our application's case, there is general code in the deserialisation code path that interns entity identifiers during unmarshalling: I now strongly suspect this is the culprit. However, the developer's intentions were obviously good, as they wanted to make sure that if the same entity is deserialised multiple times, then only one instance of the identifier string is maintained in memory.

  • Hosted bug tracking system with mercurial repositories (Summary of options & request for opinions)

    - by Mark Booth
    The Question

    What hosted Mercurial repository/bug-tracking system or systems have you used? Would you recommend it to others? Are there serious flaws, either in the repository hosting or the bug-tracking features, that would make it difficult to recommend? Do you have any other experiences with it, or opinions of it, that you would like to share? If you have used other, non-Mercurial hosted repository/bug-tracking systems, how do they compare? (If I understand correctly, the best format for this type of community-wiki style question is one answer per option, if you have experience of several.)

    Background

    I have been looking into options for setting up a bug/issue tracking database and found some valuable advice in this thread and this. But then I got to thinking that a hosted solution might not only solve the problem of tracking bugs, but might also solve the problem we have accessing our Mercurial source code repositories while at customer sites around the world. Since we currently have no way to serve Mercurial repositories over SSL, when I am at a customer site I have to connect my laptop via VPN to my work network and access the Mercurial repositories over a Samba share (even if it is just to sync twice a day). This is excruciatingly slow on high-latency networks and can be impossible with some customers' firewalls. Even if we could run a Trac or Redmine server here (thanks TurnKey), I'm not sure it would be much quicker, as our internet connection is over-stretched as it is. What I would like is for developers to be able to push/pull to/from a remote repository, servicing engineers to be able to pull from a remote repository, and customers (both internal and external) to be able to submit bug/issue reports.

    Initial options

    The two options I found were Assembla and Jira. Looking at Assembla, I thought the 'group' price looked reasonable, but after enquiring found that each workspace could only contain a single repository. Since each of our products might have up to a dozen repositories (mostly for libraries) which need to be managed separately for each product, I could see it getting expensive really quickly. On the plus side, it appears that 'users' are just workspace members, so you can have as many client users (people who can only submit support tickets and track their own tickets) as you like without using up your user allocation. Jira only charges based on the number of users; unfortunately, client users also count towards this if you want them to be able to track their tickets. If you only want clients to be able to submit untracked issues, you can let them submit anonymously, but that doesn't feel very professional to me.

    More options

    Looking through the MercurialHosting page that @Paidhi suggested, I've added the options which appear to offer private repositories, along with another that I found with a web search. Prices are as per their websites today (29th March 2010); corrections welcome in the future. Anyway, here is my summary, according to the information given on their websites:

    Assembla, http://www.assembla.com/, looks to be a reasonable price, but suffers from only one repository per workspace, so three projects with 6 repos each would use up most of the spaces associated with a $99/month professional account (20 spaces). Bug tracking is based on Trac. Mercurial+Trac support was announced in a blog entry in 2007, but they only list SVN and Git on their Features web page. Cost: $24, $49, $99 & $249/month for 40, 40, unlimited, unlimited users and 1, 10, 20, 100 workspaces. SSL-based push/pull? Website HTTPS login.

    BitBucket, http://bitbucket.org/plans/, is primarily a Mercurial hosting site for open source projects, with SSL support, but they have an integrated bug tracker and they are cheap for private repositories. It has its own issue tracker, but also integrates with Lighthouse & FogBugz. Cost: $0, $5, $12, $50 & $100/month for 1, 5, 15, 25 & 150 private repositories. SSL-based push/pull. No HTTPS on website login, but it supports OpenID, so you can choose an OpenID provider with an HTTPS login.

    Codebase HQ, http://www.codebasehq.com/, supports Hg and is almost as cheap as BitBucket. Cost: £5, £13, £21 & £40/month for 3, 15, 30 & 60 active projects, unlimited repositories, unlimited users (except 10 users at £5/month) and 0.5, 2, 4 & 10 GB. SSL-based push/pull? Website HTTPS login?

    Firefly, http://www.activestate.com/firefly/, by ActiveState looks interesting, but the website is a little light on details, such as whether you can have only one repository per project or not. Cost: $9, $19 & $39/month for 1, 5 & 30 private projects, with a 0.5, 1.5 & 3 GB storage limit. SSL-based push/pull? Website HTTPS login.

    Jira, http://www.atlassian.com/software/jira/, isn't limited by the number of repositories you can have, but by 'user'. It could work out quite expensive if we want client users to be able to track their issues, since a full user account would need to be created for each of them. Also, while there is a Mercurial extension to support Jira, there is no 'advanced integration' for Mercurial from Atlassian FishEye. Cost: $150, $300, $400, $500, $700/month for 10, 25, 50, 100, 100+ users. SSL-based push/pull? Website HTTPS login.

    Kiln & FogBugz On Demand, http://fogcreek.com/Kiln/IntrotoOnDemand.html, integrates Kiln's Mercurial DVCS features with FogBugz, where the combined package is much cheaper than the component parts. Also, the FogBugz integration is supposedly excellent. *8') Cost: £30/developer/month ($5/d/m more than either on their own). SSL-based push/pull?

    SourceRepo, http://sourcerepo.com/, also supports Hg and is even cheaper than BitBucket & Codebase HQ. Cost: $4, $7 & $13/month for 1, unlimited & unlimited repositories/Trac/Redmine instances and 500 MB, 1 GB & 3 GB of storage. SSL-based push/pull. Website HTTPS login.

    Edit: 29th March 2010 & Bounty

    I split this question into sections, made the questions themselves more explicit, added other options from the research I have done since my first posting, and made this community wiki, since I now understand what CW is for. *8') Also, I've added a bounty to encourage people to offer their opinions. At the end of the bounty period, I will award the bounty to whoever writes the best review (good or bad), irrespective of the number of up/down votes it gets. Given that it's probably more important to avoid bad providers than to find the absolute best one, 'bad reviews' could be considered more important than good ones.

  • Combining HBase and HDFS results in Exception in makeDirOnFileSystem

    - by utrecht
    Introduction

    An attempt to combine HBase and HDFS results in the following:

        2014-06-09 00:15:14,777 WARN org.apache.hadoop.hbase.HBaseFileSystem: Create Directory, retries exhausted
        2014-06-09 00:15:14,780 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
        java.io.IOException: Exception in makeDirOnFileSystem
            at org.apache.hadoop.hbase.HBaseFileSystem.makeDirOnFileSystem(HBaseFileSystem.java:136)
            at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:428)
            at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:148)
            at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:133)
            at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:572)
            at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:432)
            at java.lang.Thread.run(Thread.java:744)
        Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/":vagrant:supergroup:drwxr-xr-x
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4891)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4873)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4847)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3192)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3156)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3137)
            at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:669)
            at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:419)
            at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44970)
            at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
            at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
            at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1752)
            at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1748)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:422)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
            at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1746)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
            at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
            at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
            at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
            at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2153)
            at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2122)
            at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:545)
            at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1915)
            at org.apache.hadoop.hbase.HBaseFileSystem.makeDirOnFileSystem(HBaseFileSystem.java:129)
            ... 6 more

    while configuration and system settings are as follows:

        [vagrant@localhost hadoop-hdfs]$ hadoop fs -ls hdfs://localhost/
        Found 1 items
        -rw-r--r--   3 vagrant supergroup 1010827264 2014-06-08 19:01 hdfs://localhost/ubuntu-14.04-desktop-amd64.iso

    /etc/hadoop/conf/core-site.xml:

        <configuration>
          <property>
            <name>fs.defaultFS</name>
            <value>hdfs://localhost:8020</value>
          </property>
        </configuration>

    /etc/hbase/conf/hbase-site.xml:

        <configuration>
          <property>
            <name>hbase.rootdir</name>
            <value>hdfs://localhost:8020/hbase</value>
          </property>
          <property>
            <name>hbase.cluster.distributed</name>
            <value>true</value>
          </property>
        </configuration>

    /etc/hadoop/conf/hdfs-site.xml:

        <configuration>
          <property>
            <name>dfs.name.dir</name>
            <value>/var/lib/hadoop-hdfs/cache</value>
          </property>
          <property>
            <name>dfs.data.dir</name>
            <value>/tmp/hellodatanode</value>
          </property>
        </configuration>

    NameNode directory permissions:

        [vagrant@localhost hadoop-hdfs]$ ls -ltr /var/lib/hadoop-hdfs/cache
        total 8
        -rwxrwxrwx. 1 hbase hdfs   15 Jun  8 23:43 in_use.lock
        drwxrwxrwx. 2 hbase hdfs 4096 Jun  8 23:43 current

    HMaster is able to start if the fs.defaultFS property has been commented out in core-site.xml. The NameNode is listening:

        [vagrant@localhost hadoop-hdfs]$ netstat -nato | grep 50070
        tcp        0      0 0.0.0.0:50070      0.0.0.0:*          LISTEN      off (0.00/0/0)
        tcp        0      0 33.33.33.33:50070  33.33.33.1:57493   ESTABLISHED off (0.00/0/0)

    and is accessible by navigating to http://33.33.33.33:50070/dfshealth.jsp.

    Question

    How to solve the makeDirOnFileSystem exception and let HBase connect to HDFS?
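
    The permission line in the trace (user=hbase, access=WRITE, inode="/":vagrant:supergroup:drwxr-xr-x) says the hbase user may not create /hbase under an HDFS root owned by vagrant. A hedged fix is to pre-create hbase.rootdir as the HDFS superuser and hand it to hbase:

        # As the HDFS superuser (the inode owner above suggests 'vagrant';
        # packaged installs often use 'hdfs' instead)
        sudo -u vagrant hadoop fs -mkdir /hbase
        sudo -u vagrant hadoop fs -chown hbase /hbase

    After that, HBase only needs write access inside /hbase, not on / itself.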

  • MySQL InnoDB Corruption after power outage, possible to recover?

    - by Tim Hackett
    Hey guys, I recently started trying to get Redmine up and running after a power outage that seems to have corrupted our InnoDB database in MySQL. Redmine has an extensive set of documentation in it that I would like to recover even if Redmine isn't able to run. The service fails on startup. I have tried inserting innodb_force_recovery = 4 per the documentation at the URL in the error log (I also tried 1 through 6, as I backed up all directories after the corruption). I have verified through mysqld-nt --print-defaults that it is starting with the recovery option in the params. The machine is running Windows Server 2003 SP2 on a Xeon E5335 with 2 GB RAM; MySQL is not mirrored to another machine, nor is the machine a mirror. I do not have any backups, because the previous person did not set them up. Here is the error log:

        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        100308 14:50:01  InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        100308 14:50:02  InnoDB: Error: page 7 log sequence number 0 935521175
        InnoDB: is in the future! Current system log sequence number 0 933419020.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
        InnoDB: for more information.
        100308 14:50:02  InnoDB: Error: page 2 log sequence number 0 935517607
        InnoDB: is in the future! Current system log sequence number 0 933419020.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
        InnoDB: for more information.
        100308 14:50:02  InnoDB: Error: page 11 log sequence number 0 935517607
        InnoDB: is in the future! Current system log sequence number 0 933419020.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
        InnoDB: for more information.
        100308 14:50:02  InnoDB: Error: page 5 log sequence number 0 972973045
        InnoDB: is in the future! Current system log sequence number 0 933419020.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
        InnoDB: for more information.
        100308 14:50:02  InnoDB: Error: page 6 log sequence number 0 972984051
        InnoDB: is in the future! Current system log sequence number 0 933419020.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
        InnoDB: for more information.
        100308 14:50:02  InnoDB: Error: page 1577 log sequence number 0 972737368
        InnoDB: is in the future! Current system log sequence number 0 933419020.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
        InnoDB: for more information.
        InnoDB: Error: trying to access page number 4294965119 in space 0,
        InnoDB: space name .\ibdata1,
        InnoDB: which is outside the tablespace bounds.
        InnoDB: Byte offset 0, len 16384, i/o type 10.
        InnoDB: If you get this error at mysqld startup, please check that
        InnoDB: your my.cnf matches the ibdata files that you have in the
        InnoDB: MySQL server.
        100308 14:50:02  InnoDB: Assertion failure in thread 960 in file .\fil\fil0fil.c line 3959
        InnoDB: We intentionally generate a memory trap.
        InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
        InnoDB: If you get repeated assertion failures or crashes, even
        InnoDB: immediately after the mysqld startup, there may be
        InnoDB: corruption in the InnoDB tablespace. Please refer to
        InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
        InnoDB: about forcing recovery.
        100308 14:50:02 [ERROR] mysqld-nt: Got signal 11. Aborting!
        100308 14:50:02 [ERROR] Aborting
        100308 14:50:02 [Note] mysqld-nt: Shutdown complete
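
    If the server can be coaxed up at any innodb_force_recovery level, the usual next step is a full logical dump and a rebuilt tablespace rather than an in-place repair. A hedged sketch (file locations differ on Windows; the my.ini section is the key part):

        # my.ini, [mysqld] section; 6 is the most permissive and most lossy level
        innodb_force_recovery = 6

        # With the server up, dump everything immediately
        mysqldump -u root -p --all-databases > all_databases.sql
        # Then: stop MySQL, move ibdata1 and ib_logfile0/ib_logfile1 out of the
        # data directory, remove innodb_force_recovery, restart so InnoDB creates
        # a fresh tablespace, and reload:
        mysql -u root -p < all_databases.sql

    Given the "log sequence number ... is in the future" errors, another trick sometimes suggested is deleting only the ib_logfile* files so InnoDB recreates its redo logs, but that risks losing whatever transactions they contained; the dump-and-rebuild route is safer for getting the Redmine documentation back.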

  • Can't start httpd 2.4.9 with self-signed SSL certificate

    - by Smollet
    I cannot start the httpd 2.4.9 (tried 2.4.x too) on CentOS 6.5 with the simplest SSL config possible. The openssl version installed on the machine is OpenSSL 1.0.1e-fips 11 Feb 2013 (I've upgraded it using 'yum update' to the latest patched version as well) I have compiled and installed the httpd 2.4.9 using the following commands: ./configure --enable-ssl --with-ssl=/usr/local/ssl/ --enable-proxy=shared --enable-proxy_wstunnel=shared --with-apr=apr-1.5.1/ --with-apr-util=apr-util-1.5.3/ make make install Now I'm generating the default self-signed certificate as described in the CentOS HowTo: openssl genrsa -out ca.key 2048 openssl req -new -key ca.key -out ca.csr openssl x509 -req -days 365 -in ca.csr -signkey ca.key -out ca.crt cp ca.crt /etc/pki/tls/certs cp ca.key /etc/pki/tls/private/ca.key cp ca.csr /etc/pki/tls/private/ca.csr Here is my httpd-ssl.conf file: Listen 443 SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5 SSLPassPhraseDialog builtin SSLSessionCache "shmcb:/usr/local/apache2/logs/ssl_scache(512000)" SSLSessionCacheTimeout 300 <VirtualHost *:443> SSLEngine on SSLCertificateFile /etc/pki/tls/certs/ca.crt SSLCertificateKeyFile /etc/pki/tls/private/ca.key <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory "/usr/local/apache2/cgi-bin"> SSLOptions +StdEnvVars </Directory> BrowserMatch "MSIE [2-5]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 CustomLog "/usr/local/apache2/logs/ssl_request_log" \ "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b" </VirtualHost> when I start httpd using bin/apachectl -k start I get following errors in the error_log: Wed Jun 04 00:29:27.995654 2014] [ssl:info] [pid 24021:tid 139640404293376] AH01887: Init: Initializing (virtual) servers for SSL [Wed Jun 04 00:29:27.995726 2014] [ssl:info] [pid 24021:tid 139640404293376] AH01914: Configuring server 192.168.9.128:443 for SSL protocol [Wed Jun 04 00:29:27.995863 2014] [ssl:debug] [pid 24021:tid 139640404293376] ssl_engine_init.c(312): AH01893: Configuring TLS extension handling [Wed Jun 04 00:29:27.996111 2014] [ssl:debug] [pid 24021:tid 139640404293376] ssl_util_ssl.c(343): AH02412: [192.168.9.128:443] Cert matches for name '192.168.9.128' [subject: CN=192.168.9.128,OU=XXX,O=XXXX,L=XXXX,ST=NRW,C=DE / issuer: CN=192.168.9.128,OU=XXX,O=XXXX,L=XXXX,ST=NRW,C=DE / serial: AF04AF31799B7695 / notbefore: Jun 3 22:26:45 2014 GMT / notafter: Jun 3 22:26:45 2015 GMT] [Wed Jun 04 00:29:27.996122 2014] [ssl:info] [pid 24021:tid 139640404293376] AH02568: Certificate and private key 192.168.9.128:443:0 configured from /etc/pki/tls/certs/ca.crt and /etc/pki/tls/private/ca.key [Wed Jun 04 00:29:27.996209 2014] [ssl:info] [pid 24021:tid 139640404293376] AH01914: Configuring server 192.168.9.128:443 for SSL protocol [Wed Jun 04 00:29:27.996280 2014] [ssl:debug] [pid 24021:tid 139640404293376] ssl_engine_init.c(312): AH01893: Configuring TLS extension handling [Wed Jun 04 00:29:27.996295 2014] [ssl:emerg] [pid 24021:tid 139640404293376] AH02572: Failed to configure at least one certificate and key for 192.168.9.128:443 [Wed Jun 04 00:29:27.996303 2014] [ssl:emerg] [pid 24021:tid 139640404293376] SSL Library Error: error:0906D06C:PEM routines:PEM_read_bio:no start line (Expecting: DH PARAMETERS) -- Bad file contents or format - or even just a forgotten SSLCertificateKeyFile? 
        [Wed Jun 04 00:29:27.996308 2014] [ssl:emerg] [pid 24021:tid 139640404293376] SSL Library Error: error:0906D06C:PEM routines:PEM_read_bio:no start line (Expecting: EC PARAMETERS) -- Bad file contents or format - or even just a forgotten SSLCertificateKeyFile?
        [Wed Jun 04 00:29:27.996318 2014] [ssl:emerg] [pid 24021:tid 139640404293376] SSL Library Error: error:140A80B1:SSL routines:SSL_CTX_check_private_key:no certificate assigned
        [Wed Jun 04 00:29:27.996321 2014] [ssl:emerg] [pid 24021:tid 139640404293376] AH02312: Fatal error initialising mod_ssl, exiting. AH00016: Configuration Failed

    I then try to generate the missing DH PARAMETERS and EC PARAMETERS:

        openssl dhparam -outform PEM -out dhparam.pem 2048
        openssl ecparam -out ec_param.pem -name prime256v1
        cat dhparam.pem ec_param.pem >> /etc/pki/tls/certs/ca.crt

    That mitigates the first error, but the next one comes up:

        [Wed Jun 04 00:34:05.021438 2014] [ssl:info] [pid 24089:tid 140719371077376] AH01887: Init: Initializing (virtual) servers for SSL
        [Wed Jun 04 00:34:05.021487 2014] [ssl:info] [pid 24089:tid 140719371077376] AH01914: Configuring server 192.168.9.128:443 for SSL protocol
        [Wed Jun 04 00:34:05.021874 2014] [ssl:debug] [pid 24089:tid 140719371077376] ssl_engine_init.c(312): AH01893: Configuring TLS extension handling
        [Wed Jun 04 00:34:05.022050 2014] [ssl:debug] [pid 24089:tid 140719371077376] ssl_util_ssl.c(343): AH02412: [192.168.9.128:443] Cert matches for name '192.168.9.128' [subject: CN=192.168.9.128,OU=XXX,O=XXXX,L=XXXX,ST=NRW,C=DE / issuer: CN=192.168.9.128,OU=XXX,O=XXXX,L=XXXX,ST=NRW,C=DE / serial: AF04AF31799B7695 / notbefore: Jun 3 22:26:45 2014 GMT / notafter: Jun 3 22:26:45 2015 GMT]
        [Wed Jun 04 00:34:05.022066 2014] [ssl:info] [pid 24089:tid 140719371077376] AH02568: Certificate and private key 192.168.9.128:443:0 configured from /etc/pki/tls/certs/ca.crt and /etc/pki/tls/private/ca.key
        [Wed Jun 04 00:34:05.022285 2014] [ssl:debug] [pid 24089:tid 140719371077376] ssl_engine_init.c(1016): AH02540: Custom DH parameters (2048 bits) for 192.168.9.128:443 loaded from /etc/pki/tls/certs/ca.crt
        [Wed Jun 04 00:34:05.022389 2014] [ssl:debug] [pid 24089:tid 140719371077376] ssl_engine_init.c(1030): AH02541: ECDH curve prime256v1 for 192.168.9.128:443 specified in /etc/pki/tls/certs/ca.crt
        [Wed Jun 04 00:34:05.022397 2014] [ssl:info] [pid 24089:tid 140719371077376] AH01914: Configuring server 192.168.9.128:443 for SSL protocol
        [Wed Jun 04 00:34:05.022464 2014] [ssl:debug] [pid 24089:tid 140719371077376] ssl_engine_init.c(312): AH01893: Configuring TLS extension handling
        [Wed Jun 04 00:34:05.022478 2014] [ssl:emerg] [pid 24089:tid 140719371077376] AH02572: Failed to configure at least one certificate and key for 192.168.9.128:443
        [Wed Jun 04 00:34:05.022488 2014] [ssl:emerg] [pid 24089:tid 140719371077376] SSL Library Error: error:140A80B1:SSL routines:SSL_CTX_check_private_key:no certificate assigned
        [Wed Jun 04 00:34:05.022491 2014] [ssl:emerg] [pid 24089:tid 140719371077376] AH02312: Fatal error initialising mod_ssl, exiting. AH00016: Configuration Failed

    I have tried to generate the simple certificate/key pair exactly as described in the httpd docs. Unfortunately, I still get exactly the same errors as above. I have seen a bug report with a similar issue: https://issues.apache.org/bugzilla/show_bug.cgi?id=56410 -- but the openssl version I have is reported as working there. I have also tried to apply the patch from the report, as well as building the latest 2.4.x branch, with no success; I get the same errors as above.
    I have also tried to create a short chain of certificates and set the root CA certificate with the SSLCertificateChainFile directive. That didn't help either; I get exactly the same errors as above. I'm not interested in setting up hardened security, etc. The only thing I need is to start httpd with the simplest SSL config possible, so I can continue testing the proxy config for mod_proxy_wstunnel. Has anybody encountered and solved this issue? Is my sequence for creating a self-signed certificate incorrect? I'd appreciate any help very much!
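
    One thing worth ruling out before blaming httpd itself is the PEM files: the "no start line" / "no certificate assigned" pair generally means mod_ssl could not parse a usable certificate out of SSLCertificateFile. A hedged diagnostic sketch (paths taken from the config above; nothing here changes state except reading the files):

        # Is the certificate the first PEM block in the file, and does it parse?
        head -1 /etc/pki/tls/certs/ca.crt    # should be exactly -----BEGIN CERTIFICATE-----
        openssl x509 -noout -subject -dates -in /etc/pki/tls/certs/ca.crt

        # Does the private key parse?
        openssl rsa -noout -check -in /etc/pki/tls/private/ca.key

        # Do the certificate and key actually belong together? (digests must match)
        openssl x509 -noout -modulus -in /etc/pki/tls/certs/ca.crt | openssl md5
        openssl rsa -noout -modulus -in /etc/pki/tls/private/ca.key | openssl md5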


  • MySQL is hogging my server resources

    - by Reacen
    Does anyone have any idea what can cause this weird behaviour, and how I go about fixing it? It is all coming from MySQL only (both RAM and CPU usage), for about 10 minutes after I reboot my Java game server (which has a pool of 256 connections). There are not that many queries, and I think it may be more of a MySQL misconfiguration problem.

    My server: 3.20 GHz * 6 core / 24 GB RAM / 64-bit Windows Server 2003.

    My game server: a Java server with a 256-connection MySQL pool (MyISAM engine), about 500,000 accounts, 9 million rows of game items in the database, and about 3,000 connected players.

    About 15 minutes after the game server reboot, the server resumes its stability: CPU usage drops down to 1% ~ 5% and memory to 6 GB.

    Here is a copy of my MySQL configuration; any advice about it will be appreciated, too. I really set it up almost at random.

        # Example MySQL config file for very large systems.
        #
        # This is for a large system with memory of 1G-2G where the system runs mainly
        # MySQL.
        #
        # You can copy this file to
        # /etc/my.cnf to set global options,
        # mysql-data-dir/my.cnf to set server-specific options (in this
        # installation this directory is C:\mysql\data) or
        # ~/.my.cnf to set user-specific options.
        #
        # In this file, you can use all long options that a program supports.
        # If you want to know which options a program supports, run the program
        # with the "--help" option.

        # The following options will be passed to all MySQL clients
        [client]
        #password = your_password
        port = 3306
        socket = /tmp/mysql.sock

        # Here follows entries for some specific programs

        # The MySQL server
        [mysqld]
        #log=c:\mysql.log
        port = 3306
        socket = /tmp/mysql.sock
        skip-locking
        key_buffer_size = 2572M
        max_allowed_packet = 64M
        table_open_cache = 512
        sort_buffer_size = 128M
        read_buffer_size = 128M
        read_rnd_buffer_size = 128M
        myisam_sort_buffer_size = 500M
        thread_cache_size = 32
        query_cache_size = 1948M
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 12
        max_connections = 5000

        # Don't listen on a TCP/IP port at all. This can be a security enhancement,
        # if all processes that need to connect to mysqld run on the same host.
        # All interaction with mysqld must be made via Unix sockets or named pipes.
        # Note that using this option without enabling named pipes on Windows
        # (via the "enable-named-pipe" option) will render mysqld useless!
        #
        #skip-networking

        # Replication Master Server (default)
        # binary logging is required for replication
        log-bin=mysql-bin

        # required unique id between 1 and 2^32 - 1
        # defaults to 1 if master-host is not set
        # but will not function as a master if omitted
        server-id = 1

        # Replication Slave (comment out master section to use this)
        #
        # To configure this host as a replication slave, you can choose between
        # two methods :
        #
        # 1) Use the CHANGE MASTER TO command (fully described in our manual) -
        #    the syntax is:
        #
        #    CHANGE MASTER TO MASTER_HOST=<host>, MASTER_PORT=<port>,
        #    MASTER_USER=<user>, MASTER_PASSWORD=<password> ;
        #
        #    where you replace <host>, <user>, <password> by quoted strings and
        #    <port> by the master's port number (3306 by default).
        #
        #    Example:
        #
        #    CHANGE MASTER TO MASTER_HOST='125.564.12.1', MASTER_PORT=3306,
        #    MASTER_USER='joe', MASTER_PASSWORD='secret';
        #
        # OR
        #
        # 2) Set the variables below. However, in case you choose this method, then
        #    start replication for the first time (even unsuccessfully, for example
        #    if you mistyped the password in master-password and the slave fails to
        #    connect), the slave will create a master.info file, and any later
        #    change in this file to the variables' values below will be ignored and
        #    overridden by the content of the master.info file, unless you shutdown
        #    the slave server, delete master.info and restart the slave server.
        #    For that reason, you may want to leave the lines below untouched
        #    (commented) and instead use CHANGE MASTER TO (see above)
        #
        # required unique id between 2 and 2^32 - 1
        # (and different from the master)
        # defaults to 2 if master-host is set
        # but will not function as a slave if omitted
        #server-id = 2
        #
        # The replication master for this slave - required
        #master-host = <hostname>
        #
        # The username the slave will use for authentication when connecting
        # to the master - required
        #master-user = <username>
        #
        # The password the slave will authenticate with when connecting to
        # the master - required
        #master-password = <password>
        #
        # The port the master is listening on.
        # optional - defaults to 3306
        #master-port = <port>
        #
        # binary logging - not required for slaves, but recommended
        #log-bin=mysql-bin
        #
        # binary logging format - mixed recommended
        #binlog_format=mixed

        # Point the following paths to different dedicated disks
        #tmpdir = /tmp/
        #log-update = /path-to-dedicated-directory/hostname

        # Uncomment the following if you are using InnoDB tables
        #innodb_data_home_dir = C:\mysql\data/
        #innodb_data_file_path = ibdata1:2000M;ibdata2:10M:autoextend
        #innodb_log_group_home_dir = C:\mysql\data/
        # You can set .._buffer_pool_size up to 50 - 80 %
        # of RAM but beware of setting memory usage too high
        #innodb_buffer_pool_size = 384M
        #innodb_additional_mem_pool_size = 20M
        # Set .._log_file_size to 25 % of buffer pool size
        #innodb_log_file_size = 100M
        #innodb_log_buffer_size = 8M
        #innodb_flush_log_at_trx_commit = 1
        #innodb_lock_wait_timeout = 50

        [mysqldump]
        quick
        max_allowed_packet = 64M

        [mysql]
        no-auto-rehash
        # Remove the next comment character if you are not familiar with SQL
        #safe-updates

        [myisamchk]
        key_buffer_size = 256M
        sort_buffer_size = 256M
        read_buffer = 8M
        write_buffer = 8M

        [mysqlhotcopy]
        interactive-timeout
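
    For context on where to look first: key_buffer_size (2572M) and query_cache_size (1948M) are global allocations, while sort_buffer_size, read_buffer_size and read_rnd_buffer_size (128M each) are allocated per connection, so a 5000-connection ceiling makes the worst case enormous, and a query cache this large is a classic stall source on busy MyISAM servers. A hedged sketch of how one might check this from a shell (the values in the comment are illustrative starting points, not figures tuned for this workload):

        # Query-cache health: many Qcache_lowmem_prunes relative to Qcache_hits
        # suggests the 1948M cache is hurting more than helping
        mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Qcache%';"

        # MyISAM key-buffer pressure and temp tables spilling to disk
        mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Key_%';"
        mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Created_tmp%';"

        # If the numbers confirm it, candidate my.cnf changes might look like:
        #   query_cache_size     = 64M
        #   sort_buffer_size     = 2M
        #   read_buffer_size     = 2M
        #   read_rnd_buffer_size = 2M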


  • Poor upload/download speed on 2 x ADSL lines into a Cisco 2621XM

    - by 2020mobile
    Hi, sorry, I've never been on this site before, so I apologise if this is not the right section or even the right forum. I have users complaining of very slow internet connectivity on site, and I have checked with our ISP, who say the line is testing at 8Mb. We have 2 x BT lines with our ISP's broadband on them. Both lines go into a Cisco 2600 series router, which then has a PIX firewall off it. Connectivity works, it has just become really slow and we are unable to download anything. The config is below:

        version 12.3
        no service pad
        service tcp-keepalives-in
        service tcp-keepalives-out
        service timestamps debug datetime msec
        service timestamps log datetime msec
        service password-encryption
        !
        hostname ROUTER-ADSL-INTERNET
        !
        logging buffered 16384 informational
        enable secret xxx
        enable password xxx
        !
        username xxx
        username xxx
        clock summer-time UK recurring last Sun Mar 1:00 last Sun Oct 1:00
        aaa new-model
        !
        aaa authentication login default local
        aaa authorization exec default local
        aaa session-id common
        ip subnet-zero
        no ip source-route
        !
        ip audit notify log
        ip audit po max-events 100
        no ip bootp server
        ip name-server 213.208.106.212
        no mpls ldp logging neighbor-changes
        no ftp-server write-enable
        !
        no voice hpi capture buffer
        no voice hpi capture destination
        !
        interface ATM0/0
         description 01270 111111
         no ip address
         no atm ilmi-keepalive
         pvc 0/38
          encapsulation aal5mux ppp dialer
          dialer pool-member 1
         !
         dsl operating-mode auto
        !
        interface FastEthernet0/0
         ip address 82.133.32.9 255.255.255.248
         shutdown
         speed 100
         full-duplex
         no cdp enable
        !
        interface ATM0/1
         description 01270 222222
         no ip address
         no atm ilmi-keepalive
         pvc 0/38
          encapsulation aal5mux ppp dialer
          dialer pool-member 1
         !
         dsl operating-mode auto
        !
        interface FastEthernet0/1
         ip address 217.146.115.49 255.255.255.240
         duplex auto
         speed auto
         no cdp enable
        !
        interface Dialer0
         ip address 217.146.115.250 255.255.255.248
         encapsulation ppp
         dialer pool 1
         dialer-group 1
         ppp authentication chap callin
         ppp chap hostname [email protected]
         ppp chap password 7 xxxxx
         ppp multilink
        !
        ip classless
        ip route 0.0.0.0 0.0.0.0 Dialer0
        !
        no ip http server
        no ip http secure-server
        !
        no logging trap
        access-list 10 permit 217.146.115.50
        access-list 10 permit 82.133.32.10
        access-list 10 deny any
        access-list 22 permit 217.146.115.50
        access-list 22 permit 217.206.239.86
        access-list 22 permit 82.133.32.10
        access-list 22 deny any
        dialer-list 1 protocol ip permit
        no cdp run
        !
        snmp-server community xxxxxx RO 10
        snmp-server enable traps tty
        radius-server authorization permit missing Service-Type
        !
        line con 0
         exec-timeout 5 0
         password 7 xxxxxx
        line aux 0
         no exec
        line vty 0 4
         access-class 22 in
         exec-timeout 5 0
         password 7 xxxxxx
         transport input telnet ssh
         transport output none
        line vty 5 15
         password 7 xxxxxx
         transport input telnet ssh
        !
        ntp clock-period 17180095
        ntp server 130.88.200.98
        !
        end

    Now, my knowledge is very limited, but the ISP have said that while the lines are bonded, each needs a separate login, as they've recently changed their L2TP router and it enforces the use of separate logins -- when the lines were configured we were given two logins. So, my question is: what changes do I need to make to the config in order to get this working? It was OK before their change, and I do have the other login:

        01270 111111 - [email protected]
        01270 222222 - [email protected]

    Apologies for the long post, and thanks for taking the time to read it. If I can provide any more info, please let me know. Thanks,
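
    For illustration only -- this is a sketch under assumptions (a second dialer pool, an ISP-assigned address via IPCP, and no more multilink bonding), not a tested config, and the login placeholders are hypothetical -- the usual shape for "one login per line" is to move ATM0/1 into its own pool and give it its own Dialer interface, with two default routes for per-destination load sharing:

        interface ATM0/1
         pvc 0/38
          dialer pool-member 2
        !
        interface Dialer1
         ip address negotiated
         encapsulation ppp
         dialer pool 2
         dialer-group 1
         ppp authentication chap callin
         ppp chap hostname <second-login>
         ppp chap password <second-password>
        !
        ip route 0.0.0.0 0.0.0.0 Dialer0
        ip route 0.0.0.0 0.0.0.0 Dialer1

    Whether ppp multilink should stay on Dialer0 depends on whether the ISP still bonds the pair; that is worth confirming with them before changing anything.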


  • All Xen domU LVM volumes corrupt after reboot

    - by zcs
    I'm running a Debian Squeeze dom0, and after rebooting it, all 7 of my domUs have data corruption. Each is set up as an ext3 partition directly on a separate LVM2 volume. None of the LVM volumes will mount; all have bad superblocks. I've tried e2fsck with each of the backup superblocks, to no avail. What else can I try? Each domU has two LVM volumes connected to it, one for the disk and one for swap. The disk is mounted at root, formatted as a normal ext3 partition, as a xen-blk device. The volumes are never mounted outside of the guest OS. I'm running Ubuntu 11.04 in the guests, using the instructions here. I'm not sure that they didn't shut down properly; all I know is that they were corrupt after I issued a clean 'reboot' on the dom0. Here's a sample Xen config file; the rest are the same except for name, vcpus, memory, vif and disk.

        name = 'load1'
        vcpus = 2
        memory = 512
        vif = ['bridge=prbr0', 'bridge=eth0']
        disk = ['phy:/dev/VolGroup00/load1-disk,xvda,w','phy:/dev/VolGroup00/load1-swap,xvdb,w']

        #============================================================================
        # Debian Installer specific variables

        def check_bool(name, value):
            value = str(value).lower()
            if value in ('t', 'tr', 'tru', 'true'):
                return True
            return False

        global var_check_with_default
        def var_check_with_default(default, var, val):
            if val:
                return val
            return default

        xm_vars.var('install', use='Install Debian, default: false', check=check_bool)
        xm_vars.var("install-method", use='Installation method to use "cdrom" or "network" (default: network)', check=lambda var, val: var_check_with_default('network', var, val))

        # install-method == "network"
        xm_vars.var("install-mirror", use='Debian mirror to install from (default: http://archive.ubuntu.com/ubuntu)', check=lambda var, val: var_check_with_default('http://archive.ubuntu.com/ubuntu', var, val))
        xm_vars.var("install-suite", use='Debian suite to install (default: natty)', check=lambda var, val: var_check_with_default('natty', var, val))

        # install-method == "cdrom"
        xm_vars.var("install-media", use='Installation media to use (default: None)', check=lambda var, val: var_check_with_default(None, var, val))
        xm_vars.var("install-cdrom-device", use='Installation media to use (default: xvdd)', check=lambda var, val: var_check_with_default('xvdd', var, val))

        # Common options
        xm_vars.var("install-arch", use='Debian mirror to install from (default: amd64)', check=lambda var, val: var_check_with_default('amd64', var, val))
        xm_vars.var("install-extra", use='Extra command line options (default: None)', check=lambda var, val: var_check_with_default(None, var, val))
        xm_vars.var("install-installer", use='Debian installer to use (default: network uses install-mirror; cdrom uses /install.ARCH)', check=lambda var, val: var_check_with_default(None, var, val))
        xm_vars.var("install-kernel", use='Debian installer kernel to use (default: uses install-installer)', check=lambda var, val: var_check_with_default(None, var, val))
        xm_vars.var("install-ramdisk", use='Debian installer ramdisk to use (default: uses install-installer)', check=lambda var, val: var_check_with_default(None, var, val))
        xm_vars.check()

        if not xm_vars.env.get('install'):
            bootloader = "/usr/sbin/pygrub"
        elif xm_vars.env['install-method'] == "network":
            import os.path
            print "Install Mirror: %s" % xm_vars.env['install-mirror']
            print "Install Suite: %s" % xm_vars.env['install-suite']
            if xm_vars.env['install-installer']:
                installer = xm_vars.env['install-installer']
            else:
                installer = xm_vars.env['install-mirror']+"/dists/"+xm_vars.env['install-suite'] + \
                            "/main/installer-"+xm_vars.env['install-arch']+"/current/images"
            print "Installer: %s" % installer
            print
            print "WARNING: Installer kernel and ramdisk are not authenticated."
            print
            if xm_vars.env.get('install-kernel'):
                kernelurl = xm_vars.env['install-kernel']
            else:
                kernelurl = installer + "/netboot/xen/vmlinuz"
            if xm_vars.env.get('install-ramdisk'):
                ramdiskurl = xm_vars.env['install-ramdisk']
            else:
                ramdiskurl = installer + "/netboot/xen/initrd.gz"

            import urllib
            class MyUrlOpener(urllib.FancyURLopener):
                def http_error_default(self, req, fp, code, msg, hdrs):
                    raise IOError("%s %s" % (code, msg))

            urlopener = MyUrlOpener()
            try:
                print "Fetching %s" % kernelurl
                kernel, _ = urlopener.retrieve(kernelurl)
                print "Fetching %s" % ramdiskurl
                ramdisk, _ = urlopener.retrieve(ramdiskurl)
            except IOError, _:
                raise
        elif xm_vars.env['install-method'] == "cdrom":
            arch_path = { 'i386': "/install.386", 'amd64': "/install.amd" }
            if xm_vars.env['install-media']:
                print "Install Media: %s" % xm_vars.env['install-media']
            else:
                raise OptionError("No installation media given.")
            if xm_vars.env['install-installer']:
                installer = xm_vars.env['install-installer']
            else:
                installer = arch_path[xm_vars.env['install-arch']]
            print "Installer: %s" % installer
            if xm_vars.env.get('install-kernel'):
                kernelpath = xm_vars.env['install-kernel']
            else:
                kernelpath = installer + "/xen/vmlinuz"
            if xm_vars.env.get('install-ramdisk'):
                ramdiskpath = xm_vars.env['install-ramdisk']
            else:
                ramdiskpath = installer + "/xen/initrd.gz"
            disk.insert(0, 'file:%s,%s:cdrom,r' % (xm_vars.env['install-media'], xm_vars.env['install-cdrom-device']))
            bootloader = "/usr/sbin/pygrub"
            bootargs = "--kernel=%s --ramdisk=%s" % (kernelpath, ramdiskpath)
            print "From CD"
        else:
            print "WARNING: Unknown install-method: %s." % xm_vars.env['install-method']

        if xm_vars.env.get('install'):
            # Figure out command line
            if xm_vars.env['install-extra']:
                extras = [xm_vars.env['install-extra']]
            else:
                extras = []
            # Reboot will just restart the installer since this file is not
            # reparsed, so halt and restart that way.
            extras.append("debian-installer/exit/always_halt=true")
            extras.append("--")
            extras.append("quiet")
            console = "hvc0"
            try:
                if len(vfb) >= 1:
                    console = "tty0"
            except NameError, e:
                pass
            extras.append("console=" + console)
            extra = str.join(" ", extras)
            print "command line is \"%s\"" % extra

        root

    There are two LVM logical volumes connected to each VM. Here's the fdisk -l output for the disk volume:

        Disk /dev/VolGroup00/VMNAME-disk: 8589 MB, 8589934592 bytes
        255 heads, 63 sectors/track, 1044 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00029c01

                               Device Boot   Start     End      Blocks   Id  System
        /dev/VolGroup00/VMNAME-disk1             1     1045     8386560  83  Linux

    And the swap volume:

        Disk /dev/VolGroup00/VMNAME-swap: 536 MB, 536870912 bytes
        37 heads, 35 sectors/track, 809 cylinders
        Units = cylinders of 1295 * 512 = 663040 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0004faae

                               Device Boot   Start     End      Blocks   Id  System
        /dev/VolGroup00/VMNAME-swap1             2      809      522240  82  Linux swap / Solaris
        Partition 1 has different physical/logical beginnings (non-Linux?):
             phys=(0, 32, 33) logical=(1, 21, 19)
        Partition 1 has different physical/logical endings:
             phys=(65, 36, 35) logical=(808, 4, 28)

