Search Results

Search found 9750 results on 390 pages for 'sys debug'.


  • repair partition table

    - by m.sr
    Hello. I've just overwritten the partition table of my system's hard disk: I ran cfdisk on the wrong device (/dev/sda instead of /dev/sdd), deleted all partitions, made one new primary spanning the whole device, set its type to 07 (NTFS) and hit write. So here I am with my system still running. Until I reboot, I hope/guess nothing will change - meaning all my data is accessible (I'm currently making a dd backup of the whole device and plan to make a .tar.gz backup of the most important data later). I also backed up /proc/partitions, /proc/diskstats (even though I guess that is more about throughput and the like) and /sys/block/sda/sda?/{start,size}. Some further things I know: there were 4 primary partitions. 1st partition: ~100MB, ext3, /boot; 2nd partition: ~100MB, "Win7 boot partition", ntfs(?); 3rd partition: ~20-30GB, Win7, ntfs; 4th partition: ~20-30GB, LUKS-encrypted device. The LUKS-decrypted device is an LVM PV, and the /, /home and swap partitions are all LVs on the (VG on the) above-mentioned PV. So my questions: What is the simplest way to just write the kernel's in-memory partition table back to the disk? What is the simplest way to take the data mentioned above (and perhaps other data I don't know of) and regenerate the partition table? Are there any problems to take care of regarding LUKS and/or LVM? Is there any data I should back up before rebooting (meaning stuff from the kernel [/sys/..., /proc/...] and so on, which could help me regenerate the partition table)? Thanks a lot! P.S.: Debian sid, kernel 2.6.34-1-amd64 from debian-experimental, 80GB Intel SSD
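
    Since the kernel still holds the old layout in memory, a sketch of the "write the kernel's view back" approach asked about above might look like the following. The sector values are placeholders - substitute the ones saved from /sys/block/sda/sda?/{start,size} - and sfdisk's input syntax differs between versions (older releases use Id= rather than type=), so treat this as an illustration rather than a recipe:

      # capture the kernel's view of each partition (units are 512-byte sectors)
      for p in /sys/block/sda/sda?; do
        echo "$p: start=$(cat $p/start) size=$(cat $p/size)"
      done

      # feed those values back without re-reading the (wrong) on-disk table;
      # every start/size/type below is a placeholder
      printf '%s\n' \
        '/dev/sda1 : start=2048, size=204800, type=83' \
        '/dev/sda2 : start=206848, size=204800, type=7' \
        '/dev/sda3 : start=411648, size=52428800, type=7' \
        '/dev/sda4 : start=52840448, size=52428800, type=83' \
      | sfdisk --no-reread /dev/sda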

    Read the article

  • Online resizing of kvm guest root filesystem?

    - by Bittrance
    I have a Linux guest that uses an LVM volume directly as root file system (that is, there is no partition table). libvirt config looks thus: <os> <type arch='x86_64' machine='rhel6.4.0'>hvm</type> <kernel>/boot/vmlinuz-X.Y.Z.el6.x86_64</kernel> <initrd>/boot/initramfs-X.Y.Z.el6.x86_64.img</initrd> <cmdline>console=ttyS0 root=/dev/vda</cmdline> <boot dev='hd'/> </os> <disk type='block' device='disk'> <driver name='qemu' type='raw' cache='none' io='native'/> <source dev='/dev/vg/guest'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </disk> From inside the guest: $ mount /dev/vda on / type ext4 (rw) proc on /proc type proc (rw) sysfs on /sys type sysfs (rw) devpts on /dev/pts type devpts (rw,gid=5,mode=620) tmpfs on /dev/shm type tmpfs (rw) none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) Is it possible to resize the guest's root partition without rebooting the guest? Just doing lvextend on the host and resize2fs from the guest does not seem to be enough.
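
    One approach that is often suggested for this setup - a sketch only, not verified against this exact qemu/libvirt version; blockresize needs reasonably recent versions of both, and the guest name and sizes below are placeholders - is to have libvirt tell the virtio disk about the new size after growing the LV, so the guest notices it without a reboot:

      # on the host: grow the backing LV, then notify the running guest
      lvextend -L +10G /dev/vg/guest
      virsh blockresize guest vda 30G

      # inside the guest: the larger vda should now be visible, so grow the fs online
      resize2fs /dev/vda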

    Read the article

  • Immediately tell which output was sent to stderr

    - by Clinton Blackmore
    When automating a task, it is sensible to test it first manually. It would be helpful, though, if any data going to stderr was immediately recognizable as such and distinguishable from the data going to stdout, and to have all the output together so it is obvious what the sequence of events is. One last touch that would be nice is if, at program exit, it printed its return code. All of these things would aid in automating. Yes, I can echo the return code when a program finishes, and yes, I can redirect stdout and stderr; what I'd really like is some shell, script, or easy-to-use redirector that shows stdout in black, shows stderr interleaved with it in red, and prints the exit code at the end. Is there such a beast? [If it matters, I'm using Bash 3.2 on Mac OS X.] Update: Sorry it has been months since I've looked at this. I've come up with a simple test script: #!/usr/bin/env python import sys print "this is stdout" print >> sys.stderr, "this is stderr" print "this is stdout again" In my testing (and probably due to the way things are buffered), rse and hilite display everything from stdout and then everything from stderr. The fifo method gets the order right but appears to colourize everything following the stderr line. ind complained about my stdout and stderr lines, and then put the output from stderr last. Most of these solutions are workable, as it is not atypical for only the last output to go to stderr, but still, it'd be nice to have something that worked slightly better.
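
    For what it's worth, a minimal Bash sketch of the kind of wrapper being asked for (untested on Bash 3.2/OS X; it relies on process substitution, and the buffering caveats about interleaving order noted above still apply):

      # run "$@" with stderr wrapped in red and the exit status echoed at the end
      run() {
        "$@" 2> >(while IFS= read -r line; do printf '\033[31m%s\033[0m\n' "$line" >&2; done)
        local rc=$?
        echo "exit code: $rc"
        return $rc
      }
      # usage: run ./mytest.py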

    Read the article

  • Diagnosing a BSOD involving USB

    - by David Ebbo
    [Running Win7 Ultimate 64 bit] My new HP Pavilion Elite HPE-450t has been plagued by BSOD crashes since I got it about 5 weeks ago. The crashes are somewhat rare, sometimes not occurring for 3 or 4 days. I have spent a lot of time trying to isolate the device that could be at fault, but I have seen crashes with only the keyboard and mouse plugged in (as USB devices), and I tried two sets of keyboard/mouse, so I'm running out of ideas. :( The WhoCrashed tool gave this info about my latest BSOD: crash dump file: C:\Windows\Minidump\121310-11887-01.dmp This was probably caused by the following module: usbport.sys (USBPORT+0x2DE4E) Bugcheck code: 0xFE (0x5, 0xFFFFFA8008F571A0, 0x80863B34, 0xFFFFFA80092F2510) Error: BUGCODE_USB_DRIVER file path: C:\Windows\system32\drivers\usbport.sys product: Microsoft® Windows® Operating System company: Microsoft Corporation description: USB 1.1 & 2.0 Port Driver Bug check description: This indicates that an error has occurred in a Universal Serial Bus (USB) driver. The crash took place in a standard Microsoft module. Your system configuration may be incorrect. Possibly this problem is caused by another driver on your system which cannot be identified at this time. I looked at http://msdn.microsoft.com/en-us/library/ff560407(VS.85).aspx, and for Parameter1 = 0x5, it says "A hardware failure has occurred due to a bad physical address found in a hardware data structure. This is not due to a driver bug". Should I conclude that it's a hardware issue in the machine itself, rather than a bad USB driver or USB device? Here is the MiniDump, in case someone can get more info out of it: http://ewt52q.blu.livefilestore.com/y1peS4Ce8nSK1SXghzMDoxDWXlaEu-EKCJsv25y8y5DXXIUzZ9U0_tYgFJXd939fykwa0zRmx98IW0PYG18GioqKAuARYjtspSA/121310-11887-01.dmp?download&psid=2

    Read the article

  • Change WiFi authentication from WPA to WEP on Windows 8

    - by Kites
    I use netsh wlan set hostednetwork mode=allow ssid=Kites key=phambaoViet netsh wlan start hostednetwork to share the LAN connection on my laptop (Windows 8) over WiFi with my other device. When I run netsh wlan show hostednetwork, the supported authentication shown is WPA. My device supports WEP authentication only. How can I change the authentication to WEP? Information: Interface name: Wi-Fi Driver : Qualcomm Atheros AR9002WB-1NG Wireless Network Adapter Vendor : Qualcomm Atheros Communications Inc. Provider : Microsoft Date : 03/07/2012 Version : 3.0.0.130 INF file : C:\Windows\INF\netathrx.inf Files : 2 total C:\Windows\system32\DRIVERS\athrx.sys C:\Windows\system32\drivers\vwifibus.sys Type : Native Wi-Fi Driver Radio types supported : 802.11b 802.11g 802.11n FIPS 140-2 mode supported : Yes 802.11w Management Frame Protection supported : Yes Hosted network supported : Yes Authentication and cipher supported in infrastructure mode: Open None Open WEP-40bit Open WEP-104bit Open WEP WPA-Enterprise TKIP WPA-Personal TKIP WPA2-Enterprise TKIP WPA2-Personal TKIP Vendor defined TKIP WPA2-Enterprise Vendor defined Vendor defined Vendor defined WPA-Enterprise CCMP WPA-Personal CCMP WPA2-Enterprise CCMP Vendor defined CCMP WPA2-Enterprise Vendor defined Vendor defined Vendor defined WPA2-Personal CCMP Vendor defined Vendor defined Authentication and cipher supported in ad-hoc mode: Open None Open WEP-40bit Open WEP-104bit Open WEP WPA2-Personal CCMP Vendor defined Vendor defined

    Read the article

  • FTP issues with Windows 2003 box and Filezilla

    - by vanhornRF
    We've set up a Windows 2003 server with IIS 6, and the FTP server is FileZilla. I'm not a sys admin by any means and neither is my other developer. I'm trying to connect to FTP on a Mac with the Cyberduck FTP client. I'm running OS X 10.6.3. The problem is that I can connect to the FTP address, but when I do it will hang for a minute or two trying to list the directories. This address is pointed to the webroot folder so it's only trying to list the pertinent folders for the site we're working on. Eventually after a minute or two it will go through and list everything, and then if you try to open a file or another folder, it will hang again and then eventually list the folders. My question: Is there some stupid default setting we're missing? Is this a common occurrence? Like I said, I'm not a sys admin at all, and I'm sure I've left out details anyone reading this would want, so please fire away if you can help and need more info; I'll do my best to provide it. Thanks!

    Read the article

  • gcc built by crosstool-ng gives undefined reference

    - by netvope
    I've successfully built a toolchain using crosstool-ng with the default configuration named x86_64-unknown-linux-gnu. The documentation says: Using the toolchain is as simple as adding the toolchain's bin directory in your PATH, such as: export PATH="${PATH}:/your/toolchain/path/bin" and then using the target tuple to tell the build systems to use your toolchain: ./configure --target=your-target-tuple or make CC=your-target-tuple-gcc or make CROSS_COMPILE=your-target-tuple- and so on... I followed the instructions and attempted to build GNU tar (tar-1.25.tar.bz2) with the toolchain. The commands ./configure --target=x86_64-unknown-linux-gnu and make CROSS_COMPILE=x86_64-unknown-linux-gnu- do not work (the build will succeed, but it uses the host system's gcc). The command make CC=x86_64-unknown-linux-gnu-gcc works, but in the very last step when it tries to link, it returns errors like this: compare.o: In function `openat': /dev/shm/x-tools/x86_64-unknown-linux-gnu/x86_64-unknown-linux-gnu/sys-root/usr/include/bits/fcntl2.h:134: undefined reference to `__openat_2' What could be the problem? Was the toolchain not properly set up? Perhaps x86_64-unknown-linux-gnu-gcc is using the header files from the host system but cannot find the libraries in the target's sys-root?
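
    As a point of comparison (a sketch only - whether it resolves the __openat_2 link error is an assumption), autoconf-based packages like tar normally take the toolchain tuple via --host rather than --target, which would explain why the ./configure --target=... run silently fell back to the host gcc:

      export PATH="${PATH}:/your/toolchain/path/bin"
      ./configure --host=x86_64-unknown-linux-gnu   # --target only matters for packages that themselves emit code (gcc, binutils)
      make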

    Read the article

  • .tex file remains in use by a process when a batch file is triggered by .Rnw Sweave processing

    - by drknexus
    This is a pretty specialized question. I'm using the Eclipse IDE in a Windows XP environment with the StatET plug-in so I can write R code as an R/Sweave document. This produces a .tex file that is then post-processed by pdflatex.exe. When I create the file as normal, everything works great (except maybe that my file named russfnc2.Rnw seems to result in russfnc.pdf, even though pdflatex.exe on the console window correctly says that the output is being written to russfnc2.pdf). The big problem is when I trigger a batch file from within my Rnw code. My goal here is to spawn a side process that waits for the PDF to be made and uploads it to the server. So the Rnw contains: if(file.exists("rsp.finalize.bat")) {system("rsp.finalize.bat",wait=FALSE,invisible=FALSE)} The batch file calls Rterm.exe to run a script: setwd("C:/theprojectdirectory") while(!file.exists("russfnc.pdf")) { Sys.sleep(1) } Sys.sleep(60) At the end of that script, I use a shell call to launch psftp.exe and upload the files. All of this works fine when I use my Eclipse profile to trigger Sweave... that is, unless I have that batch file call at the end of the .Rnw. When it is there, I get the error message pdflatex.exe: Permission denied: c:\thepath\thetexfile.tex. After that, the .tex file (as far as XP is concerned) is in use by another process and I have to reboot in order to delete it (and, of course, the pdf is not made). If I manually trigger the batch file after pdflatex.exe has done its thing, everything works fine. How can I make this work correctly using the tools I'm familiar with, viz. R and DOS-style batch files? I'm not sure if this is a SuperUser question or a StackOverflow question, so I'm starting here.

    Read the article

  • RHEL Java Application returns "No space left on device" but only 3% used

    - by FiveO
    My Java application returns the following exception when saving a new file in /opt/wso2 on CentOS 6.4: Caused by java.io.FileNotFoundException: ... (No space left on device) Caused by: java.io.FileNotFoundException: /opt/wso2/FrameworkFiles/trk_2014062500042488825_TRCK_PatfallHospis_pFromHospis_66601fb3-a03c-4149-93c3-6892e0a10fea.txt (No space left on device) at java.io.FileOutputStream.open(Native Method) at java.io.FileOutputStream.<init>(FileOutputStream.java:212) at java.io.FileOutputStream.<init>(FileOutputStream.java:99) at com.avintis.esb.framework.adapter.wso2.FrameworkAdapterWSO2.sendMessages(FrameworkAdapterWSO2.java:634) ... 23 more But when I run df -a I can see that the partition still has plenty of space available: [root@stzsi466 wso2]# df -a Filesystem 1K-blocks Used Available Use% Mounted on /dev/mapper/vg_stzsi466-lv_root 12054824 2116092 9326380 19% / proc 0 0 0 - /proc sysfs 0 0 0 - /sys devpts 0 0 0 - /dev/pts tmpfs 4030764 0 4030764 0% /dev/shm /dev/sda1 495844 53858 416386 12% /boot /dev/sdb1 51605436 1424288 47559744 3% /opt/wso2 none 0 0 0 - /proc/sys/fs/binfmt_misc [root@stzsi466 ~]# df -i Filesystem Inodes IUsed IFree IUse% Mounted on /dev/mapper/vg_stzsi466-lv_root 765536 45181 720355 6% / tmpfs 1007691 1 1007690 1% /dev/shm /dev/sda1 128016 44 127972 1% /boot /dev/sdb1 3276800 6137 3270663 1% /opt/wso2 What is the problem here? Is it caused by Java on CentOS 6.4? I have another server running Red Hat Enterprise Linux (RHEL) 6.4 and all works fine - same Java, etc. Does anyone know of this problem?

    Read the article

  • Celery daemon as an Ubuntu service does not consume tasks, while running it from a terminal does

    - by Guy
    On Ubuntu 11.10, I need to issue Python tasks from Django using Celery. I'm currently testing on the same machine, but eventually the celery worker should run on a remote machine. Django uses the following settings: BROKER_HOST = "127.0.0.1" BROKER_PORT = 5672 BROKER_VHOST = "/my_vhost" BROKER_USER = "celery" BROKER_PASSWORD = "celery" I can also see my task queued at http://localhost:55672/#/queues. The celery daemon uses the following configuration (celeryconfig.py): BROKER_HOST = "127.0.0.1" BROKER_PORT = 5672 BROKER_USER = "celery" BROKER_PASSWORD = "celery" BROKER_VHOST = "/my_vhost" CELERY_RESULT_BACKEND = "amqp" import os import sys sys.path.append(os.getcwd()) CELERY_IMPORTS = ("tasks", ) Running celeryd -l info works well, and now I want to run it as a service. I've followed the instructions from http://ask.github.com/celery/cookbook/daemonizing.html and now I'm trying to run it using: sudo /etc/init.d/celeryd start But the message is not being consumed, and there is no error in the celery log either. /etc/default/celeryd CELERYD_NODES="w1" CELERYD_CHDIR="/path/to/django/project" CELERYD_OPTS="--time-limit=300 --concurrency=1" CELERY_CONFIG_MODULE="celeryconfig" # %n will be replaced with the nodename. CELERYD_LOG_FILE="/var/log/celery/%n.log" CELERYD_PID_FILE="/var/run/celery/%n.pid" # Workers should run as an unprivileged user. CELERYD_USER="celery" CELERYD_GROUP="celery" I've also created the user celery in Ubuntu; I'm not sure if it's necessary. Any help will be appreciated. Thanks, Guy
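
    One debugging step that might narrow this down (a sketch; the path and the CELERY_CONFIG_MODULE value come from the /etc/default/celeryd above, everything else is an assumption): run the worker by hand as the same unprivileged user the init script uses, so permission or configuration problems show up on the console instead of disappearing:

      sudo -u celery env CELERY_CONFIG_MODULE=celeryconfig sh -c \
        'cd /path/to/django/project && celeryd -l DEBUG'
      # if this consumes the queued task, the difference lies in the daemon's
      # environment (user, chdir, config module); if it does not, look at the broker settings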

    Read the article

  • Configuring wsgi for a simple Python based site

    - by jbbarnes
    I have an Ubuntu 10.04 server that already has apache and wsgi working. I also have a python script that works just fine using the make_server command: if __name__ == '__main__': from wsgiref.simple_server import make_server srv = make_server('', 8080, display_status) srv.serve_forever() Now I would like to have the page always active without having to run the script manually. I looked at what Moin is doing. I found these lines in apache2.conf: WSGIScriptAlias /wiki /usr/local/share/moin/moin.wsgi WSGIDaemonProcess moin user=www-data group=www-data processes=5 threads=10 maximum-requests=1000 umask=0007 WSGIProcessGroup moin And moin.wsgi is as listed: import sys, os sys.path.insert(0, '/usr/local/share/moin') from MoinMoin.web.serving import make_application application = make_application(shared=True) QUESTION: Can I create a similar section in apache2.conf pointing to another wsgi file? Like this: WSGIScriptAlias /status /mypath/status.wsgi WSGIDaemonProcess status user=www-data group=www-data processes=5 threads=10 maximum-requests=1000 umask=0007 WSGIProcessGroup status And if so, what is required to convert my simple_server script into a daemonized process? Most of the information I find about wsgi is related to using it with frameworks like Django. I haven't found a simple howto detailing how to make this work. Thanks.

    Read the article

  • Deploying a Django application in a virtual Ubuntu Server

    - by mfsaint
    I have a VirtualBox machine running Ubuntu Server 10.04 LTS. My intention is for this machine to work like a VPS; this way I can learn and prepare for when I get a VPS service. Apache+mod_wsgi for deploying the Django app seems the right choice to me. I have the domain (marianofalcon.com.ar) but nothing else, no DNS. The problem is that I'm pretty lost with all the deployment stuff. I know how to configure mod_wsgi (with the django.wsgi file) and apache (creating a VirtualHost). Something is missing and I don't know what it is. I think that I lack networking skills and that's the big problem. Trying to host the app on a VirtualBox VM adds some difficulty because I don't know which IP to use. This is what I've got: file placed at: /etc/apache2/sites-available: NameVirtualHost *:80 <VirtualHost *:80> ServerAdmin [email protected] ServerName www.my-domain.com ServerAlias my-domain.com Alias /media /path/to/my/project/media DocumentRoot /path/to/my/project WSGIScriptAlias / /path/to/your/project/apache/django.wsgi ErrorLog /var/log/apache2/error.log LogLevel warn CustomLog /var/log/apache2/access.log combined </VirtualHost> django.wsgi file: import os, sys wsgi_dir = os.path.abspath(os.path.dirname(__file__)) project_dir = os.path.dirname(wsgi_dir) sys.path.append(project_dir) project_settings = os.path.join(project_dir,'settings') os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings' import django.core.handlers.wsgi application = django.core.handlers.wsgi.WSGIHandler()
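
    Since there is no DNS yet, one way to exercise the VirtualHost from another machine is to fake the name resolution locally. A sketch - it assumes the VM's network adapter is bridged, and 192.168.1.50 stands in for whatever address the guest actually has:

      # inside the guest: find the address apache is reachable on
      ip addr show eth0

      # on the machine you browse from: point the domain at that address and test
      echo "192.168.1.50 marianofalcon.com.ar www.marianofalcon.com.ar" | sudo tee -a /etc/hosts
      curl -I http://www.marianofalcon.com.ar/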

    Read the article

  • IPTABLES & IP-routed network solution for HOST net and VM's subnet

    - by Daniel
    I've got a Proxmox VE 2.1 KVM node on Debian and a bunch of guest VMs. This is what my networking looks like: # network interface settings auto lo iface lo inet loopback # device: eth0 auto eth0 iface eth0 inet static address 175.219.59.209 gateway 175.219.59.193 netmask 255.255.255.224 post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp And I've got two working subnet solutions. The first one: auto vmbr0 iface vmbr0 inet static address 10.10.0.1 netmask 255.255.0.0 bridge_ports none bridge_stp off bridge_fd 0 post-up ip route add 10.10.0.1/24 dev vmbr0 This way I can reach the internet, resolve outside hosts, and update and download everything I need, but I can't reach one guest VM from any other VM inside my network. The second solution allows me to communicate between VMs: auto vmbr1 iface vmbr1 inet static address 10.10.0.1 netmask 255.255.255.0 bridge_ports none bridge_stp off bridge_fd 0 post-up echo 1 > /proc/sys/net/ipv4/ip_forward post-up iptables -t nat -A POSTROUTING -s '10.10.0.0/24' -o vmbr1 -j MASQUERADE post-down iptables -t nat -D POSTROUTING -s '10.10.0.0/24' -o vmbr1 -j MASQUERADE I can even NAT internal addresses: -t nat -I PREROUTING -p tcp --dport 789 -j DNAT --to-destination 10.10.0.220:345 My inexperienced mind is ready to give each VM two network adapters: one for the first solution and another for the second (with slightly different addresses), but I'm pretty sure that's a dumb way to resolve the problem and that everything can be solved via iptables/ip route rules that I can't work out how to create. I've tried a dozen "wizard manuals" and howtos to mix both solutions, but without success. Looking for advice (and good reading links for networking beginners).
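
    A sketch of how the two setups are often merged into one (an assumption, not a tested Proxmox recipe): the key change is that the MASQUERADE rule matches the physical uplink eth0 rather than the bridge itself, so guests on a single bridge get both guest-to-guest traffic and NATed internet access:

      # all guests attached to one bridge, e.g. vmbr0 = 10.10.0.1/24
      echo 1 > /proc/sys/net/ipv4/ip_forward
      iptables -t nat -A POSTROUTING -s 10.10.0.0/24 -o eth0 -j MASQUERADE

      # inbound NAT, carried over from the existing rule
      iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 789 -j DNAT --to-destination 10.10.0.220:345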

    Read the article

  • Ubuntu "No space left on device" for /home, df shows 100% full, du shows much, much less

    - by Jon Cram
    On an Ubuntu 12.04 server, normal users can no longer create or add to files in /home, encountering a "No space left on device" error. The /home directory has a capacity of 1.7 terabytes and as far as I can tell is nowhere near full in terms of actual data stored or inodes used. df -h shows: Filesystem Size Used Avail Use% Mounted on /dev/md2 1.0T 18G 955G 2% / udev 7.7G 4.0K 7.7G 1% /dev tmpfs 3.1G 320K 3.1G 1% /run none 5.0M 0 5.0M 0% /run/lock none 7.7G 0 7.7G 0% /run/shm cgroup 7.7G 0 7.7G 0% /sys/fs/cgroup /dev/md3 1.7T 1.7T 0 100% /home /dev/md1 496M 45M 426M 10% /boot /home indeed looks rather full. du -hs /home suggests otherwise: 1.4G /home There appears to be no inode issue - df -i: Filesystem Inodes IUsed IFree IUse% Mounted on /dev/md2 67108864 75334 67033530 1% / udev 2013497 527 2012970 1% /dev tmpfs 2015816 440 2015376 1% /run none 2015816 2 2015814 1% /run/lock none 2015816 1 2015815 1% /run/shm cgroup 2015816 9 2015807 1% /sys/fs/cgroup /dev/md3 113909760 105981 113803779 1% /home /dev/md1 131072 239 130833 1% /boot I recently deleted many gigabytes of application cache and log data from /home; however, this was tens of gigabytes at most and nowhere near the capacity of /home. Update 1: du -hs --apparent-size /home 1.2G /home du -hs /home 1.4G /home What might be going on here?
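
    One frequent cause of df and du disagreeing like this is a process still holding deleted files open, so their blocks are never released - plausible here given the recent cache/log cleanup, though that is only a guess. A quick check (sketch):

      # list open files on /home whose link count is zero (deleted but still held open)
      lsof +L1 /home
      # or, more loosely:
      lsof /home | grep -i deleted
      # restarting (or reloading) whichever process appears should release the space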

    Read the article

  • On Linux, what does it mean when a directory has size 0 instead of 4096?

    - by kdt
    Here's a strange thing I haven't seen before -- a directory whose size is reported by ls as 0 instead of 4096, and I can't create any files within it. # ls -ld lib home drwxr-xr-x. 2 root root 0 Feb 7 03:10 home <-- it has zero size dr-xr-xr-x. 11 root root 4096 Feb 4 09:28 lib # touch home/foo touch: cannot touch `home/foo': No such file or directory <-- and I can't create files in it # rm home rm: cannot remove `home': Is a directory <-- look, it really is a dir So what does it mean for a directory to have size 0 instead of 4096? Filesystem is ext4 on fedora core 14. The output of mount is: /dev/mapper/vg_dev-lv_root on / type ext4 (rw) proc on /proc type proc (rw) sysfs on /sys type sysfs (rw) devpts on /dev/pts type devpts (rw,gid=5,mode=620) tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0") /dev/vda1 on /boot type ext4 (rw) none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw) Output of du -s /home: 0 /home Output of stat /home: File: `/home' Size: 0 Blocks: 0 IO Block: 1024 directory Device: 15h/21d Inode: 34913 Links: 2 Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2011-02-07 03:45:46.188995765 -0800 Modify: 2011-02-07 03:11:59.980995019 -0800 Change: 2011-02-06 07:58:45.874995002 -0800
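
    A couple of checks that might help pin this down (a guess at the direction, not a diagnosis - a directory inode reporting 0 bytes with an IO Block of 1024 on an ext4 root could point at on-disk damage or at a leftover mountpoint):

      stat -f /home                      # which filesystem actually backs this path
      grep ' /home ' /proc/mounts        # is anything currently mounted over it?
      # offline check of the root filesystem, from a rescue/live environment:
      #   e2fsck -f /dev/mapper/vg_dev-lv_root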

    Read the article

  • Windows 7 Paging file apparently not being used

    - by Daniel F.
    I'm running Windows 7 Home Premium 32bit on a mobo with 24GB RAM. Of those 24GB, 20GB are assigned as a RAMDISK via ASRock XFastRAM. This RAMDISK has the drive letter X assigned to it. On X:\ I'm storing the temporary files folder, as well as pagefile.sys. Pagefile.sys is 6GB in size. X:\ usually has around 14GB of free space, so the temporary files are negligible; it's mostly the browsers storing their caches there. Now my issue is that Firefox is crashing a lot on me; no error message pops up, but I know this is because it's out of memory. I could kind of live with that, but now that I've switched from Eclipse to Android Studio, I know that I'm in trouble, because Java isn't capable of allocating memory, and Android Studio, together with the Java instances it launches, is quite a memory hog. So I tried to figure out what's wrong, and apparently Windows isn't swapping memory out to the paging file. While my applications are crashing (Firefox) / not starting (Java VMs), the paging file is constantly at only around 15% usage (checked with the performance monitor). 15% equals approximately 1GB. I know that the correct solution would be to switch to 64 bit Windows, but I had to use the 32 bit version because of driver issues I had about two years ago, and I guess I'll have them again if I reformat and install the 64 bit version. Also, the machine is running quite stable, and the only issue is the memory, so I'd like to use it as it is (with the apps installed and configured). Is there a way to make Windows use the paging file more efficiently? None of my processes require more than 1GB; I'd just like it to swap out some seldom-used stuff, like GoogleCrashHandler.exe and the like, in order to have "more physical memory available". Is that possible?

    Read the article

  • MBR seems to be gone

    - by bobobobo
    So, horror story for everyone. I bought two spanking new HDD's. MM!! Gbitage. I removed all my old HDD's, physically labelled them, and was preparing to install all new HDD's (fresh sys install included!). To check which HDD was which, I popped each OLD HDD (data filled!) into a Thermaltake Blacx toaster... surprisingly, BOTH couldn't be read. I didn't have static on my hands! I'm certain of it. I touched metal, touched wood, before beginning this all. Thinking that was strange, I hauled up the new sys, installed Win XP (of course!) on the new HDD, and now the two OLD HDD's (data filled!) that were put into the toaster cannot be read. And they had tons of data on them. I read about MBR's being nuked and it sounds like that is what it is. But I'm at a loss as to what to do. There are so many MBR recovery programs out there, I kind of feel overwhelmed. I don't want to lose my data by just picking one, yet it seems so close within reach that I'm not panicking anymore. Anybody have a play-by-play that I could follow? I just don't want to spend $900 on data recovery centers if I can do this myself.
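
    For what it's worth, the play-by-play most often suggested in this situation starts with imaging the drives and then letting a recovery tool scan for the old partition boundaries. A sketch - device names are placeholders, and it assumes the drives are attached to a Linux machine or live CD:

      # image the drive first, so every later step is reversible
      sudo dd if=/dev/sdX of=/path/to/old-disk.img bs=4M conv=noerror,sync

      # then scan for lost partitions / MBR, non-destructively until you confirm a write
      sudo testdisk /dev/sdX      # interactive: Analyse -> Quick Search / Deeper Search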

    Read the article

  • Ubuntu 10.04 on virtualbox gives error: Target filesystem doesn't have /sbin/init \ No init found. Try passing init= bootarg

    - by Philip
    I'm a Linux newbie and the only reason I have it installed is so I can stop having Windows incompatibility issues with Ruby on Rails. Having said that, it sure has been nice, and much faster, and I don't think I'll be doing any Winrails stuff anytime soon. So I created a virtual machine using VirtualBox and have had Ubuntu on it for the last 3 weeks. Recently Ubuntu asked if it could update a few things, and I clicked 'OK'. Now it won't boot and I get this error: *mount: mounting /dev on /root/dev failed: No such file or directory mount: mounting /sys on /root/sys failed: No such file or directory ... Target filesystem doesn't have /sbin/init. No init found. Try passing init= bootarg BusyBox v1.13.3... (initramfs) _ * So I cruised the forums and there are a variety of solutions, but they all have to do with booting from the live CD (which I assume is the ISO image I used to install Ubuntu in the first place). But when I boot from that CD, it just hangs on the Ubuntu screen, and the little dots keep cycling white to red; it hung there for an hour, so I think it was stuck. Not sure what I can do; can I do anything from the BusyBox shell (or whatever that is) to fix things? The thing is, it took about 10 hours to get everything the way I needed with all the gems and whatnot. And I didn't really write down what I tweaked, and I'm middle-aged, so all that information has leaked out by now and I don't want to do it again. I'd really like to repair my existing install. One question you might have is, is there something wrong with the ISO? I don't think so, because I made a new virtual machine and used that same ISO file to install a fresh Ubuntu. Any help much appreciated. Phil
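
    If the live CD can eventually be coaxed into booting (the 'Try Ubuntu' option, or a fresh VM with more RAM, sometimes helps), the usual repair sequence is roughly the following sketch - the /dev/sda1 root partition is an assumption, so check with fdisk -l first:

      sudo fdisk -l                      # find the root partition of the broken install
      sudo mount /dev/sda1 /mnt
      sudo mount --bind /dev  /mnt/dev
      sudo mount --bind /proc /mnt/proc
      sudo mount --bind /sys  /mnt/sys
      sudo chroot /mnt
      update-initramfs -u -k all         # rebuild the initramfs the interrupted update likely broke
      update-grub
      exit                               # then unmount everything and reboot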

    Read the article

  • How can I start hostednetwork on Windows 7?

    - by Pirozek
    When I type in admin console command to start hostednetwork netsh wlan start hostednetwork it gives me this: The hosted network couldn't be started. The group or resource is not in the correct state to perform the requested operation. There is a hotfix from Microsoft but it didn't help me. Any advice? C:\Users\Pirozek>netsh wlan show driver Interface name: Wireless Network Connection 3 Driver : D-Link AirPlus DWL-G520 Wireless PCI Adapter(rev .B) Vendor : Atheros Communications Inc. Provider : Atheros Communications Inc. Date : 8.7.2009 Version : 8.0.0.171 INF file : C:\Windows\INF\oem108.inf Files : 2 total C:\Windows\system32\DRIVERS\athrx.sys C:\Windows\system32\drivers\vwifibus.sys Type : Native Wi-Fi Driver Radio types supported : 802.11b 802.11g FIPS 140-2 mode supported : Yes Hosted network supported : Yes Authentication and cipher supported in infrastructure mode: Open None Open WEP-40bit Shared WEP-40bit Open WEP-104bit Shared WEP-104bit Open WEP Shared WEP WPA-Enterprise TKIP WPA-Personal TKIP WPA2-Enterprise TKIP WPA2-Personal TKIP Vendor defined TKIP WPA2-Enterprise Vendor defined Vendor defined Vendor defined WPA-Enterprise CCMP WPA-Personal CCMP WPA2-Enterprise CCMP Vendor defined CCMP WPA2-Enterprise Vendor defined Vendor defined Vendor defined WPA2-Personal CCMP Authentication and cipher supported in ad-hoc mode: Open None Open WEP-40bit Open WEP-104bit Open WEP WPA2-Personal CCMP

    Read the article

  • DBCC CHECKDB fails and quits the job with an ambiguous error message

    - by ddono25
    I received a notice that one of our servers' DBCC CHECKDB for all databases has been failing the past four times it has been run. We don't have any data prior to that, but it doesn't look like it has been succeeding for a while. There are no errors in the log file, only: DBCC results for 'sys.sysxmlfacet'. [SQLSTATE 01000] Msg 0, Sev 0, State 1: Unspecified error occurred on SQL Server. Connection may have been terminated by the server. [SQLSTATE HY000] There are 112 rows in 1 pages for object "sys.sysxmlfacet". [SQLSTATE 01000] I ran a DBCC CHECKDB using sp_MSForEachDB to get more accurate results and had the same error on the same DB, but at a separate point: DBCC results for 'NameValuePair_Greek_CI_AS'. [SQLSTATE 01000] Msg 0, Sev 0, State 1: Unspecified error occurred on SQL Server. Connection may have been terminated by the server. [SQLSTATE HY000] There are 0 rows in 0 pages for object "NameValuePair_Greek_CI_AS". [SQLSTATE 01000] Also, the error log states that the DBCC completed without errors for this database. I can't figure out how to track down this ambiguous issue that only happens on this database out of the dozens on this server. Any help is appreciated!

    Read the article

  • HPET missing from available clocksources on CentOS

    - by squareone
    I am having trouble using HPET on my physical machine. It is not available, even though I have enabled it in my bios, forced it in grub, and triple checked my kernel to include HPET in its compilation. Motherboard: Supermicro X9DRW Processor: 2x Intel(R) Xeon(R) CPU E5-2640 SAS Controller: LSI Logic / Symbios Logic SAS2004 PCI-Express Fusion-MPT SAS-2 [Spitfire] (rev 03) Distro: CentOS 6.3 Kernel: 3.4.21-rt32 #2 SMP PREEMPT RT x86_64 GNU/Linux Grub: hpet=force clocksource=hpet .config file: CONFIG_HPET_TIMER=y CONFIG_HPET_EMULATE_RTC=y CONFIG_HPET=y dmesg | grep hpet: Command line: ro root=/dev/mapper/vg_xxxx-lv_root rd_NO_LUKS rd_LVM_LV=vg_xxxx/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=vg_xxxx/lv_swap rd_NO_DM LANG=en_US.UTF-8 rhgb quiet panic=5 hpet=force clocksource=hpet Kernel command line: ro root=/dev/mapper/vg_xxxx-lv_root rd_NO_LUKS rd_LVM_LV=vg_xxxx/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=vg_xxxx/lv_swap rd_NO_DM LANG=en_US.UTF-8 rhgb quiet panic=5 hpet=force clocksource=hpet cat /sys/devices/system/clocksource/clocksource0/current_clocksource: tsc cat /sys/devices/system/clocksource/clocksource0/available_clocksource: tsc jiffies What is even more confusing, is that I have about a dozen other machines that utilize the same kernel .config, and can use HPET fine. I fear it is a hardware issue, but would appreciate any advice or help with getting HPET available. Thanks in advance!
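
    A few checks sometimes suggested when HPET never shows up as a clocksource (hedged - none of them is guaranteed to apply to this board/kernel combination; they just narrow down whether ACPI is exposing the timer at all):

      dmesg | grep -i hpet                      # does the kernel see an HPET ACPI table at all?
      grep -i hpet /proc/timer_list             # any hpet-backed clock event devices?
      cat /sys/devices/system/clocksource/clocksource0/available_clocksource
      # if ACPI never reports an HPET table, the firmware/BIOS (or a BIOS update)
      # is the more likely culprit than the kernel .config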

    Read the article

  • The Web Hosting Conundrum for "not quite" developers

    - by saltcod
    Hey all, apologies if this post feels like it's been covered elsewhere, but I don't think it has. I've been down a winding web hosting road. To date, I've tried: Joyent, Media Temple, Bluehost, Hostgator, and finally Linode. The reasons for switching are likely obvious to everyone: speed. With the exception of the lightning-fast Linode, all of the shared hosts are absolutely sloooow. What to do when you're not really a "developer"? While I've grown addicted to the speed of Linode, I really don't feel like it's where I should be. I have this nagging feeling in the back of my mind that one of these days (likely soon), I'm going to run into something that I won't be able to figure out and I'll have days' worth of downtime. Just the other day, for example, I realized that one of my domains wasn't sending emails. After 4(!) hours looking into the problem, I still can't get sendmail or postfix to work. Four hours!! I want to be a Drupal expert, not an Ubuntu expert. That's really the heart of my problem: I spend way too much time learning Ubuntu's ins and outs, and not nearly enough time working on Drupal. So here goes: Is there a web host out there anywhere that offers the speed of Linode, but will let me focus on Drupal instead of sys-admin-ing? Thanks! [I know, I know. There are going to be lots of people who read this saying "just learn Ubuntu like a real developer". And I get that. I do. But when I work full-time and try to develop some of these sites in my evenings and weekends, I really feel like the sys-admin stuff gets in the way.]

    Read the article

  • Disk operations in windows 7 are slow

    - by Skadlig
    My computer started lagging last Sunday. I tried to reboot it and it failed. Trying to boot into safe mode takes around two hours. It mainly freezes on two files: scsiport.sys and classpnp.sys. When it has finally started, all disk operations are really slow. Once it has run for a while it goes faster, probably due to data having been moved into RAM. It previously froze on another file, one associated with Avast, but uninstalling Avast didn't really help. A critical Windows update was installed on Sunday, but rolling back the update didn't help. I had a guess about the sound card, but disabling the sound card drivers also didn't help. I have an inkling that Intel Rapid Storage Technology might be acting up, but I can't reinstall it from safe mode and I haven't been able to log into normal mode for a while. I would appreciate suggestions regarding how to get into normal mode again and/or what the root cause might be.

    Read the article

  • Lots of first chance Microsoft.CSharp.RuntimeBinderExceptions thrown when dealing with dynamics

    - by Orion Edwards
    I've got a standard 'dynamic dictionary' type class in C# - class Bucket : DynamicObject { readonly Dictionary<string, object> m_dict = new Dictionary<string, object>(); public override bool TrySetMember(SetMemberBinder binder, object value) { m_dict[binder.Name] = value; return true; } public override bool TryGetMember(GetMemberBinder binder, out object result) { return m_dict.TryGetValue(binder.Name, out result); } } Now I call it, as follows: static void Main(string[] args) { dynamic d = new Bucket(); d.Name = "Orion"; // 2 RuntimeBinderExceptions Console.WriteLine(d.Name); // 2 RuntimeBinderExceptions } The app does what you'd expect it to, but the debug output looks like this: A first chance exception of type 'Microsoft.CSharp.RuntimeBinder.RuntimeBinderException' occurred in Microsoft.CSharp.dll A first chance exception of type 'Microsoft.CSharp.RuntimeBinder.RuntimeBinderException' occurred in Microsoft.CSharp.dll 'ScratchConsoleApplication.vshost.exe' (Managed (v4.0.30319)): Loaded 'Anonymously Hosted DynamicMethods Assembly' A first chance exception of type 'Microsoft.CSharp.RuntimeBinder.RuntimeBinderException' occurred in Microsoft.CSharp.dll A first chance exception of type 'Microsoft.CSharp.RuntimeBinder.RuntimeBinderException' occurred in Microsoft.CSharp.dll Any attempt to access a dynamic member seems to output a RuntimeBinderException to the debug logs. While I'm aware that first-chance exceptions are not a problem in and of themselves, this does cause some problems for me: I often have the debugger set to "break on exceptions", as I'm writing WPF apps, and otherwise all exceptions end up getting converted to a DispatcherUnhandledException, and all the actual information you want is lost. WPF sucks like that. As soon as I hit any code that's using dynamic, the debug output log becomes fairly useless. All the useful trace lines that I care about get hidden amongst all the useless RuntimeBinderExceptions Is there any way I can turn this off, or is the RuntimeBinder unfortunately just built like that? Thanks, Orion

    Read the article

  • Localize WiX installer which uses the Firewall extension

    - by tronda
    I've got a WiX installer project which uses MSBuild to generate the MSI file. The WXS file includes the WiX firewall extension: xmlns:fire="http://schemas.microsoft.com/wix/FirewallExtension" I've defined two cultures in the MSBuild file with the following definition: <PropertyGroup> ... <Cultures>en-us;no-no</Cultures> </PropertyGroup> I've also added the translated resources: <ItemGroup> <EmbeddedResource Include="lang\Firewall_no-no.wxl" /> <EmbeddedResource Include="lang\WixUI_no-no.wxl" /> </ItemGroup> These represent translations to Norwegian for the Firewall extension and the WixUI extension. When I run the build it succeeds with the en-us part, but the no-no part fails with the following error messages: C:\delivery\Dev\wix30_public\src\ext\FirewallExtension\wixlib\FirewallExtension.wxs(19): error LGHT0102: The localization variable !(loc.WixSchedFirewallExceptionsInstall) is unknown. Please ensure the variable is defined. .... A couple of issues: I don't know where the C:\delivery directory comes from. I don't have such a directory. The localization variables referenced in the error message have been translated in the Firewall_no-no.wxl file. When I run MSBuild with more detailed information I see the following output right before the error message: Task "Light" Command: C:\Program Files (x86)\Windows Installer XML v3\bin\Light.exe -cultures:no-no -ext "C:\Program Files (x86)\Windows Installer XML v3\bin\WixUIExtension.dll" -ext "C:\Program Files (x86)\Windows Installer XML v3\bin\WixUtilExtension.dll" -ext "C:\Program Files (x86)\Windows Installer XML v3\bin\WixFirewallExtension.dll" -loc lang\Firewall_no-no.wxl -loc lang\WixUI_no-no.wxl -out F:\Projects\MyProd\MyProj\Installer\bin\Debug\no-no\MyInstaller.msi -pdbout F:\Projects\MyProd\MyProj\Installer\bin\Debug\no-no\MyInstaller.wixpdb obj\Debug\MyProj.wixobj As the details show, the MSBuild task results in two -loc parameters being passed to the Light executable. Not sure if that would be the reason for this problem. Any ideas on how to solve this?

    Read the article
