Search Results

Search found 9398 results on 376 pages for 'matt dev'.


  • Why are the analogue stereo input and output of my M-Audio 24/96 soundcard not available to me in Ubuntu Lucid?

    - by MIDoubleKO
    I have installed Lucid on an old Mac PowerPC G4 desktop with an M-Audio Audiophile 24/96 soundcard. The only inputs and outputs I can select in the audio preferences are the digital ones. "lspci -v" shows the card as:

        0001:10:13.0 Multimedia audio controller: VIA Technologies Inc. ICE1712 [Envy24] PCI Multi-Channel I/O Controller (rev 02)
            Subsystem: VIA Technologies Inc. Device d634
            Flags: bus master, medium devsel, latency 16, IRQ 53
            I/O ports at 0440 [size=32]
            I/O ports at 04b0 [size=16]
            I/O ports at 04a0 [size=16]
            I/O ports at 0400 [size=64]
            Capabilities: <access denied>
            Kernel driver in use: ICE1712
            Kernel modules: snd-ice1712

    "cat /proc/asound/cards" shows:

        0 [Tumbler ]: PMac Tumbler - PowerMac Tumbler
                      PowerMac Tumbler (Dev 21) Sub-frame 0
        1 [M2496   ]: ICE1712 - M Audio Audiophile 24/96
                      M Audio Audiophile 24/96 at 0x440, irq 53

    "aplay -L" lists:

        pulse
            Playback/recording through the PulseAudio sound server
        front:CARD=Tumbler,DEV=0
            PowerMac Tumbler, PowerMac Tumbler
            Front speakers
        front:CARD=M2496,DEV=0
            M Audio Audiophile 24/96, ICE1712 multi
            Front speakers
        surround40:CARD=M2496,DEV=0
            M Audio Audiophile 24/96, ICE1712 multi
            4.0 Surround output to Front and Rear speakers
        surround41:CARD=M2496,DEV=0
            M Audio Audiophile 24/96, ICE1712 multi
            4.1 Surround output to Front, Rear and Subwoofer speakers
        surround50:CARD=M2496,DEV=0
            M Audio Audiophile 24/96, ICE1712 multi
            5.0 Surround output to Front, Center and Rear speakers
        surround51:CARD=M2496,DEV=0
            M Audio Audiophile 24/96, ICE1712 multi
            5.1 Surround output to Front, Center, Rear and Subwoofer speakers
        iec958:CARD=M2496,DEV=0
            M Audio Audiophile 24/96, ICE1712 multi
            IEC958 (S/PDIF) Digital Audio Output

    I believe the problem is with detecting the analogue input/output. Sometimes I can get sound from the device, but it is a sheet of white noise, and tinkering makes it go away again. I don't know if that is a separate problem or if it is linked to not being able to see the analogue inputs/outputs in the sound preferences. As for the white noise, I have installed the Envy24 control panel and spent a lot of time playing with the settings, but even when I can get the white noise I can never get it to a quality where I can actually hear what is being played. The internal speaker plays audio fine, and plugging in an NI Audio 4DJ via USB also plays sound, although with some static; I believe that is due to an underpowered USB2 PCI expansion card not being able to deliver enough power to the device. I have also seen other people with problems with this device, so it may be a bug in the driver, but that is another matter. I would like to get the M-Audio card working so I can enjoy my music once again. As a note, I do not currently have any audio equipment capable of sending or receiving audio via the digital inputs and outputs, so I cannot check whether they are working. The sound preferences show a wide range of digital in and out options with various surround configurations, but no analogue ins and outs. Any help would be greatly appreciated.
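
    One hedged diagnostic worth trying here is to bypass PulseAudio entirely and point ALSA straight at the analogue front device that aplay -L lists above. A minimal sketch of an ~/.asoundrc, assuming the card keeps the ALSA name M2496 (the routing itself is an assumption, only the device name comes from the output above):

        pcm.!default {
            type plug
            slave.pcm "front:CARD=M2496,DEV=0"   # the analogue front pair from aplay -L
        }
        ctl.!default {
            type hw
            card M2496
        }

    Testing with "speaker-test -D front:CARD=M2496,DEV=0 -c 2" would then confirm whether the analogue path works at all below the mixer level.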


  • Routing RFC1918 addresses through dd-wrt via a switch

    - by espenfjo
    I am a bit stuck with an experiment of mine. I have a network looking somewhat like this:

                     Internet
                        |
             ---- | Switch | ----
             |                  |
        Server w/pub IP    DD-WRT router 192.168.1.1
                                |
                        RFC1918 clients 192.168.1.0/24

    What I want is for the public-IP server and the RFC1918 clients to speak directly with each other. On the server with the public IP I have this route:

        192.168.1.0/24 dev eth0 scope link

    and I can see that packets are in fact reaching the dd-wrt router for 192.168.1.1, even though I get no answer. Trying to reach one of the RFC1918 clients from the public-IP server gets no result, as the dd-wrt router does not announce that network on its external interface (arp who-has 192.168.1.107 tell xxx.xxx.xxx.xxx, but no answer). The router, being a WLAN dd-wrt router, of course has a load of routes, VLANs and interfaces:

        xxx.xxx.xxx.1 dev vlan2 scope link
        192.168.1.0/24 dev br0 proto kernel scope link src 192.168.1.1
        192.168.1.0/24 dev eth1 proto kernel scope link src 192.168.1.244
        84.215.64.0/18 dev vlan2 proto kernel scope link src xxx.xxx.xxx.xxx
        169.254.0.0/16 dev br0 proto kernel scope link src 169.254.255.1
        127.0.0.0/8 dev lo scope link
        0.0.0.0 via xxx.xxx.xxx.1 dev vlan2

    xxx.xxx.xxx.xxx being the public IP, and xxx.xxx.xxx.1 being the default route for the public IP. I am not sure where to continue with this. I reckon I need both routing on the dd-wrt router and some iptables magic? Why do something this complex? Why not ;) Also, do not mind that "Internet" can get RFC1918 traffic; it won't go outside of these walls.

    EDIT 1: Following the tip from stew I do indeed get the correct ARP flowing, and after adding an iptables rule allowing traffic from that specific public-IP machine I get traffic between the systems! Oddly enough, though, the speed I get from the public-IP server to the RFC1918 clients is the same as if the traffic were routed out onto the Internet and back.

    EDIT 2: OK, disconnecting the external Internet connection still gives the same crappy transfer speed, so it has to be something else.

    EDIT 3: OK, I guess there are other reasons for this crappy speed. Case closed. :)
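
    For readers trying to reproduce the fix, a hedged reconstruction of the missing "routing + iptables magic" - assuming stew's tip was proxy ARP, since EDIT 1 only says the correct ARP started flowing - would be, on the router:

        # answer ARP for the LAN prefix on the WAN-facing interface
        echo 1 > /proc/sys/net/ipv4/conf/vlan2/proxy_arp
        # permit the forwarded traffic from the public-IP server
        iptables -I FORWARD -s <server-public-ip> -d 192.168.1.0/24 -j ACCEPT

    The interface name and placeholder address are assumptions taken from the route table above.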


  • Apache Server Redirect Subdomain to Port

    - by Matt Clark
    I am trying to set up my server with a Minecraft server on a non-standard port behind a subdomain, so that connecting with Minecraft reaches the correct port, while a web browser gets a web page. i.e.:

        Minecraft:    minecraft.example.com:25565 -> example.com:25465
        Web browser:  minecraft.example.com:80    -> displays an HTML page

    I am attempting to do this with the following VirtualHosts in Apache:

        Listen 25565

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName minecraft.example.com
            DocumentRoot /var/www/example.com/minecraft
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/example.com/minecraft/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

        <VirtualHost *:25565>
            ServerAdmin [email protected]
            ServerName minecraft.example.com
            ProxyPass / http://localhost:25465 retry=1 acquire=3000 timeout=6$
            ProxyPassReverse / http://localhost:25465
        </VirtualHost>

    Running this configuration, when I browse to minecraft.example.com I can see the files in the /var/www/example.com/minecraft/ folder, but if I try to connect in Minecraft I get an exception, and in the browser I get a page with the following information:

        minecraft.example.com:25565 -> Proxy Error
        The proxy server received an invalid response from an upstream server.
        The proxy server could not handle the request GET /.
        Reason: Error reading from remote server

    Could anybody share some insight on what I may be doing wrong and what the best possible solution would be to fix this? Thanks.
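
    Worth noting when reading this: ProxyPass belongs to mod_proxy, which speaks HTTP, while the Minecraft client uses its own raw TCP protocol - hence the "invalid response from an upstream server". A hedged alternative sketch is to leave port 80 to Apache and forward the game port at the packet level instead (port numbers as in the question):

        # redirect incoming TCP 25565 to the Minecraft server listening on 25465
        iptables -t nat -A PREROUTING -p tcp --dport 25565 -j REDIRECT --to-ports 25465

    With that in place, the <VirtualHost *:25565> block and the Listen 25565 line would no longer be needed.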


  • Linux server became extremely slow

    - by Ariel Aharonson
    I have a file-sharing website, and my files are hosted on a server with these specifications:

        32GB RAM
        12x 3TB disks
        2x Intel quad-core E5620

    Files on this server are up to 4GB each, and 446GB of the 36TB is in use:

        [root@hosted-by ~]# df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda2              50G  2.7G   44G   6% /
        tmpfs                  16G     0   16G   0% /dev/shm
        /dev/sda1              97M   57M   36M  62% /boot
        /dev/mapper/VolGroup01-LogVol00
                               33T  494G   33T   2% /home

    And take a look at this [screenshot omitted]: why is the wa% so high? (I think that is what makes the server so slow.)
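
    A hedged first step rather than a definitive answer: a high wa% means the CPUs are idle waiting on disk, so the usual move is to identify which device and which process are generating the I/O while the slowdown is happening:

        iostat -xk 5    # per-device %util and await, from the sysstat package
        iotop -o        # show only the processes currently doing I/O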


  • On Solaris, how do you mount a second zfs system disk for diagnostics?

    - by Matt Ball
    (Cross-posted from Stack Overflow.) I've got two hard disks in my computer, and have installed Solaris 10u8 on the first and OpenSolaris 2010.3 (dev onnv_134) on the second. Both systems use ZFS and were independently created with a zpool name of 'rpool'. While running Solaris 10u8 on the first disk, how do I mount the second ZFS hard disk (at /dev/dsk/c1d1s0) on an arbitrary mount point (like /a) for diagnostics?
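
    A hedged sketch of the usual approach, assuming the second pool is healthy: because both pools share the name "rpool", the import needs the pool's numeric ID and a temporary new name, plus an alternate root so its mount points don't collide with the running system (the ID and new name below are placeholders):

        zpool import                           # list importable pools with their numeric IDs
        zpool import -f -R /a <id> altrpool    # import by ID, rooted at /a, renamed to altrpool
        zfs list -r altrpool                   # datasets are now browsable under /a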


  • Why does my Fedora HD come up as 124GB when it was created to be 700GB in Hyper-V?

    - by barfoon
    Any ideas? How can I tell Fedora to use the full 700GB, and do I have to reformat to do this?

        Filesystem                       Size  Used Avail Use% Mounted on
        /dev/mapper/vg_computer-lv_root  124G  3.9G  114G   4% /
        /dev/sda1                        194M   23M  162M  12% /boot
        tmpfs                            995M  320K  994M   1% /dev/shm

    Hyper-V settings: [screenshot omitted]

    Thank you.

    pvdisplay -C:

        PV         VG        Fmt  Attr PSize   PFree
        /dev/sda2  vg_mycomp lvm2 a-   127.29G    0

    lvdisplay -C:

        LV      VG        Attr   LSize   Origin Snap% Move Log Copy% Convert
        lv_root vg_mycomp -wi-ao 125.36G
        lv_swap vg_mycomp -wi-ao   1.94G

    vgdisplay -C:

        VG        #PV #LV #SN Attr   VSize   VFree
        vg_mycomp   1   2   0 wz--n- 127.29G    0
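
    A hedged reading of the output above: the partition behind the PV (/dev/sda2) is only ~127GB, so LVM's numbers are self-consistent; the remaining space was most likely never partitioned inside the guest. If fdisk confirms the virtual disk really is 700GB, a sketch of the standard LVM grow (no reformat) would be:

        fdisk -l /dev/sda                             # confirm the raw disk size the guest sees
        # grow /dev/sda2 (or add /dev/sda3 and vgextend), then:
        pvresize /dev/sda2                            # let LVM see the larger physical volume
        lvextend -l +100%FREE /dev/vg_mycomp/lv_root
        resize2fs /dev/vg_mycomp/lv_root              # grow the filesystem to match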


  • Fedora 11 System - Failed Hard Drive Removed, and Boot gets GRUB Hard Disk Error

    - by Mindful
    Greetings, I have a machine with a 120GB ATA drive that has what I thought to be non-essential data on it. I also have a 320GB SATA hard drive with the OS/applications/files (the good data I want to keep). The 120GB ATA drive is failing, I believe, as my computer kept slowing to a halt. However, when I remove the drive in the BIOS my computer will not start, and says "GRUB Hard Disk Error". I know that my Fedora system has an LVM setup. I am looking to just remove the 120GB drive from the mix and have one hard drive. How do I recover? Thank you. I have access to a Linux live CD right now and can make any changes; however, it won't boot into my OS - it fails. UPDATE: here's my grub.conf:

        # grub.conf generated by anaconda
        #
        # Note that you do not have to rerun grub after making changes to this file
        # NOTICE: You have a /boot partition. This means that
        #         all kernel and initrd paths are relative to /boot/, eg.
        #         root (hd1,0)
        #         kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
        #         initrd /initrd-version.img
        #boot=/dev/sda1
        default=0
        timeout=5
        splashimage=(hd1,0)/grub/splash.xpm.gz
        hiddenmenu
        title Fedora (2.6.30.10-105.2.23.fc11.i686.PAE)
            root (hd1,0)
            kernel /vmlinuz-2.6.30.10-105.2.23.fc11.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet
            initrd /initrd-2.6.30.10-105.2.23.fc11.i686.PAE.img
        title Fedora (2.6.30.9-102.fc11.i686.PAE)
            root (hd1,0)
            kernel /vmlinuz-2.6.30.9-102.fc11.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet
            initrd /initrd-2.6.30.9-102.fc11.i686.PAE.img
        title Fedora (2.6.27.24-170.2.68.fc10.i686.PAE)
            root (hd1,0)
            kernel /vmlinuz-2.6.27.24-170.2.68.fc10.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet
            initrd /initrd-2.6.27.24-170.2.68.fc10.i686.PAE.img
        title Fedora (2.6.27.24-170.2.68.fc10.i686)
            root (hd1,0)
            kernel /vmlinuz-2.6.27.24-170.2.68.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
            initrd /initrd-2.6.27.24-170.2.68.fc10.i686.img
        title Fedora (2.6.27.21-170.2.56.fc10.i686)
            root (hd1,0)
            kernel /vmlinuz-2.6.27.21-170.2.56.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
            initrd /initrd-2.6.27.21-170.2.56.fc10.i686.img
        title Fedora (2.6.27.19-170.2.35.fc10.i686)
            root (hd1,0)
            kernel /vmlinuz-2.6.27.19-170.2.35.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
            initrd /initrd-2.6.27.19-170.2.35.fc10.i686.img
        title Upgrade to Fedora 10 (Cambridge)
            kernel /upgrade/vmlinuz preupgrade repo=hd::/var/cache/yum/preupgrade stage2=http://chi-10g-1-mirror.fastsoft.net/pub/linux/fedora/linux/releases/10/Fedora/i386/os/images/install.img ks=hd:UUID=f11769ba-29bc-46de-8c40-a949720a438e:/upgrade/ks.cfg
            initrd /upgrade/initrd.img
        title Win
            rootnoverify (hd0,0)
            chainloader +1
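
    A hedged sketch of the usual recovery, assuming /boot lives on the surviving 320GB disk: every Fedora stanza above references (hd1,0), the second BIOS disk, so once the 120GB drive is removed those references (and the MBR GRUB was installed to) no longer resolve. From the live CD, reinstalling GRUB onto the remaining disk looks roughly like:

        # in the grub legacy shell, matching the grub.conf above
        grub> root (hd0,0)     # the /boot partition on the now-first disk
        grub> setup (hd0)      # write stage1 into that disk's MBR

    followed by editing grub.conf so the splashimage and root lines say (hd0,0) instead of (hd1,0).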


  • should I put my multi-device btrfs filesystem on disk partitions or raw devices?

    - by Glyph
    I'm going to create a multi-device btrfs filesystem. The official recommendation from the documentation appears to be to create it on raw devices, i.e. /dev/sdb, /dev/sdc, etc., but this is not explained. Are there any advantages to creating a partition table on these devices first, either GPT or MBR, and then creating the filesystem on /dev/sdb1, /dev/sdc1 et cetera? Does feeding btrfs whole devices have some particular advantage, or are these basically equivalent?
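
    For contrast, a minimal sketch of the whole-device form the documentation recommends (device names are placeholders):

        mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc   # metadata and data mirrored across both disks
        mount /dev/sdb /mnt                              # mounting any member device mounts the whole filesystem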


  • Disk monitor script with long filesystem names

    - by DD.
    $ df -H

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/mapper/vg_app001-lv_root
                               34G   12G   21G  35% /
        tmpfs                 8.4G     0  8.4G   0% /dev/shm
        /dev/sda1             508M   54M  429M  12% /boot
        /dev/mapper/vg_app001-lv_home
                               19G  309M   17G   2% /home

    I want to run a disk monitor script, but because the filesystem name is so long the row gets split across two lines and the script fails. Any suggestions?
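
    A minimal sketch of the standard fix: df's -P (POSIX) flag disables exactly this wrapping, keeping each filesystem on one line no matter how long the device name is, so column positions become reliable:

        df -H -P | awk 'NR > 1 { print $6, $2, $4 }'   # mountpoint, total size, free space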


  • Group traffic shaping with traffic control?

    - by mmcbro
    I'm trying to limit the outbound bandwidth generated by an application with Linux tc. The application sends me the source port of each request, which I use as a filter to limit each user to a given download speed. I feel that my setup could be managed far better if I had a better knowledge of tc.

    At the application level, users are categorized as members of a group, and each group has a limited bandwidth. For example:

        Members of group A: 512kbit/s
        Members of group B: 1Mbit/s
        Members of group C: 2Mbit/s

    When a user connects to the application, it retrieves the source port of the user's request and sends me that source port and the bandwidth to which the user must be limited, depending on the group they belong to. With this information I add the appropriate rules so that the user (the source port, in reality) is limited to the right bandwidth. If the connecting user isn't a member of any group, they should be limited to a default bandwidth. I currently manage this with a self-made daemon that adds or removes rules when it receives a request from the application. With my limited knowledge of tc I'm not able to limit the other users (the ones that aren't in any group) to a default speed, and my configuration seems awful to me. Here is the base of my tc qdisc and classes:

        tc qdisc add dev eth0 root handle 1: htb
        tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbps ceil 125mbps

    To classify a user at a given speed I have to add one subclass and then associate one filter with it:

        # a member of group A
        tc class add dev eth0 parent 1:1 classid 1:11 htb rate 512kbps ceil 512kbps
        # its associated filter to match his source port
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 50001 flowid 1:11

        # a member of group A again
        tc class add dev eth0 parent 1:1 classid 1:12 htb rate 512kbps ceil 512kbps
        # its associated filter to match his source port
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 61524 flowid 1:12

        # a member of group B
        tc class add dev eth0 parent 1:1 classid 1:13 htb rate 1000kbps ceil 1000kbps
        # its associated filter to match his source port
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 57200 flowid 1:13

    I already know that a source port could be the same if it comes from a different IP address; however, the application is behind a proxy, so I don't have to manage IP addresses in this situation. I would like to know how to limit all other users (requests/source ports, whatever you name them) to a given speed each - I mean that each connection should be able to use at most 100kbit/s, for example, not a shared 100kbit/s. I would also like to know if there is a way to simplify my rules. I don't know if it is possible to use only one class per group and associate multiple filters with the same class, so that each user could be handled by one class and not one class per user. I appreciate any advice, thanks.
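
    A hedged sketch for the default-speed requirement: htb has a `default` parameter on the root qdisc that sends every packet no filter matched into a designated class, which is exactly the "all other users" case. And since several filters may indeed share one flowid, one class per group is possible - but the whole group then shares that rate instead of each member getting it:

        # replaces the root qdisc line above (note the added "default 99")
        tc qdisc add dev eth0 root handle 1: htb default 99
        tc class add dev eth0 parent 1:1 classid 1:99 htb rate 100kbit ceil 100kbit
        # caveat: this 100kbit is shared by all unclassified traffic; a strict
        # per-connection cap still needs one class per source port (or hashed filters)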


  • route http and ssh traffic normally, everything else via vpn tunnel

    - by Normadize
    I've read quite a bit and feel I'm close, but I'm pulling my hair out... please help! I have an OpenVPN client whose server pushes routes and also changes the default gateway (I know I can prevent that with --route-nopull). I'd like all outgoing HTTP and SSH traffic to go via the local gateway, and everything else via the VPN. The local IP is 192.168.1.6/24, gw 192.168.1.1. The OpenVPN local IP is 10.102.1.6/32, gw 10.102.1.5. The OpenVPN server is at {OPENVPN_SERVER_IP}. Here's the route table after the OpenVPN connection:

        # ip route show table main
        0.0.0.0/1 via 10.102.1.5 dev tun0
        default via 192.168.1.1 dev eth0 proto static
        10.102.1.1 via 10.102.1.5 dev tun0
        10.102.1.5 dev tun0 proto kernel scope link src 10.102.1.6
        {OPENVPN_SERVER_IP} via 192.168.1.1 dev eth0
        128.0.0.0/1 via 10.102.1.5 dev tun0
        169.254.0.0/16 dev eth0 scope link metric 1000
        192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.6 metric 1

    This makes all packets go via the VPN tunnel except those destined for 192.168.1.0/24. Doing "wget -qO- http://echoip.org" shows the VPN server's address, as expected; the packets have 10.102.1.6 (the VPN local IP) as their source address and are routed via tun0, as reported by tcpdump -i tun0 (tcpdump -i eth0 sees none of this traffic). What I tried was:

    - create a 2nd routing table holding the 192.168.1.0/24 routing info (copied from the main table above)
    - add an iptables -t mangle -I PREROUTING rule to mark packets destined for port 80
    - add an ip rule to match the marked packets and point them to the 2nd routing table
    - add ip rules for "to 192.168.1.6" and "from 192.168.1.6" pointing to the 2nd routing table (though this is superfluous)
    - disable reverse-path validation with net.ipv4.conf.tun0.rp_filter=0 and net.ipv4.conf.eth0.rp_filter=0

    I also tried an iptables mangle OUTPUT rule and an iptables nat PREROUTING rule. It still fails, and I'm not sure what I'm missing:

    - with the mangle PREROUTING rule, the packet still goes via the VPN
    - with the mangle OUTPUT rule, the packet times out

    Is it not the case that, to achieve what I want, when doing wget http://echoip.org the packet's source address should be changed to 192.168.1.6 before routing it off? But if I do that, wouldn't the response from the HTTP server be routed back to 192.168.1.6 and not seen by wget, which is still bound to tun0 (the VPN interface)? Can a kind soul please help? What commands would you execute after OpenVPN connects to achieve this? Looking forward to hair regrowth...
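
    A hedged sketch of the recipe this setup usually needs (table number and mark value are arbitrary). The two key details: locally generated packets traverse mangle OUTPUT, not PREROUTING, and the source address must be rewritten so replies come back via eth0:

        ip route add default via 192.168.1.1 dev eth0 table 100
        ip rule add fwmark 1 table 100
        iptables -t mangle -A OUTPUT -p tcp -m multiport --dports 80,22 -j MARK --set-mark 1
        iptables -t nat -A POSTROUTING -o eth0 -m mark --mark 1 -j SNAT --to-source 192.168.1.6
        sysctl -w net.ipv4.conf.eth0.rp_filter=0   # as already attempted above

    On the source-address worry: connection tracking undoes the SNAT on reply packets, so wget sees the response normally; it is only "bound" to tun0 if explicitly told to bind to an interface.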


  • Run command automatically as root after login

    - by J V
    I'm using evrouter to simulate keypresses for my mouse's extra buttons. It works great, but I need to run the command with sudo to make it work, so I can't just use my DE to handle autostart. I considered init.d, but from what I've heard that only covers the different stages of boot, and I need this to run as root after login.

        $ cat .evrouterrc
        "Logitech G500" "/dev/input/event4" any key/277 "XKey/0"
        "Logitech G500" "/dev/input/event4" any key/280 "XKey/9"
        "Logitech G500" "/dev/input/event4" any key/281 "XKey/8"

        $ sudo evrouter /dev/input/event4
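
    A hedged sketch of one common pattern (the username and binary path are placeholders): grant passwordless sudo for this one exact command, then let the DE autostart entry run it through sudo after login:

        # /etc/sudoers.d/evrouter - edit with: visudo -f /etc/sudoers.d/evrouter
        jv ALL=(root) NOPASSWD: /usr/bin/evrouter /dev/input/event4

    The DE autostart entry then becomes: sudo /usr/bin/evrouter /dev/input/event4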


  • GWTAI applet integration in GWT problem

    - by andy
    Hi everybody. I'm working with gwtai to integrate a Java applet into my GWT project. Basic communication from my main application to the applet (such as invoking simple methods that return int or boolean values) works. But the main reason I need this applet is that it has to connect to another server, receive an answer, and pass it to my GWT application. There's one basic method in the applet:

        public String SendAndReceive(String host, int sendPort, int receivePort, String query)

    that connects to the server, receives an answer and returns it as a String. When I invoke this method like this:

        applet.SendAndReceive("0.0.0.0", 9099, 2000, "show streams;");

    I constantly run into the following error (full stack trace at the end):

        com.google.gwt.core.client.JavaScriptException: (String): Error calling method on NPObject!
        [plugin exception: java.security.AccessControlException: access denied (java.util.PropertyPermission * read,write)]

    I couldn't find a solution (gwtai is a rather uncommon topic). What I found out - and what the exception suggests - is that there's a security problem, maybe because I'm connecting to another server. I also read something about the browser's same-origin policy, which points in the same direction. Up to now I have never worked with Java applets, so if someone has a solution or a hint I would be very thankful. If more code would help, I can provide it. Thanks, Andy

    The full error message:

        21:03:49.864 [ERROR] [follovizergwt] Unable to load module entry point class follovizer.gwt.client.FolloVizerGWT (see associated exception for details)
        com.google.gwt.core.client.JavaScriptException: (String): Error calling method on NPObject! [plugin exception: java.security.AccessControlException: access denied (java.util.PropertyPermission * read,write)].
            at com.google.gwt.dev.shell.BrowserChannelServer.invokeJavascript(BrowserChannelServer.java:195)
            at com.google.gwt.dev.shell.ModuleSpaceOOPHM.doInvoke(ModuleSpaceOOPHM.java:120)
            at com.google.gwt.dev.shell.ModuleSpace.invokeNative(ModuleSpace.java:507)
            at com.google.gwt.dev.shell.ModuleSpace.invokeNativeObject(ModuleSpace.java:264)
            at com.google.gwt.dev.shell.JavaScriptHost.invokeNativeObject(JavaScriptHost.java:91)
            at follovizer.gwt.client.AnduINAppletImpl.SendAndReceive(AnduINAppletImpl.java)
            at follovizer.gwt.client.FolloVizerGWT.createLayout(FolloVizerGWT.java:92)
            at follovizer.gwt.client.FolloVizerGWT.onModuleLoad(FolloVizerGWT.java:40)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at com.google.gwt.dev.shell.ModuleSpace.onLoad(ModuleSpace.java:369)
            at com.google.gwt.dev.shell.OophmSessionHandler.loadModule(OophmSessionHandler.java:185)
            at com.google.gwt.dev.shell.BrowserChannelServer.processConnection(BrowserChannelServer.java:380)
            at com.google.gwt.dev.shell.BrowserChannelServer.run(BrowserChannelServer.java:222)
            at java.lang.Thread.run(Thread.java:619)
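
    A hedged note on the exception itself: unsigned applets run in a sandbox that denies PropertyPermission and sockets to hosts other than the applet's origin, which matches this error exactly. The usual production fix is signing the applet jar; for local development, a policy grant along these lines (the codeBase is an assumption) loosens the sandbox:

        // added to the JRE's java.policy, or a user policy file
        grant codeBase "http://127.0.0.1:8888/-" {
            permission java.util.PropertyPermission "*", "read,write";
            permission java.net.SocketPermission "*", "connect,resolve";
        };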


  • Gwt-gdata authentication

    - by Raffo
    Hi guys, I'm writing an application with GWT, and I found that there's a library (gwt-gdata) that makes GData features easy to use. In particular I need the integration with Google Calendar. I followed the official guide on the gwt-gdata site for authentication ( http://code.google.com/p/gwt-gdata/wiki/GettingStarted ), but unfortunately I get an error. This is the error:

        17:59:12.463 [ERROR] [testmappa] Unable to load module entry point class testMappa.client.TestMappa (see associated exception for details)
        com.google.gwt.core.client.JavaScriptException: (TypeError): $wnd.google.accounts is undefined
        fileName: http://127.0.0.1:8888
        lineNumber: 29
        stack: ("http://www.google.com/calendar/feeds/")@http://127.0.0.1:8888:29
        connect("http://127.0.0.1:8888/TestMappa.html?gwt.codesvr=127.0.0.1:9997","`gL1<a3s4B&Y{(Ci","127.0.0.1:9997","testmappa","2.0")@:0
        ((void 0),"testmappa","http://127.0.0.1:8888/testmappa/")@http://127.0.0.1:8888/testmappa/hosted.html?testmappa:264
        z()@http://127.0.0.1:8888/testmappa/testmappa.nocache.js:2
        (-2)@http://127.0.0.1:8888/testmappa/testmappa.nocache.js:8
            at com.google.gwt.dev.shell.BrowserChannelServer.invokeJavascript(BrowserChannelServer.java:195)
            at com.google.gwt.dev.shell.ModuleSpaceOOPHM.doInvoke(ModuleSpaceOOPHM.java:120)
            at com.google.gwt.dev.shell.ModuleSpace.invokeNative(ModuleSpace.java:507)
            at com.google.gwt.dev.shell.ModuleSpace.invokeNativeObject(ModuleSpace.java:264)
            at com.google.gwt.dev.shell.JavaScriptHost.invokeNativeObject(JavaScriptHost.java:91)
            at com.google.gwt.accounts.client.User.login(User.java)
            at testMappa.client.TestMappa.onModuleLoad(TestMappa.java:68)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at com.google.gwt.dev.shell.ModuleSpace.onLoad(ModuleSpace.java:369)
            at com.google.gwt.dev.shell.OophmSessionHandler.loadModule(OophmSessionHandler.java:185)
            at com.google.gwt.dev.shell.BrowserChannelServer.processConnection(BrowserChannelServer.java:380)
            at com.google.gwt.dev.shell.BrowserChannelServer.run(BrowserChannelServer.java:222)
            at java.lang.Thread.run(Thread.java:637)

    I'm not able to understand the reason for this error. My code is simply:

        String scope = "http://www.google.com/calendar/feeds/";
        User.login(scope);

    and as far as I know it should work as it is. I don't know what to do, and I'm here to ask how to solve this problem, and whether I could directly use the native gdata Java library instead - though I believe that is not possible for client-side GWT code (since the code is compiled to JavaScript). Thank you.
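
    A hedged guess from the TypeError alone: $wnd.google.accounts is provided by Google's JavaScript API, so the symptom suggests the host HTML page never loads that script before the GWT module runs. The GettingStarted guide's host-page prerequisites are worth re-checking; the typically missing line looks like:

        <script type="text/javascript" src="http://www.google.com/jsapi"></script>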


  • How to set up CF9 with IIS7 (multiple instances, virtual hosting by hostname)

    - by Henry
    I'm used to setting up CF9 (Developer Edition) on Windows with Apache. I would like to try IIS7, which comes with Win7 Pro. What are the steps to set it up so that I can have:

        www.siteA.dev
        www.siteB.dev

    Both of these point to 127.0.0.1 via the Windows hosts file. I would like siteA.dev and siteB.dev to use two different CF instances. I have already installed CF9 Developer Edition with the 2nd option. What should I do next? Do I need to use IIS Manager, or is CF's Web Server Configuration tool all I need? Where do I enter the data in IIS, like a vhost in Apache? Thank you
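
    A hedged sketch of the two halves (site names and paths are placeholders). The hosts file only provides name resolution; the IIS equivalent of an Apache vhost is a per-site "binding" with a host name, set in IIS Manager under Site Bindings or from a command prompt:

        REM hosts file (C:\Windows\System32\drivers\etc\hosts) gets:
        REM   127.0.0.1  www.siteA.dev
        REM   127.0.0.1  www.siteB.dev
        %windir%\system32\inetsrv\appcmd add site /name:siteA /bindings:http/*:80:www.siteA.dev /physicalPath:C:\sites\siteA
        %windir%\system32\inetsrv\appcmd add site /name:siteB /bindings:http/*:80:www.siteB.dev /physicalPath:C:\sites\siteB

    Each IIS site can then be wired to its own CF instance with CF's Web Server Configuration tool.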


  • What does the "build-essential" & "build-dep" Terminal commands mean & do in Linux based operating s

    - by Adam Siddhi
    Hi. I am researching how to install Ruby 1.9.1 in Xubuntu 10.04, and I came across the commands build-essential and build-dep multiple times. Sometimes they are followed by packages, and sometimes they are both preceded and followed by packages. The two examples I am looking at are:

        sudo apt-get install build-essential zlib1g zlib1g-dev zlibc libruby1.9 libxml2 libxml2-dev libxslt-dev
        sudo apt-get build-dep ruby1.9

    and

        sudo apt-get install ruby irb ri rdoc ruby1.8-dev libzlib-ruby libyaml-ruby libreadline-ruby libncurses-ruby libcurses-ruby libruby libruby-extras libfcgi-ruby1.8 build-essential libopenssl-ruby libdbm-ruby libdbi-ruby libdbd-sqlite3-ruby sqlite3 libsqlite3-dev libsqlite3-ruby libxml-ruby libxml2-dev

    Thanks :adam
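
    To anchor what the question is asking (these are standard apt facts, not guesses): build-essential is not a command but a meta-package, while build-dep is an apt-get subcommand:

        sudo apt-get install build-essential   # meta-package pulling in the C toolchain: gcc, g++, make, libc6-dev, dpkg-dev
        sudo apt-get build-dep ruby1.9         # installs whatever the ruby1.9 source package declares as its Build-Depends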


  • How do I get my Apache virtual hosts working?

    - by elliot100
    I'm trying to set up virtual hosts for local development and can't seem to get it working. I have this in my httpd.conf:

        NameVirtualHost *

        <VirtualHost *>
            ServerName localhost
            DocumentRoot C:/Users/Elliot/dev/UniServer/www
        </VirtualHost>

        <VirtualHost *>
            ServerName drupal.dev
            DocumentRoot C:/Users/Elliot/dev/UniServer/www/drupal.dev/httpdocs
        </VirtualHost>

    and this in C:\Windows\System32\drivers\etc\hosts:

        127.0.0.1 localhost
        127.0.0.1 drupal.dev

    http://localhost resolves OK, http://drupal.dev/ does not. Any ideas welcomed...
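
    A hedged diagnostic pass rather than a definitive fix - the configuration above is close to the canonical example, so seeing how Apache actually parsed it usually localizes the problem:

        httpd -S          # (or apachectl -S) dump the parsed virtual-host table
        ping drupal.dev   # confirm the hosts-file entry really resolves to 127.0.0.1

    If the parse looks off, the usual suspect is a port mismatch; being explicit with "NameVirtualHost *:80" and "<VirtualHost *:80>" on both vhosts is a common remedy.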


  • What does the "build-essential" Terminal command mean & do in Linux based operating systems like Ubu

    - by Adam Siddhi
    Hi. I am researching how to install Ruby 1.9.1 in Xubuntu 10.04, and I came across the command build-essential multiple times. Sometimes it is followed by packages, and sometimes it is both preceded and followed by packages. The two examples I am looking at are:

        sudo apt-get install build-essential zlib1g zlib1g-dev zlibc libruby1.9 libxml2 libxml2-dev libxslt-dev

    and

        sudo apt-get install ruby irb ri rdoc ruby1.8-dev libzlib-ruby libyaml-ruby libreadline-ruby libncurses-ruby libcurses-ruby libruby libruby-extras libfcgi-ruby1.8 build-essential libopenssl-ruby libdbm-ruby libdbi-ruby libdbd-sqlite3-ruby sqlite3 libsqlite3-dev libsqlite3-ruby libxml-ruby libxml2-dev

    Thanks :adam


  • How to manipulate the shell output in PHP

    - by Mirage
    I am trying to write a PHP script that does some shell functions like reporting, so I am starting with a disk usage report. I want it in the following format, and nothing else:

        drive path    total-size    free-space

    My script is:

        $output = shell_exec('df -h -T');
        echo "<pre>$output</pre>";

    and its output looks like this:

        Filesystem    Type      Size  Used Avail Use% Mounted on
        /dev/sda6     ext3       92G  6.6G   81G   8% /
        none          devtmpfs  3.9G  216K  3.9G   1% /dev
        none          tmpfs     4.0G  176K  4.0G   1% /dev/shm
        none          tmpfs     4.0G  1.1M  4.0G   1% /var/run
        none          tmpfs     4.0G     0  4.0G   0% /var/lock
        none          tmpfs     4.0G     0  4.0G   0% /lib/init/rw
        /dev/sdb1     ext3      459G  232G  204G  54% /media/Server
        /dev/sdb2     fuseblk   466G  254G  212G  55% /media/BACKUPS
        /dev/sda5     fuseblk   738G  243G  495G  33% /media/virtual_machines

    How can I convert that output into my formatted output?
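
    A minimal sketch of one way to get there, letting the shell format the report before PHP prints it (-P keeps each filesystem on a single line so the column positions are reliable):

        df -h -P | awk 'NR > 1 { printf "%-35s %-10s %-10s\n", $6, $2, $4 }'

    Passing that whole pipeline to shell_exec() and echoing the result inside <pre> tags yields exactly the three columns asked for: drive path, total size, free space.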


  • Random password variable disappears

    - by snaken
    Hi, I'm using the following to generate a random password in a shell script:

        DBPASS=</dev/urandom tr -dc A-Za-z0-9 | (head -c $1 > /dev/null 2>&1 || head -c 8)

    When I run this in a file on its own like this:

        #!/bin/sh
        DBPASS=</dev/urandom tr -dc A-Za-z0-9 | (head -c $1 > /dev/null 2>&1 || head -c 8)
        echo $DBPASS

    a password is echoed. When I incorporate it into a larger script, though, the variable never seems to get created, so for example this doesn't work:

        DBPASS=</dev/urandom tr -dc A-Za-z0-9 | (head -c $1 > /dev/null 2>&1 || head -c 8)
        sed -i s/oldpass/$DBPASS/ mysql_connect.php

    If I manually set the variable, everything is fine. Can anyone see why?
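
    What is going on here is ordinary shell semantics: "VAR=value cmd" sets VAR only in cmd's environment, so DBPASS is never set in the script itself. In the standalone test, the "echoed" password was actually head's own output reaching the terminal, while echo printed an empty variable. Command substitution is the fix (a sketch that drops the $1 fallback for clarity):

        DBPASS=$(tr -dc A-Za-z0-9 < /dev/urandom | head -c 8)   # capture 8 random characters
        sed -i "s/oldpass/$DBPASS/" mysql_connect.php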


  • XenServer Converting HVM to Paravirtualised

    - by Karl Kloppenborg
    Recently I have been tasked with the daunting process of converting a set of HVM-enabled VMs (running on Citrix XenServer 5.6.0) into PV (paravirtualised) containers. The constraints of the project are:

        - The operating system must be functionally identical after the migration.
        - Minimal modification to the operating system (with the exception of the kernel / drive mapping).

    I am, however, allowed to change the bootloader (i.e. grub) in whatever way I see fit. Here are the steps I took so far; they are CentOS 5.5 specific:

    1. yum install kernel-xen - this installed 2.6.18-194.32.1.el5xen.

    2. Edited /boot/grub/menu.lst to match:

        title CentOS (2.6.18-194.32.1.el5xen)
            root (hd0,0)
            kernel /vmlinuz-2.6.18-194.32.1.el5xen ro root=/dev/VolGroup00/LogVol00 console=xvc0
            initrd /initrd-2.6.18-194.32.1.el5xen.img

    3. Changed the XenServer parameters to match:

        xe vm-param-set uuid=[vm uuid] PV-bootloader-args="--kernel /vmlinuz-2.6.18-194.32.1.el5xen --ramdisk /initrd-2.6.18-194.32.1.el5xen.img"
        xe vm-param-set uuid=[vm uuid] HVM-boot-policy=""
        xe vm-param-set uuid=[vm uuid] PV-bootloader=pygrub
        xe vbd-param-set uuid=[Virtual Block Device/VBD uuid] bootable=true

    Some things to note: I am running a VolGroup LVM ;) Anyway, after all these steps (which aren't much!) I boot the VM and it boots the initial kernel just fine, but then I am presented with this error:

        device-mapper: dm-raid45: initialized v0.2594l
        Waiting for driver initialization.
        Scanning and configuring dmraid supported devices
        Scanning logical volumes
        Reading all physical volumes. This may take a while...
        Activating logical volumes
        Volume group "VolGroup00" not found
        Creating root device.
        Mounting root filesystem.
        mount: could not find filesystem '/dev/root'
        Setting up other filesystems.
        Setting up new root fs
        setuproot: moving /dev failed: No such file or directory
        no fstab.sys, mounting internal defaults
        setuproot: error mounting /proc: No such file or directory
        setuproot: error mounting /sys: No such file or directory
        Switching to new root and running init.
        unmounting old /dev
        unmounting old /proc
        unmounting old /sys
        switchroot: mount failed: No such file or directory

    My hunch is that it cannot detect / because switching from HVM mode to PV does something not so obvious: when you attach storage to an HVM guest it is presented as /dev/hda, but in PV mode it presents itself as /dev/xvda. Could this be the answer, and if so, how the heck do I implement it?

    Update: I have gotten a bit further in my quest, as it now detects the LVMs. To do this I had to rebuild the initrd image for the xen kernel:

        mkinitrd -v --builtin=xen_vbd --preload=xenblk initrd-2.6.18-194.32.1.el5xen.img 2.6.18-194.32.1.el5xen

    Now when I boot I get this:

        Loading dm-raid45.ko module
        device-mapper: dm-raid45: initialized v0.2594l
        Scanning and configuring dmraid supported devices
        Scanning logical volumes
        Reading all physical volumes. This may take a while...
        Found volume group "VolGroup00" using metadata type lvm2
        Activating logical volumes
        3 logical volume(s) in volume group "VolGroup00" now active
        Creating root device.
        Mounting root filesystem.
        mount: error mounting /dev/root on /sysroot as ext3: Device or resource busy
        Setting up other filesystems.
        Setting up new root fs
        setuproot: moving /dev failed: No such file or directory
        no fstab.sys, mounting internal defaults
        setuproot: error mounting /proc: No such file or directory
        setuproot: error mounting /sys: No such file or directory
        Switching to new root and running init.
        unmounting old /dev
        unmounting old /proc
        unmounting old /sys
        switchroot: mount failed: No such file or directory
        Kernel panic - not syncing: Attempted to kill init!


  • Linux software RAID6: rebuild slow

    - by Ole Tange
    I am trying to find the bottleneck in the rebuilding of a software RAID6.

        ## Pause rebuilding when measuring raw I/O performance
        # echo 1 > /proc/sys/dev/raid/speed_limit_min
        # echo 1 > /proc/sys/dev/raid/speed_limit_max

        ## Drop caches so that they do not interfere with measuring
        # sync ; echo 3 | tee /proc/sys/vm/drop_caches >/dev/null

        # time parallel -j0 "dd if=/dev/{} bs=256k count=4000 | cat >/dev/null" ::: sdbd sdbc sdbf sdbm sdbl sdbk sdbe sdbj sdbh sdbg
        4000+0 records in
        4000+0 records out
        1048576000 bytes (1.0 GB) copied, 7.30336 s, 144 MB/s
        [... similar for each disk ...]

        # time parallel -j0 "dd if=/dev/{} skip=15000000 bs=256k count=4000 | cat >/dev/null" ::: sdbd sdbc sdbf sdbm sdbl sdbk sdbe sdbj sdbh sdbg
        4000+0 records in
        4000+0 records out
        1048576000 bytes (1.0 GB) copied, 12.7991 s, 81.9 MB/s
        [... similar for each disk ...]

    So we can read sequentially at 140 MB/s on the outer tracks and 82 MB/s on the inner tracks of all the drives simultaneously. Sequential write performance is similar. This would lead me to expect a rebuild speed of 82 MB/s or more.

        # echo 800000 > /proc/sys/dev/raid/speed_limit_min
        # echo 800000 > /proc/sys/dev/raid/speed_limit_max
        # cat /proc/mdstat
        md2 : active raid6 sdbd[10](S) sdbc[9] sdbf[0] sdbm[8] sdbl[7] sdbk[6] sdbe[11] sdbj[4] sdbi[3](F) sdbh[2] sdbg[1]
              27349121408 blocks super 1.2 level 6, 128k chunk, algorithm 2 [9/8] [UUU_UUUUU]
              [=========>...........] recovery = 47.3% (1849905884/3907017344) finish=855.9min speed=40054K/sec

    But we only get 40 MB/s, and often this drops to 30 MB/s.

        # iostat -dkx 1
        Device: rrqm/s  wrqm/s    r/s    w/s   rkB/s    wkB/s avgrq-sz avgqu-sz await svctm %util
        sdbc      0.00 8023.00   0.00 329.00    0.00 33408.00   203.09     0.70  2.12  1.06 34.80
        sdbd      0.00    0.00   0.00   0.00    0.00     0.00     0.00     0.00  0.00  0.00  0.00
        sdbe     13.00    0.00 8334.00  0.00 33388.00     0.00     8.01     0.65  0.08  0.06 47.20
        sdbf      0.00    0.00 8348.00  0.00 33388.00     0.00     8.00     0.58  0.07  0.06 48.00
        sdbg     16.00    0.00 8331.00  0.00 33388.00     0.00     8.02     0.71  0.09  0.06 48.80
        sdbh    961.00    0.00 8314.00  0.00 37100.00     0.00     8.92     0.93  0.11  0.07 54.80
        sdbj     70.00    0.00 8276.00  0.00 33384.00     0.00     8.07     0.78  0.10  0.06 48.40
        sdbk    124.00    0.00 8221.00  0.00 33380.00     0.00     8.12     0.88  0.11  0.06 47.20
        sdbl     83.00    0.00 8262.00  0.00 33380.00     0.00     8.08     0.96  0.12  0.06 47.60
        sdbm      0.00    0.00 8344.00  0.00 33376.00     0.00     8.00     0.56  0.07  0.06 47.60

    iostat says the disks are not 100% busy (only 40-50%). This fits the hypothesis that the maximum is around 80 MB/s. Since this is software RAID, the limiting factor could be CPU. top says:

        PID   USER PR NI VIRT RES SHR S %CPU %MEM     TIME+ COMMAND
        38520 root 20  0    0   0   0 R   64  0.0   2947:50 md2_raid6
        6117  root 20  0    0   0   0 D   53  0.0 473:25.96 md2_resync

    So md2_raid6 and md2_resync are clearly busy, taking up 64% and 53% of a CPU respectively, but not near 100%. The chunk size (128k) of the RAID was chosen after measuring which chunk size gave the smallest CPU penalty. If this speed is normal: what is the limiting factor? Can I measure it? If this speed is not normal: how can I find the limiting factor? Can I change it?
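
    A hedged knob to try rather than an answer: the md stripe cache is a well-known RAID5/6 rebuild bottleneck, and it is both measurable and tunable per array at runtime, which makes it a good candidate limiting factor to test:

        echo 8192 > /sys/block/md2/md/stripe_cache_size   # default is 256; memory cost is size x 4K x number of drives
        watch -n5 cat /proc/mdstat                        # observe whether the rebuild speed moves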

