Search Results

Search found 32520 results on 1301 pages for 'local machine'.


  • Can't open shared drive after disconnecting VPN

    - by Matt McMinn
    I use a VPN to connect to my office network. On my local network I have another WinXP machine that shares a printer and a few shared folders. While I'm connected to my work VPN, I can access the shared printer and folders on the other machine just fine, and vice versa. Once I disconnect the VPN, I can't access the local machine any more, and the other machine can't access my machine. The network itself seems ok - I can ping the other machine, get to the internet, and get on to a web server shared by the other machine, but I can't get to the shared folders or printer. If I reconnect to the VPN, my access is restored. I'm guessing this is some sort of authentication thing, but I don't know what. Any ideas? Update This problem is bothering me again, so here's an update. Depending on when I first access the WinXP machine, I either have this problem, or the opposite problem. After a reboot, if I (for example) print, then connect to the VPN, I can't access the machine while on the VPN. If after a reboot I connect to the VPN, then print, I can't access the machine off the VPN. In both cases, if I enable/disable the VPN again, I can access the machine again. Thanks

    Read the article

  • Is it possible to create a live Linux ISO containing a Windows XP virtual machine?

    - by mark
    I would like to have a Linux live system that contains a Windows xp virtual machine. This would be run from a bootable USB flash drive. My attempts so far have been unsuccessful. I created a Lubuntu 12.04 virtual machine with VMware. I updated and configured it to my needs, and installed Virtualbox. I then created a Windows xp vm with Virtualbox in the Lubuntu vm. I tested everything and everything worked, including USB devices. I installed Remastersys in the Lubuntu vm, copied the xp vm folder to the /etc/skel folder then created the custom iso with remastersys. I burned the iso and tested it on a laptop. It worked flawlessly. All programs and wireless networking worked. My problem was the xp vm. Virtualbox started fine but would not run the vm. I have the following error: Result Code: NS_ERROR_FAILURE (0x80004005) Component: VirtualBox Interface: IVirtualBox {c28be65f-1a8f-43b4-81f1-eb60cb516e66}. I ran remastersys again changing the permissions on the skel folder to R W for everyone. I also logged into Lubuntu as root and ran remastersys again. Each iso I created worked fine but would not start the xp vm inside. The last attempt virtualbox gave me an access error stating it can not access the virtual disk. Is what I want to do possible? In theory I don't see why it would not work. Is it a permissions issue? Should I create the iso then add the xp vm after by editing the iso by hand? Using a vm and not real hardware as a build machine a problem? Any ideas? keep any responses in laymens terms. I am still a Linux novice.

    Read the article

  • Autossh startup on Ubuntu 10.04 - fails after powering off

    - by grant
    I'm using upstart to keep a reverse ssh tunnel alive using auto ssh similar to Using Upstart to Manage AutoSSH Reverse Tunnel. This works fine, except after a manual power down I can no longer connect to the machine through the "central server" using the tunnel. I receive "ssh_exchange_identification: Connection closed by remote host". The autossh process is running on the client. I can connect again after re-starting networking. I'm trying to figure out why this is failing consistently after a manual shutdown. Is it possible that I need to do some cleanup on startup that would allow the tunnel to work in this situation, or are there some other debugging/troubleshooting steps I can take to determine the problem? Machine A is the client machine, using autossh. This machine sits behind a firewall and uses the following command in upstart to create an ssh tunnel: /usr/bin/autossh -fN -i /keyfile -o StrictHostKeyChecking=no -R 20098:localhost:22 user@centralserver Machine B we'll call the "central server", which sits in the cloud and is the host. This machine is "centralserver" in the command above. When Machine A is hard powered off, and back on, I cannot connect to it by SSH'ing from my machine (C) to Machine B in the cloud, then using the following command to get to Machine A: ssh -p 2098 user@localhost Again, after a reboot of the client (A), this works fine. It is only after a hard power down that the problem occurs. There are autossh processes that are running on the client machine (A) after powering down and back up, but they just don't seem to doing their job.
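
    One quick check, as a hedged sketch in Python (port 20098 is taken from the autossh command above; adjust it if your forwarded port differs): run this on the central server after the client powers back up to see whether the reverse-tunnel port is actually listening.

        # Hedged sketch: run on the central server (machine B). If the port is
        # closed, autossh on machine A never rebuilt the -R forwarding after the
        # hard power-off, which would explain the connection being refused.
        import socket

        def tunnel_is_up(port=20098, host="127.0.0.1", timeout=5):
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(timeout)
            try:
                s.connect((host, port))
                return True
            except socket.error:
                return False
            finally:
                s.close()

        if __name__ == "__main__":
            print("tunnel port open" if tunnel_is_up() else "tunnel port closed")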

    Read the article

  • Are two periods allowed in the local-part of an email address?

    - by Mike B
    A third-party email gateway relay is refusing to process a message for an email address we're sending to. The address is in the format of [email protected] (note the two periods). Is this allowed by RFC guidelines? RFC 2822 seems to object to this in section 3.4.1: The locally interpreted string is either a quoted-string or a dot-atom. If the string can be represented as a dot-atom (that is, it contains no characters other than atext characters or "." surrounded by atext characters), then the dot-atom form SHOULD be used and the quoted-string form SHOULD NOT be used. Comments and folding white space SHOULD NOT be used around the "@" in the addr-spec. Furthermore, in that same section, it references this: addr-spec = local-part "@" domain local-part = dot-atom / quoted-string / obs-local-part I interpret this to mean that the localpart can have content separated by dots but there cannot be two successive dots, and it cannot start or end with a dot. That being said, I'm not familiar with dot-atom syntax so maybe I'm mistaken here. Can someone please confirm and explain?
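
    To make the dot-atom rule concrete, here is a rough sketch in Python of dot-atom-text from RFC 2822 (the atext character class is approximated); it shows that consecutive dots, or a leading or trailing dot, fall outside the dot-atom form, so an address like first..last could only be written in the quoted-string form.

        # Hedged sketch: approximate dot-atom-text = 1*atext *("." 1*atext)
        # from RFC 2822. Dots may only separate runs of atext characters.
        import re

        ATEXT = r"[A-Za-z0-9!#$%&'*+/=?^_`{|}~-]"
        DOT_ATOM = re.compile(r"^%s+(\.%s+)*$" % (ATEXT, ATEXT))

        for local_part in ["first.last", "first..last", ".first", "first."]:
            print("%-12s %s" % (local_part, bool(DOT_ATOM.match(local_part))))
        # Only first.last matches; the others are not valid dot-atoms and would
        # need the quoted-string form instead.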

    Read the article

  • Print over the internet from a remote Linux session to the printers shared locally on a Windows 7 machine?

    - by obeliksz
    I'm trying to use a Linux virtual machine as a file server for Windows clients. I have successfully implemented remote file sharing (Samba + SSH), and I am able to print locally with a little program that I made for this purpose (jetforms style)... but I would like to hear about a somewhat more direct approach. How can I attach the printers to the server, so that I can, for example, open a file in the remote session and see my local printers (on the machine from which I established the remote session) in the print dialog box? I guess there should be some kind of PuTTY tunneling, but I don't know how. I have a Windows 7 machine locally; there is a CentOS 6 VM over the internet with SSH, CUPS, and Samba. I have found a question that asks the opposite: connecting from Linux to a Windows-based server, but that Windows machine is on a domain, while mine is just a simple Windows workstation behind NAT with a dynamic IP. That question is: Print from Linux to Windows networked printer.

    Read the article

  • My server is taking too long to respond when I connect to it outside the local network [closed]

    - by Buzu
    I keep my local server online most of the time because it is easy for clients to visit a URL and see how their project is coming along; they can see updates in real time. However, I got a message from one of my clients saying that the server was not responding. In my hosts file I have the server's address pointed at its local IP, because of a problem with FTP. Due to this setup, I had not noticed that the server was not accessible from outside the local network. The address is http://imbuzu.dyndns.info. SSH works fine, I can connect from my Windows machine to the server, but HTTP does not; the server takes too long to respond. Looking at the logs, I see that the last incoming connection to the server from outside the network is this: 77.242.153.180 - - [04/Dec/2012:12:11:01 -0800] "\xce\x89\x8d\x85b\ro" 400 317 "-" "-" I'm going to restart the server, but I doubt that will have any effect. --EDIT-- I restarted the server, and it did not help. Also, I pinged the server and it seems to be resolving correctly.

    Read the article

  • How to Increase the VMWare Boot Screen Delay

    - by Trevor Bekolay
    If you’ve wanted to try out a bootable CD or USB flash drive in a virtual machine environment, you’ve probably noticed that VMWare’s offerings make it difficult to change the boot device. We’ll show you how to change these options. You can do this either for one boot, or permanently for a particular virtual machine. Even experienced users of VMWare Player or Workstation may not recognize the screen above – it’s the virtual machine’s BIOS, which in most cases flashes by in the blink of an eye. If you want to boot up the virtual machine with a CD or USB key instead of the hard drive, then you’ll need more than an eye’s-blink to press Escape and bring up the Boot Menu. Fortunately, there is a way to introduce a boot delay that isn’t exposed in VMWare’s graphical interface – you have to edit the virtual machine’s settings file (a .vmx file) manually. Editing the Virtual Machine’s .vmx Find the .vmx file that contains the settings for your virtual machine. You chose a location for this when you created the virtual machine – in Windows, the default location is a folder called My Virtual Machines in your My Documents folder. In VMWare Workstation, the location of the .vmx file is listed on the virtual machine’s tab. If in doubt, search your hard drive for .vmx files. If you don’t want to use Windows default search, an awesome utility that locates files instantly is Everything. Open the .vmx file with any text editor. Somewhere in this file, enter in the following line… save the file, then close out of the text editor: bios.bootdelay = 20000 This will introduce a 20 second delay when the virtual machine loads up, giving you plenty of time to press the Escape button and access the boot menu. The number in this line is just a value in milliseconds, so for a five second boot delay, enter 5000, and so on. Change Boot Options Temporarily Now, when you boot up your virtual machine, you’ll have plenty of time to enter one of the keystrokes listed at the bottom of the BIOS screen on boot-up. Press Escape to bring up the Boot Menu. This allows you to select a different device to boot from – like a CD drive. Your selection will be forgotten the next time you boot up this virtual machine. Change Boot Options Permanently When the BIOS screen comes up, press F2 to enter the BIOS Setup menu. Switch to the Boot tab, and change the ordering of the items by pressing the “+” key to move items up on the list, and the “-” key to move items down the list. We’ve switched the order so that the CD-ROM Drive boots first. Once you make this change permanent, you may want to re-edit the .vmx file to remove the boot delay. Boot from a USB Flash Drive One thing that is noticeably missing from the list of boot options is a USB device. VMWare’s BIOS just does not allow this, but we can get around that limitation using the PLoP Boot Manager that we’ve previously written about. And as a bonus, since everything is virtual anyway, there’s no need to actually burn PLoP to a CD. Open the settings for the virtual machine you want to boot with a USB drive. Click on Add… at the bottom of the settings screen, and select CD/DVD Drive. Click Next. Click the Use ISO Image radio button, and click Next. Browse to find plpbt.iso or plpbtnoemul.iso from the PLoP zip file. Ensure that Connect at power on is checked, and then click Finish. Click OK on the main Virtual Machine Settings page. Now, if you use the steps above to boot using that CD/DVD drive, PLoP will load, allowing you to boot from a USB drive! 
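
    As a side note, that one-line edit can also be scripted; the following is a hedged sketch in Python, where the .vmx path is a made-up example that you would replace with your own virtual machine's file.

        # Hedged sketch: append the boot delay setting to a .vmx file.
        # The path below is an example only - point it at your own VM's .vmx.
        vmx_path = r"C:\Users\me\Documents\My Virtual Machines\TestVM\TestVM.vmx"
        delay_ms = 20000  # 20 seconds, as in the article; use 5000 for five seconds

        with open(vmx_path, "a") as vmx:
            vmx.write("\nbios.bootdelay = %d\n" % delay_ms)
        print("Added a %d ms boot delay to %s" % (delay_ms, vmx_path))
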
Conclusion We’re big fans of VMWare Player and Workstation, as they let us try out a ton of geeky things without worrying about harming our systems. By introducing a boot delay, we can add bootable CDs and USB drives to the list of geeky things we can try out. Download PLoP Boot Manager

    Read the article

  • WEBrick::HTTPStatus::LengthRequired error when accessing create method in controller

    - by Chris Bisignani
    I have a very simple controller set up: class LibrariesController < ApplicationController ... def create @user.libraries << Library.new(params) @user.save render :json => "success!" end ... end Basically, whenever I try to access the create method of LibrariesController using HTTParty.post I get a WEBrick::HTTPStatus::LengthRequired error on the server. The method is not even being accessed! Here is the stack trace (this is the full output server side - notice that the controller isn't even being accessed): [2010-04-16 00:35:39] ERROR WEBrick::HTTPStatus::LengthRequired [2010-04-16 00:35:39] ERROR HTTPRequest#fixup: WEBrick::HTTPStatus::LengthRequired occured. [2010-04-16 00:35:39] ERROR NoMethodError: private method `gsub!' called for #<Class:0x2362160> /usr/local/Cellar/ruby_187/1.8.7-p249/lib/ruby/1.8/webrick/htmlutils.rb:17:in `escape' /usr/local/Cellar/ruby_187/1.8.7-p249/lib/ruby/1.8/webrick/httpresponse.rb:232:in `set_error' /usr/local/Cellar/ruby_187/1.8.7-p249/lib/ruby/1.8/webrick/httpserver.rb:70:in `run' /usr/local/Cellar/ruby_187/1.8.7-p249/lib/ruby/1.8/webrick/server.rb:173:in `start_thread' /usr/local/Cellar/ruby_187/1.8.7-p249/lib/ruby/1.8/webrick/server.rb:162:in `start' /usr/local/Cellar/ruby_187/1.8.7-p249/lib/ruby/1.8/webrick/server.rb:162:in `start_thread' /usr/local/Cellar/ruby_187/1.8.7-p249/lib/ruby/1.8/webrick/server.rb:95:in `start' /usr/local/Cellar/ruby_187/1.8.7-p249/lib/ruby/1.8/webrick/server.rb:92:in `each' /usr/local/Cellar/ruby_187/1.8.7-p249/lib/ruby/1.8/webrick/server.rb:92:in `start' /usr/local/Cellar/ruby_187/1.8.7-p249/lib/ruby/1.8/webrick/server.rb:23:in `start' /usr/local/Cellar/ruby_187/1.8.7-p249/lib/ruby/1.8/webrick/server.rb:82:in `start' /usr/local/Cellar/ruby_187/1.8.7-p249/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/handler/webrick.rb:14:in `run' /usr/local/Cellar/ruby_187/1.8.7-p249/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/commands/server.rb:111 /usr/local/Cellar/ruby_187/1.8.7-p249/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' /usr/local/Cellar/ruby_187/1.8.7-p249/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' script/server:3 I'm running rails 2.3.5 and ruby 1.8.7. Any help would be greatly appreciated. Let me know if you need more details.

    Read the article

  • Help installing mrcwa and solving problems with f2py on Ubuntu 14.04 LTS

    - by user288160
    I am sorry if this is the wrong section but I am starting to get desperate, please someone help me... I need to install the program mrcwa-20080820 (sourceforge.net/projects/mrcwa/) because a summer project that I am involved. I need to use it together with anaconda (store.continuum.io/cshop/anaconda/), I already installed Anaconda and apparently it is working. When I type: conda --version I got the expected answer. conda 3.5.2 If I tried to import numpy or scipy with python or simple type f2py there are no errors. So far so good. But when I tried to install this program sudo python setup.py install I got these errors: running install running build sh: 1: f2py: not found cp: cannot stat ‘mrcwaf.so’: No such file or directory running build_py running install_lib running install_egg_info Removing /usr/local/lib/python2.7/dist-packages/mrcwa-20080820.egg-info Writing /usr/local/lib/python2.7/dist-packages/mrcwa-20080820.egg-info Obs: I am trying to use intel fortran 64-bits and Ubuntu 14.04 LTS. So I was checking f2py and tried to execute the program hello world f2py -c -m hello hello.f from here: cens.ioc.ee/projects/f2py2e/index.html#usage and I had some problems too: running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src build_src building extension "hello" sources f2py options: [] f2py:> /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/hellomodule.c creating /tmp/tmpf8P4Y3/src.linux-x86_64-2.7 Reading fortran codes... Reading file 'hello.f' (format:fix,strict) Post-processing... Block: hello Block: foo Post-processing (stage 2)... Building modules... Building module "hello"... Constructing wrapper function "foo"... foo(a) Wrote C/API module "hello" to file "/tmp/tmpf8P4Y3/src.linux-x86_64-2.7 /hellomodule.c" adding '/tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.c' to sources. adding '/tmp/tmpf8P4Y3/src.linux-x86_64-2.7' to include_dirs. 
copying /home/felipe/.local/lib/python2.7/site-packages/numpy/f2py/src/fortranobject.c -> /tmp/tmpf8P4Y3/src.linux-x86_64-2.7 copying /home/felipe/.local/lib/python2.7/site-packages/numpy/f2py/src/fortranobject.h -> /tmp/tmpf8P4Y3/src.linux-x86_64-2.7 build_src: building npy-pkg config files running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext customize Gnu95FCompiler Could not locate executable gfortran Could not locate executable f95 customize IntelFCompiler Found executable /opt/intel/composer_xe_2013_sp1.3.174/bin/intel64/ifort customize LaheyFCompiler Could not locate executable lf95 customize PGroupFCompiler Could not locate executable pgfortran customize AbsoftFCompiler Could not locate executable f90 Could not locate executable f77 customize NAGFCompiler customize VastFCompiler customize CompaqFCompiler Could not locate executable fort customize IntelItaniumFCompiler customize IntelEM64TFCompiler customize IntelEM64TFCompiler customize IntelEM64TFCompiler using build_ext building 'hello' extension compiling C sources C compiler: gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC creating /tmp/tmpf8P4Y3/tmp creating /tmp/tmpf8P4Y3/tmp/tmpf8P4Y3 creating /tmp/tmpf8P4Y3/tmp/tmpf8P4Y3/src.linux-x86_64-2.7 compile options: '-I/tmp/tmpf8P4Y3/src.linux-x86_64-2.7 -I/home/felipe/.local/lib/python2.7/site-packages/numpy/core/include -I/home/felipe/anaconda/include/python2.7 -c' gcc: /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/hellomodule.c In file included from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1761:0, from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:17, from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/arrayobject.h:4, from /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.h:13, from /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/hellomodule.c:17: /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp] #warning "Using deprecated NumPy API, disable it by " \ ^ gcc: /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.c In file included from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1761:0, from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:17, from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/arrayobject.h:4, from /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.h:13, from /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.c:2: /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp] #warning "Using deprecated NumPy API, disable it by " \ ^ compiling Fortran sources Fortran f77 compiler: /opt/intel/composer_xe_2013_sp1.3.174/bin/intel64/ifort -FI -fPIC -xhost -openmp -fp-model strict Fortran f90 compiler: /opt/intel/composer_xe_2013_sp1.3.174/bin/intel64/ifort -FR -fPIC -xhost -openmp -fp-model strict Fortran fix compiler: /opt/intel/composer_xe_2013_sp1.3.174/bin/intel64/ifort -FI -fPIC -xhost -openmp -fp-model strict compile options: '-I/tmp/tmpf8P4Y3/src.linux-x86_64-2.7 -I/home/felipe/.local /lib/python2.7/site-packages/numpy/core/include 
-I/home/felipe/anaconda/include/python2.7 -c' ifort:f77: hello.f /opt/intel/composer_xe_2013_sp1.3.174/bin/intel64/ifort -shared -shared -nofor_main /tmp/tmpf8P4Y3/tmp/tmpf8P4Y3/src.linux-x86_64-2.7/hellomodule.o /tmp/tmpf8P4Y3 /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.o /tmp/tmpf8P4Y3/hello.o -L/home/felipe /anaconda/lib -lpython2.7 -o ./hello.so Removing build directory /tmp/tmpf8P4Y3 Please help me I am new in ubuntu and python. I really need this program, my advisor is waiting an answer. Thank you very much, Felipe Oliveira.

    Read the article

  • What do I need to do so that mod_wsgi will find libmysqlclient.16.dylib? (OS X 10.7 with Apache mod_wsgi)

    - by compound eye
    I am trying to run django on osx 10.7 (lion) with apache mod_wsgi and virtualenv. My site works if I use the django testing server: (baseline)otter:hello mathew$ python manage.py runserver but it doesn't work when I run apache. The core of the error seems to be Library not loaded: libmysqlclient.16.dylib I think its to do with the path apache is using to locate libmysqlclient.16.dylib when I run otool in the lib directory it looks good otter:lib mathew$ pwd /usr/local/mysql/lib otter:lib mathew$ otool -L libmysqlclient.16.dylib libmysqlclient.16.dylib: libmysqlclient.16.dylib (compatibility version 16.0.0, current version 16.0.0) /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.0.1) but from outside it can't find it otter:lib mathew$ cd / otter:/ mathew$ otool -L libmysqlclient.16.dylib otool: can't open file: libmysqlclient.16.dylib (No such file or directory) if i manually set DYLD_LIBRARY_PATH otool works otter:lib mathew$ DYLD_LIBRARY_PATH=/usr/local/mysql/lib otter:lib mathew$ otool -L libmysqlclient.16.dylib libmysqlclient.16.dylib: libmysqlclient.16.dylib (compatibility version 16.0.0, current version 16.0.0) /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.0.1) When I run the django testing server, my .bash_profile sets up the virtualenv and the path to the mysql dynamic library export DYLD_LIBRARY_PATH=/usr/local/mysql/lib/:$DYLD_LIBRARY_PATH export PATH When i run apache it finds my virtualenv paths, but it doesn't seem to find the dynamic library path. I tried adding this path to /usr/sbin/envvars DYLD_LIBRARY_PATH="/usr/lib:/usr/local/mysql/lib:$DYLD_LIBRARY_PATH" export DYLD_LIBRARY_PATH and to /private/etc/paths.d/libmysql /usr/local/mysql/lib then restarted the machine but that has not changed the error message. Error loading MySQLdb module: dlopen(/usr/local/python_virtualenv/baseline/lib/python2.7/site-packages/_mysql.so, 2): Library not loaded: libmysqlclient.16.dylib I don't think is a permissions issue: -rwxr-xr-x 1 root wheel 3787328 4 Dec 2010 libmysqlclient.16.dylib drwxr-xr-x 39 root wheel 1394 18 Nov 21:07 / drwxr-xr-x@ 15 root wheel 510 24 Oct 22:10 /usr drwxrwxr-x 20 root admin 680 2 Nov 20:22 /usr/local drwxr-xr-x 20 mathew admin 680 9 Nov 21:58 /usr/local/python_virtualenv drwxr-xr-x 6 mathew admin 204 2 Nov 21:36 /usr/local/python_virtualenv/baseline drwxr-xr-x 4 mathew admin 136 2 Nov 21:26 /usr/local/python_virtualenv/baseline/lib drwxr-xr-x 52 mathew admin 1768 2 Nov 21:26 /usr/local/python_virtualenv/baseline/lib/python2.7 drwxr-xr-x 18 mathew admin 612 4 Nov 21:20 /usr/local/python_virtualenv/baseline/lib/python2.7/site-packages -rwxr-xr-x 1 mathew admin 66076 2 Nov 21:18 /usr/local/python_virtualenv/baseline/lib/python2.7/site-packages/_mysql.so What do i need to do so that mod_wsgi will find libmysqlclient.16.dylib? apache and mysql are both 64 bit: otter:lib mathew$ file /usr/sbin/httpd /usr/sbin/httpd: Mach-O universal binary with 2 architectures /usr/sbin/httpd (for architecture x86_64): Mach-O 64-bit executable x86_64 /usr/sbin/httpd (for architecture i386): Mach-O executable i386 otter:lib mathew$ otter:lib mathew$ file /usr/local/mysql/lib/libmysqlclient.16.dylib /usr/local/mysql/lib/libmysqlclient.16.dylib: Mach-O 64-bit dynamically linked shared library x86_64
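
    One way to see what Apache will see, without relying on DYLD_LIBRARY_PATH from a login shell, is to try loading the library from a bare Python process; a hedged diagnostic sketch (not a fix):

        # Hedged sketch: check whether the MySQL client library resolves by its
        # bare install name (what _mysql.so asks for) versus its absolute path.
        # Apache does not read ~/.bash_profile, so if only the absolute path
        # works here, mod_wsgi will keep failing until the install name or the
        # dyld search path is fixed for the Apache environment.
        import ctypes

        for name in ("libmysqlclient.16.dylib",
                     "/usr/local/mysql/lib/libmysqlclient.16.dylib"):
            try:
                ctypes.CDLL(name)
                print("loaded: %s" % name)
            except OSError as exc:
                print("failed: %s (%s)" % (name, exc))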

    Read the article

  • ImageMagick on Mac OS X Snow Leopard: is there any way to get it to compile and run?

    - by ?????
    It seems that I have more trouble getting standard Unix things to run on Snow Leopard than any other platform--including Windows cygwin For the past couple of days, I've been trying to get ImageMagick to run on Snow Leopard. The most obvious way, Mac Ports, fails: tppllc-Mac-Pro:ImageMagick-sl swirsky$ sudo port install imagemagick ---> Computing dependencies for p5-locale-gettext ---> Configuring p5-locale-gettext Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_perl_p5-locale-gettext/work/gettext-1.05" && /opt/local/bin/perl Makefile.PL INSTALLDIRS=vendor " returned error 2 Command output: checking for gettext... no checking for gettext in -I/opt/local/include -arch i386 -L/opt/local/lib -lintl...gettext function not found. Please install libintl at Makefile.PL line 18. no Error: Unable to upgrade port: 1 Error: Unable to execute port: upgrade xorg-libXt failed Before reporting a bug, first run the command again with the -d flag to get complete output. tppllc-Mac-Pro:ImageMagick-sl swirsky$ Not wanting to spend another two days figuring out why my libintl doesn't have a "gettext" function, I tried a different route: the script mentioned here: http://github.com/masterkain/ImageMagick-sl This script downloads and installs an ImageMagic independently of MacPorts issues tppllc-Mac-Pro:ImageMagick-sl swirsky$ /usr/local/bin/convert dyld: Library not loaded: /opt/local/lib/libiconv.2.dylib Referenced from: /opt/local/lib/libfontconfig.1.dylib Reason: Incompatible library version: libfontconfig.1.dylib requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0 Trace/BPT trap It downloads everything and compiles fine, but fails when I try to run it, with the message above. So now I'm two steps away from ImageMagick, trying to get a newer libiconv on my machine. I downloaded the latest libiconv, compiled and built it. I put the resulting library in /opt/local/lib, and I still get the same error message: tppllc-Mac-Pro:.libs swirsky$ sudo mv libiconv.2.dylib /opt/local/lib/libiconv.2.dylib tppllc-Mac-Pro:.libs swirsky$ convert dyld: Library not loaded: /opt/local/lib/libiconv.2.dylib Referenced from: /opt/local/lib/libfontconfig.1.dylib Reason: Incompatible library version: libfontconfig.1.dylib requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0 Trace/BPT trap Now here's something interesting. The error message shows it's looking in /opt/local/lib/libiconv.2.dylib. otools -L shows that this does implement 8.0.0: tppllc-Mac-Pro:.libs swirsky$ otool -L /opt/local/lib/libiconv.2.dylib /opt/local/lib/libiconv.2.dylib: /usr/local/lib/libiconv.2.dylib (compatibility version 8.0.0, current version 8.0.0) /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.0.0) tppllc-Mac-Pro:.libs swirsky$ And, for good measure, I set the DYLD_LIBRARY_PATH to make sure this directory is the one for dynamic libraries. So even though I do have a library that provides 8.0.0, it's being seen as 7.0.0! Any ideas why this would happen? So here's my question: Is it possible to get ImageMagick to run on OSX Snow Leopard? Are there any binary distributions that have static libraries baked in so I don't have to worry about these issue/

    Read the article

  • Are there other ways to do insert/update/delete on a remote Oracle database?

    - by gunbuster363
    I recently asked a question about the speed of insert/update/delete statements executed through the JDBC driver against a remote machine, but that problem cannot be solved easily. So I would like to ask: is there any other way to run the inserts/updates/deletes against Oracle? The current situation is this: the database is on a separate machine from the Java program used to update it. Looking around the internet, I found people suggesting pure SQL or PL/SQL for the updates; is that possible, and does the SQL or PL/SQL need to run on the local machine? I have no knowledge of PL/SQL, so I am not sure whether we can create some kind of script and call it on a remote machine. To restate the situation: the input data is on machine A, the original Java program is also on machine A, and Oracle is on machine B. Is there any approach other than JDBC?
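
    Whatever the transport, the usual win in this situation is batching, so that each network round trip carries many rows instead of one. As an illustration only, here is a hedged sketch using Python's cx_Oracle driver (the asker's program is Java, where PreparedStatement.addBatch/executeBatch plays the same role); the host, credentials, table, and columns are invented for the example.

        # Hedged sketch: send many rows per round trip with executemany.
        # Host, credentials, and table/column names below are placeholders.
        import cx_Oracle

        rows = [(1, "alpha"), (2, "beta"), (3, "gamma")]

        conn = cx_Oracle.connect("scott", "tiger", "machineB:1521/ORCL")
        cur = conn.cursor()
        # One network round trip for the whole list, instead of one per row,
        # which is usually where remote inserts lose their time.
        cur.executemany("INSERT INTO demo_table (id, name) VALUES (:1, :2)", rows)
        conn.commit()
        cur.close()
        conn.close()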

    Read the article

  • Error: mpicc command not found [closed]

    - by skn
    I want to compile HDF5, but I get the following error. From the hdf5-1.6.9 directory I run:
        CC=/usr/local/openmpi/bin/mpicc ./configure /home/sknandi/Research/Simulation/hdf5/parallel_fdf5
    and the shell reports:
        CC=/usr/local/openmpi/bin/mpicc: Command not found.
    The result of echo $PATH is:
        /priv/myriad3/ayw/research/COALA/visit/bin:/usr/local/bin:/usr/bin:/bin:/pkg/linux/intel/composerxe-2011.3.174/composerxe-2011.3.174/bin/intel64:/pkg/linux/casa/x86_64:/usr/local/bin:/bin:/usr/bin:/usr/local/openmpi/bin:/pkg/linux/intel/composerxe-2011.3.174/composerxe-2011.3.174/mpirt/bin/intel64:/pkg/linux/SS12/solstudio12.2/bin:/usr/local/vanilla-pds/bin
    and the result of which mpicc is:
        /usr/local/openmpi/bin/mpicc

    Read the article

  • How large should I make root, home, usr, var, and tmp partitions?

    - by Teddy Okidoki
    I am installing Ubuntu Server 10.04 on a 64 GB VHD and want to lay out the partitions like this:
        /dev/xvda0 p on swap (2 GB)
        /dev/xvda1a0 e on /boot (128 MB)
        /dev/xvda1a1 e on / type ffs (local)
        /dev/xvda1a2 e on /usr type ffs (local, nodev)
        /dev/xvda1a3 e on /tmp type ffs (local, nodev)
        /dev/xvda1a4 e on /var/log type ffs (local, nodev)
        /dev/xvda1a5 e on /var type ffs (local, nodev, nosuid)
        /dev/xvda1a6 e on /home type ffs (local, nodev, nosuid, with quotas)
        /dev/xvda2 p on /new (local, nodev, nosuid, noexec) with the rest of the space (~50 GB)
    But I'm stuck and don't know what size to give to each partition. I also want to encrypt the partitions. Thank you for any tips. EDIT: The system needs only minimal space; about 10 apps will be installed, such as ufw, Apache, MySQL, chkrootkit, and so on.

    Read the article

  • How do I install PyYAML into local install of Python?

    - by Daryl Spitzer
    I've installed Python 2.6.4 into (a subdirectory in) my home directory on a Linux machine with Python 2.3.4 pre-installed, because I need to run some code that I've decided would require too much work to make it run on 2.3.4. (I'm not on the sudoers list for that machine.) I was hoping I could run ~/Python-2.6.4/python setup.py install (from the PyYAML directory in my home directory, where I untarred the PyYAML sources) and it would be smart enough to install it into my local Python 2.6.4 install. But it's not. (See the P.S.) Is it possible to install PyYAML into my local Python install, so that "import yaml" will work when I invoke that Python? If so, how do I do that? P.S. Here's the output when I ran ~/Python-2.6.4/python setup.py install: running install running build running build_py creating build/lib.linux-ppc64-2.6 creating build/lib.linux-ppc64-2.6/yaml copying lib/yaml/composer.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/nodes.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/dumper.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/resolver.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/events.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/emitter.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/error.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/loader.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/cyaml.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/scanner.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/__init__.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/serializer.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/reader.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/representer.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/constructor.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/tokens.py -> build/lib.linux-ppc64-2.6/yaml copying lib/yaml/parser.py -> build/lib.linux-ppc64-2.6/yaml running build_ext creating build/temp.linux-ppc64-2.6 checking if libyaml is compilable gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/dspitzer/Python-2.6.4/Include -I/home/dspitzer/Python-2.6.4 -c build/temp.linux-ppc64-2.6/check_libyaml.c -o build/temp.linux-ppc64-2.6/check_libyaml.o build/temp.linux-ppc64-2.6/check_libyaml.c:2:18: yaml.h: No such file or directory build/temp.linux-ppc64-2.6/check_libyaml.c: In function `main': build/temp.linux-ppc64-2.6/check_libyaml.c:5: error: `yaml_parser_t' undeclared (first use in this function) build/temp.linux-ppc64-2.6/check_libyaml.c:5: error: (Each undeclared identifier is reported only once build/temp.linux-ppc64-2.6/check_libyaml.c:5: error: for each function it appears in.) 
build/temp.linux-ppc64-2.6/check_libyaml.c:5: error: syntax error before "parser" build/temp.linux-ppc64-2.6/check_libyaml.c:6: error: `yaml_emitter_t' undeclared (first use in this function) build/temp.linux-ppc64-2.6/check_libyaml.c:8: warning: implicit declaration of function `yaml_parser_initialize' build/temp.linux-ppc64-2.6/check_libyaml.c:8: error: `parser' undeclared (first use in this function) build/temp.linux-ppc64-2.6/check_libyaml.c:9: warning: implicit declaration of function `yaml_parser_delete' build/temp.linux-ppc64-2.6/check_libyaml.c:11: warning: implicit declaration of function `yaml_emitter_initialize' build/temp.linux-ppc64-2.6/check_libyaml.c:11: error: `emitter' undeclared (first use in this function) build/temp.linux-ppc64-2.6/check_libyaml.c:12: warning: implicit declaration of function `yaml_emitter_delete' libyaml is not found or a compiler error: forcing --without-libyaml (if libyaml is installed correctly, you may need to specify the option --include-dirs or uncomment and modify the parameter include_dirs in setup.cfg) running install_lib creating /usr/local/lib/python2.6 error: could not create '/usr/local/lib/python2.6': Permission denied
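
    The libyaml warning near the top is harmless (PyYAML falls back to its pure-Python implementation); the real failure is the last line, where setup.py targets /usr/local/lib/python2.6 instead of the home-directory install. A hedged check of where that interpreter wants to install packages (assuming it was built from source without --prefix, whose default is /usr/local); the fix would then be rebuilding Python with --prefix pointing into the home directory, or passing an explicit --prefix (plus a matching PYTHONPATH) to setup.py install.

        # Hedged sketch: ask the home-directory interpreter where it installs
        # packages. Run it with ~/Python-2.6.4/python. If sys.prefix is
        # /usr/local (the default for a source build configured without
        # --prefix), setup.py install will target /usr/local and fail without
        # root, exactly as in the output above.
        import sys
        from distutils.sysconfig import get_python_lib

        print(sys.prefix)        # install prefix compiled into this interpreter
        print(get_python_lib())  # the site-packages directory setup.py will use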

    Read the article

  • Postmaster uses excessive CPU and Disk Writes

    - by wolfcastle
    using PostgreSQL 9.1.2 I'm seeing excessive CPU usage and large amounts of writes to disk from postmaster tasks. This happens even while my application is doing almost nothing (10s of inserts per MINUTE). There are a reasonable number of connections open however. I've been trying to determine what in my application is causing this. I'm pretty newb with postgresql, and haven't gotten anywhere so far. I've turned on some logging options in my config file, and looked at connections in the pg_stat_activity table, but they are all idle. Yet each connection consumes ~ 50% CPU, and is writing ~15M/s to disk (reading nothing). I'm basically using the stock postgresql.conf with very little tweaks. I'd appreciate any advice or pointers on what I can do to track this down. Here is a sample of what top/iotop is showing me: Cpu(s): 18.9%us, 14.4%sy, 0.0%ni, 53.4%id, 11.8%wa, 0.0%hi, 1.5%si, 0.0%st Mem: 32865916k total, 7263720k used, 25602196k free, 575608k buffers Swap: 16777208k total, 0k used, 16777208k free, 4464212k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 17057 postgres 20 0 236m 33m 13m R 45.0 0.1 73:48.78 postmaster 17188 postgres 20 0 219m 15m 11m R 42.3 0.0 61:45.57 postmaster 17963 postgres 20 0 219m 16m 11m R 42.3 0.1 27:15.01 postmaster 17084 postgres 20 0 219m 15m 11m S 41.7 0.0 63:13.64 postmaster 17964 postgres 20 0 219m 17m 12m R 41.7 0.1 27:23.28 postmaster 18688 postgres 20 0 219m 15m 11m R 41.3 0.0 63:46.81 postmaster 17088 postgres 20 0 226m 24m 12m R 41.0 0.1 64:39.63 postmaster 24767 postgres 20 0 219m 17m 12m R 41.0 0.1 24:39.24 postmaster 18660 postgres 20 0 219m 14m 9.9m S 40.7 0.0 60:51.52 postmaster 18664 postgres 20 0 218m 15m 11m S 40.7 0.0 61:39.61 postmaster 17962 postgres 20 0 222m 19m 11m S 40.3 0.1 11:48.79 postmaster 18671 postgres 20 0 219m 14m 9m S 39.4 0.0 60:53.21 postmaster 26168 postgres 20 0 219m 15m 10m S 38.4 0.0 59:04.55 postmaster Total DISK READ: 0.00 B/s | Total DISK WRITE: 195.97 M/s TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND 17962 be/4 postgres 0.00 B/s 14.83 M/s 0.00 % 0.25 % postgres: aggw aggw [local] idle 17084 be/4 postgres 0.00 B/s 15.53 M/s 0.00 % 0.24 % postgres: aggw aggw [local] idle 17963 be/4 postgres 0.00 B/s 15.00 M/s 0.00 % 0.24 % postgres: aggw aggw [local] idle 17188 be/4 postgres 0.00 B/s 14.80 M/s 0.00 % 0.24 % postgres: aggw aggw [local] idle 17964 be/4 postgres 0.00 B/s 15.50 M/s 0.00 % 0.24 % postgres: aggw aggw [local] idle 18664 be/4 postgres 0.00 B/s 15.13 M/s 0.00 % 0.23 % postgres: aggw aggw [local] idle 17088 be/4 postgres 0.00 B/s 14.71 M/s 0.00 % 0.13 % postgres: aggw aggw [local] idle 18688 be/4 postgres 0.00 B/s 14.72 M/s 0.00 % 0.00 % postgres: aggw aggw [local] idle 24767 be/4 postgres 0.00 B/s 14.93 M/s 0.00 % 0.00 % postgres: aggw aggw [local] idle 18671 be/4 postgres 0.00 B/s 16.14 M/s 0.00 % 0.00 % postgres: aggw aggw [local] idle 17057 be/4 postgres 0.00 B/s 13.58 M/s 0.00 % 0.00 % postgres: aggw aggw [local] idle 26168 be/4 postgres 0.00 B/s 15.50 M/s 0.00 % 0.00 % postgres: aggw aggw [local] idle 18660 be/4 postgres 0.00 B/s 15.85 M/s 0.00 % 0.00 % postgres: aggw aggw [local] idle
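
    One way to keep an eye on those backends without staring at top is to poll pg_stat_activity; a hedged sketch with psycopg2 (connection parameters are assumptions, and the column names procpid/current_query are the 9.1 spellings, which later became pid/query):

        # Hedged sketch: list every backend with its state, from Python.
        # Backends that stay "<IDLE>" while the writes continue can also point
        # at background activity (checkpoints or the statistics collector)
        # rather than the queries themselves.
        import psycopg2

        conn = psycopg2.connect("dbname=aggw user=postgres host=localhost")
        cur = conn.cursor()
        cur.execute("""
            SELECT procpid, waiting, xact_start, current_query
            FROM pg_stat_activity
            ORDER BY xact_start
        """)
        for procpid, waiting, xact_start, current_query in cur.fetchall():
            print("%s %s %s %s" % (procpid, waiting, xact_start, current_query))
        cur.close()
        conn.close()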

    Read the article

  • Make mod_wsgi use python2.7.2 instead of python2.6?

    - by guron
    i am running Ubuntu 10.04.1 LTS and it came pre-packed with python2.6 but i need to replace it with python2.7.2. (The reason is simple, 2.7 has a lot of features backported from 3 ) i had installed python2.7.2 using ./configure make make altinstall the altinstall option installed it, without touching the system default version, to /usr/local/lib/python2.7 and placed the interpreter in /usr/local/bin/python2.7 Then to help mod_wsgi find python2.7 i added the following to /etc/apache2/sites-available/wsgisite WSGIPythonHome /usr/local i start apache and run a test wsgi app BUT i am greeted by python 2.6.5 and not Python2.7 Later i replaced the default python simlink to point to python 2.7 ln -f /usr/local/bin/python2.7 /usr/bin/python Now typing 'python' on the console opens python2.7 but somehow mod_wsgi still picks up python2.6 Next i tried, PATH=/usr/local/bin:$PATH export PATH then do a quick restart apache, but yet again its python2.6 !! Here is my $PATH /usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games contents of /etc/apache2/sites-available/wsgisite WSGIPythonHome /usr/local <VirtualHost *:80> ServerName wsgitest.local DocumentRoot /home/wwwhost/pydocs/wsgi <Directory /home/wwwhost/pydocs/wsgi> Order allow,deny Allow from all </Directory> WSGIScriptAlias / /home/wwwhost/pydocs/wsgi/app.wsgi </VirtualHost> app.wsgi import sys def application(environ, start_response): status = '200 OK' output = sys.version response_headers = [('Content-type', 'text/plain'), ('Content-Length', str(len(output)))] start_response(status, response_headers) return [output] Apache error.log 'import site' failed; use -v for traceback [Sun Jun 19 00:27:21 2011] [info] mod_wsgi (pid=23235): Initializing Python. [Sun Jun 19 00:27:21 2011] [notice] Apache/2.2.14 (Ubuntu) mod_wsgi/2.8 Python/2.6.5 configured -- resuming normal operations [Sun Jun 19 00:27:21 2011] [info] Server built: Nov 18 2010 21:20:56 [Sun Jun 19 00:27:21 2011] [info] mod_wsgi (pid=23238): Attach interpreter ''. [Sun Jun 19 00:27:21 2011] [info] mod_wsgi (pid=23239): Attach interpreter ''. [Sun Jun 19 00:27:31 2011] [info] mod_wsgi (pid=23238): Create interpreter 'wsgitest.local|'. [Sun Jun 19 00:27:31 2011] [info] [client 192.168.1.205] mod_wsgi (pid=23238, process='', application='wsgitest.local|'): Loading WSGI script '/home/wwwhost/pydocs/$ [Sun Jun 19 00:27:50 2011] [info] mod_wsgi (pid=23239): Create interpreter 'wsgitest.local|'. Has anybody ever managed to make mod_wsgi run on a non-system default version of python ?

    Read the article

  • OSError : [Errno 38] Function not implemented - Django Celery implementation

    - by Jordan Messina
    I installed django-celery and I tried to start up the worker server but I get an OSError that a function isn't implemented. I'm running CentOS release 5.4 (Final) on a VPS: . broker -> amqp://guest@localhost:5672/ . queues -> . celery -> exchange:celery (direct) binding:celery . concurrency -> 4 . loader -> djcelery.loaders.DjangoLoader . logfile -> [stderr]@WARNING . events -> OFF . beat -> OFF [2010-07-22 17:10:01,364: WARNING/MainProcess] Traceback (most recent call last): [2010-07-22 17:10:01,364: WARNING/MainProcess] File "manage.py", line 11, in <module> [2010-07-22 17:10:01,364: WARNING/MainProcess] execute_manager(settings) [2010-07-22 17:10:01,364: WARNING/MainProcess] File "/usr/local/lib/python2.6/site-packages/django/core/management/__init__.py", line 438, in execute_manager [2010-07-22 17:10:01,364: WARNING/MainProcess] utility.execute() [2010-07-22 17:10:01,364: WARNING/MainProcess] File "/usr/local/lib/python2.6/site-packages/django/core/management/__init__.py", line 379, in execute [2010-07-22 17:10:01,365: WARNING/MainProcess] self.fetch_command(subcommand).run_from_argv(self.argv) [2010-07-22 17:10:01,365: WARNING/MainProcess] File "/usr/local/lib/python2.6/site-packages/django/core/management/base.py", line 191, in run_from_argv [2010-07-22 17:10:01,365: WARNING/MainProcess] self.execute(*args, **options.__dict__) [2010-07-22 17:10:01,365: WARNING/MainProcess] File "/usr/local/lib/python2.6/site-packages/django/core/management/base.py", line 218, in execute [2010-07-22 17:10:01,365: WARNING/MainProcess] output = self.handle(*args, **options) [2010-07-22 17:10:01,365: WARNING/MainProcess] File "/usr/local/lib/python2.6/site-packages/django_celery-2.0.0-py2.6.egg/djcelery/management/commands/celeryd.py", line 22, in handle [2010-07-22 17:10:01,366: WARNING/MainProcess] run_worker(**options) [2010-07-22 17:10:01,366: WARNING/MainProcess] File "/usr/local/lib/python2.6/site-packages/celery-2.0.1-py2.6.egg/celery/bin/celeryd.py", line 385, in run_worker [2010-07-22 17:10:01,366: WARNING/MainProcess] return Worker(**options).run() [2010-07-22 17:10:01,366: WARNING/MainProcess] File "/usr/local/lib/python2.6/site-packages/celery-2.0.1-py2.6.egg/celery/bin/celeryd.py", line 218, in run [2010-07-22 17:10:01,366: WARNING/MainProcess] self.run_worker() [2010-07-22 17:10:01,366: WARNING/MainProcess] File "/usr/local/lib/python2.6/site-packages/celery-2.0.1-py2.6.egg/celery/bin/celeryd.py", line 312, in run_worker [2010-07-22 17:10:01,367: WARNING/MainProcess] worker.start() [2010-07-22 17:10:01,367: WARNING/MainProcess] File "/usr/local/lib/python2.6/site-packages/celery-2.0.1-py2.6.egg/celery/worker/__init__.py", line 206, in start [2010-07-22 17:10:01,367: WARNING/MainProcess] component.start() [2010-07-22 17:10:01,367: WARNING/MainProcess] File "/usr/local/lib/python2.6/site-packages/celery-2.0.1-py2.6.egg/celery/concurrency/processes/__init__.py", line 54, in start [2010-07-22 17:10:01,367: WARNING/MainProcess] maxtasksperchild=self.maxtasksperchild) [2010-07-22 17:10:01,367: WARNING/MainProcess] File "/usr/local/lib/python2.6/site-packages/celery-2.0.1-py2.6.egg/celery/concurrency/processes/pool.py", line 448, in __init__ [2010-07-22 17:10:01,368: WARNING/MainProcess] self._setup_queues() [2010-07-22 17:10:01,368: WARNING/MainProcess] File "/usr/local/lib/python2.6/site-packages/celery-2.0.1-py2.6.egg/celery/concurrency/processes/pool.py", line 564, in _setup_queues [2010-07-22 17:10:01,368: WARNING/MainProcess] self._inqueue = SimpleQueue() [2010-07-22 17:10:01,368: 
WARNING/MainProcess] File "/usr/local/lib/python2.6/multiprocessing/queues.py", line 315, in __init__ [2010-07-22 17:10:01,368: WARNING/MainProcess] self._rlock = Lock() [2010-07-22 17:10:01,368: WARNING/MainProcess] File "/usr/local/lib/python2.6/multiprocessing/synchronize.py", line 117, in __init__ [2010-07-22 17:10:01,369: WARNING/MainProcess] SemLock.__init__(self, SEMAPHORE, 1, 1) [2010-07-22 17:10:01,369: WARNING/MainProcess] File "/usr/local/lib/python2.6/multiprocessing/synchronize.py", line 49, in __init__ [2010-07-22 17:10:01,369: WARNING/MainProcess] sl = self._semlock = _multiprocessing.SemLock(kind, value, maxvalue) [2010-07-22 17:10:01,369: WARNING/MainProcess] OSError [2010-07-22 17:10:01,369: WARNING/MainProcess] : [2010-07-22 17:10:01,369: WARNING/MainProcess] [Errno 38] Function not implemented Am I just totally screwed and should use a new kernel that has this implemented or is there an easy way to resolve this?
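
    Errno 38 is ENOSYS: the kernel behind this VPS does not implement the POSIX semaphore call that Python's multiprocessing locks are built on. A hedged way to confirm it is the platform rather than celery:

        # Hedged sketch: reproduce the failure without celery. multiprocessing
        # locks use sem_open() under the hood; some VPS kernels, older OpenVZ
        # containers in particular, do not provide it, and the call then raises
        # the same OSError: [Errno 38] Function not implemented.
        from multiprocessing import Lock

        try:
            Lock()
            print("POSIX semaphores work - multiprocessing should be usable")
        except OSError as exc:
            print("semaphores unavailable: %s" % exc)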

    Read the article

  • Hyper-V fails when attaching another disk to a VM; the VM won't start and generates an error

    - by CasperDK
    I'm lost at what to do about this: Hi... System: Windows 2008 R2 Hyper-V farm running with failover cluster with a EVA 4400 as backend. When I attach a new disk to a VM it fails when I try to start it. If I move the VM to another, say node 1, I can add the disk and I can get them to start. If I move the VM back to node 2 where the problem arose and the VM is running, I get an error during live migration and the VM fails back to node1 where it did run... So it's like there is something wrong with Hyper-V on node 2 and not node 1. Also node 3 has the same issue. Restarting the nodes is NOT an option since I will have this problem again at a later time AND because not all the VMs can run on node 1 which means my client company will experience downtime on the VMs not running on node 1. Any fix for this? An update I have missed perhaps? It has been two years... Here are the errors: An error ocurred while attempting to change the state of virtual machine XXX. 'XXX' failed to start. Microsoft Emulated IDE Controller (Instance ID {83F8638B-8DCA-4152-9EDA-2CA8B33039B4}): Failed to power on with Error 'A device attached to the system is not functioning.' Failed to open attachment 'X:\XXX.vhd'. Error: 'A device attached to the system is not functioning.' Failed to open attachment 'X:\XXX.vhd'. Error: 'A device attached to the system is not functioning.' 'XXX' failed to start. (Virtual machine 36563C78-65B5-4C40-A52D-689BB39E8B08) Microsoft Emulated IDE Controller (Instance ID {83F8638B-8DCA-4152-9EDA-2CA8B33039B4}): Failed to power on with Error 'A device attached to the system is not functioning.' (0x8007001F). (Virtual machine 36563C78-65B5-4C40-A52D-689BB39E8B08) 'XXX': Failed to open attachment 'X:\XXX.vhd'. Error: 'A device attached to the system is not functioning.' (0x8007001F). (Virtual machine 36563C78-65B5-4C40-A52D-689BB39E8B08) 'XXX': Failed to open attachment 'X:\XXX.vhd'. Error: 'A device attached to the system is not functioning.' (0x8007001F). (Virtual machine 36563C78-65B5-4C40-A52D-689BB39E8B08) An error ocurred while attempting to change the state of virtual machine XXX. 'XXX' failed to start. Microsoft Emulated IDE Controller (Instance ID {83F8638B-8DCA-4152-9EDA-2CA8B33039B4}): Failed to power on with Error 'A device attached to the system is not functioning.' Failed to open attachment 'X:\XXX.vhd'. Error: 'A device attached to the system is not functioning.' Failed to open attachment 'X:\XXX.vhd'. Error: 'A device attached to the system is not functioning.' 'XXX' failed to start. (Virtual machine 36563C78-65B5-4C40-A52D-689BB39E8B08) Microsoft Emulated IDE Controller (Instance ID {83F8638B-8DCA-4152-9EDA-2CA8B33039B4}): Failed to power on with Error 'A device attached to the system is not functioning.' (0x8007001F). (Virtual machine 36563C78-65B5-4C40-A52D-689BB39E8B08) 'XXX': Failed to open attachment 'X:\XXX.vhd'. Error: 'A device attached to the system is not functioning.' (0x8007001F). (Virtual machine 36563C78-65B5-4C40-A52D-689BB39E8B08) 'XXX': Failed to open attachment 'X:\XXX.vhd'. Error: 'A device attached to the system is not functioning.' (0x8007001F). (Virtual machine 36563C78-65B5-4C40-A52D-689BB39E8B08) An error ocurred while attempting to change the state of virtual machine XXX. 'XXX' failed to start. Microsoft Emulated IDE Controller (Instance ID {83F8638B-8DCA-4152-9EDA-2CA8B33039B4}): Failed to power on with Error 'A device attached to the system is not functioning.' Failed to open attachment 'c:\clusterstorage/volume1/XXX.vhd'. 
Error: 'A device attached to the system is not functioning.' Failed to open attachment 'c:\clusterstorage/volume1\XXX.vhd'. Error: 'A device attached to the system is not functioning.' 'XXX' failed to start. (Virtual machine 36563C78-65B5-4C40-A52D-689BB39E8B08) Microsoft Emulated IDE Controller (Instance ID {83F8638B-8DCA-4152-9EDA-2CA8B33039B4}): Failed to power on with Error 'A device attached to the system is not functioning.' (0x8007001F). (Virtual machine 36563C78-65B5-4C40-A52D-689BB39E8B08) 'XXX': Failed to open attachment 'c:\clusterstorage/volume1\XXX.vhd'. Error: 'A device attached to the system is not functioning.' (0x8007001F). (Virtual machine 36563C78-65B5-4C40-A52D-689BB39E8B08) 'XXX': Failed to open attachment 'c:\clusterstorage/volume1\XXX.vhd'. Error: 'A device attached to the system is not functioning.' (0x8007001F). (Virtual machine 36563C78-65B5-4C40-A52D-689BB39E8B08) In the Hyper-V logs I found some more errors: In the hyper-v VMMS logs I have this: 'ServerName' failed to perform the operation. The virtual machine is not in a valid state to perform the operation. (Virtual machine ID 0A6CC4A9-39D6-4413-8CF0-B6DAA35B68D7)

    Read the article

  • Get Started using Build-Deploy-Test Workflow with TFS 2012

    - by Jakob Ehn
    TFS 2012 introduces a new type of Lab environment called Standard Environment. This allows you to setup a full Build Deploy Test (BDT) workflow that will build your application, deploy it to your target machine(s) and then run a set of tests on that server to verify the deployment. In TFS 2010, you had to use System Center Virtual Machine Manager and involve half of your IT department to get going. Now all you need is a server (virtual or physical) where you want to deploy and test your application. You don’t even have to install a test agent on the machine, TFS 2012 will do this for you! Although each step is rather simple, the entire process of setting it up consists of a bunch of steps. So I thought that it could be useful to run through a typical setup.I will also link to some good guidance from MSDN on each topic. High Level Steps Install and configure Visual Studio 2012 Test Controller on Target Server Create Standard Environment Create Test Plan with Test Case Run Test Case Create Coded UI Test from Test Case Associate Coded UI Test with Test Case Create Build Definition using LabDefaultTemplate 1. Install and Configure Visual Studio 2012 Test Controller on Target Server First of all, note that you do not have to have the Test Controller running on the target server. It can be running on another server, as long as the Test Agent can communicate with the test controller and the test controller can communicate with the TFS server. If you have several machines in your environment (web server, database server etc..), the test controller can be installed either on one of those machines or on a dedicated machine. To install the test controller, simply mount the Visual Studio Agents media on the server and browse to the vstf_controller.exe file located in the TestController folder. Run through the installation, you might need to reboot the server since it installs .NET 4.5. When the test controller is installed, the Test Controller configuration tool will launch automatically (if it doesn’t, you can start it from the Start menu). Here you will supply the credentials of the account running the test controller service. Note that this account will be given the necessary permissions in TFS during the configuration. Make sure that you have entered a valid account by pressing the Test link. Also, you have to register the test controller with the TFS collection where your test plan is located (and usually the code base of course) When you press Apply Settings, all the configuration will be done. You might get some warnings at the end, that might or might not cause a problem later. Be sure to read them carefully.   For more information about configuring your test controllers, see Setting Up Test Controllers and Test Agents to Manage Tests with Visual Studio 2. Create Standard Environment Now you need to create a Lab environment in Microsoft Test Manager. Since we are using an existing physical or virtual machine we will create a Standard Environment. Open MTM and go to Lab Center. Click New to create a new environment Enter a name for the environment. Since this environment will only contain one machine, we will use the machine name for the environment (TargetServer in this case) On the next page, click Add to add a machine to the environment. Enter the name of the machine (TargetServer.Domain.Com), and give it the Web Server role. The name must be reachable both from your machine during configuration and from the TFS app tier server. 
You also need to supply an account that is a local administration on the target server. This is needed in order to automatically install a test agent later on the machine. On the next page, you can add tags to the machine. This is not needed in this scenario so go to the next page. Here you will specify which test controller to use and that you want to run UI tests on this environment. This will in result in a Test Agent being automatically installed and configured on the target server. The name of the machine where you installed the test controller should be available on the drop down list (TargetServer in this sample). If you can’t see it, you might have selected a different TFS project collection. Press Next twice and then Verify to verify all the settings: Press finish. This will now create and prepare the environment, which means that it will remote install a test agent on the machine. As part of this installation, the remote server will be restarted. 3-5. Create Test Plan, Run Test Case, Create Coded UI Test I will not cover step 3-5 here, there are plenty of information on how you create test plans and test cases and automate them using Coded UI Tests. In this example I have a test plan called My Application and it contains among other things a test suite called Automated Tests where I plan to put test cases that should be automated and executed as part of the BDT workflow. For more information about Coded UI Tests, see Verifying Code by Using Coded User Interface Tests   6. Associate Coded UI Test with Test Case OK, so now we want to automate our Coded UI Test and have it run as part of the BDT workflow. You might think that you coded UI test already is automated, but the meaning of the term here is that you link your coded UI Test to an existing Test Case, thereby making the Test Case automated. And the test case should be part of the test suite that we will run during the BDT. Open the solution that contains the coded UI test method. Open the Test Case work item that you want to automate. Go to the Associated Automation tab and click on the “…” button. Select the coded UI test that you corresponds to the test case: Press OK and the save the test case For more information about associating an automated test case with a test case, see How to: Associate an Automated Test with a Test Case 7. Create Build Definition using LabDefaultTemplate Now we are ready to create a build definition that will implement the full BDT workflow. For this purpose we will use the LabDefaultTemplate.11.xaml that comes out of the box in TFS 2012. This build process template lets you take the output of another build and deploy it to each target machine. Since the deployment process will be running on the target server, you will have less problem with permissions and firewalls than if you were to remote deploy your solution. So, before creating a BDT workflow build definition, make sure that you have an existing build definition that produces a release build of your application. Go to the Builds hub in Team Explorer and select New Build Definition Give the build definition a meaningful name, here I called it MyApplication.Deploy Set the trigger to Manual Define a workspace for the build definition. Note that a BDT build doesn’t really need a workspace, since all it does is to launch another build definition and deploy the output of that build. But TFS doesn’t allow you to save a build definition without adding at least one mapping. On Build Defaults, select the build controller. 
    Since this build won't actually produce any output, you can select the "This build does not copy output files to a drop folder" option. On the Process tab, select LabDefaultTemplate.11.xaml. This is usually located at $/TeamProject/BuildProcessTemplates/LabDefaultTemplate.11.xaml. To configure it, press the … button on the Lab Process Settings property. First, select the environment that you created before. Then select the build that you want to deploy and test. The "Select an existing build" option is very useful when developing the BDT workflow, because you do not have to run through the target build every time; instead it basically just runs through the deployment and test steps, which speeds up the process. Here I have selected to queue a new build of the MyApplication.Test build definition.

    On the Deploy tab, you need to specify how the application should be installed on the target server. You can supply a list of deployment scripts with arguments that will be executed on the target server. In this example I execute the generated web deploy command file to deploy the solution. If you, for example, have databases, you can use sqlpackage.exe to deploy the database. If you are producing MSI installers in your build, you can run them using msiexec.exe, and so on. A good practice is to create a batch file that contains the entire deployment and that you can run both locally and on the target server. Then you would just execute that deployment batch file here in one single step (a minimal sketch of such a batch file is included at the end of this walkthrough). The workflow defines some variables that are useful when running the deployments:

    $(BuildLocation) - the full path to where your build files are located
    $(InternalComputerName_<VM Name>) - the computer name for a virtual machine in an SCVMM environment
    $(ComputerName_<VM Name>) - the fully qualified domain name of the virtual machine

    As you can see, I specify the path to the myapplication.deploy.cmd file using the $(BuildLocation) variable, which is the drop folder of the MyApplication.Test build. Note: the test agent account must have read permission in this drop location. You can find more information here on Building your Deployment Scripts.

    On the last tab, we specify which tests to run after deployment. Here I select the test plan and the Automated Tests test suite that we saw before. Note that I also selected the automated test settings (called TargetServer in this case) that I have defined for my test plan. In there I define what data should be collected as part of the test run. For more information about test settings, see Specifying Test Settings for Microsoft Test Manager Tests.

    We are done! Queue your BDT build and wait for it to finish. If the build succeeds, your build summary should look something like this:
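    As a side note, here is a minimal sketch of what such a single-step deployment batch file (myapplication.deploy.cmd) could look like. The package names, folder layout, sqlpackage.exe path and connection string below are assumptions for illustration only; adjust them to whatever your own build actually drops.

        @echo off
        REM Minimal deployment sketch (hypothetical names) - run from the build drop copied to the target server.
        REM %~dp0 expands to the folder this script is located in (the drop location).

        REM 1. Deploy the web application using the Web Deploy command file generated by the build (/Y performs the deployment).
        call "%~dp0_PublishedWebsites\MyApplication.Web_Package\MyApplication.Web.deploy.cmd" /Y

        REM 2. Publish the database schema from a dacpac, assuming the build produces one (sqlpackage.exe path may differ on your server).
        "%ProgramFiles(x86)%\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" /Action:Publish ^
          /SourceFile:"%~dp0MyApplication.Database.dacpac" ^
          /TargetConnectionString:"Server=localhost;Database=MyApplication;Integrated Security=True"

        REM 3. If the build also produces an MSI, it could be installed silently like this:
        REM msiexec.exe /i "%~dp0MyApplication.Service.msi" /qn

    With a script like this, the Deploy tab only needs a single entry, for example $(BuildLocation)\myapplication.deploy.cmd, executed on the target machine.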

    Read the article

  • python/pip error on osx

    - by Ibrahim Chawa
    I've recently purchased a new hard drive and installed a clean copy of OS X Mavericks. I installed Python using Homebrew, and I need to create a Python virtual environment. But whenever I try to run any command using pip, I get this error. I haven't been able to find a solution online for this problem. Any reference would be appreciated. Here is the error I'm getting:

        ERROR:root:code for hash md5 was not found.
        Traceback (most recent call last):
          File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 139, in <module>
            globals()[__func_name] = __get_hash(__func_name)
          File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
            raise ValueError('unsupported hash type ' + name)
        ValueError: unsupported hash type md5
        ERROR:root:code for hash sha1 was not found.
        Traceback (most recent call last):
          File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 139, in <module>
            globals()[__func_name] = __get_hash(__func_name)
          File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
            raise ValueError('unsupported hash type ' + name)
        ValueError: unsupported hash type sha1
        ERROR:root:code for hash sha224 was not found.
        Traceback (most recent call last):
          File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 139, in <module>
            globals()[__func_name] = __get_hash(__func_name)
          File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
            raise ValueError('unsupported hash type ' + name)
        ValueError: unsupported hash type sha224
        ERROR:root:code for hash sha256 was not found.
        Traceback (most recent call last):
          File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 139, in <module>
            globals()[__func_name] = __get_hash(__func_name)
          File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
            raise ValueError('unsupported hash type ' + name)
        ValueError: unsupported hash type sha256
        ERROR:root:code for hash sha384 was not found.
        Traceback (most recent call last):
          File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 139, in <module>
            globals()[__func_name] = __get_hash(__func_name)
          File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
            raise ValueError('unsupported hash type ' + name)
        ValueError: unsupported hash type sha384
        ERROR:root:code for hash sha512 was not found.
        Traceback (most recent call last):
          File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 139, in <module>
            globals()[__func_name] = __get_hash(__func_name)
          File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
            raise ValueError('unsupported hash type ' + name)
        ValueError: unsupported hash type sha512
        Traceback (most recent call last):
          File "/usr/local/bin/pip", line 9, in <module>
            load_entry_point('pip==1.5.6', 'console_scripts', 'pip')()
          File "build/bdist.macosx-10.9-x86_64/egg/pkg_resources.py", line 356, in load_entry_point
          File "build/bdist.macosx-10.9-x86_64/egg/pkg_resources.py", line 2439, in load_entry_point
          File "build/bdist.macosx-10.9-x86_64/egg/pkg_resources.py", line 2155, in load
          File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/__init__.py", line 10, in <module>
            from pip.util import get_installed_distributions, get_prog
          File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/util.py", line 18, in <module>
            from pip._vendor.distlib import version
          File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/_vendor/distlib/version.py", line 14, in <module>
            from .compat import string_types
          File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/_vendor/distlib/compat.py", line 31, in <module>
            from urllib2 import (Request, urlopen, URLError, HTTPError,
        ImportError: cannot import name HTTPSHandler

    If you need any extra information from me, let me know; this is my first time posting a question here. Thanks.

    Read the article

  • Java multiple class compositing and boiler plate reduction

    - by h2g2java
    We all know why Java does not (and should not) have multiple inheritance, so this is not questioning what has already been debated till the cows come home. This discusses what we would do when we wish to create a class that has the characteristics of two or more other classes. Probably most of us would do something like this to "inherit" from three classes (for simplicity, I left out the constructors):

        class Car extends Vehicle {
          final public Transport transport;
          final public Machine machine;
        }

    So the Car class directly inherits the methods and members of Vehicle, but has to go through transport and machine explicitly to reach the objects instantiated from Transport and Machine:

        Car car = new Car();
        car.drive();                   // from Vehicle
        car.transport.isAmphibious();  // from Transport
        car.machine.getCO2Footprint(); // from Machine

    I thought this was a good idea until I encountered frameworks that require setter and getter methods. For example, the XML <Car amphibious='false' footPrint='1000' model='Fordstatic999'/> would look for the methods setAmphibious(..), setFootPrint(..) and setModel(..). Therefore, I have to project the methods from the Transport and Machine classes:

        class Car extends Vehicle {
          final public Transport transport;
          final public Machine machine;
          public void setAmphibious(boolean b) { this.transport.setAmphibious(b); }
          public void setFootPrint(String fp) { this.machine.setFootPrint(fp); }
        }

    This is OK if there are just a few characteristics. Right now, I am trying to adapt all of SmartGWT into GWT UIBinder, especially those classes that are not GWT widgets, and there are lots of characteristics to project. Wouldn't it be nice if there existed some form of annotation framework like this:

        class Car extends Vehicle
            @projects {Transport @projects{Machine @projects Guzzler}} {
          /* No need to explicitly instantiate Transport, Machine or Guzzler */
          ....
        }

    where, in case characteristics share the same name, the characteristics of Machine would take precedence over Guzzler's, Transport's would take precedence over Machine's, and Vehicle's would take precedence over Transport's. The annotation framework would then instantiate Transport, Machine and Guzzler as hidden members of Car and break out their protected/public characteristics, in the precedence dictated by the @projects annotation sequence, into actual source code or into byte-code (preferably into byte-code). So if the setFootPrint method is found in both Machine and Guzzler, only Machine's would be projected. Questions: Don't you think it would be a good idea to have such a framework? Does such a framework already exist? Tell me where/what. Is there an Eclipse plugin that does it? Is there a proposal or plan anywhere that you know about for such an annotation framework? It would also be wonderful if the annotation/plugin framework let me specify that a boolean, int, or whatever else needs to be converted from String and did the conversion/parsing for me too. Please advise, somebody. I hope the wording of my question was clear enough. Thanks. Edited: To avoid OO enthusiasts jumping to conclusions, I have renamed the title of this question.

    Read the article

  • How do I tie a cmbBox that selects all drives (local and network) into a treeNode VB

    - by jpavlov
    How do i tie in a selected item from a cmbBox with a treeView? I am looking to just obtain the value of the one selected drive Thanks. Imports System Imports System.IO Imports System.IO.File Imports System.Windows.Forms Public Class F_Treeview_Demo Private Sub F_Treeview_Demo_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load ' Initialize the local directory treeview Dim nodeText As String = "" Dim sb As New C_StringBuilder With My.Computer.FileSystem 'Read in the number of drives For i As Integer = 0 To .Drives.Count - 1 '** Build the drive's node text sb.ClearText() sb.AppendText(.Drives(i).Name) cmbDrives.Items.Add(sb.FullText) Next End With ListRootNodes() End Sub Private Sub btnExit_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnExit.Click Application.Exit() End Sub Private Sub tvwLocalFolders_AfterSelect(ByVal sender As Object, ByVal e As System.Windows.Forms.TreeViewEventArgs) _ Handles tvwLocalFolders.AfterSelect ' Display the path for the selected node Dim folder As String = tvwLocalFolders.SelectedNode.Tag lblLocalPath.Text = folder ListView1.Items.Clear() Dim childNode As TreeNode = e.Node.FirstNode Dim parentPath As String = AddChar(e.Node.Tag) End Sub Private Sub AddToList(ByVal nodes As TreeNodeCollection) For Each node As TreeNode In nodes If node.Checked Then ListView1.Items.Add(node.Text) ListView1.Items.Add(Chr(13)) AddToList(node.Nodes) End If Next End Sub Private Sub tvwLocalFolders_BeforeExpand(ByVal sender As Object, ByVal e As System.Windows.Forms.TreeViewCancelEventArgs) _ Handles tvwLocalFolders.BeforeExpand ' Display the path for the selected node lblLocalPath.Text = e.Node.Tag ' Populate all child nodes below the selected node Dim parentPath As String = AddChar(e.Node.Tag) tvwLocalFolders.BeginUpdate() Dim childNode As TreeNode = e.Node.FirstNode 'this i added Dim smallNode As TreeNode = e.Node.FirstNode Do While childNode IsNot Nothing ListLocalSubFolders(childNode, parentPath & childNode.Text) childNode = childNode.NextNode ''this i added ListLocalFiles(smallNode, parentPath & smallNode.Text) Loop tvwLocalFolders.EndUpdate() tvwLocalFolders.Refresh() ' Select the node being expanded tvwLocalFolders.SelectedNode = e.Node ListView1.Items.Clear() AddToList(tvwLocalFolders.Nodes) ListView1.Items.Add(Environment.NewLine) End Sub Private Sub ListRootNodes() ' Add all local drives to the Local treeview Dim nodeText As String = "" Dim sb As New C_StringBuilder With My.Computer.FileSystem For i As Integer = 0 To .Drives.Count - 1 '** Build the drive's node text sb.ClearText() sb.AppendText(.Drives(i).Name) nodeText = sb.FullText nodeText = Me.cmbDrives.SelectedItem '** Add the drive to the treeview Dim driveNode As TreeNode driveNode = tvwLocalFolders.Nodes.Add(nodeText) 'driveNode.Tag = .Drives(i).Name '** Add the next level of subfolders 'ListLocalSubFolders(driveNode, .Drives(i).Name) ListLocalSubFolders(driveNode, nodeText) 'driveNode = Nothing Next End With End Sub Private Sub ListLocalFiles(ByVal ParentNode As TreeNode, ByVal PParentPath As String) Dim FileNode As String = "" Try For Each FileNode In Directory.GetFiles(PParentPath) Dim smallNode As TreeNode smallNode = ParentNode.Nodes.Add(FilenameFromPath(FileNode)) With smallNode .ImageIndex = 0 .SelectedImageIndex = 1 .Tag = FileNode End With smallNode = Nothing Next Catch ex As Exception End Try End Sub Private Sub ListLocalSubFolders(ByVal ParentNode As TreeNode, _ ByVal ParentPath As String) ' Add all local subfolders below the passed Local treeview 
node Dim FolderNode As String = "" Try For Each FolderNode In Directory.GetDirectories(ParentPath) Dim childNode As TreeNode childNode = ParentNode.Nodes.Add(FilenameFromPath(FolderNode)) With childNode .ImageIndex = 0 .SelectedImageIndex = 1 .Tag = FolderNode End With childNode = Nothing Next Catch ex As Exception End Try End Sub Private Sub ComboBox1_SelectedIndexChanged(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles cmbDrives.SelectedIndexChanged End Sub Private Sub lblLocalPath_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles lblLocalPath.Click End Sub Private Sub grpLocalFileSystem_Enter(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles grpLocalFileSystem.Enter End Sub Private Sub btn1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btn1.Click ' lbl1.Text = End Sub Private Sub ListView1_SelectedIndexChanged(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles ListView1.SelectedIndexChanged End Sub End Class

    Read the article

  • how to access(read/write) local file system from webkit/javascript?

    - by ganapati hegde
    Hi, I am using WebKitGTK for rendering my HTML pages. Now, say I am browsing a page and I select some text while reading; I want to save/write the selected text to a local file, say /home/localfile.txt. Is there any way to access (read/write) the local file system using WebKit? In the case of Firefox, I can do it like below:

        try {
          netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");
        } catch (e) {
          alert("Permission to save file was denied.");
        }
        var file = Components.classes["@mozilla.org/file/local;1"]
                             .createInstance(Components.interfaces.nsILocalFile);
        file.initWithPath( "/home/localfile.txt" );
        if ( file.exists() == false ) {
          alert( "Creating file... " );
          file.create( Components.interfaces.nsIFile.NORMAL_FILE_TYPE, 420 );
        }
        var outputStream = Components.classes["@mozilla.org/network/file-output-stream;1"]
                                     .createInstance( Components.interfaces.nsIFileOutputStream );
        outputStream.init( file, 0x04 | 0x08 | 0x20, 420, 0 );
        var output = "my data to be written to the local file";
        var result = outputStream.write( output, output.length );
        outputStream.close();

    The above code is written in a JavaScript file, and the JS file is linked to the HTML file. When some text is selected, the JS function is called and the action takes place. How can I achieve the same result using WebKit? Thanks...

    Read the article

  • Which CPU for SQL Server machine (Xeon, i5, i7, AMD Phenom)?

    - by Tony_Henrich
    I am going to build a full-height server machine to be used for SQL Server 2008 64-bit. I have $400 to spend on a CPU. Which CPU should I get among the i5, i7, Xeon and Phenom in terms of performance? There are so many options, and I am out of touch with the latest stuff. All I know is that I want something fast that works with fast DDR3 memory and some kind of fast system bus. I don't care about overclocking or 3D/graphics benchmarks; the machine will not be used for games or graphics apps. Any recommendations?

    Read the article

< Previous Page | 141 142 143 144 145 146 147 148 149 150 151 152  | Next Page >