Search Results

Search found 10022 results on 401 pages for 'platform games'.

Page 336/401

  • Inkscape: what are "line" objects?

    - by Peter Mortensen
    What is a "line" object in Inkscape? Drawing lines in Inkscape is done with the tool "Draw Bezier curves and straight lines (Shift+F6)", which creates objects of another type, "path". Using Inkscape, is there a way to convert an object of type "line" into an object of the more general type "path"? I have imported a drawing (mostly lines, rectangles and text) that has been through Adobe Illustrator: originally made in Inkscape, imported into Illustrator, edited, saved from Illustrator as SVG, and imported back into Inkscape. Sample from the imported SVG file:
      <path id="path5855" stroke="#000000" d=" M320.198,275.935" />
      <line fill="none" stroke="#000000" x1="348.553" y1="45.097" x2="348.553" y2="185.346" id="line3368" />
    Update 1: I have inspected the original XML (SVG) file from 2006 and it does not contain any "line" XML tags, so it must be a crime of Adobe Illustrator. When a line in this imported SVG file is selected, the bottom panel displays: "Line in root. Click selection to toggle scale/rotation handles." When a line that was drawn in Inkscape is selected, the bottom panel displays: "Path (2 nodes) in Layer 1. Click selection to toggle scale/rotation handles." What is the difference between "line" and "path"? Is "line" some kind of read-only/non-editable object? A generic term like "line" is not easy to search for, but I have now found the definitions for "line" and "path": SVG line: http://www.w3schools.com/svg/svg_line.asp SVG path: http://www.w3schools.com/svg/svg_path.asp Platform: Inkscape v0.46 (2008-03-10), Windows XP 64 bit, 8 GB RAM.
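
    For reference, SVG defines <line> as a basic shape element and <path> as the general curve element; they describe the same geometry differently, and Inkscape only offers node editing for paths (which matches the "Line in root" vs "Path (2 nodes)" status-bar difference above). If memory serves, Inkscape's Path > Object to Path (Shift+Ctrl+C) performs the conversion in place. As a hand-written sketch using the coordinates from the sample, the equivalent path (moveto/lineto, "M x,y L x,y") would be:
      <!-- the <line> element produced by Illustrator -->
      <line fill="none" stroke="#000000" x1="348.553" y1="45.097" x2="348.553" y2="185.346" id="line3368" />
      <!-- an equivalent <path>, editable node-by-node in Inkscape (id changed to avoid a duplicate) -->
      <path fill="none" stroke="#000000" d="M 348.553,45.097 L 348.553,185.346" id="line3368-as-path" />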

    Read the article

  • Too much memory consumed during TFS automated build

    - by Bernard Chen
    We're running TFS 2010 Standard Edition, and we've set up an automated build to run whenever someone checks in code. We run through all of the automated tests (built with MSTest) as part of the build. We've configured the build to run the tests as a 64-bit process, but the QTAgent.exe that runs the tests grows in memory while the tests are running. It is currently reaching 8GB for the ~650 tests we have, and the process has slowed significantly when we went from 450 tests to 650 tests. When we run all of the tests in the local development environment, memory seems to be freed at least with each TestClass and never exceeds a certain level. The process of running all tests has not increased significantly in the local development environment. Is there a way to configure the build service to free up memory with each Test or each TestClass? With the way things are currently running, the build process gets very slow when we start to run out of memory on the machine. Edit: I found the MSTest invocation in the build log and ran it manually and saw the same behavior of runaway memory. I removed the /publish, /publishbuild, /teamproject, /platform, and /flavor parameters from the invocation of MSTest, in case the test runner was holding onto results until the end, but the behavior didn't change. I ran the same command line on a dev box, separate from the build server, and the memory freed up frequently. It seems there must be something wrong/different about the build server that is causing it to behave different, but I'm stumped where to look. I've looked at qtagent.exe.config, mstest.exe.config, versions of both executables. What else might affect this?

    Read the article

  • Enabling syntax highlighting for LESS in Programmer's Notepad?

    - by Cody Gray
    When I don't feel like firing up the Visual Studio behemoth, or when I don't have it installed, I always turn to Programmer's Notepad. It's an amazingly light and fast little text editor, with the special advantage that it is completely platform-native and conforms to standard UI conventions. Therefore, please do not suggest that I consider using other text editors. I've already considered and rejected them because they do not use native UI controls. I like Programmer's Notepad, thank you very much. Unfortunately, I've recently begun to learn, use, and love LESS for all of my CSS coding needs, and it appears that Programmer's Notepad is not bundled with a syntax highlighting scheme for LESS. Does anyone know if there is—by chance and good fortune—one already available somewhere on the web that some kind soul has tediously prepared? If not, how can I go about writing one of my own? Is there a way to build on the existing CSS scheme? It's also possible that any code coloring scheme designed for Scintilla-based editors will work, as Programmer's Notepad is based on the Scintilla control. If you know of a LESS highlighting scheme for Scintilla-based editors, and how to use that with Programmer's Notepad, please suggest that as well.

    Read the article

  • apache vhost not working consistently

    - by petrus
    I have a vhost on my webserver whose sole and unique goal is to return the client IP address:
      petrus@bzn:~$ cat /home/vhosts/domain.org/index.php
      <?php echo $_SERVER['REMOTE_ADDR']; echo "\n" ?>
    This helps me troubleshoot networking issues, especially when NAT is involved. As such, I don't always have domain name resolution and this service needs to work even if queried by its IP address. I'm using it this way:
      petrus@hive:~$ echo "GET /" | nc 88.191.124.41 80
      191.51.4.55
      petrus@hive:~$ echo "GET /" | nc domain.org 80
      191.51.4.55
      router#more http://88.191.124.41/index.php
      88.191.124.254
    However, I found that it wasn't working from at least one computer:
      petrus@seth:~$ echo "GET /" | nc domain.org 80
      petrus@seth:~$
      petrus@seth:~$ echo "GET /" | nc 88.191.124.41 80
      petrus@seth:~$
    What I checked: this is not related to IPv6:
      petrus@seth:~$ echo "GET /" | nc -4 ydct.org 80
      petrus@seth:~$
      petrus@hive:~$ echo "GET /" | nc ydct.org 80
      2a01:e35:ee8c:180:21c:77ff:fe30:9e36
    The netcat version is the same (except platform, i386 vs x64):
      petrus@seth:~$ type nc
      nc est haché (/bin/nc)
      petrus@seth:~$ file /bin/nc
      /bin/nc: symbolic link to `/etc/alternatives/nc'
      petrus@seth:~$ ls -l /etc/alternatives/nc
      lrwxrwxrwx 1 root root 15 2010-06-26 14:01 /etc/alternatives/nc -> /bin/nc.openbsd
      petrus@hive:~$ type nc
      nc est haché (/bin/nc)
      petrus@hive:~$ file /bin/nc
      /bin/nc: symbolic link to `/etc/alternatives/nc'
      petrus@hive:~$ ls -l /etc/alternatives/nc
      lrwxrwxrwx 1 root root 15 2011-05-26 01:23 /etc/alternatives/nc -> /bin/nc.openbsd
    It works when used without the pipe:
      petrus@seth:~$ nc domain.org 80
      GET /
      2a01:e35:ee8c:180:221:85ff:fe96:e485
    And the piping works at least with a test service (netcat listening on 1234/tcp and writing to stdout):
      petrus@bzn:~$ nc -l -p 1234
      GET /
      petrus@bzn:~$
      petrus@seth:~$ echo "GET /" | nc domain.org 1234
      petrus@seth:~$
    I don't know if this issue is more related to netcat or Apache, but I'd appreciate any pointers to troubleshoot it! The IP addresses have been modified but kept consistent for easy reading. bzn is the server, hive is a working client and seth is the client on which I have the issue.
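
    One detail worth noting: "GET /" on its own is an HTTP/0.9-style request, and some netcat variants close the connection as soon as stdin hits EOF, before the reply has arrived. As a hedged suggestion (not a confirmed diagnosis of seth's behaviour), sending a complete HTTP/1.0 request and, where supported, waiting after EOF removes both variables:
      # full request line plus the blank line that ends the headers
      printf 'GET /index.php HTTP/1.0\r\nHost: domain.org\r\n\r\n' | nc 88.191.124.41 80
      # some netcat builds also accept -q to linger N seconds after stdin closes:
      # printf 'GET / HTTP/1.0\r\n\r\n' | nc -q 2 88.191.124.41 80
    If the full request works from seth while the bare "GET /" does not, the difference lies in how the two netcat binaries handle EOF rather than in Apache.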

    Read the article

  • IPMI not functioning with Network Bonding

    - by muhammed sameer
    Hey, I am having problems running IPMI on my servers that have network bonding enabled. Platform: CentOS release 5.3 (Final), kernel 2.6.18-92.el5 64-bit, Dell PowerEdge 1950, Ethernet controller: Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet. I have bonded the interfaces eth0 and eth1 as active-passive, with eth0 as the active interface; below is the configuration description from /proc:
      Bonding Mode: fault-tolerance (active-backup)
      Primary Slave: eth0
      Currently Active Slave: eth0
      MII Status: up
      MII Polling Interval (ms): 30
      Up Delay (ms): 0
      Down Delay (ms): 0
      Slave Interface: eth0
      MII Status: up
      Link Failure Count: 0
      Permanent HW addr: 00:22:19:56:b9:cd
      Slave Interface: eth1
      MII Status: up
      Link Failure Count: 0
      Permanent HW addr: 00:22:19:56:b9:cf
    My IPMI device is as follows:
      IPMI Device Information
      Interface Type: KCS (Keyboard Control Style)
      Specification Version: 2.0
      I2C Slave Address: 0x10
      NV Storage Device: Not Present
      Base Address: 0x0000000000000CA8 (I/O)
      Register Spacing: 32-bit Boundaries
    I have used both OpenIPMI and FreeIPMI to control the chassis via the IPMI card, but on servers which have bonding enabled the command times out; below is the full run of the command with debug info:
      ipmi_lan_send_cmd:opened=[0], open=[4482848]
      IPMI LAN host 70.87.28.115 port 623
      Sending IPMI/RMCP presence ping packet
      ipmi_lan_send_cmd:opened=[1], open=[4482848]
      No response from remote controller
      Get Auth Capabilities command failed
      ipmi_lan_send_cmd:opened=[1], open=[4482848]
      No response from remote controller
      Get Auth Capabilities command failed
      Error: Unable to establish LAN session
      Failed to open LAN interface
      Unable to get Chassis Power Status
    On the other hand, I configured IPMI on a box with the same specs as above but without bonding, and IPMI works perfectly. Has anyone faced this problem with IPMI + bonding? I would be thankful if someone helps circumvent this issue. Muhammed Sameer
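
    A hedged place to start (assuming the BMC shares a physical port with eth0/eth1, as it usually does on a PowerEdge 1950 without a dedicated DRAC port): active-backup bonding can change the MAC address presented on the shared port, which can disturb the BMC's side-band traffic. Checking what the BMC itself believes about its LAN settings, with ipmitool run locally over the KCS interface, narrows things down:
      # on the affected host, via the in-band KCS interface (no network involved)
      ipmitool mc info              # confirms the BMC responds at all
      ipmitool lan print 1          # IP, netmask, gateway and MAC the BMC is using
      ipmitool mc reset cold        # last resort: reboot the BMC after changing the bonding setup
    If 'lan print' looks correct locally but RMCP pings from outside still time out, the shared-NIC/bonding interaction is the likely culprit rather than OpenIPMI or FreeIPMI themselves.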

    Read the article

  • How to do 'search for keyword in files' in emacs in Windows without cygwin?

    - by Anthony Kong
    I want to search for keyword, says 'action', in a bunch of files in my Windows PC with Emacs. It is partly because I want to learn more advanced features of emacs. It is also because the Windows PC is locked down by company policy. I cannot install useful applications like cygwin at will. So I tried this command: M-x rgrep It throws the following error message: *- mode: grep; default-directory: "c:/Users/me/Desktop/Project" -*- Grep started at Wed Oct 16 18:37:43 find . -type d "(" -path "*/SCCS" -o -path "*/RCS" -o -path "*/CVS" -o -path "*/MCVS" -o -path "*/.svn" -o -path "*/.git" -o -path "*/.hg" -o -path "*/.bzr" -o -path "*/_MTN" -o -path "*/_darcs" -o -path "*/{arch}" ")" -prune -o "(" -name ".#*" -o -name "*.o" -o -name "*~" -o -name "*.bin" -o -name "*.bak" -o -name "*.obj" -o -name "*.map" -o -name "*.ico" -o -name "*.pif" -o -name "*.lnk" -o -name "*.a" -o -name "*.ln" -o -name "*.blg" -o -name "*.bbl" -o -name "*.dll" -o -name "*.drv" -o -name "*.vxd" -o -name "*.386" -o -name "*.elc" -o -name "*.lof" -o -name "*.glo" -o -name "*.idx" -o -name "*.lot" -o -name "*.fmt" -o -name "*.tfm" -o -name "*.class" -o -name "*.fas" -o -name "*.lib" -o -name "*.mem" -o -name "*.x86f" -o -name "*.sparcf" -o -name "*.dfsl" -o -name "*.pfsl" -o -name "*.d64fsl" -o -name "*.p64fsl" -o -name "*.lx64fsl" -o -name "*.lx32fsl" -o -name "*.dx64fsl" -o -name "*.dx32fsl" -o -name "*.fx64fsl" -o -name "*.fx32fsl" -o -name "*.sx64fsl" -o -name "*.sx32fsl" -o -name "*.wx64fsl" -o -name "*.wx32fsl" -o -name "*.fasl" -o -name "*.ufsl" -o -name "*.fsl" -o -name "*.dxl" -o -name "*.lo" -o -name "*.la" -o -name "*.gmo" -o -name "*.mo" -o -name "*.toc" -o -name "*.aux" -o -name "*.cp" -o -name "*.fn" -o -name "*.ky" -o -name "*.pg" -o -name "*.tp" -o -name "*.vr" -o -name "*.cps" -o -name "*.fns" -o -name "*.kys" -o -name "*.pgs" -o -name "*.tps" -o -name "*.vrs" -o -name "*.pyc" -o -name "*.pyo" ")" -prune -o -type f "(" -iname "*.sh" ")" -exec grep -i -n "action" {} NUL ";" FIND: Parameter format not correct Grep exited abnormally with code 2 at Wed Oct 16 18:37:44 I believe rgrep tried to spwan a process and called 'FIND' with all the parameters. However, since it is a Windows, the default Find executable simply does not know how to handle. What is the better way to search for a keyword in multiple files in Emacs on Windows platform, without any dependency on external programs? Emacs version: 24.2.1
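
    rgrep shells out to find and grep, and on a stock Windows box the FIND.EXE that gets picked up is the unrelated DOS utility, hence "Parameter format not correct". One dependency-free sketch, assuming the built-in find-lisp and occur facilities are acceptable and that *.sh really is the file pattern wanted, is to walk the tree in Emacs Lisp and run multi-occur over the matching files:
      ;; evaluate in *scratch* or via M-: (eval-expression)
      (require 'find-lisp)
      (multi-occur
       (mapcar #'find-file-noselect
               (find-lisp-find-files "c:/Users/me/Desktop/Project" "\\.sh$"))
       "action")
    This opens each matching file in a buffer, so it is only practical for modestly sized trees; the resulting *Occur* buffer lists every hit and RET on a line jumps to it.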

    Read the article

  • MySQL tmpdir on /dev/shm with SELinux

    - by smorfnip
    On RHEL5, I have a small MySQL database that has to write temp files. To speed up this process, I would like to move the temporary directory to /dev/shm by putting the following line into my.cnf: tmpdir=/dev/shm/mysqltmp I can create /dev/shm/mysqltmp just fine and do chown mysql:mysql /dev/shm/mysqltmp chcon --reference /tmp/ /dev/shm/mysqltmp I've tried to make SELinux happy by applying the same settings that are in effect for /tmp/ (and /var/tmp/), which is presumably where MySQL is writing its tmp files if tmpdir is undefined. The problem is that SELinux complains about MySQL having access to that directory. I get the following in /var/log/messages: SELinux is preventing mysqld (mysqld_t) "getattr" to /dev/shm (tmpfs_t). SELinux is a hard mistress. Details: Source Context root:system_r:mysqld_t Target Context system_u:object_r:tmpfs_t Target Objects /dev/shm [ dir ] Source mysqld Source Path /usr/libexec/mysqld Port <Unknown> Host db.example.com Source RPM Packages mysql-server-5.0.77-3.el5 Target RPM Packages Policy RPM selinux-policy-2.4.6-255.el5_4.1 Selinux Enabled True Policy Type targeted MLS Enabled True Enforcing Mode Enforcing Plugin Name catchall_file Host Name db.example.com Platform Linux db.example.com 2.6.18-164.2.1.el5 #1 SMP Mon Sep 21 04:37:42 EDT 2009 x86_64 x86_64 Alert Count 46 First Seen Wed Nov 4 14:23:48 2009 Last Seen Thu Nov 5 09:46:00 2009 Local ID e746d880-18f6-43c1-b522-a8c0508a1775 ls -lZ /dev/shm shows drwxrwxr-x mysql mysql system_u:object_r:tmp_t mysqltmp and permissions for /dev/shm itself are drwxrwxrwt root root system_u:object_r:tmpfs_t shm I've also tried chcon -R -t mysqld_t /dev/shm/mysqltmp and setting the group on /dev/shm to mysql with no better results. Shouldn't it be enough to tell SELinux, hey, this is a temp directory just like MySQL was using before? Short of turning off SELinux, how do I make this work? Do I need to edit SELinux policy files?
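
    The chcon --reference /tmp/ call labels the new directory tmp_t, but the logged denial is a getattr on /dev/shm itself (tmpfs_t), which the stock mysqld policy does not allow. Two hedged approaches, assuming the RHEL5 targeted policy (the exact type names may differ between policy versions):
      # 1) label the directory with MySQL's own tmp type and make the labeling persistent
      semanage fcontext -a -t mysqld_tmp_t "/dev/shm/mysqltmp(/.*)?"
      restorecon -Rv /dev/shm/mysqltmp
      # 2) if denials against /dev/shm (tmpfs_t) itself remain, build a small local policy module from the audit log
      grep mysqld /var/log/audit/audit.log | audit2allow -M mysqlshm
      semodule -i mysqlshm.pp
    The second route keeps SELinux enforcing while granting only the specific accesses mysqld was denied.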

    Read the article

  • Web Deploy 3.0 Installation Fails

    - by jkarpilo
    I am having difficulty installing Microsoft Web Deploy 3.0 to a Windows Server 2008 R2 box. I have tried installing with both the Web Platform Installer and the MSI package but installation fails while trying to execute the MSI custom action ExecuteRegisterUIModuleCA. This server is a VM and a member of a farm but shared config is disabled while I'm installing. Here's the point at which it fails in the MSI log (starting at line 1875): MSI (s) (80:FC) [15:29:01:358]: Executing op: ActionStart(Name=IISBeginTransactionCA,,) MSI (s) (80:FC) [15:29:01:374]: Executing op: CustomActionSchedule(Action=IISBeginTransactionCA,ActionType=3073,Source=BinaryData,Target=IISBeginTransactionCA,) MSI (s) (80:A8) [15:29:01:374]: Invoking remote custom action. DLL: C:\Windows\Installer\MSI6C6A.tmp, Entrypoint: IISBeginTransactionCA MSI (s) (80:FC) [15:29:01:436]: Executing op: ActionStart(Name=IISRollbackTransactionCA,,) MSI (s) (80:FC) [15:29:01:436]: Executing op: CustomActionSchedule(Action=IISRollbackTransactionCA,ActionType=3329,Source=BinaryData,Target=IISRollbackTransactionCA,) MSI (s) (80:FC) [15:29:01:436]: Executing op: ActionStart(Name=IISCommitTransactionCA,,) MSI (s) (80:FC) [15:29:01:436]: Executing op: CustomActionSchedule(Action=IISCommitTransactionCA,ActionType=3585,Source=BinaryData,Target=IISCommitTransactionCA,) MSI (s) (80:FC) [15:29:01:436]: Executing op: ActionStart(Name=IISExecuteCA,,) MSI (s) (80:FC) [15:29:01:452]: Executing op: CustomActionSchedule(Action=IISExecuteCA,ActionType=3073,Source=BinaryData,Target=IISExecuteCA,CustomActionData=1^3^21^WebDeployment_Current^154^Microsoft.Web.Deployment.UI.PackagingModuleProvider, Microsoft.Web.Deployment.UI.Server, Version=9.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35^1^1^0^^1^3^28^DelegationManagement_Current^171^Microsoft.Web.Management.Delegation.DelegationModuleProvider, Microsoft.Web.Management.Delegation.Server, Version=9.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35^1^1^0^^1^7^38^system.webServer/management/delegation^4^Deny^16^MachineToWebRoot^0^^3^yes^1^7^31^system.webServer/wdeploy/backup^4^Deny^20^MachineToApplication^0^^2^no^) MSI (s) (80:84) [15:29:01:452]: Invoking remote custom action. DLL: C:\Windows\Installer\MSI6CB9.tmp, Entrypoint: IISExecuteCA 1: IISCA IISExecuteCA : Begin CA Setup 1: IISCA IISExecuteCA : CA 'ExecuteRegisterUIModuleCA' completed with return code hr=0x8007000d 1: IISCA IISExecuteCA : CA 'IISExecuteCA' completed with return code hr=0x8007000d 1: IISCA IISExecuteCA : End CA Setup CustomAction IISExecuteCA returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox) Action ended 15:29:05: InstallFinalize. Return value 3. I can't seem to find any information regarding this particular issue; can someone help point me in the right direction?
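
    hr=0x8007000d is "The data is invalid", and the CustomActionData above shows ExecuteRegisterUIModuleCA registering IIS Manager UI module providers, so the usual suspect is the IIS management configuration on this particular node rather than the package itself. Two low-risk checks, offered as suggestions rather than a confirmed fix: capture a full verbose log of the standalone MSI (adjust the file name to the package you downloaded), and, with shared config disabled, verify that %windir%\system32\inetsrv\config\administration.config, where UI module providers are registered, is present and parses as valid XML.
      msiexec /i WebDeploy_amd64_en-US.msi /L*V "%TEMP%\webdeploy_install.log"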

    Read the article

  • mdadm+zfs vs mdadm+lvm

    - by Alex
    This may be a naive question since I'm new to this and I cannot find any results about mdadm+zfs, but after some testing it seems it might work. The use case is a server with RAID6 for some data that is backed up somewhat infrequently. I think I'm well served by either ZFS or RAID6. The platform is Linux. Performance is secondary. So the two setups I am considering are:
      A RAID6 array plus regular LVM and ext4
      A RAID6 array plus ZFS (without redundancy)
    It is this second option that I don't see discussed at all. Why ZFS+RAID6? It's mainly because of the inability of ZFS to grow a raidz2 with new disks. You can replace disks with larger ones, I know, but not add another disk. You can accomplish two-disk redundancy and ZFS disk growth using mdadm as the redundancy layer. Besides that main point (otherwise I could go directly to raidz2 without RAID under it), these are the pros and cons that I see for each option:
      ZFS has snapshots without preallocated space. LVM requires preallocation (might no longer be true).
      ZFS has checksumming (very interested in this) and compression (nice bonus).
      LVM has online filesystem growth (ZFS can do it offline with export/mdadm --grow/import).
      LVM has encryption (ZFS-on-Linux has not). This is the only major con of this combo I see.
    I guess I could go RAID6+LVM+ZFS... seems too heavy, or not? So, to close with a proper question: 1) Is there anything that inherently discourages or precludes RAID6+ZFS? Does anyone have experience with a setup like this? 2) Are there possibilities for checksumming and compression that would make ZFS unnecessary (maintaining the possibility of filesystem growth)? Because the RAID6+LVM combo seems the sanctioned, tested way.
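
    Nothing inherently precludes the combination: ZFS simply treats /dev/md0 as one large vdev, so checksum-based detection and compression still work, but self-healing is lost because there is no second copy to repair from (unless copies=2 is set), and mdadm's rebuild behaviour sits underneath. A minimal sketch of the layering, with device names and counts purely illustrative:
      # hypothetical 6-disk RAID6 under mdadm, then a single-vdev pool on top
      mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
      zpool create tank /dev/md0
      zfs set compression=on tank
      # growing later: add a disk to the md array, then let ZFS use the new space
      mdadm --add /dev/md0 /dev/sdh
      mdadm --grow /dev/md0 --raid-devices=7
      zpool set autoexpand=on tank      # or: zpool online -e tank /dev/md0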

    Read the article

  • Slow VM on esxi 4.1

    - by user57432
    We have a 64-bit FreeBSD VM running on ESXi 4.1; the hardware platform is a Dell R710 with 2 x 56xx (Intel 6-core CPUs) and 48 GB RAM. The FreeBSD VM is very slow: when we compile/build something on it, it takes 5 minutes even though it reports "build time 18 seconds." There are no VMware Tools installed on the VM. The same VM is installed on another R710 running ESXi 4.0 for Dell and there are no problems with that one. Does anyone have any idea what to look for? The VMs on the second server (ESXi 4.1) are clones of the VMs running on the first VM server (ESXi 4.0 Dell edition). It's not possible for me to move the VM back to the first server since the file containing the VM is too big. We installed the new ESXi with a datastore using 8 MB blocks because 1 MB blocks didn't allow for the file size we needed. It looks like the web server on the new ESXi 4.1 works fine, but I haven't really tested it. VMware Tools are not installed on any of the (FreeBSD) VMs. The block size of the second (ESXi 4.1) datastore is 8 MB, and 1 MB on the first (ESXi 4.0).

    Read the article

  • Is it necessary to burn-in RAM for server-class systems?

    - by ewwhite
    When using server-class systems with ECC RAM, is it necessary or even useful to burn in the memory DIMMs prior to deployment? I've encountered an environment where all server RAM is put through a lengthy multi-day burn-in/stress-testing process. This has delayed system deployments on occasion and adds an extra step to the hardware lead time. The server hardware is primarily Supermicro, so the RAM is sourced from a variety of vendors; not directly from the manufacturer as with a Dell PowerEdge or HP ProLiant. Is this process useful? In my past experience, I simply used vendor RAM out of the box. Isn't that what the POST memory tests are for? I've encountered and responded to ECC errors long before a DIMM actually failed, and the ECC thresholds were usually the trigger for warranty replacement. Do you burn your RAM in? If so, what method do you use to perform the tests? Has the burn-in process resulted in any additional platform stability? Has it identified any pre-deployment problems?
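
    For what it's worth, where a burn-in is done at all it is usually either a bootable memtest86+ pass or an in-OS run of memtester; a sketch of the latter (the size and iteration count are arbitrary examples, and the machine needs that much RAM free to lock):
      # lock and test 16 GB of RAM for 3 passes; a non-zero exit status means a failure was found
      sudo memtester 16G 3
      echo $?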

    Read the article

  • Python easy_install confused on Mac OS X

    - by slf
    environment info: $ echo $PATH /opt/local/bin:/opt/local/sbin:/sw/bin:/sw/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/usr/X11R6/bin:/opt/local/bin:/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:~/.utility_scripts $ which easy_install /usr/bin/easy_install specifically, let's try the simplejson module (I know it's the same thing as import json in 2.6, but that isn't the point) $ sudo easy_install simplejson Searching for simplejson Reading http://pypi.python.org/simple/simplejson/ Reading http://undefined.org/python/#simplejson Best match: simplejson 2.1.0 Downloading http://pypi.python.org/packages/source/s/simplejson/simplejson-2.1.0.tar.gz#md5=3ea565fd1216462162c6929b264cf365 Processing simplejson-2.1.0.tar.gz Running simplejson-2.1.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-Ojv_yS/simplejson-2.1.0/egg-dist-tmp-AypFWa The required version of setuptools (>=0.6c11) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U setuptools'. (Currently using setuptools 0.6c9 (/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python)) error: Setup script exited with 2 ok, so I'll update setuptools... $ sudo easy_install -U setuptools Searching for setuptools Reading http://pypi.python.org/simple/setuptools/ Best match: setuptools 0.6c11 Processing setuptools-0.6c11-py2.6.egg setuptools 0.6c11 is already the active version in easy-install.pth Installing easy_install script to /usr/local/bin Installing easy_install-2.6 script to /usr/local/bin Using /Library/Python/2.6/site-packages/setuptools-0.6c11-py2.6.egg Processing dependencies for setuptools Finished processing dependencies for setuptools I'm not going to speculate, but this could have been caused by any number of environment changes like the Leopard - Snow Leopard upgrade, MacPorts or Fink updates, or multiple Google App Engine updates.
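
    Note what the second transcript says: the updated setuptools installed a new easy_install into /usr/local/bin, while the shell still resolves easy_install to the Apple-supplied copy in /usr/bin, because /usr/bin comes before /usr/local/bin in the PATH shown above (and the old path may also be hashed). A quick way to confirm and work around it:
      hash -r                                        # forget the shell's cached location of easy_install
      ls -l /usr/local/bin/easy_install*             # the scripts installed by setuptools 0.6c11
      sudo /usr/local/bin/easy_install simplejson    # call the new one explicitly
    Putting /usr/local/bin ahead of /usr/bin in PATH makes the fix stick for future sessions.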

    Read the article

  • Birt on Tomcat unable to find JARs

    - by LostInTheWoods
    First, my setup: BiRT Runtime: 3.7.2. Ubuntu 10.04 Tomcat 6 Sun Java 1.6.0 I have a jar file I want to deploy onto the Tomcat server so it is usable by the runtime, so I placed the jar file in /var/lib/tomcat6/webapps/birt/WEB-INF/lib. As I understand it this is the default location for JAR files that are going to be used by a BiRT report. But the jar file is not accessible by the report that is trying to call it. In the BiRT logs I see: Error evaluating Javascript expression. Script engine error: ReferenceError: "DynDSinfo" is not defined. (/report/data-sources/oda-data-source[@id="54"]/method[@name="beforeOpen"]#20) Script source: /report/data-sources/oda-data-source[@id="54"]/method[@name="beforeOpen"], line: 0, text: __bm_beforeOpen() org.eclipse.birt.data.engine.core.DataException: Fail to execute script in function __bm_beforeOpen(). Source: "DynDSinfo" is the class I am trying to reference.. and now for the kicker... this works fine on Tomcat6 on Windows 7. The same files in the same places. So is there some additional configuration or some environmental variable that needs to be set, or something different on the Linux (Ubuntu) platform? All help or ideas gratefully received, Stephen
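
    Since the same WAR layout works under Tomcat 6 on Windows 7, the usual Linux-specific culprits are file ownership/permissions (tomcat6 runs as its own user on Ubuntu) and case sensitivity in the jar or class names. A few hedged checks; "mylib.jar" below is a placeholder for whichever archive actually contains DynDSinfo:
      # can the tomcat6 user read the lib directory and the jar?
      ls -l /var/lib/tomcat6/webapps/birt/WEB-INF/lib/
      # is the class really inside the jar, with exactly this spelling and package path?
      unzip -l /var/lib/tomcat6/webapps/birt/WEB-INF/lib/mylib.jar | grep DynDSinfo
      # restart so the webapp actually reloads WEB-INF/lib
      sudo service tomcat6 restart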

    Read the article

  • Issues with creating USB bootable Mountain Lion

    - by Sidd
    I am trying to set up a triple boot Windows 8, Mountain Lion, and Ubuntu. I am stuck though. I have got Windows 8 on a partition, and I am trying to get Mountain Lion on there at this point. I installed a VMware with a Snow Leopard 10.6.2 image on the Windows 8 platform. I used the disk utility in this program in order to get Mountain Lion on there. This is what i did specifically: I got the installesd.dmg. I 'mounted' that file or whatever you call it, and out came something along the lines of "Install Mountain Lion OS x" (something like that - it was like a submenu under the installesd.dmg in the disk utility). I got my PNY 8 gb Attache Flash Drive and went to the Erase tab of disk utility. I erased it using the Mac OS Extended (Journaled) setting and called it "Mac". I went to the Restore tab, dragged "Mac" into destination, and dragged "Install Mountain Lion OS x" to the source. Everything seemed to go well, but it didn't. When trying to boot from the flash drive (and yes, I set the BIOS correctly), it skipped it, and loaded Windows 8 normally as if nothing was plugged in. When I try looking at the flash drive in windows 8, it comes up as a 200 mb capacity drive labeled "EFI" with nothing in it (remember, it was 8gb in the beginning). I downloaded Plop Boot Manager, but it did not recognize a USB being plugged in. Does anyone know how I could fix this?

    Read the article

  • WIM2VHD failing with "Cannot derive Volume GUID from mount point."

    - by Jacob
    I'm trying to use WIM2VHD according to the instructions in Scott Hanselman's blog post to create a Sysprepped VHD image to boot from. I've installed the WAIK, and I have my Windows 7 sources mounted as a virtual drive. When I try to run WIM2VHD like this:
      cscript WIM2VHD.wsf /wim:F:\sources\install.wim /sku:Ultimate /vhd:E:\WindowsSeven.vhd /size:30721
    I get the following log:
      Log for WIM2VHD 6.1.7600.0 on 11/2/2009 at 10:51:18.16
      Copyright (C) Microsoft Corporation. All rights reserved.
      MACHINE INFO: Build=7600 Platform=x86fre OS=Windows 7 Ultimate ServicePack= Version=6.1 BuildLab=win7_rtm BuildDate=090713-1255 Language=en-ZA
      INFO: Looking for IMAGEX.EXE...
      INFO: Looking for BCDBOOT.EXE...
      INFO: Looking for BCDEDIT.EXE...
      INFO: Looking for REG.EXE...
      INFO: Looking for DISKPART.EXE...
      INFO: Session key is E01E1ED7-C197-4814-BDE4-43B73E14FCC4
      INFO: Inspecting the WIM...
      INFO: Configuring and formatting the VHD...
      *******************************************************************************
      Error: 0: Cannot derive Volume GUID from mount point.
      *******************************************************************************
      INFO: Unmounting the VHD due to error...
      WARNING: In order to help resolve the issue, temporary files have not been deleted. They are in: C:\Users\Jacob\AppData\Local\Temp\WIM2VHD.WSF\E01E1ED7-C197-4814-BDE4-43B73E14FCC4
      Summary: Errors: 1, Warnings: 1, Successes: 0
      INFO: Done.
    Any ideas?

    Read the article

  • How can I stream audio signals from various devices/computers to my home server?

    - by Breakthrough
    I currently have a headless home server set up (running Ubuntu 12.04 server edition) running a simple Apache HTTP server. The server is near an audio receiver, which controls a set of indoor and outdoor speakers in my home. Recently, my father purchased a Bluetooth adapter, which our various laptops and cellphones can connect to, outputting the music to the speakers. I was hoping to find a solution that worked over Wi-Fi, namely because it won't cost anything (I already have a server with an audio card), and it doesn't depend on Bluetooth. Is there any cross-platform (preferably free and open-source) solution that I can use which will allow me to stream audio to my home server, over my home network, from a wide variety of devices (laptops running Windows/Linux or cellphones running Android/BB/iOS)? I need something that works at least with Windows and Android. Also, just to clarify, I want something that simply allows devices to connect to my server and output an audio signal without any action on the server end (since it's a server hidden away near my receiver). Any subsequent connection attempt should be dropped, so only one device can be in control of the stereo at once.
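
    One free sketch worth considering is exposing a PulseAudio network sink on the server and pointing each client at it. Linux senders work out of the box; Windows and Android senders need third-party PulseAudio builds or apps, so treat this as a starting point rather than a complete answer (addresses below are example placeholders). On the Ubuntu 12.04 server:
      # load the TCP protocol module so remote clients may connect (LAN-only auth shown; tighten as needed)
      pactl load-module module-native-protocol-tcp auth-ip-acl=192.168.0.0/24 auth-anonymous=1
      # a Linux client can then direct its audio at the server:
      PULSE_SERVER=192.168.0.10 paplay some.wav
    Because only one stream owns the sound card's output at a time in practice, this also gets close to the "one device in control" requirement, though it does not enforce it.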

    Read the article

  • Process killing trouble

    - by Aditya Singh
    I am trying to write server software, which involves a lot of testing, on the Java/Scala platform. Whenever I compile and execute the code, it starts listening on port 80. Sometimes I need to terminate it with Ctrl+C when it hangs; in that case, Ubuntu does not free the port, so in order to run the process again I have to restart the machine. I see this in ps aux:
      root 1924 0.0 0.0 5796 1660 pts/0 T 05:44 0:00 sudo scala -
      root 1925 0.2 1.5 491448 40796 pts/0 Tl 05:44 0:03 java -Xmx256M -Xms16M
    So, processes 1924 and 1925. I did sudo kill on both of these, but they keep persisting even after a long time.
      sudo nmap -T Aggressive -A -v 127.0.0.1 -p 1-65000
      Scanning localhost (127.0.0.1) [65000 ports]
      Discovered open port 80/tcp on 127.0.0.1
    So it is still there!
      sudo netstat --tcp --udp --listening --program
      tcp6 0 0 [::]:www [::]:* LISTEN 1925/java
      tcp6 0 0 ip6-localhost:ipp [::]:* LISTEN 1185/cupsd
    This means it is 1925 - java. How do I kill it?
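
    The "T" and "Tl" in the STAT column mean both processes are stopped, and a stopped process generally will not act on an ordinary SIGTERM until it is resumed (SIGKILL works regardless). A short sequence that usually clears this without a reboot:
      sudo lsof -i :80            # confirm which PID still owns port 80
      sudo kill -CONT 1925 1924   # resume the stopped processes...
      sudo kill -TERM 1925 1924   # ...so they can handle termination
      sudo kill -9 1925 1924      # if they ignore that, force it
      sudo fuser -k 80/tcp        # or simply kill whatever is holding the port
    If, after the JVM is really gone, a restart of the server still reports "address already in use", that is the TIME_WAIT state on the old socket; setting SO_REUSEADDR on the listening socket in the server code avoids it.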

    Read the article

  • Finding a backup and synchronization solution

    - by Andrea Zilio
    I'm having difficulty finding a backup and synchronization solution with the following characteristics:
      Cross-platform: Windows, Linux, Mac
      Offsite backup (so Internet backup)
      Data deduplication
      Transfers only the new/modified bits of modified files
      Secure: data encrypted before leaving the computer
      Maintains multiple versions of files (even deleted files)
      Folder synchronization integrated with the backup and across multiple computers connected to the internet (not necessarily in the same LAN)
    I think the folder sync feature needs a better explanation. The use case is this: you have a desktop PC and a laptop. The desktop PC contains a folder with some files and this folder is part of the backup (so it was selected to be backed up). The laptop does not contain that folder or those files at all. Then you're abroad with your laptop and you need that folder. So you want to be able to open the backup program, select that folder from the backup and download it to your laptop, keeping it synchronized with the backed-up version. When you then come back home and switch on your desktop PC, you want the folder we're talking about to be updated on the desktop PC as well. Does anyone know of a service with all these features? I've only found SpiderOak to support all the features I've mentioned, but I'm not completely satisfied by the time taken to complete a backup. Sometimes it seems to hang for minutes for no reason at all, and folder synchronization occurs only after all files are backed up (instead, folder sync should have a separate queue independent of other backup operations, and synchronization should occur frequently, for example every 5 minutes or less, independently of the frequency of normal backup operations).

    Read the article

  • What is optimal hardware configuration for heavy load LAMP application

    - by Piotr K.
    I need to run a Linux-Apache-PHP-MySQL application (the Moodle e-learning platform) for a large number of concurrent users - I am aiming for 5000 users. By concurrent I mean that 5000 people should be able to work with the application at the same time. "Work" means not only database reads but writes as well. The application is not very typical, since it does a lot of inserts/updates on the database, so caching techniques do not help much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind; for instance, one Apache thread usually occupies about 30-50 MB of RAM. I would be grateful for information on what hardware is needed to build a scalable configuration able to handle this kind of load. We are currently using two HP DL380s with two 4-core processors each, which are able to handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of box and build a cluster out of them, or is it better to go with more high-end hardware? I am particularly curious about:
      how many servers are needed and how powerful they should be (number of processors/cores, size of RAM)
      what network equipment should be used (what kind of switches, network cards)
      any other hardware, like particular disk storage solutions, etc., that is needed
    Another question is how to put everything together, that is, what the most optimal architecture is. Clustering with MySQL is rather hard (people are complaining about MySQL Cluster, even here on Stack Overflow).

    Read the article

  • How To Set Up A Loadbalanced High-Availability Apache Cluster On Windows

    - by bReAd
    Setting up a two-node Apache web server cluster that provides high-availability. In front of the Apache cluster we create a load balancer that splits up incoming requests between the two Apache nodes. Because we do not want the load balancer to become another “Single Point Of Failure”, we must provide high-availability for the load balancer, too. Therefore our load balancer will in fact consist out of two load balancer nodes that monitor each other using heartbeat, and if one load balancer fails, the other takes over silently. The following setup is proposed: Apache node 1: webserver1.example.com (webserver1) – IP address: 192.168.0.101; Apache document root: /var/www Apache node 2: webserver2.example.com (webserver2) – IP address: 192.168.0.102; Apache document root: /var/www Load Balancer node 1: loadb1.example.com (loadb1) – IP address: 192.168.0.103 Load Balancer node 2: loadb2.example.com (loadb2) – IP address: 192.168.0.104 Virtual IP Address: 192.168.0.105 (used for incoming requests) Currently, there are many solutions for Linux machines and there aren't any on windows. I've tried searching a long time for solutions on Windows platform How do I create the virtual IP in windows and perform monitoring and make the load balancer listen to the virtual IP Address?

    Read the article

  • access an IP restricted service from a dynamic IP (Broadband modem) on a windows machine

    - by Joel Alenchery
    Hi, I don't know if this is the correct place to ask this question, but here goes (please note that I am pretty much a newbie in terms of networking and I work primarily on the Windows platform). I have been working on accessing and consuming some web services in C#/ASP.NET; the web services that I consume are IP restricted. Currently they allow access only from my work network (we have a static IP set up through which all our internet requests are routed). Every now and then we have people who go out and about and are stuck using a USB-dongle-based internet connection, and hence are not able to access these web services that they are working on. What I would like to do is provide some way for these remote workers to access the IP-restricted web services using the static IP at our office. For example, when the remote worker tries to access a service, say http://exampleService.com, the request gets routed to some box at our office and then out to the actual service. That way the service always sees the static IP of the office and not the dynamic IP that the remote user is actually using. I have done a fair bit of googling and it's difficult to search for, as most of the results come back about dynamic DNS, which is not really what I am looking for. I have also looked at a couple of posts on here, namely "Accessing IP restricted server from dynamic IP", which does provide some insight, but the fellow seems to have access to the source that does the IP restriction and is able to change the restrictions; in my case I don't have that access. Another one that looked interesting was "Static IP for dynamic IP"; the first answer seems to be exactly what I need, but I don't know how I would go about doing the same on a Windows machine. Any help would be really appreciated. (Sorry about being so noob-ish.) PS: Right now everyone is using RDC/LogMeIn to access an internet-connected machine in the office to manually check the web service and get work done, which is a very tedious process.

    Read the article

  • Create Windows AMI with instance storage

    - by Jonathan Oliver
    I have a business use case and workflow where local/instance/ephemeral storage for an EC2 instance is ideal. Unfortunately I'm coupled to a Windows platform for this particular task and the EC2 Windows offering appears to have some deficiencies related to AMI creation. In essence, I'm trying to figure out if there's a way to attach local instance storage to a Windows EC2 instance using the typical command line interface (because the Amazon Website GUI doesn't support it) and then to somehow create an AMI based upon that. I've tried creating a snapshot and then creating a Windows AMI based upon the snapshot, but of course the docs say this is unsupported and makes an unbootable AMI. In short, here's what I'm trying to do: Be able to run a Windows instance (EBS/S3 instance doesn't matter) Attach local instance storage as drive D: Persist that configuration as an AMI such that I can start lots of them as necessary from either the GUI, command line, or REST API. Be able to take a launched instance, update software, shutdown, and create another AMI based upon that. Wash, rinse, repeat. One other potential option which isn't horrible, but isn't ideal is to create an AMI which has 2 EBS volumes already attached (system+apps and data). Essentially, every time I startup an instance based upon the AMI it'll create 2 new EBS volumes of pre-determined size. I'm trying to avoid that scenario if possible.
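
    For what it's worth, instance-store (ephemeral) volumes are attached through the AMI's block device mapping, and that mapping can be supplied at launch or carried into a new image, so every instance gets the extra drive without per-launch GUI work. A hedged sketch with the classic ec2-api-tools (the AMI/instance IDs, names and device letters are placeholders, and the -b spelling on ec2-create-image should be checked against the tools' help output; Windows typically surfaces xvdb as the next drive letter):
      # launch with the first ephemeral volume mapped to an additional device
      ec2-run-instances ami-12345678 -t m1.large -k my-key -b "xvdb=ephemeral0"
      # after updating software on that instance, capture it as a new AMI, carrying the mapping along
      ec2-create-image i-87654321 -n "my-windows-build-42" -b "xvdb=ephemeral0"
    Using CreateImage on a running instance is also the supported path for Windows, which sidesteps the unbootable register-from-snapshot problem described above.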

    Read the article

  • Installing VirtualBox on BackTrack 5

    - by m0skit0
    I'm getting this error when running VirtualBox's installation script: $ sudo ~/Downloads/VirtualBox-4.1.14-77440-Linux_x86.run Verifying archive integrity... All good. Uncompressing VirtualBox for Linux installation........... VirtualBox Version 4.1.14 r77440 (2012-04-12T16:20:44Z) installer Removing previous installation of VirtualBox 4.1.14 r77440 from /opt/VirtualBox Installing VirtualBox to /opt/VirtualBox tar: Record size = 8 blocks Python found: python, installing bindings... Building the VirtualBox kernel modules Error! Bad return status for module build on kernel: 3.2.6 (i686) Consult the make.log in the build directory /var/lib/dkms/vboxhost/4.1.14/build/ for more information. ERROR: binary package for vboxhost: 4.1.14 not found Here's the log: $ cat /var/lib/dkms/vboxhost/4.1.14/build/make.log DKMS make.log for vboxhost-4.1.14 for kernel 3.2.6 (i686) Sun May 13 14:32:52 CEST 2012 make: Entering directory `/usr/src/linux-headers-3.2.6' /usr/src/linux-headers-3.2.6/arch/x86/Makefile:39: /usr/src/linux-headers-3.2.6/arch/x86/Makefile_32.cpu: No such file or directory make: *** No rule to make target `/usr/src/linux-headers-3.2.6/arch/x86/Makefile_32.cpu'. Stop. make: Leaving directory `/usr/src/linux-headers-3.2.6' /usr/src/linux-headers-3.2.6/arch/x86/ directory: $ ls /usr/src/linux-headers-3.2.6/arch/x86/ Kconfig Makefile ia32 lguest mm pci tools video Kconfig.cpu boot kernel lib net platform um xen Kconfig.debug crypto kvm math-emu oprofile power vdso Makefile references on "cpu" $ cat /usr/src/linux-headers-3.2.6/arch/x86/Makefile | grep cpu include $(srctree)/arch/x86/Makefile_32.cpu # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu) Before upgrading to 3.X I didn't have this problem, the script would install VB correctly. Any ideas on what might be causing this? Thanks in advance!
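
    The module build is dying because the 3.2 header tree is missing arch/x86/Makefile_32.cpu, not because of anything VirtualBox-specific. A hedged workaround is to drop that one file in from the matching vanilla kernel source and re-run the installer (the URL below follows kernel.org's usual layout; adjust if these headers came from a distro-patched tree):
      cd /usr/src
      wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.2.6.tar.bz2
      tar xjf linux-3.2.6.tar.bz2 linux-3.2.6/arch/x86/Makefile_32.cpu
      cp linux-3.2.6/arch/x86/Makefile_32.cpu /usr/src/linux-headers-3.2.6/arch/x86/
      # then re-run the VirtualBox installer so the vboxhost modules rebuild
      sudo ~/Downloads/VirtualBox-4.1.14-77440-Linux_x86.run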

    Read the article

  • How to set up ProxMox 1.9 on VPN?

    - by Gnudiff
    Disclaimer: I have only rudimentary knowledge of VPNs. I would love to learn about them properly; however, at the moment I really need to make stuff work on short notice. I am trying to set up a Proxmox virtualization platform in an existing network. The network currently consists of several servers running the free edition of VMware. There is some sort of VPN defined in the switch: in order for the VMware management interface to be accessible, a checkbox for VPN has to be ticked in the network settings and the VPN id entered. I didn't notice any such configuration option during the Proxmox installation, so my Proxmox VE on the same physical server, using the same manual IP settings (ip/nm/gw), is not accessible. As I understand it, I should edit Proxmox's underlying Debian config in /etc/network/interfaces, but I have no idea what to aim for: do I specify the settings on eth0, or do I make a virtual interface? How do I make it accessible to both Proxmox VE and the future VMs on it? I read the Proxmox installation guide, but unfortunately it presumes a better understanding of VPNs than I have. A config template or similar would be appreciated. Thanks in advance.
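
    From this description, what the switch calls a "VPN id" is very likely an 802.1Q VLAN tag, which is why VMware only works once that id is entered. The Proxmox equivalent is to bridge onto a tagged sub-interface in /etc/network/interfaces; a hedged template, where the VLAN id 100 and the addresses are placeholders for the values the VMware hosts already use:
      # /etc/network/interfaces -- illustrative sketch only; may require: apt-get install vlan
      auto lo
      iface lo inet loopback
      iface eth0 inet manual
      auto vmbr0
      iface vmbr0 inet static
              address 192.168.10.20
              netmask 255.255.255.0
              gateway 192.168.10.1
              bridge_ports eth0.100      # ".100" stands in for the VPN/VLAN id from the switch
              bridge_stp off
              bridge_fd 0
    Guests attached to vmbr0 then come up inside the same VLAN with no extra per-VM settings; restart networking (or reboot the node) after editing.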

    Read the article
