Search Results

Search found 16135 results on 646 pages for 'quick launch bar'.

Page 459/646 | < Previous Page | 455 456 457 458 459 460 461 462 463 464 465 466  | Next Page >

  • Why can’t two programs access my webcam simultaneously?

    - by qdii
    I first launch cheese and my webcam turns on. I then run vlc to grab the output of /dev/video0 but it fails with:

        [0x7f3ea40012e8] v4l2 demux error: cannot set input 0: Device or resource busy
        [0x7f3ea40012e8] v4l2 demux error: cannot set input 0: Device or resource busy
        [0x7f3ea4002168] v4l2 access error: cannot set input 0: Device or resource busy
        [0x7f3ea4002168] v4l2 access error: cannot set input 0: Device or resource busy
        [0x7f3eb4000b78] main input error: open of `v4l2:///dev/video0' failed

    Whatever pair of video programs I run (skype, cheese, vlc, etc.), the result is always the same: the second program can no longer use the webcam when the first one has already grabbed the output. However I find it curious, as video4linux states:

        In general, V4L2 devices can be opened more than once. When this is supported by the driver, users can for example start a "panel" application to change controls like brightness or audio volume, while another application captures video and audio.

    My webcam is seen in lspci as 058f:a014 Alcor Micro Corp. Asus Integrated Webcam, but I don't even know what the underlying driver is, so I can't check whether my problem is driver-related or not. Any input would be more than welcome!
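    For reference, a hedged way to find out which kernel driver sits behind /dev/video0 (the sysfs path and module names are typical for USB webcams, not confirmed for this particular 058f:a014 device):

        # Which kernel module claims the device backing /dev/video0?
        readlink /sys/class/video4linux/video0/device/driver
        # List loaded webcam-related modules (uvcvideo/gspca are common USB webcam drivers)
        lsmod | grep -E 'uvcvideo|gspca'
        # Confirm the USB id reported for the camera
        lsusb | grep -i 058f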

    Read the article

  • unlinked libraries in a makefile?

    - by wyatt
    I'm trying to install libspopc, but when I run make I get the following output:

        cc -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL -c session.c
        cc -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL -c queries.c
        cc -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL -c parsing.c
        cc -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL -c format.c
        cc -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL -c objects.c
        cc -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL -c libspopc.c
        rm -f libspopc*.a
        ar r libspopc-0.9n.a session.o queries.o parsing.o format.o objects.o libspopc.o
        ar: creating libspopc-0.9n.a
        ranlib libspopc-0.9n.a
        ln -s libspopc-0.9n.a libspopc.a
        rm -f libspopc*.so
        cc -o libspopc-0.9n.so -shared session.o queries.o parsing.o format.o objects.o libspopc.o
        ln -s libspopc-0.9n.so libspopc.so
        cc -o poptest1 -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL examples/poptest1.c -L. -lspopc -lssl -lcrypto
        /usr/local/lib/libcrypto.a(dso_dlfcn.o): In function `dlfcn_globallookup':
        dso_dlfcn.c:(.text+0x2d): undefined reference to `dlopen'
        dso_dlfcn.c:(.text+0x43): undefined reference to `dlsym'
        dso_dlfcn.c:(.text+0x4d): undefined reference to `dlclose'
        /usr/local/lib/libcrypto.a(dso_dlfcn.o): In function `dlfcn_pathbyaddr':
        dso_dlfcn.c:(.text+0x8f): undefined reference to `dladdr'
        dso_dlfcn.c:(.text+0xe9): undefined reference to `dlerror'
        /usr/local/lib/libcrypto.a(dso_dlfcn.o): In function `dlfcn_bind_func':
        dso_dlfcn.c:(.text+0x491): undefined reference to `dlsym'
        dso_dlfcn.c:(.text+0x570): undefined reference to `dlerror'
        /usr/local/lib/libcrypto.a(dso_dlfcn.o): In function `dlfcn_bind_var':
        dso_dlfcn.c:(.text+0x5f1): undefined reference to `dlsym'
        dso_dlfcn.c:(.text+0x6d0): undefined reference to `dlerror'
        /usr/local/lib/libcrypto.a(dso_dlfcn.o): In function `dlfcn_unload':
        dso_dlfcn.c:(.text+0x735): undefined reference to `dlclose'
        /usr/local/lib/libcrypto.a(dso_dlfcn.o): In function `dlfcn_load':
        dso_dlfcn.c:(.text+0x817): undefined reference to `dlopen'
        dso_dlfcn.c:(.text+0x88e): undefined reference to `dlclose'
        dso_dlfcn.c:(.text+0x8d5): undefined reference to `dlerror'
        collect2: ld returned 1 exit status
        make: *** [poptest1] Error 1

    A quick search suggested that this was due to libdl not being linked, though this seems unlikely in a distributed library, particularly a seemingly relatively popular one. Could anything else be causing this? And if it is due to an unlinked library, how would I go about fixing it? Thanks
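    For reference: the undefined dlopen/dlsym/dlerror symbols come out of the static libcrypto.a, which itself depends on libdl. A hedged sketch of the usual fix is to append -ldl after -lcrypto on the failing link line in the Makefile; the target name is taken from the output above, the rest is illustrative:

        # In the libspopc Makefile, the poptest1 link line would become something like:
        cc -o poptest1 -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL \
           examples/poptest1.c -L. -lspopc -lssl -lcrypto -ldl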

    Read the article

  • Crash dump analysis

    - by Ryan Ries
    I hope this isn't a stupid question, and if it is, then I want to at least get it over with so I don't feel so dumb in the future. Here we are, loading up a Windows crash dump with Windbg. Here are the first few lines of the debugger output:

        0: kd> .dumpdebug
        ----- 64 bit Kernel Summary Dump Analysis
        DUMP_HEADER64:
        MajorVersion    0000000f
        MinorVersion    00001db1
        ...

    The MinorVersion I mostly understand. It's hexadecimal and it translates to 7601 in decimal. Windows admins would already be able to tell from that that this must be either a Win7 x64 machine or a 2k8 R2 machine with SP1. But isn't 7601 the build number? It's supposed to be Major.Minor.Build/Revision... right? Also I don't understand the MajorVersion. It should be 6. This version of Windows is 6. But isn't 0000000f in hexadecimal 15 in decimal? The full version string of this version of Windows, when you launch the Command Prompt for instance, is 6.1.7601. If 7601 is the MinorVersion, then what is 1 and what is 6? And why does the crash dump say 0F?

    Read the article

  • How can I fix my corrupted RAID1 ext4 partition on a Synology DS212 NAS?

    - by Neil
    I have two identical 3 TB disks that were in a RAID1 array, where one disk crashed. I replaced the failed disk, but not after the RAID partitions got messed up. I need to figure out how to restore the RAID array and get at my ext4 partition. Here are the properties of the surviving disk:

        # fdisk -l /dev/sda
        fdisk: device has more than 2^32 sectors, can't use all of them
        Disk /dev/sda: 2199.0 GB, 2199023255040 bytes
        255 heads, 63 sectors/track, 267349 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               1      267350  2147483647+  ee  EFI GPT

        # parted /dev/sda print
        Model: ATA ST3000DM001-9YN1 (scsi)
        Disk /dev/sda: 3001GB
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt
        Disk Flags:

        Number  Start   End     Size    File system     Name  Flags
         1      131kB   2550MB  2550MB  ext4                  raid
         2      2550MB  4698MB  2147MB  linux-swap(v1)        raid
         5      4840MB  3001GB  2996GB                        raid

    I replaced the failed drive, and cloned the surviving drive to it so I have something to work with. I cloned the drives with dd if=/dev/sdb of=/dev/sda conv=noerror bs=64M, and now /dev/sda and /dev/sdb are identical. Here is the RAID information:

        # cat /proc/mdstat
        Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
        md1 : active raid1 sdb2[1]
              2097088 blocks [2/1] [_U]

        md0 : active raid1 sdb1[1]
              2490176 blocks [2/1] [_U]

        unused devices: <none>

    It seems that md2 is missing. Here is what testdisk 6.14-WIP finds:

        Disk /dev/sda - 3000 GB / 2794 GiB - CHS 364801 255 63
        Current partition structure:
             Partition                  Start        End    Size in sectors
         1 P Linux Raid                   256    4980735    4980480 [md0]
         2 P Linux Raid               4980736    9175039    4194304 [md1]
        Invalid RAID superblock
         5 P Linux Raid               9453280 5860519007 5851065728
         5 P Linux Raid               9453280 5860519007 5851065728

        # After a quick search
        Disk /dev/sda - 3000 GB / 2794 GiB - CHS 364801 255 63
             Partition                  Start        End    Size in sectors
         D MS Data                        256    4980607    4980352 [1.41.12-2197]
         D Linux Raid                     256    4980735    4980480 [md0]
         D Linux Swap                 4980736    9174895    4194160
         D Linux Raid                 4980736    9175039    4194304 [md1]
        >P MS Data                    9481056 5858437983 5848956928 [1.41.12-2228]

    And listing the files on the last partition in the list shows all of my files intact. What should I do?
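    For reference, a hedged set of read-only checks that could be run before changing anything (device names follow the listing above; these are standard mdadm diagnostics, not a guaranteed recovery procedure):

        # Inspect whatever md superblock survives on the data partition of each disk
        mdadm --examine /dev/sda5
        mdadm --examine /dev/sdb5
        # Show what the kernel currently knows about the existing arrays
        mdadm --detail /dev/md0
        mdadm --detail /dev/md1
        # Only if a superblock is readable, a degraded assembly could then be attempted,
        # e.g. mdadm --assemble --run /dev/md2 /dev/sdb5   (illustrative only)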

    Read the article

  • Windows Authentication behaves oddly when VPN'd

    - by Dan F
    Hi all

    We've got a few apps that rely on windows authentication - a couple of web apps with AD auth turned on and we usually connect to our SQL servers with windows auth. This normally runs without a hitch. It doesn't work so well if we're VPN'd to a client site though.

    SSMS
    Opening SSMS normally from the start menu, then picking a server that normally accepts windows auth, results in a message saying:

        Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (.Net SqlClient Data Provider)

    If I drop to a command prompt and use runas /user:domain\user to launch SSMS I can successfully windows auth to our SQL server instances with that ssms process. If I look in task manager, both copies of ssms.exe (start menu vs runas) have the same user, and I can see no discernible differences between the processes in procexp.

    AD Auth websites
    If I open IE and browse to any of our websites that require an authenticated windows user, I get the "who are you" prompt, and that dialog thinks I'm whoever the VPN user is. I can click "Use another account" and authenticate that way though.

    Outlook
    Even Outlook prompts for a username when we are VPN'd!

    It's affecting our Win7 and Vista machines. It's been a while since we had an XP box, but I don't recall having this issue on XP for what it's worth. The VPN connections are just using the built in windows VPN connections, they're not fancy cisco VPNs or anything of that nature. Does anyone know how to tell windows that I'd like to be my normal old primary domain user rather than the VPN user when authenticating to resources in our domain? Heck, I'd be happy with a solution that prompted me with the "who are you" if I was trying to access windows auth requiring resources on the client's VPN. Thanks!

    Apologies if this is more a superuser question, I wasn't sure which site it best suited. It's about networking and infrastructure and plagues all of our developers here, so I hope it's a serverfault Q.

    Read the article

  • Windows 7: Touch gestures in IE not working without explorer.exe being run once

    - by Michael
    Details: Internet Explorer 9 and Windows 7 Professional, running on an HP TouchSmart (touch screen PC). It is going to be a kiosk PC (running a custom GUI for displaying websites).

    Scenario 1: When running Internet Explorer as a normal program in Windows 7, touch functions work perfectly. I can scroll the website by dragging it with my finger, I can pinch zoom and I can touch-and-hold right click. I now change the default shell in Windows to Internet Explorer (i.e. IE starts instead of explorer.exe). Internet Explorer of course starts up when logging in. However, touch functions are reduced to basic clicking (no dragging, no pinch zooming, no touch-and-hold right click). Then I manually start explorer.exe, and the touch functions work again! And here is the weird part: when I kill explorer.exe, the touch functions keep working - even if I close IE and start a new instance.

    Scenario 2: The exact same, but instead of changing the default shell to Internet Explorer, I change it to my own program, which uses an embedded Internet Explorer ("WebBrowser"). Same thing happens.

    What I've tried: Autorun programs: when explorer.exe launches, it launches all the autorun programs. There are no relevant programs being run by explorer, but just in case, I have manually started all the autorun programs, so that it is identical (but without explorer.exe) to a normal login. It still does not work (until I launch explorer.exe). Specifically TabTip.exe, TabTip32.exe and wisptis.exe are all running. All services are also started.

    To sum it up: running explorer.exe once changes something in the touch capabilities of Internet Explorer. It doesn't matter if explorer.exe is running - as long as it has been run once. Does anyone know what causes this behavior? Or how I can circumvent it neatly? Thanks!

    Read the article

  • Can't install Visual Studio 2010 SP1 from an .ISO file I downloaded. Error inside

    - by Sergio
    This is the error:

        [Window Title]
        C:\Users\Sergio\Desktop\Things\Setup.exe

        [Content]
        The version of this file is not compatible with the version of Windows you're running. Check your computer's system information to see whether you need an x86 (32-bit) or x64 (64-bit) version of the program, and then contact the software publisher.

        [OK]

    I'm running Windows 7 (64bit) Ultimate and have installed this service pack before (2 days ago) on another machine with similar specs and the same exact OS software. I've tried mounting the .ISO file to a virtual drive and installing from there, and I get that error. I've tried mounting the .ISO and copy-pasting the files to a local folder on my drive and then running the setup.exe application, and I get that error. I don't know how to proceed but can provide any additional information you require from me. What can I do to fix this?

    Edit: If I right-click Setup.exe and Run As Administrator, I get the following error:

        [Window Title]
        C:\Users\Sergio\Desktop\Things\Setup.exe

        [Content]
        Windows cannot find 'C:\Users\Sergio\Desktop\Things\Setup.exe'. Make sure you typed the name correctly, and then try again.

        [OK]

    I've already tried re-downloading the ISO from the site, but a quick check of the bytes of the file assures me that the ISO on my drive is 100% correctly downloaded. I get the same amount of bytes in size as the downloading ISO (as Opera reports).

    Read the article

  • get-eventlog issue

    - by Jim B
    I wanted to get a quick report of some log entries I saw on a server, so I ran:

        Get-Eventlog -logname system -newest 10 -computer fs1 | fl

    I got events back, however the descriptions were all wrong. Here's an example:

        Index              : 1260055
        EntryType          : Warning
        InstanceId         : 2186936367
        Message            : The description for Event ID '-2108030929' in Source 'W32Time' cannot be found. The local computer may not have the necessary registry information or message DLL files to display the message, or you may not have permission to access them. The following information is part of the event: 'time.windows.com,0x1'
        Category           : (0)
        CategoryNumber     : 0
        ReplacementStrings : {time.windows.com,0x1}
        Source             : W32Time
        TimeGenerated      : 1/25/2010 10:43:31 AM
        TimeWritten        : 1/25/2010 10:43:31 AM
        UserName           :

    Note that if I pull the EventID property it's correct (in this case 38). Is this a known issue or is something wrong? The messages resolve fine via Event Viewer locally and remotely.

    Here is the PowerShell version info:

        Name             : ConsoleHost
        Version          : 2.0
        InstanceId       : bc58fcf8-bba3-4ca8-8972-17dbd5d9ff08
        UI               : System.Management.Automation.Internal.Host.InternalHostUserInterface
        CurrentCulture   : en-US
        CurrentUICulture : en-US
        PrivateData      : Microsoft.PowerShell.ConsoleHost+ConsoleColorProxy
        IsRunspacePushed : False
        Runspace         : System.Management.Automation.Runspaces.LocalRunspace

    Here is the revised version info:

        Name                       Value
        ----                       -----
        CLRVersion                 2.0.50727.3603
        BuildVersion               6.0.6002.18111
        PSVersion                  2.0
        WSManStackVersion          2.0
        PSCompatibleVersions       {1.0, 2.0}
        SerializationVersion       1.1.0.1
        PSRemotingProtocolVersion  2.1
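    For reference, a hedged way to pull the raw EventID alongside the other fields, independent of the message-DLL lookup (standard Get-EventLog properties; the computer name is the one from the question):

        Get-EventLog -LogName System -Newest 10 -ComputerName fs1 |
            Select-Object Index, TimeGenerated, EntryType, Source, EventID, InstanceId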

    Read the article

  • How to register an agent with launchd

    - by Konrad Rudolph
    I’m unable to schedule a periodic launch with launchctl/launchd on OS X (Leopard). Basically, I’m unable to find a step-by-step list of instructions on the web and the intuitive approach doesn’t work.

    The sync.plist file:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>net.madrat.utils.sync</string>
            <key>Program</key>
            <string>rsync</string>
            <key>ProgramArguments</key>
            <array>
                <string>-ar</string>
                <string>/path/to/folder/</string>
                <string>/path/to/backup/</string>
            </array>
            <key>StartInterval</key>
            <integer>7200</integer>
        </dict>
        </plist>

    I’ve put this script inside the path ~/Library/LaunchAgents. Next, I’ve registered the script using

        launchctl load ~/Library/LaunchAgents/sync.plist

    Finally, to test that it works, I started the job:

        launchctl start net.madrat.utils.sync

    Nothing happened. Manually executing the rsync command in the terminal yields the expected result. I’m fairly sure that the job was registered correctly because if I try to start a non-existing job, I get an error message (which I didn’t get in the above command). What did I do wrong?
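    For reference, a hedged way to check whether launchd actually ran the job and what it exited with (standard launchctl and syslog checks; the label is the one from the plist). A common convention, shown here purely as an assumption, is to drop the Program key and make the first ProgramArguments element the absolute path to the executable:

        # Is the job loaded, and what was its last exit status?
        launchctl list | grep net.madrat.utils.sync
        # Look for launchd complaints about the job
        grep sync /var/log/system.log | tail

        <!-- Hypothetical ProgramArguments variant: absolute path as the first element -->
        <key>ProgramArguments</key>
        <array>
            <string>/usr/bin/rsync</string>
            <string>-ar</string>
            <string>/path/to/folder/</string>
            <string>/path/to/backup/</string>
        </array>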

    Read the article

  • red5 Install on Ubuntu 10.04? Problems with libslf4j

    - by mrgordon
    I've been trying for many days to get Red5 to install on Ubuntu 10.04. I finally managed to get red5.sh to stop hanging a few seconds in, but now I'm getting the following error:

        Setting default logging context: default
        Exception in thread "main" java.lang.reflect.InvocationTargetException
                at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
                at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
                at java.lang.reflect.Method.invoke(Method.java:597)
                at org.red5.server.Bootstrap.bootStrap(Bootstrap.java:135)
                at org.red5.server.Bootstrap.main(Bootstrap.java:50)
        Caused by: java.lang.NoSuchMethodError: org.slf4j.impl.StaticLoggerBinder.getContextSelector()Lch/qos/logback/classic/selector/ContextSelector;
                at org.red5.logging.Red5LoggerFactory.getLogger(Red5LoggerFactory.java:121)
                at org.red5.logging.Red5LoggerFactory.getLogger(Red5LoggerFactory.java:108)
                at org.red5.server.Launcher.launch(Launcher.java:51)
                ... 6 more

    I suspected that this had to do with slf4j not being installed or on my classpath. I installed logback and libslf4j-java from aptitude and I see related files in my red5 lib directories. For example:

        /usr/share/red5/lib/slf4j-api-1.6.1.jar
        /usr/share/red5/lib/log4j-over-slf4j-1.6.1.jar
        /usr/share/red5/lib/logback-classic-0.9.26.jar
        /usr/share/red5/lib/logback-core-0.9.26.jar
        /usr/share/red5/lib/jcl-over-slf4j-1.6.1.jar
        /usr/share/red5/lib/jul-to-slf4j-1.6.1.jar

    And I set my classpath to /usr/share/red5/lib/. Any ideas on where to proceed from here? There seem to be a lot of people having trouble getting 10.04 and red5 0.9 working together. I've tried red5-0.9.1.tar.gz and red5_0.9.0-RC1_all.deb. The libraries above should be all that are needed according to Red5's documentation, and I got the latest version of each.
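    For reference, a NoSuchMethodError on StaticLoggerBinder.getContextSelector usually points at a second, non-logback slf4j binding being picked up ahead of logback-classic. A hedged way to look for competing bindings on the directories Red5 might load from (the second path is an assumption about where Debian/Ubuntu packages put jars):

        # List every jar that carries an org.slf4j.impl.StaticLoggerBinder class
        for j in /usr/share/red5/lib/*.jar /usr/share/java/*.jar; do
            unzip -l "$j" 2>/dev/null | grep -q 'org/slf4j/impl/StaticLoggerBinder.class' && echo "$j"
        done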

    Read the article

  • Hudson Mercurial checkout throws exception on Debian

    - by Jack
    I'm trying to configure Hudson to check out my site's sources from Mercurial, but it throws an exception. The /var/lib/hudson/jobs/jobname directory does exist, and I can create a workspace directory in there (even after su hudson), but as soon as I run the Hudson job again this directory disappears and the job ends with the same error:

        java.io.IOException: Cannot run program "hg" (in directory "/var/lib/hudson/jobs/jobname/workspace"): java.io.IOException: error=2, No such file or directory
                at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
                at hudson.Proc$LocalProc.<init>(Proc.java:192)
                at hudson.Proc$LocalProc.<init>(Proc.java:164)
                at hudson.Launcher$LocalLauncher.launch(Launcher.java:639)
                at hudson.Launcher$ProcStarter.start(Launcher.java:274)
                at hudson.Launcher$ProcStarter.join(Launcher.java:281)
                at hudson.plugins.mercurial.MercurialSCM.joinWithPossibleTimeout(MercurialSCM.java:298)
                at hudson.plugins.mercurial.HgExe.popen(HgExe.java:191)
                at hudson.plugins.mercurial.HgExe.tip(HgExe.java:171)
                at hudson.plugins.mercurial.MercurialSCM.calcRevisionsFromBuild(MercurialSCM.java:254)
                at hudson.scm.SCM._calcRevisionsFromBuild(SCM.java:304)
                at hudson.model.AbstractProject.calcPollingBaseline(AbstractProject.java:1183)
                at hudson.model.AbstractProject.checkout(AbstractProject.java:1172)
                at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:499)
                at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:415)
                at hudson.model.Run.run(Run.java:1362)
                at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
                at hudson.model.ResourceController.execute(ResourceController.java:88)
                at hudson.model.Executor.run(Executor.java:145)
        Caused by: java.io.IOException: java.io.IOException: error=2, No such file or directory
                at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
                at java.lang.ProcessImpl.start(ProcessImpl.java:65)
                at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)

    Running on Debian 6.0.1. I wonder if anyone has run into this before, and hopefully solved it?
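    For reference: error=2 on Cannot run program "hg" is the JVM failing to find the hg executable itself, not the workspace directory. A hedged check of what the hudson account actually sees (standard shell commands; assumes the hudson account has a usable login shell):

        # Is hg installed and visible to the hudson user's PATH?
        su - hudson -c 'which hg; hg --version'
        # If hg lives somewhere non-standard (e.g. /usr/local/bin/hg), the full path can
        # typically be configured as the Mercurial executable in Hudson's global settings.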

    Read the article

  • Windows 2008 RemoteAPP client disconnects within a matter of minutes

    - by Jeroen Wilke
    I'm having an odd problem with Windows 2008 TS, and remote applications specifically. The situation is as follows:

    - TS idle timeout is disabled via GPO
    - TS terminates disconnected sessions after 1hr (via GPO)

    My users can log on to the Terminal server and get a full desktop, OR via rdp files that give access to a few remote applications. When a user connects to a full desktop, everything is fine and dandy: they will remain logged on indefinitely, and when they disconnect the session is terminated after an hour. However, when a user connects using a remote application link, the client seems to disconnect after only a few minutes of inactivity; when you click the window, the session reconnects.

    Event IDs on the TS server:

    - 4779: This event is generated when a user disconnects from an existing Terminal Services session, or when a user switches away from an existing desktop using Fast User Switching.
    - 4778: This event is generated when a user reconnects to an existing Terminal Services session, or when a user switches to an existing desktop using Fast User Switching.

    Users are connecting directly to 3389, not using a TS gateway at the moment. This behavior is consistent on the different clients that we have: full desktop is fine, RemoteApp constantly disconnects. The .rdp file used doesn't list any interesting parameters, aside from what application to launch and where to find it. Can someone explain to me how there can be a difference in behaviour between full desktop and RemoteApp, since essentially they use the exact same client?

    Regards
    Jeroen

    Read the article

  • RDS, RDWeb, and RemoteApp: How to use public certificate for launching apps on session host?

    - by Bret Fisher
    Question: How do I tell RDWeb to launch apps from remote.domain.com rather than host.internaldomain.local?

    Environment: Existing org with AD forest. New single Server 2012 running all Remote Desktop Services roles for session host. Used the new 2012 wizard to set up "QuickSessionCollection" with roles:

    - RD Session Host
    - RD Connection Broker
    - RD Gateway
    - RD Web Access
    - RD Licensing

    Everything works with the self-signed cert, but we want to prevent those. The users are potentially non-domain machines, so sticking a private root cert on their machines isn't an option. Every part of the solution needs to use a public cert. Added the public remote.domain.com cert to all roles using the Server Manager GUI:

    - RD Connection Broker - Enable Single Sign On
    - RD Connection Broker - Publishing
    - RD Web Access
    - RD Gateway

    So now everything works beautifully except the last step:

    - user logs into https://remote.domain.com
    - user clicks an app icon, which in the background downloads a .rdp file that is signed by remote.domain.com
    - the .rdp is set to use RD Gateway, which is remote.domain.com
    - the .rdp says the app is hosted on internal host.internaldomain.local, which doesn't match the RDP-tcp TLS cert of remote.domain.com, and pops a warning

    It's this last step that I'd like to fix. Is there a config option in PowerShell, WMI, or .config to tell RDWeb/RemoteApp to use remote.domain.com for all published apps so the TLS cert for RDP matches what the Session Host is using?

    NOTE: This question talks about this issue, and this answer mentions how you might fix it in 2008, but that GUI doesn't exist in 2012 for RemoteApp, and I can't find a PowerShell setting for it.

    NOTE: Here's a screenshot of the setting in 2008 R2 that I need to change. It tells RemoteApp what to use for the Session Host server name. How can I set that in 2012?

    Read the article

  • yum install php-devel amongst other commands returning problems

    - by user3791722
    I run yum install php-devel and it returns this. Typically I'd just run it with --skip-broken, but when I do, it still doesn't do the trick.

        Available: php-common-5.3.3-22.el6.x86_64 (rhel-x86_64-server-6)
            php-common(x86-64) = 5.3.3-22.el6
        Available: php-common-5.3.3-23.el6_4.x86_64 (rhel-x86_64-server-6)
            php-common(x86-64) = 5.3.3-23.el6_4
        Available: php-common-5.3.3-26.el6.x86_64 (rhel-x86_64-server-6)
            php-common(x86-64) = 5.3.3-26.el6
        Available: php54w-common-5.4.29-2.w6.x86_64 (webtatic)
            php-common(x86-64) = 5.4.29-2.w6
        Available: php54w-common-5.4.30-1.w6.x86_64 (webtatic)
            php-common(x86-64) = 5.4.30-1.w6
        Available: php55w-common-5.5.13-2.w6.x86_64 (webtatic)
            php-common(x86-64) = 5.5.13-2.w6
        Installing: php55w-common-5.5.14-1.w6.x86_64 (webtatic)
            php-common(x86-64) = 5.5.14-1.w6
        You could try using --skip-broken to work around the problem

    When run with --skip-broken it returns this at the end:

        Packages skipped because of dependency problems:
            autoconf-2.63-5.1.el6.noarch from rhel-x86_64-server-6
            automake-1.11.1-4.el6.noarch from rhel-x86_64-server-6
            pcre-devel-7.8-6.el6.x86_64 from rhel-x86_64-server-6
            php-5.3.3-27.el6_5.1.x86_64 from rhel-x86_64-server-6
            php-cli-5.3.3-27.el6_5.1.x86_64 from rhel-x86_64-server-6
            php-common-5.3.3-27.el6_5.1.x86_64 from rhel-x86_64-server-6
            php-mysql-5.3.3-27.el6_5.1.x86_64 from rhel-x86_64-server-6
            php-pdo-5.3.3-27.el6_5.1.x86_64 from rhel-x86_64-server-6
            php-soap-5.3.3-27.el6_5.1.x86_64 from rhel-x86_64-server-6
            php55w-cli-5.5.14-1.w6.x86_64 from webtatic
            php55w-common-5.5.14-1.w6.x86_64 from webtatic
            php55w-devel-5.5.14-1.w6.x86_64 from webtatic

    This problem has arisen with a few other similar commands when installing something related to PHP, except I've just done without them. I need to install this for something I'm trying to do. I do remember upgrading to PHP 5.4 and our entire infrastructure coming down due to it requiring PHP 5.3, so I downgraded as quickly as possible to get everything back running, and that may contribute to the issue. If you have any idea why this is happening and how I could get the package on the system while remaining on PHP 5.3, please let me know. Thanks.
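    For reference, a hedged way to see whether the conflict comes from the webtatic repository offering php55w packages, and to try resolving php-devel against the RHEL repo only (standard yum options; the repo id "webtatic" is taken from the output above):

        # See which repo each php-devel candidate comes from
        yum list php-devel --showduplicates
        # Try resolving the install with the webtatic repo temporarily out of play
        yum install php-devel --disablerepo=webtatic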

    Read the article

  • How to generate customized sudoers files in puppet depending on the environment they're deployed to?

    - by gozu
    The sysadmins are present in the sudoers files of all environments, but other sudoers are not. Different environments all have slightly different sudoers. Most of the time, 90% of users are the same, and 10% vary, so we cannot have only one sudoers file for everything.

    Right now, we are using puppet with 10 different files with names like sudoers.production1, sudoers.production2, sudoers.production3, sudoers.testing1, sudoers.staging1 and so forth. Puppet then picks the file to deploy based on the server's $domain (ex: dbserver.staging1.acme.com) or $hardwaremodel. It works fine but it's a nightmare to maintain so many files.

    I'd like to autogenerate sudoers files based on the server's domain and have only one big file with all the sudoers permissions for all users and all environments. Something that looks like:

        User_Alias ADMINS = abe, bob, carol, dave
        case $domain {
            "staging1.acme.com" {
                #add dev1,dev2,tester1,tester2 to sudoers file
            }
            "testing2.acme.com" {
                #add tester1, tester3, tester4 to sudoers file
            }

    What's the best way to go about this? Suggestions for alternatives are welcome. I'd appreciate any tips.

    Update 1: For security reasons, we'd rather not concatenate a bunch of files from a folder located on a puppet client in case someone puts a file in there (maliciously or not) and either breaks the combined file or inserts something in it. Most importantly, for usability, we'd like to keep the number of sudoers-related files (fragment or complete) on the puppet server to either 3 (prod/stage/test) or preferably 1 file. This file would (somehow) generate sudoers files on the puppet server and send one customized file to each puppet client. The purpose of this would be only searching for a username in a single file and removing it more quickly than doing it on 11 files. When adding a user to a bunch of environments, it won't be as quick, but only one file would need to be opened and looked at, greatly reducing the chances of an omission. Our sudo version is 1.6.9p8, so we can't use the /sudoers.d folder, only a sudoers file.
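    For reference, a hedged sketch of one way to express this with a single template instead of 10 static files: a case statement on $domain picks a per-environment user list, and an ERB template renders the combined sudoers. The module name, file paths, policy lines, and ERB variable style are illustrative assumptions (and vary slightly across Puppet versions), not the poster's actual setup:

        # modules/sudo/manifests/init.pp  (illustrative)
        class sudo {
          case $domain {
            'staging1.acme.com': { $extra_sudoers = ['dev1', 'dev2', 'tester1', 'tester2'] }
            'testing2.acme.com': { $extra_sudoers = ['tester1', 'tester3', 'tester4'] }
            default:             { $extra_sudoers = [] }
          }
          file { '/etc/sudoers':
            owner   => 'root',
            group   => 'root',
            mode    => '0440',
            content => template('sudo/sudoers.erb'),
          }
        }

        # modules/sudo/templates/sudoers.erb  (illustrative)
        User_Alias ADMINS = abe, bob, carol, dave
        ADMINS ALL=(ALL) ALL
        <% @extra_sudoers.each do |user| -%>
        <%= user %> ALL=(ALL) ALL
        <% end -%>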

    Read the article

  • VMWare Fusion: "No Permission to access this virtual machine"

    - by Craig Walker
    I had a VMware Fusion VM backed up on my home network file server (Ubuntu). I wanted to run it again, so I copied it back to my Macbook. When I tried to launch it in VMware, I got an error message:

        No permission to access this virtual machine.
        Configuration file: /Users/craig/WinXP Clean + Scanner.vmwarevm/WinXP Pro Test.vmx

    The permissions look fine to me:

    - The bundle directory is 777
    - The bundle files (including the listed .vmx) are all 666
    - User is craig (my current user); group is staff. I changed the group to wheel at the suggestion of this page, but that didn't help.
    - Finder shows read & write for craig, staff, and everyone on the bundle directory
    - The bundle dir is also not locked
    - Finder also shows rw and unlocked for the .vmx file
    - The parent directory is also rw & unlocked
    - Disk Utility permissions check doesn't show any problems with any of the associated files

    It sure looks like I should have wide open access to run this VM; why is Fusion complaining?
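    For reference, two hedged checks that go beyond plain POSIX mode bits (standard OS X commands; the bundle path is the one from the error message, and the lock-file check is an assumption about leftovers from a previous run):

        # Show ACLs and extended attributes that a quick Finder glance can miss
        ls -le@ "/Users/craig/WinXP Clean + Scanner.vmwarevm/"
        # Look for stale .lck lock files left inside the bundle
        ls -la "/Users/craig/WinXP Clean + Scanner.vmwarevm/" | grep -i lck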

    Read the article

  • Making the iPhone work with stripped down Windows XP

    - by Gabriel
    Hi, this is my first time posting here and I have a really specific question. I have an ASUS Eee 901 running Windows XP Home. I had everything working well, but then I decided to improve performance by moving Windows to the smaller but faster internal SSD. I used nLite to strip down Windows, following the instructions here: http://wiki.eeeuser.com/howto:nlitexp

    I now have a very lightweight installation of XP Home with SP3 and all the current updates. Almost everything is working really well. I have installed iTunes and I CAN sync with no problems. However, each time I plug in my iPhone 3GS (latest firmware), Windows tries and fails to install drivers. The Found New Hardware Wizard launches, but nothing I do will make it complete successfully, with the result that the iPhone does not show up in Windows as removable storage, or as a camera. When I launch the Camera and Scanner Wizard, it shows only my webcam, not the iPhone.

    I have verified that I have the following files in place:

        Windows\System32\ptpusb.dll (regsvr32 successful)
        Windows\System32\ptpusd.dll (entry point not found, can not be registered)
        Windows\System32\usbaaplrc.dll (entry point not found, can not be registered)
        Windows\System32\drivers\usbaapl.sys
        Windows\System32\drivers\usbscan.sys
        Windows\System32\drivers\usbstor.sys

    Does anyone know if some other file is required or if there's some other element preventing this from working?

    Edit (from posted answer): I did select Cameras & Camcorders, and my webcam is working fine for video & still capture.

    Read the article

  • Two hosting providers running simultaneously... possible / not possible? good practice / unnecessary?

    - by user29600
    For the sake of their reputation, I won't mention the names. But I'll just use:

    - Business I worked for previously - ABC Web Dev
    - Hosting company they used - XYZ Hosting

    I recently found out that XYZ Hosting had some sort of incident where they ended up losing a lot of their clients' data - including ABC Web Dev's. ABC Web Dev was able to recover some of their customers' websites, after pulling them from their local development computers and putting them up on another hosting provider. They ended up losing a lot of clients because of it and their reputation was ruined.

    I'm starting my own web dev company and I don't want to run into this same issue. I'm planning on using Rackspace but, although they are a great company, according to Wikipedia they have still had downtime in their past. I thought it might be a good idea to try to run two providers at once, to ensure that if anything happened to one, the websites would still be live because of the other. I know the websites would have to be pulling from one server at all times, but if there's a way to redirect requests to the second server if the first one is down, that would solve my issue. As a note, we will have a staging environment set up locally which will allow for quick recovery if a provider did have any issues, however I'd like to avoid any downtime at all if possible.

    So my questions are:

    - Has anyone tried running two providers simultaneously?
    - Would this be considered good practice or am I going too far?
    - Is there really any way to run two simultaneously where one server acts as a backup?

    Read the article

  • Virtualbox HTTP load testing, host CPU overload issues

    - by aschuler
    I'm doing HTTP load testing benchmarks (using Apache Benchmark and Siege) on a small Java EE 1.7.0 / Tomcat 7.0.26 application running on Debian Squeeze 6.0.4 x64, virtualized with VirtualBox 4.1.8. The computer host is Ubuntu 11.10 x64. I've modified these parameters in the Tomcat server.xml:

        <Connector port="8080" protocol="HTTP/1.1"
                   connectionTimeout="200000"
                   redirectPort="8443"
                   acceptCount="2000"
                   maxThreads="150"
                   minSpareThreads="50" />

    The application executed on the server takes around 300ms. The app runs well until a certain amount of concurrent connections, like these:

        ab -n 500 -c 150 http://xx.xx.xx.xx:8080/myapp/
        ab -n 1000 -c 50 http://xx.xx.xx.xx:8080/myapp/
        siege -b -c 100 -r 20 http://xx.xx.xx.xx:8080/myapp/

    A lot of socket connection timeouts happen and this completely overloads the host processor (but the CPU load inside the VM is normal). Doing an htop on the host, I can see that the VirtualBox process is running at over 300% CPU and never comes down, even after the load test is finished. (I've allocated 4 processors to the VM; if I allocate only one processor, CPU load stays under 100%.) Restarting Tomcat doesn't do anything; I'm forced to restart the whole VM.

    I've tried to launch those ab/siege commands locally on the VM and everything goes well. I first thought it was related to a Linux network limit as explained here: Running some benchmarks using ab, and tomcat starts to really slow down. So I've modified these TCP parameters:

        echo 15 > /proc/sys/net/ipv4/tcp_fin_timeout
        echo 30 > /proc/sys/net/ipv4/tcp_keepalive_intvl
        echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
        echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse

    It seems to be better, but it continues to overload the host CPU and produce socket connection timeouts at a certain amount of concurrent connections. I'm wondering if this is not related to how VirtualBox handles external concurrent connections.
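    For reference, a hedged equivalent of those echo commands expressed as persistent sysctl settings (same kernel keys as above; whether they are the right values for this workload is a separate question):

        # /etc/sysctl.conf fragment (applied with: sysctl -p)
        net.ipv4.tcp_fin_timeout = 15
        net.ipv4.tcp_keepalive_intvl = 30
        net.ipv4.tcp_tw_recycle = 1
        net.ipv4.tcp_tw_reuse = 1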

    Read the article

  • problem booting crusty old windows XP

    - by Carson Myers
    I have an Acer Aspire laptop running Windows XP Home. I believe I have some virus on it, I'm not sure - I mostly just run Linux in a VM on it so I wasn't too worried. I'm not sure if that virus caused this problem.

    The laptop wasn't recognizing my USB hard drive for some reason, so I decided to restart it. When it started up, it got past the memory test, past the boot screen (but it paused right here on a blank screen for a while), flashed the desktop once (like it does just before the login screen) and then crashed. I got a quick BSOD and then it restarted. Then it tried to boot again, etc. etc. - infinite loop of failure.

    Well, before trying safe mode, I disabled automatic restart on system crash so I could read the blue screen. There wasn't anything important on it, it said:

        *** STOP: 0x00000000 (0xC0000000 0x,.... )
        beginning physical memory dump
        physical memory dump complete

    That's not verbatim (obviously) but it didn't help me. So I booted in safe mode, and it stopped on the driver gagp30kx.sys and then restarted (and infinite loop of failure again). I burned a recovery CD and tried that. It loaded, and I went into repair mode. I ran chkdsk and then disabled the AGP driver. Same thing on booting in safe mode, except it stopped at mup.sys instead. I enabled the AGP driver again, and ran chkdsk again from the CD. It said it found problems but didn't say it fixed them. So I ran it a second time, and it said "performing additional checking or recovery" lots of times (I can't tell how many, they went above the screen top).

    I tried booting again and no luck. Every time I run chkdsk after trying to boot again, it says it found and fixed more errors. I think it might be whatever driver is after the AGP driver, but I don't know what it is or how to find out. Can anyone help me fix this?

    Read the article

  • How powerful of a PC do you need to edit HD videos?

    - by Xeoncross
    I have a Core2Quad Q8200 (2.3GHz) with 4GB of RAM, a 512MB PCIe video card, and a SATA-2 HD. Yet it still isn't fast enough to edit 720i/p video in Sony Vegas or Adobe Premiere/After Effects. My RAM usage never peaks over 1.6GB, but my CPU cores hit 95% quickly! Right now the preview panes in all these programs lag too badly to actually work on the videos. I get to see 1-3 frames every second or two!

    So how fast do I have to go? At what point will my CPU be fast enough to actually edit these videos? I have to assume that regular people and their regular sub-$2k computers can actually work with this footage. Another way to answer this is: how fast is the PC you used to edit videos?

    Update: It's worth noting that now that I have Adobe Premiere/After Effects CS4, I am more interested in getting that working than my older Vegas 6. If you didn't have to re-run RAM preview every single time you made one change, it would be my answer. But since I like to test many filters and effects before choosing one, I have to re-render a 1-second section of footage over and over, and it drives me nuts waiting. Perhaps a motherboard with dual Xeon chips or something would be able to handle this. It would probably cost as much as a dual-crossfire setup and would also speed up other applications.

    Read the article

  • switchless Infiniband between two servers on RHEL 6.3

    - by exfizik
    I have 2 servers running RHEL 6.3 which have 2-port InfiniBand cards:

        >lspci | grep -i infini
        07:00.0 InfiniBand: QLogic Corp. IBA7322 QDR InfiniBand HCA (rev 02)

    I'm interested in connecting them directly to each other, bypassing an InfiniBand switch (which I don't have). Quick googling showed that at least in some configurations it's possible. I installed all the RedHat InfiniBand packages with yum groupinstall "Infiniband Support". However, ibv_devinfo shows that both ports in each card are down, which indicates that cables are not connected. But the cable is connected, although the LEDs are off on the cards (not a good sign). Another source of confusion for me is that according to this, RedHat doesn't come with OFED packages and I'm slightly hesitant to install them from source due to the lack of RedHat support for them...

    So where am I going with this? The questions I have are:

    - Is it possible to have a switchless/direct InfiniBand connection between two servers the way I described above?
    - If it's possible, do I have to use the OFED packages or can I configure everything with just the packages coming with RHEL?
    - Why are the LEDs off on my servers even though the cable is connected?

    Any additional input/advice/pointers would be appreciated.

    P.S. I followed this guide for installation instructions. The InfiniBand cards are clearly recognized by my OS and the rdma service is running.

    Update: I have opensm installed. When I run it, it says:

        OpenSM 3.3.13
        Command Line Arguments:
        Log File: /var/log/opensm.log
        -------------------------------------------------
        OpenSM 3.3.13
        Entering DISCOVERING state
        Using default GUID 0x1175000076e4c8
        SM port is down

    and stays at that point.
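    For reference, a hedged way to distinguish "no subnet manager yet" from "no physical link at all" (standard infiniband-diags/libibverbs commands; the interpretation of Down/Polling as an electrical or cabling problem is a general assumption, not specific to these cards):

        # Port state vs. physical state: Down / Polling usually means no electrical link,
        # i.e. a cable, transceiver, or port problem rather than a missing subnet manager
        ibstat
        # Once a link is physically up, opensm on one side should move ports to ACTIVE;
        # re-check with:
        ibv_devinfo | grep -E 'state|phys_state'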

    Read the article

  • Hosting a javascript api file for third party sites the way sharethis, uservoice, analytics do it.

    - by Dayson
    I'm preparing to launch a service soon which will provide third-party websites a widget. The widget requires my javascript file in the website's code - exactly the same way services like analytics, uservoice, sharethis, getclicky, etc. provide you with a javascript snippet to add to your page. Therefore, my javascript file is going to be hotlinked by tons of websites which possibly receive a lot of requests too. I need advice/opinions on the following aspects:

    1. What's the right location for hosting this file? Should I use a sub-domain for it? I was thinking of something like http://api.myservice.com/js/foo.js . Remember, once websites start embedding this file, its location CANNOT change under any circumstances.
    2. Right now we can afford just one dedicated server. So I have minified my file, enabled gzip and plan to use some good cache control headers through Apache. Also, in the near future when the requests pick up, I will use an HTTP proxy like Varnish. Is this a good plan for the near future?
    3. Should I be considering a CDN in the future (since we can't afford it now)? If so, how do I make sure we're prepared to migrate to it without breaking services? Pros/cons of moving just this file to a CDN? Also, since it's just one javascript file (50kb), is there any affordable CDN we could consider in the beginning itself?
    4. Any other word of advice I could use? Anything I shouldn't overlook at this stage which I would regret later? (both in terms of server + javascript ajax limitations)

    Thanks in advance.
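    For reference, a hedged Apache sketch of the caching and compression mentioned in point 2 (mod_expires, mod_headers and mod_deflate directives; the path, hostname and max-age value are illustrative assumptions):

        # Hypothetical vhost fragment for api.myservice.com
        <Location "/js/foo.js">
            ExpiresActive On
            ExpiresDefault "access plus 1 week"
            Header append Cache-Control "public"
            AddOutputFilterByType DEFLATE application/javascript text/javascript
        </Location>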

    Read the article

  • How to make DD-WRT router's (configured like a repeater) devices be accessible on LAN? (i.e. integrate DHCP for both routers)

    - by Annonomus Penguin
    I have a D-Link DIR-600-A1 router running DD-WRT (using the 601's firmware: except for the model number, they are near identical). It has an Atheros chip, so there is no "repeater" option. You can bypass this by setting the main radio as a client to the main router, and adding a virtual radio configured as an AP. You can then set up the credentials for connecting to the main router and allowing devices to connect to the repeater/router.

    I have a few devices on my network:

    - Ethernet computers
    - Server with Samba running
    - WiFi devices connected to the main router

    I then wanted to add a repeater. I have a couple of other things on the repeater:

    - WiFi computer
    - Other WiFi devices

    Anyway, I wanted to connect my WiFi computer to the share on my server via Samba. However, for some reason, my router treats the main router as WAN, not another device. I've tried disabling the SPI firewall; however, that doesn't work. I've tried pinging my WiFi computer from my server, without luck; however, I can ping my server from my WiFi computer. AFAIK, they are on the same subnet, just using different IPs: the main one uses 192.168.0.x and the repeater uses 192.168.1.x (starting at 100 for some reason). It seems I need to configure my router(s) to work together for DHCP. I noticed there was a "DHCP forwarder" option, but I have no idea what that would do.

    A quick note: for some reason (that's beyond me) my ISP disabled the capability to bridge a WiFi to ethernet connection with the router they provide (something about PPPoE or similar...). The service rep I talked to when I was having issues after I changed ISPs said that, but they couldn't explain exactly what they were "blocking."

    How can I get DD-WRT to not treat the client connection as WAN, and get the router to recognize the devices connected to the repeater?

    Read the article

  • Cannot start listening on a certain TCP port, but there's nothing currently listening on it

    - by John Rasch
    I have a Windows Service that uses a WCF service host to listen for connections on TCP port 61000. When I try to start the service, I get the error:

        Service cannot be started. System.ServiceModel.AddressAlreadyInUseException: HTTP could not register URL http://+:61000/ because TCP port 61000 is being used by another application. ---> System.Net.HttpListenerException: The process cannot access the file because it is being used by another process
           at System.Net.HttpListener.AddAll()
           at System.Net.HttpListener.Start()
           at System.ServiceModel.Channels.SharedHttpTransportManager.OnOpen()
           --- End of inner exception stack trace ---
           at System.ServiceModel.Channels.SharedHttpTransportManager.OnOpen()
           at System.ServiceModel.Channels.TransportManager.Open(TransportChannelListener channelListener)
           at System.ServiceModel.Channels.TransportManagerContainer.Open(SelectTransportManagersCallback selectTransportManagerCallback)
           at System.ServiceModel.Channels.HttpChannelListener.OnOpen(TimeSpan timeout)
           at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
           at System.ServiceModel.Dispatcher.ChannelDispatcher.OnOpen(TimeSpan timeout)
           at...

    A quick netstat -a shows there is nothing listening on port 61000. I've also found several posts online that mention reserving namespaces using netsh, but the account that the service runs under has administrator privileges so that shouldn't be necessary. Any other ideas as to why I'm getting this message? This service is running on 64-bit Windows Server 2008 R2 Standard.
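    For reference, a hedged way to see what HTTP.SYS knows about the port beyond what netstat reports (standard netsh http subcommands; the account name is a placeholder):

        rem Show every URL currently registered with HTTP.SYS request queues
        netsh http show servicestate view=requestq
        rem Show explicit URL reservations (ACLs); one could be added for the service account if needed
        netsh http show urlacl
        netsh http add urlacl url=http://+:61000/ user=DOMAIN\ServiceAccount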

    Read the article
