Search Results

Search found 14142 results on 566 pages for 'missing symbols'.

  • How to execute msdb.dbo.sp_start_job from a stored procedure in a user database in SQL Server 2005

    - by Ram
    Hi everyone, I am trying to execute msdb.dbo.sp_start_job from MyDB.dbo.MyStoredProc in order to run MyJob.

    1) I know that if I give the user the SQLAgentUserRole he will be able to run the jobs that he owns. But what I observed is that the user was then also able to start, stop and restart the SQL Agent itself, so I do not want to go this route. Let me know if I am wrong, but I do not understand why such an under-privileged user would be able to start/stop the agent.

    2) I know it would work if I granted the executing user EXECUTE permission on msdb.dbo.sp_start_job and enabled ownership chaining, or enabled TRUSTWORTHY on the user database. But I do not want to enable ownership chaining or TRUSTWORTHY on the user database.

    3) I think this can be done by code signing. In the user database: i) create a stored proc MyDB.dbo.MyStoredProc; ii) create a certificate job_exec; iii) sign MyDB.dbo.MyStoredProc with the certificate job_exec; iv) export the certificate. Then in msdb: i) import the certificate; ii) create a user derived from this certificate; iii) grant AUTHENTICATE to this derived user; iv) grant EXECUTE on msdb.dbo.sp_start_job to the derived user; v) grant EXECUTE on msdb.dbo.sp_start_job to the user executing MyDB.dbo.MyStoredProc. I tried this and it did not work for me, and I don't know which piece I am missing or doing wrong, so please provide me with a simple example (with scripts) for executing msdb.dbo.sp_start_job from a user stored proc MyDB.dbo.MyStoredProc using code signing. Many thanks in advance, Ram
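
    The steps in 3) match the usual code-signing recipe, so for reference here is a minimal, hedged sketch of the whole sequence in T-SQL (the password, file path and user name are placeholders, not from the post). One caveat worth knowing: the signature is dropped whenever the procedure is ALTERed and must be re-applied.

        USE MyDB;
        GO
        -- Certificate used only for signing; password and backup path are illustrative.
        CREATE CERTIFICATE job_exec
            ENCRYPTION BY PASSWORD = '<StrongPassword1>'
            WITH SUBJECT = 'Sign procs that start Agent jobs';
        GO
        ADD SIGNATURE TO dbo.MyStoredProc
            BY CERTIFICATE job_exec WITH PASSWORD = '<StrongPassword1>';
        GO
        -- Copy only the public part of the certificate over to msdb.
        BACKUP CERTIFICATE job_exec TO FILE = 'C:\temp\job_exec.cer';
        GO
        USE msdb;
        GO
        CREATE CERTIFICATE job_exec FROM FILE = 'C:\temp\job_exec.cer';
        GO
        -- A user mapped to the certificate; nobody ever logs in as it.
        CREATE USER job_exec_user FROM CERTIFICATE job_exec;
        GO
        GRANT AUTHENTICATE TO job_exec_user;
        GRANT EXECUTE ON dbo.sp_start_job TO job_exec_user;
        GO

    With this in place the calling user should only need EXECUTE on MyDB.dbo.MyStoredProc; the signature carries the msdb permissions across the database boundary without ownership chaining or TRUSTWORTHY.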

  • Restarting Haproxy Gracefully

    - by Anand Gupta
    As per various blogs, HAProxy can be gracefully restarted using the following command:

        sudo haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)

    To verify this, I set up an Apache Bench script which continuously sent requests to HAProxy. Ideally, restarting the server should not have affected the Apache Bench run. But it seems that whenever HAProxy is restarted the Apache Bench script terminates and the connection to the load balancer is lost. Here are the details of my HAProxy configuration file:

        global
            nbproc 4
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            #log loghost local0 info
            maxconn 4096
            #chroot /usr/share/haproxy
            user haproxy
            group haproxy
            daemon
            pidfile /var/run/haproxy.pid
            stats socket /home/ubuntu/haproxy.sock
            #debug
            #quiet

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen webstats
            bind 0.0.0.0:1000
            stats enable
            mode http
            stats uri /lb?stats
            stats auth anand:aaaaaaaa
            #stats refresh

        listen web-farm 0.0.0.0:80
            mode http
            balance roundrobin
            option httpchk HEAD /index.php HTTP/1.0
            server server2.com 1.1.1.1:80
            server serve1.com 1.1.1.2:80

    Please let me know what I am missing here.
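
    For what it is worth, the -sf reload does have a tiny window where new connection attempts can be lost while the old process releases the port, which would explain interrupted benchmark runs. A widely circulated mitigation (a hedged sketch, not from the post) briefly drops new SYNs so clients retransmit instead of seeing a reset:

        # Hold new connection attempts for the duration of the reload; clients retry the SYN.
        iptables -I INPUT -p tcp --dport 80 --syn -j DROP
        sleep 1
        haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
        iptables -D INPUT -p tcp --dport 80 --syn -j DROP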

  • Unable to install PHP-FPM on Apache (Failed to connect to FastCGI server)

    - by Nyxynyx
    I have been having problems installing PHP-FPM for use with apache2-mpm-worker. This is the guide that I am following. According to the guide's Step 5:

        Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
        FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -host 127.0.0.1:9000 -pass-header Authorization

    However I cannot find php5-fcgi at /usr/lib, only /usr/bin/php5-cgi and /usr/bin/php-cgi, which I am not sure are the same thing. So I changed the lines in Step 5 to:

        Alias /php5-fcgi /usr/bin/php5-fcgi
        FastCgiExternalServer /usr/bin/php5-fcgi -host 127.0.0.1:9000 -pass-header

    On restarting Apache, its logs gave the errors:

        [notice] caught SIGTERM, shutting down
        [alert] (4)Interrupted system call: FastCGI: read() from pipe failed (0)
        [alert] (4)Interrupted system call: FastCGI: the PM is shutting down, Apache seems to have disappeared - bye
        [notice] Apache/2.2.22 (Ubuntu) mod_fastcgi/mod_fastcgi-SNAP-0910052141 configured -- resuming normal operations
        [notice] FastCGI: process manager initialized (pid 16348)

    And on loading the index page:

        [error] [client 10.0.2.2] (111)Connection refused: FastCGI: failed to connect to server "/usr/bin/php5-cgi": connect() failed
        [error] [client 10.0.2.2] FastCGI: incomplete headers (0 bytes) received from server "/usr/bin/php5-cgi"
        [error] [client 10.0.2.2] File does not exist: /var/www/mydomain/public/favicon.ico

    Question: Any idea why php5-fcgi is missing, and how should this problem be fixed? Thank you!! :)
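
    Two things worth checking, offered as hedged guesses rather than a confirmed fix: with mod_fastcgi, the path given to Alias and FastCgiExternalServer is a virtual identifier that is not supposed to exist on disk, so not finding php5-fcgi under /usr/lib is normal and the guide's original lines can stay as they were; and the "(111)Connection refused ... connect() failed" error suggests nothing is listening on 127.0.0.1:9000, i.e. the PHP-FPM daemon itself (package php5-fpm, distinct from the plain CGI binary /usr/bin/php5-cgi) is missing or stopped. A sketch:

        # The FPM daemon is a separate package from php5-cgi
        sudo apt-get install php5-fpm
        sudo service php5-fpm start
        # Confirm what it listens on; some builds default to a Unix socket instead of 127.0.0.1:9000
        grep '^listen' /etc/php5/fpm/pool.d/www.conf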

  • Avoiding DNS timeouts when a dns server fails

    - by user65124
    Hi there. We have a small datacenter with about a hundred hosts pointing to 3 internal DNS servers (BIND 9). Our problem comes when one of the internal DNS servers becomes unavailable: at that point all the clients that point to that server start performing very slowly. The problem seems to be that the stock Linux resolver doesn't really have the concept of "failing over" to a different DNS server. You can adjust the timeout and number of retries it uses (and set rotate so it will work through the list), but no matter what settings one uses our services perform much more slowly if a primary DNS server becomes unavailable. At the moment this is one of the largest sources of service disruptions for us. My ideal answer would be something like "RTFM: tweak /etc/resolv.conf like this...", but if that's an option I haven't seen it. I was wondering how other folks handled this issue? I can see 3 possible types of solutions:

    1. Use linux-ha/Pacemaker and failover IPs (so the DNS VIPs are "always" available). Alas, we don't have a good fencing infrastructure, and without fencing Pacemaker doesn't work very well (in my experience Pacemaker lowers availability without fencing).
    2. Run a local DNS server on each node, and have resolv.conf point to localhost. This would work, but it would give us a lot more services to monitor and manage.
    3. Run a local cache on each node. Folks seem to consider nscd "broken", but dnrd seems to have the right feature set: it marks DNS servers as up or down, and won't use 'down' servers.

    Anycast seems to work only at the IP routing level, and depends on route updates for server failure. Multicast seemed like it would be a perfect answer, but BIND does not support broadcasting or multicasting, and the docs I could find seem to suggest that multicast DNS is aimed more at service discovery and auto-configuration than at regular DNS resolving. Am I missing an obvious solution?
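
    On the "tweak /etc/resolv.conf like this" front, the resolver knobs mentioned above look like this (addresses are placeholders); in my understanding this shortens the stall on a dead first server to roughly timeout × attempts per lookup, but cannot eliminate it:

        nameserver 10.0.0.11
        nameserver 10.0.0.12
        nameserver 10.0.0.13
        # fail over after 1s instead of the 5s default, try the list twice, rotate the starting server
        options timeout:1 attempts:2 rotate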

  • Installing SQL Server 2008 on Windows 7 64-bit

    - by harriyott
    I'm having a shocking time trying to install SQL Server 2008 on 64-bit Windows 7. When I run setup.exe, I get the following error message:

        Microsoft .NET Framework 3.5 installation has failed.
        SQL Server 2008 Setup requires .NET Framework 3.5 to be installed

    Things I've tried:

    - I've checked and double-checked: I do have .NET Framework 3.5 installed, with SP1.
    - I've read about a missing Windows Installer 4.5 installation producing the same error. Windows 7 comes with Windows Installer 5, which hopefully satisfies this requirement, as I've tried to install 4.5 and it won't let me.
    - Burning the ISO to DVD and installing from there.
    - Installing on an XP machine using the same ISO. This works, so the ISO must be fine.
    - Considering SQL Server 2005, but it really needs to be 2008 for the project.

    Update: Creating a slipstream version gives the same error.

    Update: I could install SQL Server Express, and then SP1, but couldn't upgrade to Enterprise. If you've come across this issue, or know how to fix it, I'd love to know.

  • IIS6 Wildcard Mapping to ASP.NET - no file extension results in IIS 404

    - by Ian Robinson
    I'm trying to perform what I understand to be a relatively simple task. I'd like to remove the extensions from the URLs on my website. I have the proper setup in my application to handle and rewrite the URLs - the trouble is I can't get past IIS to actually reach my application without the extensions. The details: I'm running IIS6 on Windows Server 2003. I've gone into the web site for my application, gone to the Home Directory tab, clicked "Configuration" and added a wildcard map to the following file:

        c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll

    which I verified is the same as what is used above in the application extensions portion by .ascx, etc. If I navigate to http://mywebsite.com/Blogs the result is as follows:

        HTTP/1.1 404 Not Found
        Content-Length: 1635
        Content-Type: text/html
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        Date: Thu, 14 Jan 2010 15:04:49 GMT

    which seems to be a standard IIS 404 message. If I navigate to http://mywebsite.com/Blogs.aspx I get my ASP.NET app. How can I troubleshoot this? I feel like I've double-checked everything a dozen times but to no avail. I must be missing something obvious.

    Update: Here are the exact instructions given by the ASP.NET URL rewriter that I'm using:

        IIS 6.0 - Windows 2003 Server
        1. Open the property page for the website / virtual directory.
        2. Click the 'Home Directory' tab.
        3. Click the 'Configuration' button, select the 'Mappings' tab.
        4. Click 'Insert' next to the 'Wildcard application maps' section.
        5. Browse to aspnet_isapi.dll (normally at c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll).
        6. Ensure that 'check that file exists' is unchecked.
        7. Click OK, OK, OK to close and apply changes.

    Update 2: I have yet to find a resolution for this. The application does not seem to be receiving the request from IIS. Any further ideas?

  • Cloudmin KVM DNS hostnames not working

    - by dannymcc
    I have got a new server which has Cloudmin installed. It's working well and I can create and manage VMs as expected. The server came with a /29 subnet and I requested an additional /29 subnet to allow for more virtual machines. I didn't want to replace the existing /29 subnet with a /28 because that would have caused disruption to my existing VMs. To make life easier I decided to configure a domain name for the Cloudmin host server to allow for automatic hostname setup whenever I create a new virtual machine. I have a domain name (example.com) and I have created records as follows:

        NS  kvm.example.com  123.123.123.123
        A   kvm.example.com  123.123.123.123

    In the above example the IP address is that of the host server; I also have two /29 subnets routed to the server. Now, I've added the two subnets to the Cloudmin administration panel. (I've tried to hide as little information as possible without giving all of the server details away!) If I ping kvm.example.com I get a response from 123.123.123.123; if I ping the newly created virtual machine (example.kvm.example.com) it fails; and if I ping the IP address that's been assigned to the new virtual machine (from the second subnet) it fails. Am I missing anything vital? Does this look (from what little information I can show) like it's set up correctly? Any help/pointers would be appreciated. For reference the Cloudmin documentation I am using as a guide is http://www.virtualmin.com/documentation/cloudmin/gettingstarted
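
    Two hedged pointers, since the post doesn't show the zone itself: the data of an NS record must be a hostname rather than an IP address, so a delegation of kvm.example.com normally consists of an NS record pointing at a nameserver name plus a glue A record for that name; and dig (from the dnsutils/bind-utils package) can show where resolution breaks down:

        # Follow the delegation chain from the root; shows whether kvm.example.com is delegated at all
        dig +trace example.kvm.example.com

        # Ask the Cloudmin host directly, bypassing delegation; shows whether the record was created
        dig example.kvm.example.com @kvm.example.com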

  • psexec: "Access is Denied"?

    - by Electrons_Ahoy
    Inspired by my previous question here, I've been experimenting with PsExec. The goal is to kick off some fairly simple scripts / programs on one Windows XP machine from another, and as PowerShell 2 doesn't yet do remoting on XP, PsExec seems like it'll solve my problems nicely. However, I can't get anything but the "Access is denied" error. Here's what I've tried so far. I've got a pair of Windows XP MCE machines, networked together in a workgroup without a server or domain controller. I've turned off "simple file sharing" on both machines. Under the security policy, "Network access: Sharing and security model for local accounts" is set to Classic, not Guest, on both machines. There is an administrative user for each computer that I know the passwords to. :) With all that, a command like

        psexec \\otherComputer -u adminUser cmd

    prompts for the password (like it should) and then exits with:

        Couldn't access otherComputer:
        Access is denied.

    So, at this point I turn to the community. What step am I missing here?

  • Active RDP session over VPN getting disconnected

    - by Wandering Penguin
    I am having seemingly random disconnects of active RDP sessions (while I am actively typing or otherwise interacting with the desktop) when connected over the VPN. The "attempting to reconnect (1/20)" dialog pops up, counts all the way to 20, then drops the session. Once the session drops I can open a new session and connect again. This started happening about a week ago. The VPN connection is an IPsec VPN from a SonicWall NSA 2400. The NIC drivers are up to date. The VPN client is up to date. The firmware on the SonicWall is up to date (both the regular and the early-release versions behave the same). I have attempted to connect over three ISPs, all with the same behavior. Two different workstations were used to test the VPN connection. The same behavior occurs when connecting to a domain workstation or server. If I am within the firewall I can connect to the same workstations and servers without the disconnect. The VPN connection has "enable fragmented packet handling" and "ignore DF (don't fragment) bit" set. Is there something I am missing in where I am looking for the problem?
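
    Given the fragmentation settings called out above, one standard diagnostic (a sketch, not from the post) is to probe the usable MTU across the tunnel from a Windows client; if the largest working payload is much smaller than expected, MTU/fragmentation problems become a likely culprit for mid-session drops:

        rem -f sets the Don't Fragment bit, -l sets the ICMP payload size.
        rem Lower -l until replies succeed; payload + 28 bytes of headers = the path MTU.
        ping -f -l 1400 server.on.the.far.side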

  • directory with 980MB meta data, millions of files, how to delete it? (ext3)

    - by Alexandre
    Hello, so I'm stuck with this directory:

        drwxrwxrwx 2 dan users 980M 2010-12-22 18:38 sessions2

    The directory's contents are small - just millions of tiny little files. I want to wipe it from the filesystem but have been unable to. My first tries were:

        find sessions2 -type f -delete

    and

        find sessions2 -type f -print0 | xargs -0 rm -f

    but I had to stop because both caused escalating memory usage. At one point it was using 65% of the system's memory. So I thought (no doubt incorrectly) that it had to do with the fact that dir_index was enabled on the filesystem. Perhaps find was trying to read the entire index into memory? So I did this (foolishly):

        tune2fs -O^dir_index /dev/xxx

    Alright, so that should do it. Ran the find command above again and... same thing. Crazy memory usage. I hurriedly ran tune2fs -Odir_index /dev/xxx to re-enable dir_index, and ran to Server Fault! Two questions:

    1) How do I get rid of this directory on my live system? I don't care how long it takes, as long as it uses little memory and little CPU. By the way, using nice find ... I was able to reduce CPU usage, so my problem right now is only memory usage.

    2) I disabled dir_index for about 20 minutes. No doubt new files were written to the filesystem in the meanwhile. I re-enabled dir_index. Does that mean the system will not find the files that were written before dir_index was re-enabled, since their filenames will be missing from the old indexes? If so, and I know these new files aren't important, can I keep the old indexes? If not, how do I rebuild the indexes? Can it be done on a live system? Thanks!
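
    On question 1, a low-memory approach sometimes suggested for this situation (a hedged sketch, not from the post) is to stream directory entries and unlink them one at a time, never building a list; combining it with an idle I/O priority keeps the impact on a live system down:

        cd sessions2
        # readdir() streams one entry at a time, so memory stays flat regardless of file count
        nice ionice -c3 perl -e 'opendir(my $d, ".") or die $!;
            while (my $f = readdir($d)) {
                next if $f eq "." or $f eq "..";
                unlink($f);
            }'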

  • certutil -ping fails with 30 seconds timeout - what to do?

    - by mark
    The certificate store on my Win7 box is constantly hanging. Observe:

        C:\>1.cmd
        C:\>certutil -? | findstr /i ping
          -ping             -- Ping Active Directory Certificate Services Request interface
          -pingadmin        -- Ping Active Directory Certificate Services Admin interface
        C:\>set PROMPT=$P($t)$G
        C:\(13:04:28.57)>certutil -ping
        CertUtil: -ping command FAILED: 0x80070002 (WIN32: 2)
        CertUtil: The system cannot find the file specified.
        C:\(13:04:58.68)>certutil -pingadmin
        CertUtil: -pingadmin command FAILED: 0x80070002 (WIN32: 2)
        CertUtil: The system cannot find the file specified.
        C:\(13:05:28.79)>set PROMPT=$P$G
        C:\>

    Explanations: the first command shows that certutil has -ping and -pingadmin parameters; trying either fails with a 30-second timeout (the current time is shown in the prompt). This is a serious problem - it breaks all the secure communication in my app. If anyone knows how this can be fixed, please share. Thanks. P.S. 1.cmd is simply a batch of these commands:

        certutil -? | findstr /i ping
        set PROMPT=$P($t)$G
        certutil -ping
        certutil -pingadmin
        set PROMPT=$P$G

    EDIT1: I have succeeded in pinning down the single Windows API that causes the problem - DsGetDcName. According to WinDbg, certutil -ping invokes it like so:

        PDOMAIN_CONTROLLER_INFO pdci;
        DWORD ret = ::DsGetDcName(NULL, NULL, NULL, NULL, DS_DIRECTORY_SERVICE_PREFERRED, &pdci);

    On my workstation it times out after 30 seconds and then returns error code 1355, which is ERROR_NO_SUCH_DOMAIN ("No domain controller is available for the specified domain or the domain does not exist"). On another machine, which happens to be a Windows Server 2003, it returns almost immediately with the correct domain controller name inside the returned DOMAIN_CONTROLLER_INFO structure. Now the question is: what is missing on my workstation for that API to find the correct domain controller?

  • iSCSI errors continue after removing inaccessible target portal

    - by Ansgar Wiechers
    By mistake I entered an iSCSI target portal address in the iSCSI Initiator on one of our virtual servers that does not have an address in the network range used for iSCSI. This caused the following errors/warnings to appear in the event log:

        Log Name: System
        Source:   MSiSCSI
        Event ID: 113
        Level:    Warning
        Description: iSCSI discovery via SendTargets failed with error code 0xefff0003 to target portal *192.168.23.42 0003260 Root\ISCSIPRT\0000_0 .

        Log Name: System
        Source:   iScsiPrt
        Event ID: 1
        Level:    Error
        Description: Initiator failed to connect to the target. Target IP address and TCP Port number are given in dump data.

        Log Name: System
        Source:   iScsiPrt
        Event ID: 70
        Level:    Error
        Description: Error occurred when processing iSCSI logon request. The request was not retried. Error status is given in the dump data.

    So far that's expected behavior, so I removed the portal from the iSCSI Initiator as described in MSKB 976072. However, the errors/warnings keep appearing every hour, even though neither the iSCSI Initiator GUI nor iscsicli show any portals:

        C:\>iscsicli ListTargetPortals
        Microsoft iSCSI Initiator Version 6.1 Build 7601
        The operation completed successfully.

    The problem persists after rebooting the server. Uninstalling the Microsoft iSCSI Initiator device via devmgmt.msc as well as changing the Initiator parameters like this:

        [HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}]
        "MaxPendingRequests"=dword:00000001
        "MaxConnectionRetries"=dword:00000001
        "MaxRequestHoldTime"=dword:00000005

    didn't help either. Each change was followed by a reboot. Disabling the device does prevent the errors/warnings from re-appearing, of course, but I'd rather not resort to that. How can I prevent those errors and warnings from appearing (short of disabling the initiator device or re-installing the server)? What am I missing? Environment: the virtual machine runs on a Hyper-V cluster managed by SCVMM 2012. Hosts and guests run Windows Server 2008 R2 SP1. The physical machines are Dell PowerEdge M710HD blades.

  • Accessing network shares on Windows7 via SonicWall VPN client

    - by Jack Lloyd
    I'm running Windows 7 x64 (fully patched) and the SonicWall 4.2.6.0305 client (64-bit, claims to support Windows 7). I can log in to the VPN and access network resources (e.g. SSH to a machine that lives behind the VPN). However I cannot access shared filesystems; Windows is refusing to do discovery on the VPN network. I suspect part of the problem is that Windows persistently considers the VPN connection to be a 'public network'. Normally you can open the Network and Sharing Center and modify this setting, but it does not give me a choice for the VPN. So I did the expedient thing and turned on file sharing for public networks. I also disabled the Windows firewall for good measure. Still no luck. I can access the server directly by putting \\192.168.1.240 in the taskbar, which brings up the list of shares on the server. However, trying to open any of the shares simply tells me "Windows cannot access \\192.168.1.240\share You do not have permission to access ..."; it never asks for a domain password. I also tried Windows 7's native VPN functionality - it couldn't successfully connect to the VPN at all. I suspect this is because SonicWall is using some obnoxious special/undocumented authentication system; I had similar problems trying to connect on Linux with the normal IPsec tools there. What magical invocation or control panel option am I missing that will let this work? Are there any reasonable debugging strategies? I'm feeling quite frustrated at Windows' tendency to not give me much useful information that might let me understand what it is trying to do and what is going wrong.

  • Unable to connect to Postgres on Vagrant Box - Connection refused

    - by Ben Miller
    First off, I'm new to Vagrant and Postgres. I created my Vagrant instance using http://files.vagrantup.com/lucid32.box without any trouble. I am able to run vagrant up and vagrant ssh without issue. I followed the instructions at http://blog.crowdint.com/2011/08/11/postgresql-in-vagrant.html with one minor alteration: I installed the "postgresql-8.4-postgis" package instead of "postgresql postgresql-contrib". I started the server using:

        postgres@lucid32:/home/vagrant$ /etc/init.d/postgresql-8.4 start

    While connected to the Vagrant instance I can use psql to connect to the database without issue. In my Vagrantfile I had already added:

        config.vm.forward_port 5432, 5432

    but when I try to run psql from localhost I get:

        psql: could not connect to server: Connection refused
            Is the server running locally and accepting
            connections on Unix domain socket "/tmp/.s.PGSQL.5432"?

    I'm sure I am missing something simple. Any ideas?

    Update: I found a reference to an issue like this and the article suggested using:

        psql -U postgres -h localhost

    With that I get:

        psql: server closed the connection unexpectedly
            This probably means the server terminated abnormally
            before or while processing the request.
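
    A hedged guess, since the post doesn't show the server configuration: Postgres 8.4 only listens on the loopback interface by default, and VirtualBox's forwarded ports arrive on the guest's NAT interface, so connections from the host are refused unless the server is told to listen more widely and pg_hba.conf permits the client. A sketch of the two settings usually involved (paths per the Ubuntu 10.04 / 8.4 layout; tighten the CIDR for anything beyond local development):

        # /etc/postgresql/8.4/main/postgresql.conf
        listen_addresses = '*'          # default is 'localhost'

        # /etc/postgresql/8.4/main/pg_hba.conf
        host    all    all    0.0.0.0/0    md5

    followed by /etc/init.d/postgresql-8.4 restart.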

  • Linux AMD FX-8350 temperature monitoring

    - by HyperDevil
    I'm trying to get the CPU temperature for my AMD FX-8350 on Debian Squeeze. I ran sensors-detect and then sensors, but I only get my motherboard sensors (it8720-isa-0228). There are three temperature values there but I assume those are not for the CPU:

        it8720-isa-0228
        Adapter: ISA adapter
        in0:         +1.36 V  (min =  +0.00 V, max =  +4.08 V)
        in1:         +1.50 V  (min =  +0.00 V, max =  +4.08 V)
        in2:         +3.38 V  (min =  +0.00 V, max =  +4.08 V)
        in3:         +2.93 V  (min =  +0.00 V, max =  +4.08 V)
        in4:         +3.07 V  (min =  +0.00 V, max =  +4.08 V)
        in5:         +4.08 V  (min =  +0.00 V, max =  +4.08 V)
        in6:         +4.08 V  (min =  +0.00 V, max =  +4.08 V)
        in7:         +2.93 V  (min =  +0.00 V, max =  +4.08 V)
        Vbat:        +3.01 V
        fan1:       3375 RPM  (min =   10 RPM)
        fan2:          0 RPM  (min =    0 RPM)
        fan3:       1730 RPM  (min =   10 RPM)
        fan5:          0 RPM  (min =    0 RPM)
        temp1:       +27.0°C  (low = +127.0°C, high = +127.0°C)  sensor = thermistor
        temp2:       +53.0°C  (low = +127.0°C, high = +127.0°C)  sensor = thermal diode
        temp3:       +65.0°C  (low = +127.0°C, high = +90.0°C)   sensor = thermal diode
        cpu0_vid:    +0.000 V

    Is there anything I am missing? I also loaded the k8temp and k10temp modules and ran sensors-detect without any results. I do see this message in dmesg:

        hwmon-vid: Unknown VRM version of your x86 CPU
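
    k10temp is the driver family for this CPU, so one hedged check (a sketch, not from the post) is whether the module actually bound to the CPU's thermal device; if the driver directory stays empty, the kernel shipped with Squeeze may simply predate support for this CPU generation, and a newer kernel would be the next thing to try:

        modprobe k10temp
        ls /sys/bus/pci/drivers/k10temp/        # empty means the driver matched no device
        # look for a "k10temp" entry (the name attribute moved between kernel versions)
        grep . /sys/class/hwmon/hwmon*/name /sys/class/hwmon/hwmon*/device/name 2>/dev/null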

  • Launching firefox on remote server causes local firefox to start instead

    - by terdon
    Right, this is strange. I am connecting from my laptop (LMDE) to a remote host (SUSE Linux Enterprise) using ssh -X. I want to launch a Firefox instance running on the remote server so I can have access to webpages on a private network.

        User@RemoteMachine $ which -a firefox
        /usr/bin/firefox
        User@RemoteMachine $ /usr/bin/firefox --version
        Mozilla Firefox 2.0.0.2, Copyright (c) 1998 - 2007 mozilla.org
        User@LocalMachine $ which -a firefox
        /usr/bin/firefox
        User@LocalMachine $ /usr/bin/firefox --version
        Mozilla Firefox 14.0.1

    Now, if Firefox is not running on the local machine, everything goes as expected and executing firefox on the remote machine causes a Firefox (v2.0) window running on the remote machine to show up. However, if Firefox is running on the local machine, a second window of Firefox 14.0.1 running on the local machine appears. I have checked top on both machines. In the second case, a firefox process briefly appears on the remote machine and then disappears when the local version of Firefox is launched. My questions are the following: What gives? How/why can Firefox connect to its existing instance on the local machine? The remote machine appears to have access to the local machine. It, in fact, appears to have the right to execute programs on my local machine. Am I missing something or is this just weird? Is this not a security risk?
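
    What is probably happening (offered as background, not from the post): Firefox looks for an already-running instance on the X display it is told to use, and under ssh -X the remote process's display is the laptop's, so it finds the local copy and hands off to it; nothing is executing code on the laptop beyond the X forwarding that ssh -X itself grants. A sketch of the usual workaround:

        # On the remote machine: ignore any instance already owning the display
        firefox -no-remote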

  • Nameserver configuration error (Stealth NS records)

    - by Saif Bechan
    Hello, I have a nameserver with a primary domain configured. Now I added a second domain and set the NS records of the second domain to use the first domain, but I get a strange error. When I do the nameserver check at SIDN (for .nl domains) it says everything is configured correctly:

        Errors=0, Warnings=0, Informational=3
        ** Summary: ACCEPTED centshopper.nl.
        ** Full check report:
        primary name server "ns1.rdshosting.nl."
          Info: name server looks correctly configured.
        secondary name server "ns2.rdshosting.nl."
          Info: name server looks correctly configured.
        secondary name server "ns3.rdshosting.nl."
          Info: name server looks correctly configured.
        ** DNScheck 4.2.6, 2010/03/12 23:19:58 CET+0100

    Now when I check my DNS settings over at http://intodns.com/centshopper.nl I get the following 2 errors:

    1) Missing nameservers reported by parent
       FAIL: The following nameservers are listed at your nameservers as nameservers for your domain, but are not listed at the parent nameservers (see RFC2181 5.4.1). You need to make sure that these nameservers are working. If they are not working ok, you may have problems! ns3.rdshosting.nl

    2) Stealth NS records sent
       Stealth NS records were sent: ns3.rdshosting.nl

    I am running Plesk icw CentOS. In my opinion everything is OK. Does anyone know of this error and what the possible cause would be? I have checked the first few hits on Google already, and can't come up with a working solution. On a sidenote, can anyone explain to me what GLUE is and why I am not getting any? If you have been, thanks for reading!
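
    For the side questions, some general background (not specific to this setup): the intodns errors mean the zone itself lists ns3.rdshosting.nl as a nameserver while the .nl parent only delegates to ns1 and ns2, so ns3 stays a "stealth" server until it is also registered at the parent. Glue is an A record that the parent serves alongside the delegation; it is required when a nameserver's name sits inside the zone it is authoritative for, to break the chicken-and-egg lookup. A hedged way to compare the two views with dig:

        # What the parent (.nl) hands out, versus what your own server claims
        dig nl. NS                                  # list the .nl parent servers
        dig +norecurse centshopper.nl NS @<a-.nl-server-from-above>
        dig centshopper.nl NS @ns1.rdshosting.nl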

  • Vagrant synced folders aren't case sensitive

    - by lvmisooners
    For our web stack, we are moving from a Windows server to CentOS. To facilitate development, we're utilizing Vagrant to run CentOS VMs locally. We're using Vagrant's Synced Folders feature to allow devs to use their favorite IDEs on their host machine, but we're finding that one key feature is missing from this setup: filesystem case sensitivity. The synced folder inside the VM apparently takes on the properties of the host's filesystem, so if I'm developing from a Windows machine, or even OS X, the filesystem isn't case-sensitive. This is a big issue, as our production servers will be pure CentOS, and their filesystem will be case-sensitive. Case sensitivity is one of the main reasons we wanted a local VM; we want to prevent "It works on my machine!" Some workarounds we've considered or tried:

    - Use lsyncd to sync from the Vagrant share to a location within the VM that is case-sensitive. (Updating files on the host doesn't seem to generate the events in the VM that lsyncd listens to.)
    - Make a case-sensitive partition on the host. (Doesn't work for Windows; for OS X see the sketch after this list.)
    - Use Samba. (This may be an option, but we haven't vetted it yet.)

    Is there a better way? Note that we have developers using Windows, OS X, and Ubuntu, and the solution needs to work everywhere.
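
    On the case-sensitive-partition option: for the OS X developers at least, a case-sensitive volume doesn't need a real partition; a sparse disk image works (a hedged sketch; the size and names are arbitrary):

        # Create and mount a case-sensitive HFS+ image to hold the project tree
        hdiutil create -size 10g -type SPARSE -fs "Case-sensitive Journaled HFS+" -volname code code
        hdiutil attach code.sparseimage   # mounts at /Volumes/code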

  • Ext3 fs: Block bitmap for group 1 not in group (block 0). is fs dead?

    - by ip
    Hi, my company has a server with one big partition holding a MySQL database and PHP files. Now this partition seems to be corrupted, as reported by these kernel messages when I tried to mount it manually:

        [329862.817837] EXT3-fs error (device loop1): ext3_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [329862.817846] EXT3-fs: group descriptors corrupted!

    I've tried to recover it by running tools from a PLD live CD. These are the tools I have tested:

    - e2retrieve
    - testdisk
    - photorec
    - dd_rescue/dd_rhelp
    - ddrescue
    - fsck.ext2
    - e2salvage

    without any success.

        dumpe2fs 1.41.3 (12-Oct-2008)
        Filesystem volume name:   /dev/sda3
        Last mounted on:          <not available>
        Filesystem UUID:          dd51610b-6de0-4392-a6f3-67160dbc0343
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal filetype sparse_super
        Default mount options:    (none)
        Filesystem state:         not clean with errors
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              9502720
        Block count:              18987570
        Reserved block count:     949378
        Free blocks:              11555345
        Free inodes:              11858398
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         16384
        Inode blocks per group:   512
        Last mount time:          Wed Mar 24 09:31:03 2010
        Last write time:          Mon Apr 12 11:46:32 2010
        Mount count:              10
        Maximum mount count:      30
        Last checked:             Thu Jan 1 01:00:00 1970
        Check interval:           0 (<none>)
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               128
        Journal inode:            8
        Journal backup:           inode blocks
        dumpe2fs: A block group is missing an inode table while reading journal inode

    Are there any other tools I should test before considering this disk definitely unrecoverable? Many thanks, ip
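
    One standard ext3 step the list above doesn't mention (a hedged suggestion - and given the state of the disk, best done against a dd image rather than the original device): group descriptors are replicated next to the backup superblocks, so e2fsck can be told to start from one of those copies. With a 4096-byte block size the first backup normally sits at block 32768:

        # Print where the backups live without writing anything (-n is a dry run);
        # use the same parameters the filesystem was created with
        mke2fs -n /dev/sda3

        # Then run fsck from a backup superblock
        e2fsck -b 32768 -B 4096 /dev/sda3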

  • FFMPEG Install on EC2 - Amazon Linux

    - by Oliver Holmberg
    Hello Server Fault friends, I am about two days into attempting to install FFmpeg with dependencies on an AWS EC2 instance running the Amazon Linux AMI. I've installed FFmpeg on Ubuntu and Fedora systems with no problems in the past, and have read reportedly successful instructions for installing on Red Hat/Fedora. I have followed a number of tutorials and forum articles, but have had no luck yet. As far as I can tell, the main problems are as follows:

    1. The Amazon Linux yum repositories (most similar to Red Hat/CentOS) don't have FFmpeg available. I have found instructions for adding repositories that include the required packages, but adding these repositories causes yum to fail when updating packages. (Also, I've read some cautionary tales about adding Red Hat/CentOS repositories to Amazon Linux that lead me to believe it may be a bad idea: https://forums.aws.amazon.com/thread.jspa?messageID=229166)
    2. I have tried a more complicated method of downloading the source tarball, compiling, and installing, but this always fails due to missing dependencies and other errors.

    On to my question: has anyone successfully installed FFmpeg on Amazon Linux? Is there a fundamental incompatibility? If anyone could share specific instructions on installing FFmpeg on Amazon Linux I would be greatly appreciative. Any other insights/experiences would also be appreciated. Thanks in advance, Oliver
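
    For the source-build route, the skeleton usually looks something like the following (a sketch under stated assumptions: a minimal build with optional codecs left out so the dependency surface stays small; available flags vary by FFmpeg version, so check ./configure --help):

        sudo yum groupinstall -y "Development Tools"   # gcc, make, etc.
        tar xjf ffmpeg-*.tar.bz2 && cd ffmpeg-*
        # If configure complains about yasm, install it or pass --disable-yasm
        ./configure --prefix=/usr/local --enable-gpl
        make && sudo make install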

  • ERROR: Can't find the archive-keyring

    - by 23tux
    I'm trying to upgrade my Debian Lenny to Squeeze. I've replaced the word lenny with squeeze in sources.list and ran:

        apt-get clean
        apt-get update
        apt-get dist-upgrade

    But after a while, I get this error:

        Preconfiguring packages ...
        Setting up debian-archive-keyring (2010.08.28) ...
        ERROR: Can't find the archive-keyring
        Is the ubuntu-keyring package installed?
        dpkg: error processing debian-archive-keyring (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         debian-archive-keyring
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    So I tried apt-get -f install debian-archive-keyring and got the same error. Then I tried apt-get -f install ubuntu-keyring and got this error:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package ubuntu-keyring is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package ubuntu-keyring has no installation candidate

    Maybe I have the wrong sources in my sources.list:

        deb ftp://mirror.hetzner.de/debian/packages squeeze main contrib non-free
        deb ftp://mirror.hetzner.de/debian/security squeeze/updates main contrib non-free
        deb http://ftp.de.debian.org/debian/ squeeze main non-free contrib
        deb-src http://ftp.de.debian.org/debian/ squeeze main non-free contrib
        deb http://security.debian.org/ squeeze/updates main contrib non-free
        deb-src http://security.debian.org/ squeeze/updates main contrib non-free

    I hope someone can help me. Thx, tux

  • TFS 2010 : Unable to add Project to a collection

    - by Scott
    This morning I'm trying to set up Team Foundation Server 2010 to demo for my team. As this is just a demo, I thought I would install it on my Windows 7 machine, which also serves as my development machine. My development machine uses Visual Studio 2008 Team Suite. I installed Team Explorer 2008 and then reapplied SP1. Finally I installed and set up TFS 2010. TFS by default gave me administrator privileges. I started up Visual Studio and connected to the collection just fine. However, I'm unable to create a new project and get the following error message:

        TF30172: You are trying to create a team project either without required permissions
        or with an older version of Team Explorer. Contact your project admin...

    To check the permissions, I used my home computer, which is running Visual Studio 2010. On that machine I was able to connect to the same TFS instance and create a project with no problem. So it looks as though it is a Team Explorer problem, but everywhere on the web people are saying that not only is what I'm trying to do possible, but that they have done it themselves. What am I missing to add a project to TFS 2010 under Visual Studio 2008?

  • Why is usable RAM less than total RAM?

    - by D Connors
    My girlfriend bought a laptop last week. It's a Core 2 Duo with 4 GB of RAM. We installed Vista 64-bit, and one of the first things we did was right-click on "My Computer" to see the properties. Immediately we noticed something strange about her RAM; the line said:

        Installed memory (RAM): 4,00 GB (3,68 GB usable)

    I told her not to worry, thinking it must be something about the laptop hardware (considering her Vista installation came from the same DVD as mine, and I never noticed anything like that on my 4 GB desktop). One hour ago, it got worse. We looked at the properties again, and it now says:

        Installed memory (RAM): 4,00 GB (2,98 GB usable)

    What does that mean? Are those 1,02 GB missing or being used by the system?

    EDIT: There is a possibility that the system information is wrong. I just noticed that it reports an Intel T6500 processor, when it's actually a T6400. How can I find out how much RAM is really available to the system?

    EDIT2: Checking the Resource Monitor, it says 1003 MB are reserved for the hardware. Is that good or bad?
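
    For what it's worth, the arithmetic in EDIT2 roughly accounts for the gap: 1003 MB is about 0.98 GB, and 4,00 GB minus roughly 1 GB of hardware-reserved memory lands near the 2,98 GB reported as usable. So the memory appears to be reserved (typically for devices such as shared graphics) rather than missing.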

  • Ubuntu Server running VNC

    - by xwapilot
    I have access to four computers:

    - 1 Ubuntu Server desktop (version 10.04)
    - 1 Mac Mini (Snow Leopard)
    - 1 Windows desktop (Windows 7)
    - 1 Windows laptop (Windows Vista)

    The first three will always be on the home network. My goal is to SSH from the laptop into the server and be able, through VNC (or another remote desktop software), to control the Windows and Mac computers. The point of this would be slightly heightened network security over using VNC to directly access the Mac or Windows desktop. I have successfully used SSH to connect to the server, but have not been able to successfully implement the remote desktop connection. I would appreciate help doing so. Here's what I've done so far. As per the instructions at http://www.stuartellis.eu/articles/vnc-on-linux/ I installed the following:

    - vnc4server - the main VNC server software
    - vnc-java - enables access from Web browsers with Java support
    - xvnc4viewer - a basic VNC viewer

    I then set up a password using the vncpasswd command. To attempt to connect to the Mac, I followed directions I found in a thread at superuser.com and went to "System Preferences > Sharing" and enabled "Screen Sharing". Subsequently, I tried entering the following commands into Ubuntu:

        vncviewer mac_ip_address::5904
        vncviewer mac_ip_address:0
        vncviewer mac_ip_address:1

    They all returned the following:

        VNC Viewer Free Edition 4.1.1 for X - built Apr 9 2010 18:41:55
        Copyright (C) 2002-2005 RealVNC Ltd.
        See http://www.realvnc.com for information on VNC.
        vncviewer: unable to open display ""

    I'm sure I'm missing something important, but I'm not sure what it is. Do I need to have a GUI installed, or did that come with the VNC packages I installed?
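
    The "unable to open display" message is vncviewer (itself an X client) failing to find an X server to draw on in the SSH session, not a failure to reach the Mac, so a GUI on the server would indeed be needed for that exact command to work. A hedged sketch of the pattern the setup seems to be aiming for instead: run the viewer on the machine you are sitting at and tunnel the VNC port through the server (the Mac's Screen Sharing listens on 5900):

        # From the laptop: forward local port 5901, via the Ubuntu server, to the Mac
        ssh -L 5901:mac_ip_address:5900 user@ubuntu_server

        # Then, while that session is open, point a VNC viewer on the laptop at:
        #   localhost:5901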

  • Dual NVidia graphics cards in Ubuntu / xorg.conf mania

    - by John Zwinck
    I have two NVIDIA graphics cards:

    - Quadro NVS 295 (PCI Express, dual DisplayPort outputs)
    - GeForce FX 5200 (PCI, DVI and VGA outputs)

    I have three identical monitors, two on DisplayPort and one on DVI. I'm on Ubuntu Hardy (and cannot currently dist-upgrade for separate reasons). I use the "nvidia" driver. What's new is the GeForce card and the third monitor. I currently have the dual DisplayPort monitors working fine. Here are the display-related parts of my xorg.conf:

        Section "ServerLayout"
            Identifier     "Default Layout"
            Screen         "PCI-Express Screen" 0 0
            # adding this makes X fail to start:
            Screen         "PCI Screen" 0
            Inputdevice    "Generic Keyboard"
            Inputdevice    "Configured Mouse"
        EndSection

        Section "Module"
            Load           "glx"  # not sure why/if this is needed
        EndSection

        Section "Monitor"
            Identifier     "DELL 2408WFP"
            Option         "DPMS"
        EndSection

        Section "Device"
            Identifier     "NVIDIA Quadro NVS 295"
            Driver         "nvidia"
            Option         "RenderAccel" "true"
            Screen         0
            BusID          "PCI:2:0:0"
        EndSection

        Section "Device"
            Identifier     "NVIDIA GeForce FX 5200"
            Driver         "nvidia"
            Option         "RenderAccel" "true"
            Screen         1
            BusID          "PCI:6:4:0"
        EndSection

        Section "Screen"
            Identifier     "PCI-Express Screen"
            Device         "NVIDIA Quadro NVS 295"
            Monitor        "DELL 2408WFP"
            Defaultdepth   24
            Option         "TwinView" "True"
            Option         "UseEdidFreqs" "True"
            Option         "MetaModes" "1920x1200 +0+1200, 1920x1200 +0+0"
        EndSection

        Section "Screen"
            Identifier     "PCI Screen"
            Device         "NVIDIA GeForce FX 5200"
            Monitor        "DELL 2408WFP"
            Defaultdepth   24
            Option         "TwinView" "True"
            Option         "UseEdidFreqs" "True"
            Option         "MetaModes" "1920x1200 +0+0"
        EndSection

    I use nvidia-settings to configure my monitors, and it does not show the second GPU. lspci, though, shows:

        02:00.0 VGA compatible controller: nVidia Corporation Unknown device 06fd
        06:04.0 VGA compatible controller: nVidia Corporation NV34 [GeForce FX 5200]

    which is where I got the BusID settings for the two devices (when I just had one device, I didn't have any BusID listed... and adding the BusID hasn't broken anything). What am I missing? How can I make nvidia-settings show my second GPU so I can then configure its monitor?
