Search Results

Search found 13788 results on 552 pages for 'instance'.

Page 14 of 552

  • Loading an index into a MemoryIndex instance

    - by Javi
    Hello, is there any way to load an existing index into an instance of MemoryIndex? I have an application which uses Hibernate Search, so I can call index() on a FullTextEntityManager instance to index an object. I'd like to load the created index back and put it into a MemoryIndex instance to execute several queries over it. Is that possible? Thanks.

    Read the article

  • A network-related or instance-specific error occurred while establishing a connection to SQL Server

    - by sf
    Hi, I'm getting the following error when trying to load an ASP.NET MVC app on IIS 7 with SQL Server 2008 Express. The app uses LINQ to SQL.

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)

    I've done some searching and all answers point to enabling TCP connections in SQL Server Configuration Manager, which I have done to no avail. The connection string I am using is:

        Server=SERVERNAME\SQLEXPRESS;Database=DBName;Integrated Security=true

    The catch: I have another app that could already talk to SQL Server just fine, even before playing around with the SQL Server configuration settings. That app uses the following connection string:

        Data Source=SERVERNAME\SQLEXPRESS;Initial Catalog=OtherDbName;Integrated Security=True;Persist Security Info=False;Connect Timeout=120

    I've tried this connection string on the app that isn't working and it still doesn't work. Please help, I think I'm about to go crazy.
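    A quick way to tell a network problem from an application problem is to test the same instance name from the web server's own command line. The sketch below is generic and assumes the sqlcmd client tools are present; the names are taken from the connection strings above.

        rem trusted-connection test against the named instance (sqlcmd ships with the SQL client tools)
        sqlcmd -S SERVERNAME\SQLEXPRESS -E -d DBName -Q "SELECT @@SERVERNAME"
        rem named-instance resolution depends on the SQL Server Browser service
        sc query SQLBrowser

    If the command-line test succeeds under your own account, the remaining difference is often the identity the IIS application pool runs under, since Integrated Security authenticates as that account.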

    Read the article

  • Deploying a Git server on an AWS Linux instance

    - by Leroux
    I'm setting up a git server on my Linux instance in AWS. I tried doing it using these instructions, but in the end I always get stuck with a "Permission denied (publickey)" message. Here are my detailed steps; the client is my Windows machine running msysgit and the server is the AWS Ubuntu instance:

    1) I created the user Git with a simple password.
    2) Created the ssh directory at ~/.ssh.
    3) On the client I created SSH keys using ssh-keygen -t rsa -b 1024; they got dropped in my /Users/[Name]/.ssh directory, and the id_rsa and id_rsa.pub key pair was created.
    4) Using Notepad I copy-pasted the text into newly created files on the server in the ~/.ssh directory of my Git user; ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub were copied.
    5) On the server I made the authorized_hosts file using "cat id_rsa.pub authorized_hosts" (while inside the .ssh directory).
    6) To test it, on my client machine I ran ssh -v git@[ip.address]
    7) Result:

        debug1: Host 'ip.address' is known and matches the RSA host key.
        debug1: Found key in /c/Users/[Name]/.ssh/known_hosts:1
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /c/Users/[Name]/.ssh/identity
        debug1: Trying private key: /c/Users/[Name]/.ssh/id_rsa
        debug1: Offering public key: /c/Users/[Name]/.ssh/id_dsa
        debug1: Authentications that can continue: publickey
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    I would appreciate any insight anyone can give me.
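    For comparison, a generic sketch of the usual key setup; note that sshd reads ~/.ssh/authorized_keys (not authorized_hosts) and is strict about permissions. Paths and the IP address are placeholders.

        # on the server, as the git user
        mkdir -p ~/.ssh && chmod 700 ~/.ssh
        cat ~/id_rsa.pub >> ~/.ssh/authorized_keys   # append the client's public key
        chmod 600 ~/.ssh/authorized_keys
        # the private key should stay on the client and never be copied to the server
        # on the Windows client (msysgit shell), point ssh at the matching private key
        ssh -i ~/.ssh/id_rsa -v git@ip.address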

    Read the article

  • Drive Mapped On Amazon EC2 Instance Startup Disconnected After Logon

    - by jsn
    I am launching an Amazon EC2 instance (Windows Server 2012), and on startup (through the User Data field) it runs this PowerShell script:

        <powershell>
        NET CONFIG SERVER /AUTODISCONNECT:-1
        # clear all prior connections
        net use * /delete /y > C:\delLog.txt 2>&1
        # mount new drive
        net use R: \\dbHost\share /user:username pass /persistent:yes > C:\useLog.txt 2>&1
        ipconfig /all > C:\ipLog.txt
        </powershell>

    When it launches and I connect to it through RDP, Explorer shows "Disconnected Network Drive (R:)". If I double-click it, an error message displays "R:\ is not accessible. Access is denied." Normally, it would ask me for credentials to reconnect. I need this drive to stay connected for the duration of the instance.

    delLog.txt contents:

        You have these remote connections:
            T: \\dbHost\share
        Continuing will cancel the connections.
        The command completed successfully.

    useLog.txt contents:

        The command completed successfully.

    ipLog.txt contents are as expected. The net use command works fine by itself; it connects. Anyone have any idea what could be wrong? There is only one account on these machines - Administrator. It is a Samba share to a Linux server on a private network.
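    One hedged observation: net use mappings are per user session, and the User Data script runs under a different account at boot, which would leave the drive showing as disconnected for Administrator. A possible workaround is to repeat the mapping at logon for the account that needs it (same share and credentials as in the script above), for example via a logon script or scheduled task:

        rem hypothetical re-mapping in the logged-on user's own session
        net use R: \\dbHost\share /user:username pass /persistent:yes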

    Read the article

  • How to move Mdadm RAID drive (EBS based) to different AWS Instance

    - by Stanley
    We have a media-rich web application that is hosted on AWS. We have several web servers and an NFS server. On the NFS server (a Linux server) several EBS volumes are mounted, and we've used mdadm to combine the mounted volumes into a single RAID volume. The web servers simply access the NFS storage through a mount point.

    Amazon has now let us know that they will be performing power maintenance on this server in a couple of days' time. Since all our media is on it, that would render our site unusable for the hours while Amazon is working on it. We want to try and prevent this downtime. I was thinking that we could avoid it by setting up a new server temporarily, attaching the EBS drives (RAID volume) to that server, and pointing our web servers there during maintenance. This is a very high-risk operation since it involves several terabytes of our production data.

    What would be the safe way to move our logical RAID drive (md0) to a new Amazon instance? I was hoping that I could start by building the new server, mounting the EBS volumes and assembling the RAID partition using mdadm --assemble --scan before unmounting from the existing instance, so that I could first test that everything works, thus having it mounted on two instances at the same time, but I don't believe that is possible with the way that filesystems work. "How do I move a Linux software RAID to a new machine?" suggests a way to move drives, but isn't really a cloud-based question. Perhaps there are simpler ways to prevent downtime with our solution being hosted on the cloud? I have considered taking an EBS snapshot, but that tries to replicate all the many terabytes of mounted storage, so it is not a practical solution. Any ideas?
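    For what it's worth, the mdadm metadata lives on the EBS volumes themselves, so the array can be re-assembled on a new instance without copying data. A rough, hedged outline (volume and instance IDs are placeholders, and the array should only ever be mounted on one instance at a time):

        # on the old NFS server: quiesce and stop the array
        umount /mnt/raid
        mdadm --stop /dev/md0
        # detach each EBS volume and attach it to the new instance (repeat per volume)
        ec2-detach-volume vol-xxxxxxxx
        ec2-attach-volume vol-xxxxxxxx -i i-yyyyyyyy -d /dev/sdf
        # on the new instance: scan for the md superblocks and bring the array back
        mdadm --assemble --scan
        mount /dev/md0 /mnt/raid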

    Read the article

  • SQL cluster instance names for large project

    - by Sam
    We're setting up two clusters, one dev and one prod. The production cluster will host two SQL instances: an OLTP and a DW. The development cluster will host 4 OLTP non-production environments and at least one non-production DW. We're working on getting more non-production DWs and possibly more OLTP systems. I'm considering a naming scheme like this, where PROJ would be 3 initials for the project name.

    Dev cluster:
        MSSQLPROJD1\D1 (DEV)
        MSSQLPROJD2\D2 (TEST)
        MSSQLPROJD3\D3 (QA)
        MSSQLPROJD4\D4 (STAGE)
        MSSQLPROJD5\D5 (DW)

    Prd cluster:
        MSSQLPROJP1\P1 (PRD)
        MSSQLPROJP2\P2 (DW)

    To the left of the slash, each name must be unique network-wide. On each server, the instance name to the right of the slash must be unique. Any thoughts on this? I'm trying to avoid having instance names drift from reality as the project progresses - say we change what we call a certain environment or want to repurpose one. Then we can update a listing of the purposes for the instances and be done with it. How has a scheme like this worked out for you? Maybe you do things another way in your shop - tell me about it. Thanks.

    Read the article

  • Configuring a MySQL 5.1 Instance on Windows 7 Professional x64 Fails

    - by Thomas Owens
    I'm trying to set up my laptops to function as mobile development environments. Installing the software on my Linux machine and getting it configured was fairly straightforward; however, I'm having trouble getting MySQL 5.1 Server installed and configured on Windows 7 Professional 64-bit. I'm currently using the Windows MSI installer for the complete MySQL 5.1 system (as opposed to the Essentials installer also available). I've tried to install using both the 32-bit and 64-bit versions of MySQL 5.1 - the same events occur in both. I've installed both the Server Instance Configuration Wizard and Workbench and everything appears to be installed just fine.

    When I open the Instance Configuration Wizard, I select Detailed Configuration. On the next screen, I select Development Environment, then Multifunctional Database on the screen after that. I leave the InnoDB settings unchanged. I select Manual Setting with 5 concurrent connections. I enable TCP/IP Networking on port 3306 and Enable Strict Mode. I select the Standard Character Set. I check the boxes for Install as a Windows Service (and provide the name "MySQL") and Include the Bin Directory in Windows PATH. On the next screen, I set my root user name and password. I do not enable root access from remote machines and I also do not create an anonymous account.

    On the final screen of the wizard, when I click "Execute", the first two tasks (Prepare Configuration and Write Configuration File) complete. However, when it reaches Start Service, the wizard hangs and becomes unresponsive ("Not Responding" appears in the title bar and Task Manager). I would really like to be able to use both my Windows and Linux laptops as full-blown mobile development environments, but I can't do that without being able to run MySQL. Has anyone encountered this problem before? What options do I have to correct it?
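    When the wizard stalls at Start Service, the Windows service itself has usually failed to start. A few generic things to check from an elevated command prompt (service name as configured in the wizard); the MySQL error log, <hostname>.err in the data directory, normally names the real cause:

        rem is something else already listening on 3306?
        netstat -ano | findstr :3306
        rem what state is the service in, and what happens when it is started by hand?
        sc query MySQL
        net start MySQL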

    Read the article

  • sysbench memory test on an EC2 small instance

    - by caribio
    I'm seeing a problem with the sysbench memory test (the default version that's compiled in). This is on Ubuntu Maverick, with sysbench installed via apt-get install sysbench. Running the same thing on Ubuntu @ Rackspace worked just as expected. While the CPU and I/O tests worked fine on EC2 servers, the memory test just runs without doing anything (notice the 0M in the test results). The instance used was the publicly available 'stock' Ubuntu image with no changes to it:

        ./ec2-run-instances ami-ccf405a5 --instance-type m1.small --region us-east-1 --key mykey

    Supplying more arguments (such as: --memory-block-size=1K --memory-total-size=102400M) didn't help. What am I doing wrong? Thanks.

        sysbench --num-threads=4 --test=memory run
        sysbench 0.4.12: multi-threaded system evaluation benchmark

        Running the test with following options:
        Number of threads: 4

        Doing memory operations speed test
        Memory block size: 1K
        Memory transfer size: 0M
        Memory operations type: write
        Memory scope type: global
        Threads started!
        Done.

        Operations performed: 0 ( 0.00 ops/sec)
        0.00 MB transferred (0.00 MB/sec)

        Test execution summary:
            total time: 0.0003s
            total number of events: 0
            total time taken by event execution: 0.0000
            per-request statistics:
                min: 18446744073709.55ms
                avg: 0.00ms
                max: 0.00ms

        Threads fairness:
            events (avg/stddev): 0.0000/0.00
            execution time (avg/stddev): 0.0000/0.00

    Read the article

  • Windows Azure Error: Failed to start Storage Emulator: the SQL Server instance ‘localhost\SQLExpress’ could not be found

    - by DigiMortal
    When running some of your Windows Azure applications while the storage emulator is not configured, you may get the following error: "Windows Azure Tools: Failed to initialize Windows Azure storage emulator. Unable to start Development Storage. Failed to start Storage Emulator: the SQL Server instance ‘localhost\SQLExpress’ could not be found. Please configure the SQL Server instance for Storage Emulator using the ‘DSInit’ utility in the Windows Azure SDK." Here's how to solve this problem. You need to run the DSInit utility to create the database. For Windows Azure SDK 1.6 the location of the DSInit utility is:

        C:\Program Files\Windows Azure Emulator\emulator\devstore

    By default DSInit expects your database server to be (local)\SQLEXPRESS, but you can change that easily. If you have a SQL Server instance called SQLEXPRESS then it is enough to just run DSInit. If you need some other instance, run the following command:

        DSInit /sqlinstance:<instance name>

    For the default instance use "." as the instance name:

        DSInit /sqlinstance:.

    You can find more information about sqlinstance and other switches in the DSInit documentation. If the database was created correctly, a confirmation dialog is shown. When the storage database is ready you can run your application.

    Read the article

  • DHCP reply packets do not make it into KVM instance in OpenStack

    - by Lorin Hochstein
    I'm running a KVM instance inside of OpenStack, and it isn't getting an IP address from the DHCP server. Using tcpdump, I can see the request and reply packets on vnet0 of the compute host:

        # tcpdump -i vnet0 -n port 67 or port 68
        tcpdump: WARNING: vnet0: no IPv4 address assigned
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on vnet0, link-type EN10MB (Ethernet), capture size 65535 bytes
        19:44:56.176727 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:46:f6:11, length 300
        19:44:56.176785 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:46:f6:11, length 300
        19:44:56.177315 IP 10.40.0.1.67 > 10.40.0.3.68: BOOTP/DHCP, Reply, length 319
        19:45:02.179834 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:46:f6:11, length 300
        19:45:02.179904 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:46:f6:11, length 300
        19:45:02.180375 IP 10.40.0.1.67 > 10.40.0.3.68: BOOTP/DHCP, Reply, length 319

    However, if I do the same thing on eth0 inside the KVM instance, I only see the request packets, not the reply packets. What would prevent the packets from making it from vnet0 of the host to eth0 of the guest? My host is running Ubuntu 12.04 and my guest is running CentOS 6.3. Note that I have added this rule to my iptables, but it doesn't resolve the issue:

        -A POSTROUTING -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill

    The instance corresponds to vnet0 and is connected via br100:

        # brctl show
        bridge name     bridge id               STP enabled     interfaces
        br100           8000.54781a8605f2       no              eth1
                                                                vnet0
                                                                vnet1
        virbr0          8000.000000000000       yes

    Here's the full iptables-save:

        # Generated by iptables-save v1.4.12 on Tue Apr 2 19:47:27 2013
        *nat
        :PREROUTING ACCEPT [8323:2553683]
        :INPUT ACCEPT [7993:2494942]
        :OUTPUT ACCEPT [6158:461050]
        :POSTROUTING ACCEPT [6455:511595]
        :nova-compute-OUTPUT - [0:0]
        :nova-compute-POSTROUTING - [0:0]
        :nova-compute-PREROUTING - [0:0]
        :nova-compute-float-snat - [0:0]
        :nova-compute-snat - [0:0]
        :nova-postrouting-bottom - [0:0]
        -A PREROUTING -j nova-compute-PREROUTING
        -A OUTPUT -j nova-compute-OUTPUT
        -A POSTROUTING -j nova-compute-POSTROUTING
        -A POSTROUTING -j nova-postrouting-bottom
        -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p tcp -j MASQUERADE --to-ports 1024-65535
        -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p udp -j MASQUERADE --to-ports 1024-65535
        -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE
        -A nova-compute-snat -j nova-compute-float-snat
        -A nova-postrouting-bottom -j nova-compute-snat
        COMMIT
        # Completed on Tue Apr 2 19:47:27 2013
        # Generated by iptables-save v1.4.12 on Tue Apr 2 19:47:27 2013
        *mangle
        :PREROUTING ACCEPT [7969:5385812]
        :INPUT ACCEPT [7905:5363718]
        :FORWARD ACCEPT [158:48190]
        :OUTPUT ACCEPT [6877:8647975]
        :POSTROUTING ACCEPT [7035:8696165]
        -A POSTROUTING -o virbr0 -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill
        -A POSTROUTING -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill
        COMMIT
        # Completed on Tue Apr 2 19:47:27 2013
        # Generated by iptables-save v1.4.12 on Tue Apr 2 19:47:27 2013
        *filter
        :INPUT ACCEPT [2196774:15856921923]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [2447201:1170227646]
        :nova-compute-FORWARD - [0:0]
        :nova-compute-INPUT - [0:0]
        :nova-compute-OUTPUT - [0:0]
        :nova-compute-inst-19 - [0:0]
        :nova-compute-inst-20 - [0:0]
        :nova-compute-local - [0:0]
        :nova-compute-provider - [0:0]
        :nova-compute-sg-fallback - [0:0]
        :nova-filter-top - [0:0]
        -A INPUT -j nova-compute-INPUT
        -A INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
        -A INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
        -A INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
        -A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
        -A FORWARD -j nova-filter-top
        -A FORWARD -j nova-compute-FORWARD
        -A FORWARD -d 192.168.122.0/24 -o virbr0 -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A FORWARD -s 192.168.122.0/24 -i virbr0 -j ACCEPT
        -A FORWARD -i virbr0 -o virbr0 -j ACCEPT
        -A FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
        -A FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
        -A OUTPUT -j nova-filter-top
        -A OUTPUT -j nova-compute-OUTPUT
        -A nova-compute-FORWARD -i br100 -j ACCEPT
        -A nova-compute-FORWARD -o br100 -j ACCEPT
        -A nova-compute-inst-19 -m state --state INVALID -j DROP
        -A nova-compute-inst-19 -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A nova-compute-inst-19 -j nova-compute-provider
        -A nova-compute-inst-19 -s 10.40.0.1/32 -p udp -m udp --sport 67 --dport 68 -j ACCEPT
        -A nova-compute-inst-19 -s 10.40.0.0/16 -j ACCEPT
        -A nova-compute-inst-19 -p tcp -m tcp --dport 22 -j ACCEPT
        -A nova-compute-inst-19 -p icmp -j ACCEPT
        -A nova-compute-inst-19 -j nova-compute-sg-fallback
        -A nova-compute-inst-20 -m state --state INVALID -j DROP
        -A nova-compute-inst-20 -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A nova-compute-inst-20 -j nova-compute-provider
        -A nova-compute-inst-20 -s 10.40.0.1/32 -p udp -m udp --sport 67 --dport 68 -j ACCEPT
        -A nova-compute-inst-20 -s 10.40.0.0/16 -j ACCEPT
        -A nova-compute-inst-20 -p tcp -m tcp --dport 22 -j ACCEPT
        -A nova-compute-inst-20 -p icmp -j ACCEPT
        -A nova-compute-inst-20 -j nova-compute-sg-fallback
        -A nova-compute-local -d 10.40.0.3/32 -j nova-compute-inst-19
        -A nova-compute-local -d 10.40.0.4/32 -j nova-compute-inst-20
        -A nova-compute-sg-fallback -j DROP
        -A nova-filter-top -j nova-compute-local
        COMMIT
        # Completed on Tue Apr 2 19:47:27 2013
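    A few hedged checks that can help narrow down where the replies are being dropped between the bridge and the guest; none of them are specific to this setup:

        # on the host: is bridged traffic being diverted through iptables/ebtables at all?
        sysctl net.bridge.bridge-nf-call-iptables
        ebtables -L
        # -vv prints UDP checksum validation; bad checksums from virtio offloading are a
        # common reason a guest's DHCP client ignores replies that do reach the tap device
        tcpdump -i vnet0 -vv -n port 67 or port 68
        # inside the guest: rule out its own firewall
        iptables -L -n -v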

    Read the article

  • Can't launch Oneiric x64 instance on Eucalyptus

    - by Bruno Reis
    EDIT: after many hours, I've found out that the problem has nothing to do with Eucalyptus. It looks like the image is buggy. Very, very buggy. More details at the end. I didn't manage to fix it, and I will file a bug.

    EDIT 2: I managed to fix it, and it apparently works.

    I have a 4-machine cluster running Ubuntu Server Natty (11.04) x64. I've installed "Ubuntu Enterprise Cloud" from the installation CD (then updated it) on each of these machines. The cloud seems to work fine; I have lots of virtual machines running Natty servers on them. Now I'd like to run Oneiric in a virtual machine, but somehow I can't. I downloaded Oneiric's (x64) image from http://cloud-images.ubuntu.com/oneiric/current/, published it (uec-publish-tarball oneiric-server-cloudimg-amd64.tar.gz oneiric-server-cloudimg-amd64) exactly as I did with Natty, then tried to launch an instance (euca-run-instances -n 1 -k my-key -t m1.small -z my-cloud emi-XXXXXXXX) using Oneiric's image, but the instance is not able to boot. With euca-get-console-output I get the following:

        [ 0.461269] VFS: Cannot open root device "sda1" or unknown-block(0,0)
        [ 0.462388] Please append a correct "root=" boot option; here are the available partitions:
        [ 0.463855] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
        [ 0.465331] Pid: 1, comm: swapper Not tainted 3.0.0-13-generic #22-Ubuntu
        [ 0.466526] Call Trace:
        [ 0.466989] [<ffffffff815d3ee5>] panic+0x91/0x194
        [ 0.467860] [<ffffffff81ad1031>] mount_block_root+0xdc/0x18e
        [ 0.468891] [<ffffffff81ad126a>] mount_root+0x54/0x59
        [ 0.469829] [<ffffffff81ad13dc>] prepare_namespace+0x16d/0x1a7
        [ 0.470883] [<ffffffff81ad0d76>] kernel_init+0x140/0x145
        [ 0.471837] [<ffffffff815f38e4>] kernel_thread_helper+0x4/0x10
        [ 0.472889] [<ffffffff81ad0c36>] ? start_kernel+0x3df/0x3df
        [ 0.473884] [<ffffffff815f38e0>] ? gs_change+0x13/0x13

    The filesystem is labeled "cloudimg-rootfs"; inside the image both /etc/fstab and /boot/grub/grub.cfg refer to the image by that label. Everything seems to be correct, yet the kernel says it can't find the root filesystem. I've spent many hours googling, but nothing came of it. I've asked on #ubuntu-server, but nobody knew what to do. I've asked on #eucalyptus but got no answer at all. Any ideas on why this is happening and how to solve it? Thanks.

    EDIT: after many hours, I've found out that the problem has nothing to do with Eucalyptus. It looks like the image is buggy. Very, very buggy. The first problem is that the kernel in the image is a -generic kernel, while I suppose it should be a -virtual one. I chrooted into the image, removed the -generic packages and replaced them with the -virtual ones. Then I extracted the new kernel (and replaced the original -generic one that came with the tarball) because I need it when I publish and launch an image with Eucalyptus. The problem described above was solved. But then the console started showing this:

        mount: mount point ext4 does not exist

    If you check the /etc/fstab file in the image, it says:

        LABEL=cloudimg-rootfs ext4 defaults 0 1

    Damn, where's my mount point? Note that it is missing /proc as well. And when you think it is over, you will notice that your instance has no network connectivity. Let's check /etc/network/interfaces:

        # interfaces(5) file used by ifup(8) and ifdown(8)
        auto lo
        iface lo inet loopback

    Oh my! It is missing eth0... here I stopped. I can't take any more. I give up. It looks like Canonical has simply forgotten to set this image up properly. At first I thought: "have I downloaded a server image by mistake?", but no, I double-checked. It really is the cloud image; it even has cloud-init installed (which is not, by default, on server images). They just forgot to prepare it. I will file a bug (and reference it here once this is done), and hope they fix it soon!

    EDIT 2: it looks like the network configuration was the last thing missing. I decided to test it with the fixes above, and it booted properly! However, I haven't got the slightest idea whether the image is now good to go...
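    For anyone hitting the same image, a condensed, hedged sketch of the repairs described above; the loop-mount path is a placeholder and the exact package names may differ:

        # work on a copy of the .img extracted from the tarball
        mount -o loop oneiric-server-cloudimg-amd64.img /mnt/img
        # /proc and DNS may need bind-mounting inside the chroot for apt to work
        chroot /mnt/img apt-get update
        chroot /mnt/img apt-get install -y linux-virtual            # virtual kernel instead of -generic
        chroot /mnt/img apt-get remove -y linux-image-generic linux-generic
        # put the missing mount point back into /etc/fstab
        echo "LABEL=cloudimg-rootfs / ext4 defaults 0 1" > /mnt/img/etc/fstab
        # restore a basic eth0 stanza in /etc/network/interfaces
        printf "auto eth0\niface eth0 inet dhcp\n" >> /mnt/img/etc/network/interfaces
        umount /mnt/img
        # the new /boot/vmlinuz-*-virtual then has to be extracted and registered as the EKI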

    Read the article

  • GitLab on a fresh Ubuntu 13 EC2 instance

    - by Polly
    I've spun up a fresh Amazon EC2 instance for a micro Ubuntu 13 server to be used as a GitLab server. I know the specs are a little low, but it should serve well for my purposes. It has an elastic (static) IP address that I have created an A record for: git.mydomain.com. The first thing I did to the instance was add 1GB of swap to keep it happy from a memory perspective. I then set the hostname of the box to git.mydomain.com and followed https://github.com/gitlabhq/gitlabhq/blob/6-2-stable/doc/install/installation.md to the letter. Everything seems to have worked, except for the web server side of things. Doing a gitlab:check shows the following:

        Checking Environment ...
        Git configured for git user? ... yes
        Has python2? ... yes
        python2 is supported version? ... yes
        Checking Environment ... Finished

        Checking GitLab Shell ...
        GitLab Shell version >= 1.7.4 ? ... OK (1.7.4)
        Repo base directory exists? ... yes
        Repo base directory is a symlink? ... no
        Repo base owned by git:git? ... yes
        Repo base access is drwxrws---? ... yes
        update hook up-to-date? ... yes
        update hooks in repos are links: ... can't check, you have no projects
        Running /home/git/gitlab-shell/bin/check
        Check GitLab API access: /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `initialize': Connection refused - connect(2) (Errno::ECONNREFUSED)
            from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `open'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `block in connect'
            from /usr/local/lib/ruby/2.0.0/timeout.rb:52:in `timeout'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:877:in `connect'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:862:in `do_start'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:851:in `start'
            from /home/git/gitlab-shell/lib/gitlab_net.rb:62:in `get'
            from /home/git/gitlab-shell/lib/gitlab_net.rb:29:in `check'
            from /home/git/gitlab-shell/bin/check:11:in `<main>'
        gitlab-shell self-check failed
        Try fixing it:
        Make sure GitLab is running;
        Check the gitlab-shell configuration file:
        sudo -u git -H editor /home/git/gitlab-shell/config.yml
        Please fix the error above and rerun the checks.
        Checking GitLab Shell ... Finished

        Checking Sidekiq ...
        Running? ... yes
        Number of Sidekiq processes ... 1
        Checking Sidekiq ... Finished

        Checking GitLab ...
        Database config exists? ... yes
        Database is SQLite ... no
        All migrations up? ... yes
        GitLab config exists? ... yes
        GitLab config outdated? ... no
        Log directory writable? ... yes
        Tmp directory writable? ... yes
        Init script exists? ... yes
        Init script up-to-date? ... yes
        projects have namespace: ... can't check, you have no projects
        Projects have satellites? ... can't check, you have no projects
        Redis version >= 2.0.0? ... yes
        Your git bin path is "/usr/bin/git"
        Git version >= 1.7.10 ? ... yes (1.8.3)
        Checking GitLab ... Finished

    It seems like I'm very nearly there. Searching on this error I have only found advice that unfortunately hasn't helped. I'm not using any kind of SSL setup, which a lot of the posts I found were about. I have tried appending 127.0.0.1 git.mydomain.com to /etc/hosts and giving the instance a reboot, but there was no change. My config/gitlab.yml file has host: git.mydomain.com in it, and my gitlab-shell/config.yml has gitlab_url: "http://git.mydomain.com/" in it. I'm sure I'm missing something simple, but I've been through every relevant link I can find and have had no positive results; thank you in advance for any help!
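    The failing step is gitlab-shell calling back into the local web stack over HTTP, so some generic checks may help (service names as created by the 6.2 manual installation; the nginx piece is an assumption about what fronts unicorn):

        # is the Rails/unicorn side actually running?
        sudo service gitlab status
        # is the web server up and answering on the configured host name?
        sudo service nginx status
        curl -I http://git.mydomain.com/
        # gitlab_url in gitlab-shell must point at a URL that answers from this box
        sudo -u git -H editor /home/git/gitlab-shell/config.yml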

    Read the article

  • SQL instance through F5 BigIP VIP

    - by Sam
    We have a SQL 2008 instance, Server01\instance02. The port is set to 1466. We've set up a VIP named Server1. Attempting to connect to Server1\instance02 does not work; Server1,1466 does work. We have ports 1433, 1434 and 1466 open. Can we configure this so the name can be used without any changes to the SQL client? Thanks!!
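    Connecting by name to a named instance relies on the SQL Server Browser service answering on UDP 1434 to resolve the port, so the VIP has to pass UDP 1434 as well as TCP 1466 for the name to work. A hedged comparison from a client:

        rem works today because the port is given explicitly
        sqlcmd -S Server1,1466 -E -Q "SELECT @@SERVERNAME"
        rem only works once UDP 1434 reaches the SQL Browser behind the VIP
        sqlcmd -S Server1\instance02 -E -Q "SELECT @@SERVERNAME"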

    Read the article

  • Unable to start SQL Server 2008 R2 instance - DB file corrupt

    - by Velu
    I was not able to start the SQL Server 2008 R2 production DB instance. After reading the log file, the error message is: "The log scan number passed to log scan in database ‘master’ is not valid. This error may indicate data corruption or that the log file (.ldf) does not match the data file (.mdf). If this error occurred during replication, re-create the publication." After reading several posts I realized that my master DB file is corrupted. I followed the steps below:

    Copy the Master.mdf and Masterlog.ldf files from the template location to my database data folder, i.e. from C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\Templates to D:\MSSQL\MSSQL10_50.MSSQLSERVER\MSSQL\DATA.

    Note: the same error occurs when I copy all the DB files (Master, MasterLog, MSDBData, MSDBLog, Model and ModelLog).

    When I run my MSSQLSERVER instance a different problem occurs. My server only has C and D drives; I don't have an E drive. How can I work around the error paths below?

    Error log:

        2012-10-24 02:51:12.79 spid5s Error: 17204, Severity: 16, State: 1.
        2012-10-24 02:51:12.79 spid5s FCB::Open failed: Could not open file e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBData.mdf for file number 1. OS error: 3(The system cannot find the path specified.).
        2012-10-24 02:51:12.79 spid5s Error: 5120, Severity: 16, State: 101.
        2012-10-24 02:51:12.79 spid5s Unable to open the physical file "e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBData.mdf". Operating system error 3: "3(The system cannot find the path specified.)".
        2012-10-24 02:51:12.79 spid5s Error: 17207, Severity: 16, State: 1.
        2012-10-24 02:51:12.79 spid5s FileMgr::StartLogFiles: Operating system error 2(The system cannot find the file specified.) occurred while creating or opening file 'e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBLog.ldf'. Diagnose and correct the operating system error, and retry the operation.
        2012-10-24 02:51:12.79 spid5s File activation failure. The physical file name "e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBLog.ldf" may be incorrect.
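    The e:\sql10_main_t.obj.x86fre\... paths in the log are the build-time paths recorded inside the template system databases, which is why copying those files in by hand fails. The supported route is to rebuild the system databases from the installation media and then restore master and msdb from backup; a hedged sketch with a placeholder account and password:

        rem run from the SQL Server 2008 R2 setup media
        setup.exe /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=MSSQLSERVER /SQLSYSADMINACCOUNTS=DOMAIN\admin /SAPWD=StrongPassword
        rem then restore the most recent backups of master and msdb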

    Read the article

  • HTTP not working on EC2 instance with own domain name

    - by bogdanvursu
    I have this problem I've already posted on the Amazon AWS forum, but unfortunately I haven't got a clear answer, so I was hoping you guys could help. Here's the link: http://developer.amazonwebservices.com/connect/thread.jspa?messageID=198238#198207

    Basically, I don't know why, after associating an Elastic IP address and mapping it to one of my domains, FTP and ping work fine, but HTTP does a 302 redirect to the Amazon AWS hostname I had before associating the Elastic IP address. Here's the question from the AWS forum:

    I have an EC2 instance with HTTP and FTP installed. They both worked. Then I associated an Elastic IP address with that instance. Then I mapped that IP address to a name which is a subdomain of a domain I own. I think it's an A record (I didn't do the mapping personally). Now FTP works and HTTP doesn't.

    The AWS hostname before the Elastic IP association: ec2-184-73-27-8.compute-1.amazonaws.com
    The AWS IP address and hostname after the association: 174.129.7.254 and ec2-174-129-7-254.compute-1.amazonaws.com
    The domain which is mapped to 174.129.7.254 using an A record: demo.flashxml.net

    "FTP works" means that I can connect to 174.129.7.254, ec2-174-129-7-254.compute-1.amazonaws.com and demo.flashxml.net. "HTTP doesn't work" means that an HTTP request to 174.129.7.254, ec2-174-129-7-254.compute-1.amazonaws.com or demo.flashxml.net returns a 302 redirect to ec2-184-73-27-8.compute-1.amazonaws.com.

    Here is my VirtualHost file:

        <VirtualHost *:80>
            DocumentRoot /home/ec2-user/public_html/wordpress
            ServerName demo.flashxml.net
            ErrorLog logs/ec2-user-error_log
            <Directory /home/ec2-user/public_html/wordpress>
                AllowOverride FileInfo
                Order Deny,Allow
                Allow from All
            </Directory>
        </VirtualHost>

    I finally figured out what was wrong: I had installed WordPress on the server using the hostname provided by Amazon. After associating the Elastic IP and updating the DNS records, the server was reachable - FTP working was the proof of that. The 302 redirect when accessing via HTTP was caused by WordPress's hostname settings. So what I've learned from all this is that I should set up my IP and DNS first, and only after that install WordPress or any other web app(s).
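    To make that last point concrete: WordPress stores its canonical URL in the database, so after a hostname change the two options below have to be updated. The database name and table prefix here are the defaults and may differ:

        # run on the EC2 instance; adjust database, user and prefix to match the installation
        mysql -u root -p wordpress -e \
          "UPDATE wp_options SET option_value='http://demo.flashxml.net' WHERE option_name IN ('siteurl','home');"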

    Read the article

  • Run a script prior to start of SQL instance via Windows clusters

    - by Shahryar G. Hashemi
    Hi, we have a Windows 2008 cluster with several SQL 2008 instances. We would like to run a script that modifies 4 registry keys prior to the startup of SQL Server. I do not know if there is a way to have such a script run through Windows 2008 clustering. I have a VBS script that does it and tried to add a Generic Script resource to an existing cluster group, but it failed saying the script could not be registered. Any ideas?

    Read the article

  • How to implement a single instance of a web application

    - by Lalchand
    I want to run only one instance of my web application, which is deployed on Tomcat 5.5. How do I implement that? For example, if the system has 2 Tomcat servers, each having a web application named xxx, I don't want the two applications to run in parallel; only one should run at a time. Suppose the user accesses index.jsp inside Tomcat 1; after that, when he tries to access index.jsp from Tomcat 2, it shouldn't work.

    Read the article

  • Amazon blocked ports 80 and 443 on my instance

    - by Burak
    Amazon AWS sent me an email warning that my instance has been behaving like a phishing site, which is against the AWS Customer Agreement, and notifying me that they have blocked ports 80 and 443, which are for HTTP and HTTPS respectively. Google Safe Browsing also reported that some code injection was made to one of my websites. After a cleanup, Google stopped blocking my website from appearing in the search results. So, how can I get my ports unblocked?

    Read the article

  • Multiple VMs vs. multiple named instances

    - by thushya
    Hi, I am looking for some comparison data for a SQL 2008 deployment: what are the advantages and disadvantages of installing multiple VMs vs. multiple named instances? How can I save license cost using VMs vs. a physical server for SQL 2008? Also, is there a way to find out the maximum number of connections to a database at any time, or in the past? I need it to calculate the number of CAL licenses required. Thanks.
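    On the last question: SQL Server does not keep a history of connection counts by itself, but a point-in-time count is easy to take and could be logged on a schedule; a hedged example against a SQL 2008 instance (server name is a placeholder):

        rem current sessions per database
        sqlcmd -S SERVER\INSTANCE -E -Q "SELECT DB_NAME(dbid) AS db, COUNT(*) AS connections FROM sys.sysprocesses WHERE dbid > 0 GROUP BY dbid"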

    Read the article

  • Node app crashes after a while on micro EC2 instance even with supervisor

    - by Justin Meltzer
    I'm using supervisor to start my node.js application on a micro EC2 instance. However, the app only stays running for some time until it eventually shuts down. I'm not exactly sure how long the app stays running, but I'm guessing about a few hours, sometimes less. My question is: where on the remote server should I be looking in order to debug this kind of issue? I'm running an Amazon Linux AMI.
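    On an Amazon Linux AMI the usual first stops are the system log and the kernel's OOM killer, since a micro instance has very little memory for a node process; a hedged starting point:

        # did the kernel kill the node process for memory?
        sudo grep -i -E 'oom|killed process' /var/log/messages
        dmesg | grep -i oom
        # also check whatever log file supervisor itself was told to write to, plus the app's own log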

    Read the article

  • Windows Server 2008 instance on KVM stuck in 'paused' mode

    - by mdoust
    I have a Windows Server 2008 instance as a guest on KVM. When I tried to run it via Virtual Machine Manager, the process froze after a few seconds, and trying to resume it led to this error from virt-manager:

        Error unpausing domain: internal error unable to execute QEMU command 'cont': Resetting the Virtual Machine is required

    I tried to reboot it without any success. Any help will be highly appreciated.
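    QEMU pauses a guest when its storage backend reports an error, most commonly because the host volume holding the disk image has filled up; some hedged checks from the host, with the domain name as a placeholder:

        # current state as libvirt sees it
        virsh domstate win2008
        # a full filesystem under the storage pool is the classic cause
        df -h /var/lib/libvirt/images
        # qemu's own log usually names the underlying error
        tail -n 50 /var/log/libvirt/qemu/win2008.log
        virsh resume win2008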

    Read the article

  • Amazon EC2 reserved instance

    - by lydonchandra
    Hi, we received an education credit (valid for 1 year) from Amazon, and we're wondering if we can buy a reserved instance (3 years) using that credit. Is there any way to reserve how much bandwidth we can use too? Thanks.

    Read the article

  • Uploading files to EC2 Windows instance

    - by nitramk
    I've created an instance of a Windows Server 2008 AMI at Amazon EC2. I now need to upload some installation files to it. One way to do this would be to activate the FTP server in Windows, set up an account and use that to upload files. Is there a better way to do this? Maybe some way to upload directly to an EBS?

    Read the article
