Search Results

Search found 452 results on 19 pages for 'provisioning'.

Page 15/19 | < Previous Page | 11 12 13 14 15 16 17 18 19  | Next Page >

  • Symbolic directory link shared in domain

    - by Sabre
    We have a file server that is 2008R2 STD, it is a member server in a 2008 AD. I need to relocate some of the files and directories and would like to do it behind the scenes, more or less without impacting the users. (The reason for this is that some of the files, due to recent software changes, HAVE to be located locally on one of the workstations, but they can be accessed by other applications remotely.) So symbolic links seem the panacea here. I moved a directory to another network share in the same domain (Windows 7 Professional), created a symlink to it in the location it used to be in, named it the same thing, and to the local user it seems almost transparent. I.e. when logged into the desktop of the file server, I can go to the directory, open the link, and it leaps to the other share as if it were local, exactly what would be expected. Then I tried it from another client computer (Windows 7 Professional as well), went through the normal provisioning of R2R and L2R with fsutil... No joy. What I am getting is an access denied “Logon failure: Unknown username or bad password.” using the same account that I log on locally to the file server with (which happens to be the domain admin), so I cannot believe it is telling the truth, or... I assume it is not passing the credentials I used to connect to the first share all the way through the symlink. The end result is I want users on the domain to browse to share A; inside share A is a mixture of directories/files that reside there, and symlinks to directories/files on the second machine over the network in the same domain. Possible? Or am I misunderstanding how the symlink should work?
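
    For reference, a minimal sketch of the two pieces involved (the share and directory names here are made up): the remote-link evaluation switches must be enabled on every client that will follow the link, and the link itself is recreated where the moved directory used to live.

      rem On each client that must follow the link (elevated prompt): allow local and remote evaluation
      fsutil behavior set SymlinkEvaluation L2L:1 L2R:1 R2R:1 R2L:1

      rem On the file server, recreate the moved directory as a directory symlink to the new UNC location
      mklink /D "D:\Shares\ShareA\Data" "\\workstation1\MovedData"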

    Read the article

  • LSI 9260-8i w/ 6 256gb SSDs - RAID 5, 6, 10, or bad idea overall?

    - by Michael Pearson
    We're provisioning a new production server for our reasonably busy website. Our choice of host has a 6-drive configuration available with an LSI 9260-8i card. My initial thought was to fill all six bays with SSDs (Intel 520 256gb) and set them up in RAID. Good, bad, or terrible idea? Can the card handle it? Should we be using RAID 5, 6 or 10? This would be the first time the provider has filled all six slots for this rackmount with SSDs, so they're a bit hesitant. I'm wondering if somebody else with this card has done something similar in a production environment. We do about 43gb of writes per day and currently use about 300gb of storage. The server acts as webserver, database, and image store for approx 1 million files. The plan is to underprovision the SSDs by approximately 10% to 20% to increase their overall lifespan & performance. The fallback option is 2x480gb SSDs in RAID 1 and another 2x1TB HDDs in RAID 1. The motivation behind this is that the server rental cost difference between 2xSSDs and 6xSSDs is minimal (compared to the overall cost of the rental). We do not have any special high-IOPs requirements. However, if the configuration is known to work, I don't see a good reason not to use it, and it would save us worrying about separate 'fast and small' and 'slow and large' disks.

    Read the article

  • Apache ProxyPass with SSL

    - by BBonifield
    I have a QA setup that consists of multiple internal development servers and one world-accessible provisioning machine that is setup to proxy pass the web traffic. Everything works fine for non-SSL requests, but I'm having a hard time getting the SSL logic working as well. Here's a few example vhost blocks. <VirtualHost 192.168.168.101:443> ProxyPreserveHost On SSLProxyEngine On ProxyPass / https://192.168.168.111/ ServerName dev1.site.com </VirtualHost> <VirtualHost 192.168.168.101:80> ProxyPreserveHost On ProxyPass / http://192.168.168.111/ ServerName dev1.site.com </VirtualHost> <VirtualHost 192.168.168.101:443> ProxyPreserveHost On SSLProxyEngine On ProxyPass / https://192.168.168.111/ ServerName dev2.site.com </VirtualHost> <VirtualHost 192.168.168.101:80> ProxyPreserveHost On ProxyPass / http://192.168.168.111/ ServerName dev2.site.com </VirtualHost> I end up seeing the following error in the provisioner's error log. [Fri Jan 28 12:50:59 2011] [warn] [client 1.2.3.4] proxy: no HTTP 0.9 request (with no host line) on incoming request and preserve host set forcing hostname to be dev1.site.com for uri / As well as the following entry in the destination QA machine's access log. 192.168.168.101 - - [22/Feb/2011:08:34:56 -0600] "\x16\x03\x01 / HTTP/1.1" 301 326 "-" "-"
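
    One thing worth noting: \x16\x03\x01 is the start of a TLS ClientHello, which suggests encrypted traffic is being parsed as plain HTTP somewhere along the chain, i.e. the front-end 443 vhosts are not terminating SSL themselves. A hedged sketch of one way the SSL-terminating front-end vhost might look (certificate paths are placeholders, and it assumes the backend vhost on .111 really is serving HTTPS):

      <VirtualHost 192.168.168.101:443>
          ServerName dev1.site.com
          SSLEngine On
          SSLCertificateFile    /etc/pki/tls/certs/dev1.site.com.crt
          SSLCertificateKeyFile /etc/pki/tls/private/dev1.site.com.key
          SSLProxyEngine On
          ProxyPreserveHost On
          ProxyPass        / https://192.168.168.111/
          ProxyPassReverse / https://192.168.168.111/
      </VirtualHost>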

    Read the article

  • What does this mean: "SATP VMW_SATP_LOCAL does not support device configuration"?

    - by Jason Tan
    Can anyone tell me what this means in ESXi 5.1?: SATP VMW_SATP_LOCAL does not support device configuration I've googled it and I get a lot of results, but as yet all the pages that contain the string are discussing other matters. The storage array is a HDS HUS-VM and the hosts are HP b460c G8 blades with flex fabric and flex fabric VCs which I am in the process of commissioning and would like to get it started on the right foot - i.e. error and warning free! naa.600508b1001c56ee3d70da65f071da23 Device Display Name: HP Serial Attached SCSI Disk (naa.600508b1001c56ee3d70da65f071da23) Storage Array Type: VMW_SATP_LOCAL Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration. Path Selection Policy: VMW_PSP_FIXED Path Selection Policy Device Config: {preferred=vmhba0:C0:T0:L1;current=vmhba0:C0:T0:L1} Path Selection Policy Device Custom Config: Working Paths: vmhba0:C0:T0:L1 Is Local SAS Device: true Is Boot USB Device: false This is the same LUN: ~ # esxcli storage core device list -d naa.60060e80132757005020275700000016 naa.60060e80132757005020275700000016 Display Name: HITACHI Fibre Channel Disk (naa.60060e80132757005020275700000016) Has Settable Display Name: true Size: 204800 Device Type: Direct-Access Multipath Plugin: NMP Devfs Path: /vmfs/devices/disks/naa.60060e80132757005020275700000016 Vendor: HITACHI Model: OPEN-V Revision: 5001 SCSI Level: 2 Is Pseudo: false Status: degraded Is RDM Capable: true Is Local: false Is Removable: false Is SSD: false Is Offline: false Is Perennially Reserved: false Queue Full Sample Size: 0 Queue Full Threshold: 0 Thin Provisioning Status: unknown Attached Filters: VAAI_FILTER VAAI Status: supported Other UIDs: vml.020001000060060e801327570050202757000000164f50454e2d56 Is Local SAS Device: false Is Boot USB Device: false ~ #
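
    For anyone digging into the same message, the claim rules that decide which SATP a device gets, and what the NMP actually claimed for this device, can be inspected with esxcli (a sketch, using the device ID from the question):

      # list the SATP claim rules in effect on the host
      esxcli storage nmp satp rule list
      # show which SATP/PSP was claimed for the device in question
      esxcli storage nmp device list -d naa.600508b1001c56ee3d70da65f071da23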

    Read the article

  • Best Practices for adding Exchange Archive to current 3 server setup

    - by ADquestion
    I'm looking to add an Archive Database (which I know is just a Mailbox Database) to our current Exchange 2010 environment. I have done this in the past at a previous job, but we had a simpler setup than at this current job. I've been trying to find some best practices to make sure it's set up in an ideal way, but so far I'm not finding the details I would prefer. Hoping someone on here can give me a few pointers. Currently we have a 3 server setup, Server1, Server2 and Server3. Three databases of course, DB1, DB2 and DB3. We have a DAG setup between them. Server1 has DB1 and DB3 on it, DB1 is not active, DB3 is active. Server2 has DB1 and DB2 on it, both are active. Server3 has DB2 and DB3 on it, both are not active. All three servers are virtual (VMware). Each one is set up identically as follows: C:\ 60GB - OS E:\ 600GB - DB (currently only 90GB used, pointing to Datastore just for Server2) F:\ 200GB - Log (2GB used, pointing to same Datastore as above) G:\ 200GB - Restore (0 used, pointing to same Datastore as above) The drives are all set to Thin Provisioning, and it looks as though I have 600GB of available space. They have not been on Exchange that long and only have about 70GB worth of PSTs to import back in that will be going to the Archive Database, plus anything older than 2 years from their current inbox that will be moved into there. I was considering placing the Archive DB on the E:\ drive of Server3 (only) like the current DB, but wasn't sure if that was acceptable. I don't plan on setting the Archive DB up with the DAG, just plan on having it as a single repository for older emails and manually back it up every now and then. If anyone has any suggestions on this I would appreciate the input. I've done it on a slightly smaller scale before and it worked well, but I like to think it through before pulling the trigger, especially at a new job. :) Thanks again!
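
    For what it's worth, creating the archive database and pointing mailboxes at it is only a couple of Exchange Management Shell commands (Exchange 2010 SP1 or later); a sketch, with the database name, paths and mailbox made up to match the layout described above:

      New-MailboxDatabase -Name "ArchiveDB" -Server Server3 -EdbFilePath "E:\ArchiveDB\ArchiveDB.edb" -LogFolderPath "F:\ArchiveDB"
      Mount-Database "ArchiveDB"
      # enable a personal archive for a user and place it in the new database
      Enable-Mailbox -Identity "jsmith" -Archive -ArchiveDatabase "ArchiveDB"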

    Read the article

  • How do I enable additional debugging output from Ansible and Vagrant?

    - by Brian Lyttle
    I'm investigating Ansible for server and application provisioning. My application is currently provisioned with shell scripts in Vagrant. Rather than rewrite my scripts I've taken a sample and attempted to deploy it. It appears to deploy fine, but I'm seeing a failure message after what looks like a series of successful steps: » vagrant provision ~/vm/blvagrant 1 ? [default] Running provisioner: ansible... PLAY [web-servers] ************************************************************ GATHERING FACTS *************************************************************** ok: [192.168.9.149] TASK: [install python-software-properties] ************************************ ok: [192.168.9.149] => {"changed": false, "item": ""} TASK: [add nginx ppa if it ubuntu 10.04 and up] ******************************* ok: [192.168.9.149] => {"changed": false, "item": "", "repo": "ppa:nginx/stable", "state": "present"} TASK: [update apt repo] ******************************************************* ok: [192.168.9.149] => {"changed": false, "item": ""} TASK: [install nginx] ********************************************************* ok: [192.168.9.149] => {"changed": false, "item": ""} TASK: [copy fixed init for nginx] ********************************************* ok: [192.168.9.149] => {"changed": false, "gid": 0, "group": "root", "item": "", "mode": "0755", "owner": "root", "path": "/etc/init.d/nginx", "size": 2321, "state": "file", "uid": 0} TASK: [service nginx] ********************************************************* ok: [192.168.9.149] => {"changed": false, "item": "", "name": "nginx", "state": "started"} TASK: [write nginx.conf] ****************************************************** ok: [192.168.9.149] => {"changed": false, "gid": 0, "group": "root", "item": "", "mode": "0644", "owner": "root", "path": "/etc/nginx/nginx.conf", "size": 1067, "state": "file", "uid": 0} PLAY RECAP ******************************************************************** 192.168.9.149 : ok=8 changed=0 unreachable=0 failed=0 Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. How do I go about getting additional debug information? I've already added ansible.verbose = true to my vagrant config which results in the dictionaries being displayed within the output above.
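
    In case it helps, Vagrant's Ansible provisioner also accepts the multi-v verbosity levels as a string rather than just true/false; a sketch of what that block might look like in the Vagrantfile (playbook name assumed):

      config.vm.provision :ansible do |ansible|
        ansible.playbook = "playbook.yml"   # assumed playbook name
        ansible.verbose  = "vvvv"           # -vvvv adds connection-level debug output
      end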

    Read the article

  • CentOS 6.3 Virtual under OpenVZ cannot ping, host lookups, outbound connections while postfix running

    - by Paul Cravey
    My best theory is that some kernel limit is being hit preventing outbound connections. We have tried basically everything from tcpdumps to provisioning an entirely new virtual server (we do not have this problem on any other virtuals), however the problem somehow carried over, even with new postfix build (working). Emails work, and outbound connections work, so long as postfix does not have too much going on. /proc/user_beancounters shows no limits being hit (show below). Nevertheless, pings fail even to IP addresses. TCP stack appears healthy. Load is low. No iowait. Flushed iptables already. Has anyone experienced anything like this? uid resource held maxheld barrier limit failcnt 3: kmemsize 166216365 170262528 9223372036854775807 9223372036854775807 0 lockedpages 0 0 9223372036854775807 9223372036854775807 0 privvmpages 285727 351885 9223372036854775807 9223372036854775807 0 shmpages 16933 17605 9223372036854775807 9223372036854775807 0 dummy 0 0 0 0 0 numproc 150 303 9223372036854775807 9223372036854775807 0 physpages 314156 326191 0 1280000 0 vmguarpages 0 0 9223372036854775807 9223372036854775807 0 oomguarpages 165355 165355 9223372036854775807 9223372036854775807 0 numtcpsock 89 172 9223372036854775807 9223372036854775807 0 numflock 22 76 9223372036854775807 9223372036854775807 0 numpty 1 2 9223372036854775807 9223372036854775807 0 numsiginfo 0 75 9223372036854775807 9223372036854775807 0 tcpsndbuf 2733472 4371752 9223372036854775807 9223372036854775807 0 tcprcvbuf 1798336 5427296 9223372036854775807 9223372036854775807 0 othersockbuf 491120 1000760 9223372036854775807 9223372036854775807 0 dgramrcvbuf 0 238728 9223372036854775807 9223372036854775807 0 numothersock 361 505 9223372036854775807 9223372036854775807 0 dcachesize 135941831 136114679 9223372036854775807 9223372036854775807 0 numfile 2905 4990 9223372036854775807 9223372036854775807 0 dummy 0 0 0 0 0 dummy 0 0 0 0 0 dummy 0 0 0 0 0 numiptent 8 9 9223372036854775807 9223372036854775807 0 [root@bni /]# ping 4.2.2.1 PING 4.2.2.1 (4.2.2.1) 56(84) bytes of data. --- 4.2.2.1 ping statistics --- 9 packets transmitted, 0 received, 100% packet loss, time 8493ms [root@bni /]# service postfix stop [root@bni /]# ping 4.2.2.1 PING 4.2.2.1 (4.2.2.1) 56(84) bytes of data. 64 bytes from 4.2.2.1: icmp_seq=1 ttl=53 time=8.63 ms 64 bytes from 4.2.2.1: icmp_seq=2 ttl=53 time=8.62 ms 64 bytes from 4.2.2.1: icmp_seq=3 ttl=53 time=8.63 ms 64 bytes from 4.2.2.1: icmp_seq=4 ttl=53 time=8.66 ms Outbound connections of all sorts fail when postfix is running.

    Read the article

  • Creating a pseudoterminal to make sudo happy

    - by larsks
    I need to automate the provisioning of a cloud instance (running Fedora 17) for which the following initial facts are true: I have ssh-key based access to a remote user (cloud) That user has password-free root access via sudo. Manual configuration is as simple as logging in and running sudo su - and having at it, but I would like to fully automate this process. The trick is that the system defaults to having the requiretty option enabled for sudo, which means that an attempt to do something like this: ssh remotehost sudo yum -y install puppet Will fail: sudo: sorry, you must have a tty to run sudo I am working around this right now by first pushing over a small Python script that will run a command on a pseudoterminal: import os import sys import errno import subprocess pid, master_fd = os.forkpty() if pid == 0: # child process: now that we're attached to a # pty, run the given command. os.execvp(sys.argv[1], sys.argv[1:]) else: while True: try: data = os.read(master_fd, 1024) except OSError, detail: if detail.errno == errno.EIO: break if not data: break sys.stdout.write(data) os.wait() Assuming that this is named pty, I can then run: ssh remotehost ./pty sudo yum -y install puppet This works fine, but I'm wondering if there are solutions already available that I haven't considered. I would normally think about expect, but it's not installed by default on this system. screen can do this in a pinch, but the best I came up with was: screen -dmS sudo somecommand ...which does work but eats the output. Are there any other tools available that will allocate a pseudoterminal for me that are going to be generally available?
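
    One simpler route worth mentioning, assuming the remote sshd allows it: ssh can allocate the pseudo-terminal itself with -t (or -tt when there is no local terminal, e.g. from a script), which is usually enough to satisfy requiretty:

      ssh -tt remotehost sudo yum -y install puppet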

    Read the article

  • How can I implement ansible with per-host passwords, securely?

    - by supervacuo
    I would like to use ansible to manage a group of existing servers. I have created an ansible_hosts file, and tested successfully (with the -K option) with commands that only target a single host ansible -i ansible_hosts host1 --sudo -K # + commands ... My problem now is that the user passwords on each host are different, but I can't find a way of handling this in Ansible. Using -K, I am only prompted for a single sudo password up-front, which then seems to be tried for all subsequent hosts without prompting: host1 | ... host2 | FAILED => Incorrect sudo password host3 | FAILED => Incorrect sudo password host4 | FAILED => Incorrect sudo password host5 | FAILED => Incorrect sudo password Research so far: a StackOverflow question with one incorrect answer ("use -K") and one response by the author saying "Found out I needed passwordless sudo" the Ansible docs, which say "Use of passwordless sudo makes things easier to automate, but it’s not required." (emphasis mine) this security StackExchange question which takes it as read that NOPASSWD is required article "Scalable and Understandable Provisioning..." which says: "running sudo may require typing a password, which is a sure way of blocking Ansible forever. A simple fix is to run visudo on the target host, and make sure that the user Ansible will use to login does not have to type a password" article "Basic Ansible Playbooks", which says "Ansible could log into the target server as root and avoid the need for sudo, or let the ansible user have sudo without a password, but the thought of doing either makes my spleen threaten to leap up my gullet and block my windpipe, so I don’t" My thoughts exactly, but then how to extend beyond a single server? ansible issue #1227, "Ansible should ask for sudo password for all users in a playbook", which was closed a year ago by mpdehaan with the comment "Haven't seen much demand for this, I think most people are sudoing from only one user account or using keys most of the time." So... how are people using Ansible in situations like these? Setting NOPASSWD in /etc/sudoers, reusing password across hosts or enabling root SSH login all seem rather drastic reductions in security.
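
    For completeness, one pattern that avoids both NOPASSWD and a shared password is keeping the per-host sudo password in host_vars and encrypting it with ansible-vault (available in newer Ansible releases); a sketch, with the variable name that era of Ansible used for the sudo password:

      # host_vars/host1
      ansible_sudo_pass: "host1-sudo-password"

      # encrypt the file so the password never sits on disk in the clear
      ansible-vault encrypt host_vars/host1
      ansible-playbook -i ansible_hosts site.yml --ask-vault-pass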

    Read the article

  • Extending ext4 partition on debian7.0 on vsphere

    - by VoidPointer
    I allocated a thin-provisioned disk of 15GB when I found 8GB insufficient. Now the Debian guest does not recognize the change in size. root@debian7-x64:~# lvdisplay --- Logical volume --- LV Path /dev/debian7-x64/root LV Name root VG Name debian7-x64 LV UUID EU6mg0-XTXC-ci3D-bQJi-7XN6-r8Hp-SYxcj0 LV Write Access read/write LV Creation host, time debian7-x64, 2013-06-25 12:02:49 +0530 LV Status available # open 1 LV Size 7.39 GiB Current LE 1892 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 254:0 --- Logical volume --- LV Path /dev/debian7-x64/swap_1 LV Name swap_1 VG Name debian7-x64 LV UUID xDNtoz-tJUq-M5D6-GGCN-gzcD-fwUv-fYYDR1 LV Write Access read/write LV Creation host, time debian7-x64, 2013-06-25 12:02:49 +0530 LV Status available # open 2 LV Size 376.00 MiB Current LE 94 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 254:1 root@debian7-x64:~# pvdisplay --- Physical volume --- PV Name /dev/sda5 VG Name debian7-x64 PV Size 7.76 GiB / not usable 2.00 MiB Allocatable yes (but full) PE Size 4.00 MiB Total PE 1986 Free PE 0 Allocated PE 1986 PV UUID SehkzH-Gq8Y-jI2f-27Tb-uv1Z-tR1R-5OnTxR root@debian7-x64:~# sfdisk -s /dev/sda: 15728640 /dev/mapper/debian7--x64-root: 7749632 /dev/mapper/debian7--x64-swap_1: 385024 total: 23863296 blocks Help me extend this partition. Rebooting is not a problem. I don't have any live CD. Environment: Debian 7 with LVM on vSphere, ext4 partition. I can provide more details if needed.
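
    In case it is useful, the usual sequence on a guest whose virtual disk was grown underneath LVM looks roughly like this (a sketch; it assumes the extra space will be added as a new partition /dev/sda3 rather than by growing sda5, and that the root filesystem is ext4 so it can be resized online):

      echo 1 > /sys/class/block/sda/device/rescan   # make the guest notice the larger disk
      fdisk /dev/sda                                # create a new partition sda3, type 8e (Linux LVM)
      partprobe /dev/sda                            # or reboot, so the kernel sees sda3
      pvcreate /dev/sda3
      vgextend debian7-x64 /dev/sda3
      lvextend -l +100%FREE /dev/debian7-x64/root
      resize2fs /dev/debian7-x64/root               # grow the ext4 filesystem to fill the LV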

    Read the article

  • Is there a limit to how many sites can be hosted on a single IP address when using HTTP Host Headers on Windows 2008?

    - by Kev
    For reasons that are lost in the mists of time, our older Windows (2000, 2003) servers have been configured with an “Administrative” IP address and three further “Hosting” IP addresses. There are also additional IPs for sites with SSL certificates. The “Administrative” IP address is where all our internal provisioning, monitoring and other such apps are bound. We lock this down and don't permit access to it from the outside world (other than over our VPN). The three “Hosting” IP addresses are used for IIS website hosting (in conjunction with host headers). Historically, new site IP address allocations have been rotated through these three IP addresses. I'm not really sure why. I'm building a new batch of servers and I'm considering just having a single hosting IP address. Our servers can host up to 1200 sites on a single machine. Is there a technical limit to the number of IIS sites that can bind to a single IP address? Our Linux platform seems to do just fine with just a single shared IP + host headers. I initially thought this might be an SEO thing, but given that IPv4 address space conservation is paramount I hardly think Google or other search engines could reasonably penalise site rankings just because hundreds of sites hang off the same IP.

    Read the article

  • What are the typical methods used to scale up/out email storage servers?

    - by nareshov
    Hi, here is what I've tried: I have two email storage architectures, old and new. Old: courier-imapds on several (18+) 1TB-storage servers; if one of them shows signs of running out of disk space, we migrate a few email accounts to another server; the servers don't have replicas; no backups either. New: dovecot2 on a single huge server with 16TB (SATA) storage and a few SSDs; we store fresh mails on the SSDs and run a doveadm purge to move mails older than a day to the SATA disks; there is an identical server which has a max-15min-old rsync backup from the primary server; higher-ups/management wanted to pack in as much storage as possible per server in order to minimise the cost of SSDs per server; the rsync'ing is done because GlusterFS wasn't replicating well under that high small/random-IO; scaling out was expected to be done by provisioning another pair of such huge servers; on facing disk-crunch issues, as in the old architecture, email accounts would be moved manually. Concerns/doubts: I'm not convinced the synchronously-replicated filesystem idea works well for heavy random/small-IO. GlusterFS isn't working for us yet, and I'm not sure if there's another filesystem out there for this use case. The idea was to keep identical pairs and use DNS round-robin for email delivery and IMAP/POP3 access, and if one of the servers went down for whatever reason (planned/unplanned), we'd move the IP to the other server in the pair. In filesystems like Lustre, I get the advantage of a single namespace whereby I do not have to worry about manually migrating accounts around and updating MAILHOME paths and other metadata/data. Questions: What are the typical methods used to scale up/out with the traditional software (courier-imapd / dovecot)? Does traditional software that stores on a locally mounted filesystem pose a roadblock to scaling out with minimal "problems"? Does one have to re-write (parts of) these to work with an object storage of some sort, such as OpenStack object storage?

    Read the article

  • Diving into OpenStack Network Architecture - Part 2 - Basic Use Cases

    - by Ronen Kofman
    In the previous post we reviewed several network components including Open vSwitch, Network Namespaces, Linux Bridges and veth pairs. In this post we will take three simple use cases and see how those basic components come together to create a complete SDN solution in OpenStack. With those three use cases we will review almost the entire network setup and see how all the pieces work together. The use cases we will use are: 1. Create network – what happens when we create a network and how we can create multiple isolated networks 2. Launch a VM – once we have networks we can launch VMs and connect them to networks. 3. DHCP request from a VM – OpenStack can automatically assign IP addresses to VMs. This is done through a local DHCP service controlled by OpenStack Neutron. We will see how this service runs and what a DHCP request and response look like. In this post we will show connectivity; we will see how packets get from point A to point B. We first focus on how a configured deployment looks and only later discuss how and when the configuration is created. Personally I found it very valuable to see the actual interfaces and how they connect to each other through examples and hands-on experiments. After the end game is clear and we know how the connectivity works, in a later post, we will take a step back and explain how Neutron configures the components to be able to provide such connectivity. We are going to get pretty technical shortly and I recommend trying these examples on your own deployment or using the Oracle OpenStack Tech Preview. Understanding these three use cases thoroughly and how to look at them will be very helpful when trying to debug a deployment in case something does not work. Use case #1: Create Network Creating a network is a simple operation; it can be performed from the GUI or the command line. When we create a network in OpenStack the network is only available to the tenant who created it, or it can be defined as “shared” and then it can be used by all tenants. A network can have multiple subnets but for demonstration purposes and for simplicity we will assume that each network has exactly one subnet. 
Creating a network from the command line will look like this: # neutron net-create net1 Created a new network: +---------------------------+--------------------------------------+ | Field                     | Value                                | +---------------------------+--------------------------------------+ | admin_state_up            | True                                 | | id                        | 5f833617-6179-4797-b7c0-7d420d84040c | | name                      | net1                                 | | provider:network_type     | vlan                                 | | provider:physical_network | default                              | | provider:segmentation_id  | 1000                                 | | shared                    | False                                | | status                    | ACTIVE                               | | subnets                   |                                      | | tenant_id                 | 9796e5145ee546508939cd49ad59d51f     | +---------------------------+--------------------------------------+ Creating a subnet for this network will look like this: # neutron subnet-create net1 10.10.10.0/24 Created a new subnet: +------------------+------------------------------------------------+ | Field            | Value                                          | +------------------+------------------------------------------------+ | allocation_pools | {"start": "10.10.10.2", "end": "10.10.10.254"} | | cidr             | 10.10.10.0/24                                  | | dns_nameservers  |                                                | | enable_dhcp      | True                                           | | gateway_ip       | 10.10.10.1                                     | | host_routes      |                                                | | id               | 2d7a0a58-0674-439a-ad23-d6471aaae9bc           | | ip_version       | 4                                              | | name             |                                                | | network_id       | 5f833617-6179-4797-b7c0-7d420d84040c           | | tenant_id        | 9796e5145ee546508939cd49ad59d51f               | +------------------+------------------------------------------------+ We now have a network and a subnet, on the network topology view this looks like this: Now let’s dive in and see what happened under the hood. Looking at the control node we will discover that a new namespace was created: # ip netns list qdhcp-5f833617-6179-4797-b7c0-7d420d84040c   The name of the namespace is qdhcp-<network id> (see above), let’s look into the namespace and see what’s in it: # ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00     inet 127.0.0.1/8 scope host lo     inet6 ::1/128 scope host        valid_lft forever preferred_lft forever 12: tap26c9b807-7c: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN     link/ether fa:16:3e:1d:5c:81 brd ff:ff:ff:ff:ff:ff     inet 10.10.10.3/24 brd 10.10.10.255 scope global tap26c9b807-7c     inet6 fe80::f816:3eff:fe1d:5c81/64 scope link        valid_lft forever preferred_lft forever   We see two interfaces in the namespace, one is the loopback and the other one is an interface called “tap26c9b807-7c”. This interface has the IP address of 10.10.10.3 and it will also serve dhcp requests in a way we will see later. 
Let’s trace the connectivity of the “tap26c9b807-7c” interface from the namespace.  First stop is OVS, we see that the interface connects to bridge  “br-int” on OVS: # ovs-vsctl show 8a069c7c-ea05-4375-93e2-b9fc9e4b3ca1     Bridge "br-eth2"         Port "br-eth2"             Interface "br-eth2"                 type: internal         Port "eth2"             Interface "eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2"     Bridge br-ex         Port br-ex             Interface br-ex                 type: internal     Bridge br-int         Port "int-br-eth2"             Interface "int-br-eth2"         Port "tap26c9b807-7c"             tag: 1             Interface "tap26c9b807-7c"                 type: internal         Port br-int             Interface br-int                 type: internal     ovs_version: "1.11.0"   In the picture above we have a veth pair which has two ends called “int-br-eth2” and "phy-br-eth2", this veth pair is used to connect two bridge in OVS "br-eth2" and "br-int". In the previous post we explained how to check the veth connectivity using the ethtool command. It shows that the two are indeed a pair: # ethtool -S int-br-eth2 NIC statistics:      peer_ifindex: 10 . .   #ip link . . 10: phy-br-eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 . . Note that “phy-br-eth2” is connected to a bridge called "br-eth2" and one of this bridge's interfaces is the physical link eth2. This means that the network which we have just created has created a namespace which is connected to the physical interface eth2. eth2 is the “VM network” the physical interface where all the virtual machines connect to where all the VMs are connected. About network isolation: OpenStack supports creation of multiple isolated networks and can use several mechanisms to isolate the networks from one another. The isolation mechanism can be VLANs, VxLANs or GRE tunnels, this is configured as part of the initial setup in our deployment we use VLANs. When using VLAN tagging as an isolation mechanism a VLAN tag is allocated by Neutron from a pre-defined VLAN tags pool and assigned to the newly created network. By provisioning VLAN tags to the networks Neutron allows creation of multiple isolated networks on the same physical link.  The big difference between this and other platforms is that the user does not have to deal with allocating and managing VLANs to networks. The VLAN allocation and provisioning is handled by Neutron which keeps track of the VLAN tags, and responsible for allocating and reclaiming VLAN tags. In the example above net1 has the VLAN tag 1000, this means that whenever a VM is created and connected to this network the packets from that VM will have to be tagged with VLAN tag 1000 to go on this particular network. This is true for namespace as well, if we would like to connect a namespace to a particular network we have to make sure that the packets to and from the namespace are correctly tagged when they reach the VM network. In the example above we see that the namespace interface “tap26c9b807-7c” has vlan tag 1 assigned to it, if we examine OVS we see that it has flows which modify VLAN tag 1 to VLAN tag 1000 when a packet goes to the VM network on eth2 and vice versa. 
We can see this using the dump-flows command on OVS for packets going to the VM network we see the modification done on br-eth2: #  ovs-ofctl dump-flows br-eth2 NXST_FLOW reply (xid=0x4):  cookie=0x0, duration=18669.401s, table=0, n_packets=857, n_bytes=163350, idle_age=25, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:1000,NORMAL  cookie=0x0, duration=165108.226s, table=0, n_packets=14, n_bytes=1000, idle_age=5343, hard_age=65534, priority=2,in_port=2 actions=drop  cookie=0x0, duration=165109.813s, table=0, n_packets=1671, n_bytes=213304, idle_age=25, hard_age=65534, priority=1 actions=NORMAL   For packets coming from the interface to the namespace we see the following modification: #  ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4):  cookie=0x0, duration=18690.876s, table=0, n_packets=1610, n_bytes=210752, idle_age=1, priority=3,in_port=1,dl_vlan=1000 actions=mod_vlan_vid:1,NORMAL  cookie=0x0, duration=165130.01s, table=0, n_packets=75, n_bytes=3686, idle_age=4212, hard_age=65534, priority=2,in_port=1 actions=drop  cookie=0x0, duration=165131.96s, table=0, n_packets=863, n_bytes=160727, idle_age=1, hard_age=65534, priority=1 actions=NORMAL   To summarize we can see that when a user creates a network Neutron creates a namespace and this namespace is connected through OVS to the “VM network”. OVS also takes care of tagging the packets from the namespace to the VM network with the correct VLAN tag and knows to modify the VLAN for packets coming from VM network to the namespace. Now let’s see what happens when a VM is launched and how it is connected to the “VM network”. Use case #2: Launch a VM Launching a VM can be done from Horizon or from the command line this is how we do it from Horizon: Attach the network: And Launch Once the virtual machine is up and running we can see the associated IP using the nova list command : # nova list +--------------------------------------+--------------+--------+------------+-------------+-----------------+ | ID                                   | Name         | Status | Task State | Power State | Networks        | +--------------------------------------+--------------+--------+------------+-------------+-----------------+ | 3707ac87-4f5d-4349-b7ed-3a673f55e5e1 | Oracle Linux | ACTIVE | None       | Running     | net1=10.10.10.2 | +--------------------------------------+--------------+--------+------------+-------------+-----------------+ The nova list command shows us that the VM is running and that the IP 10.10.10.2 is assigned to this VM. Let’s trace the connectivity from the VM to VM network on eth2 starting with the VM definition file. The configuration files of the VM including the virtual disk(s), in case of ephemeral storage, are stored on the compute node at/var/lib/nova/instances/<instance-id>/. 
    Looking into the VM definition file, libvirt.xml, we see that the VM is connected to an interface called “tap53903a95-82” which is connected to a Linux bridge called “qbr53903a95-82”: <interface type="bridge">       <mac address="fa:16:3e:fe:c7:87"/>       <source bridge="qbr53903a95-82"/>       <target dev="tap53903a95-82"/>     </interface>   Looking at the bridge using the brctl show command we see this: # brctl show bridge name     bridge id               STP enabled     interfaces qbr53903a95-82          8000.7e7f3282b836       no              qvb53903a95-82                                                         tap53903a95-82    The bridge has two interfaces, one connected to the VM (“tap53903a95-82”) and another one (“qvb53903a95-82”) connected to the “br-int” bridge on OVS: # ovs-vsctl show 83c42f80-77e9-46c8-8560-7697d76de51c     Bridge "br-eth2"         Port "br-eth2"             Interface "br-eth2"                 type: internal         Port "eth2"             Interface "eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2"     Bridge br-int         Port br-int             Interface br-int                 type: internal         Port "int-br-eth2"             Interface "int-br-eth2"         Port "qvo53903a95-82"             tag: 3             Interface "qvo53903a95-82"     ovs_version: "1.11.0"   As we showed earlier “br-int” is connected to “br-eth2” on OVS using the veth pair int-br-eth2,phy-br-eth2 and br-eth2 is connected to the physical interface eth2. The whole flow end to end looks like this: VM -> tap53903a95-82 (virtual interface) -> qbr53903a95-82 (Linux bridge) -> qvb53903a95-82 (interface connected from Linux bridge to OVS bridge br-int) -> int-br-eth2 (veth one end) -> phy-br-eth2 (veth the other end) -> eth2 physical interface. The purpose of the Linux bridge connecting to the VM is to allow security group enforcement with iptables. Security groups are enforced at the edge point, which is the interface of the VM; since iptables cannot be applied to OVS bridges we use a Linux bridge to apply them. In the future we hope to see this Linux bridge going away once the rules can be applied directly on the OVS bridge.  VLAN tags: As we discussed in the first use case net1 is using VLAN tag 1000, and looking at OVS above we see that qvo53903a95-82 is tagged with VLAN tag 3. The modification from VLAN tag 3 to 1000 as we go to the physical network is done by OVS as part of the packet flow of br-eth2 in the same way we showed before. To summarize, when a VM is launched it is connected to the VM network through a chain of elements as described here. Along the packet's path from the VM to the network and back, the VLAN tag is modified. 
To serve DHCP requests coming from VMs on the network Neutron uses a Linux tool called “dnsmasq”,this is a lightweight DNS and DHCP service you can read more about it here. If we look at the dnsmasq on the control node with the ps command we see this: dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tap26c9b807-7c --except-interface=lo --pid-file=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/host --dhcp-optsfile=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/opts --leasefile-ro --dhcp-range=tag0,10.10.10.0,static,120s --dhcp-lease-max=256 --conf-file= --domain=openstacklocal The service connects to the tap interface in the namespace (“--interface=tap26c9b807-7c”), If we look at the hosts file we see this: # cat  /var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/host fa:16:3e:fe:c7:87,host-10-10-10-2.openstacklocal,10.10.10.2   If you look at the console output above you can see the MAC address fa:16:3e:fe:c7:87 which is the VM MAC. This MAC address is mapped to IP 10.10.10.2 and so when a DHCP request comes with this MAC dnsmasq will return the 10.10.10.2.If we look into the namespace at the time we initiate a DHCP request from the VM (this can be done by simply restarting the network service in the VM) we see the following: # ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c tcpdump -n 19:27:12.191280 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:fe:c7:87, length 310 19:27:12.191666 IP 10.10.10.3.bootps > 10.10.10.2.bootpc: BOOTP/DHCP, Reply, length 325   To summarize, the DHCP service is handled by dnsmasq which is configured by Neutron to listen to the interface in the DHCP namespace. Neutron also configures dnsmasq with the combination of MAC and IP so when a DHCP request comes along it will receive the assigned IP. Summary In this post we relied on the components described in the previous post and saw how network connectivity is achieved using three simple use cases. These use cases gave a good view of the entire network stack and helped understand how an end to end connection is being made between a VM on a compute node and the DHCP namespace on the control node. One conclusion we can draw from what we saw here is that if we launch a VM and it is able to perform a DHCP request and receive a correct IP then there is reason to believe that the network is working as expected. We saw that a packet has to travel through a long list of components before reaching its destination and if it has done so successfully this means that many components are functioning properly. In the next post we will look at some more sophisticated services Neutron supports and see how they work. We will see that while there are some more components involved for the most part the concepts are the same. @RonenKofman

    Read the article

  • Knopflerfish packaging

    - by Jens
    Hello, I am at the moment creating a matrix which is showing how far Knopflerfish, Equinox and Felix are OSGi 4.2 compliant. So far I looked at the Knopflerfish documentation (Link 1, Link 2) to get an idea of how much of the Core and Compendium specs are actually implemented. The core specification seems to be fully implemented, although there are some inconsistent statements about the Security Layer and the Declarative Services. What makes me wonder is how much of all the Compendium specs are implemented: Remote Services Log Service Http Service Device Access Configuration Admin Service Metatype Service Preferences Service User Admin Service Wire Admin Service IO Connector Service Initial Provisioning UPnP Device Service Declarative Services Event Admin Service Deployment Admin Auto Configuration Application Admin DMT Admin Service Monitor Admin Service Foreign Application Access Blueprint Container Tracker XML Parser Service Position Measurement and State Execution Environment To find out more I downloaded (Download page) the source code of Knopflerfish and had a look at it. It looks like some parts of the spec are implemented through the "original" framework provided by the OSGi Alliance (org.osgi.*). One example is the UPnP package: Does this mean that missing parts which are not directly implemented by Knopflerfish are added through the "original" OSGi framework? And does this also apply to other frameworks like Felix or Equinox?

    Read the article

  • How to create a Universal Binary for iTunes Connect Distribution?

    - by balexandre
    I created an app that was rejected because Apple say that my App was not showing the correct iPad window and it was showing the same iPhone screen but top left aligned. Running on simulator, I get my App to show exactly what it should, a big iPad View. my app as Apple referees that is showing on device: my app running the simulator (50% zoom only): my code in the Application Delegate is the one I published before - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions { // The default have the line below, let us comment it //MainViewController *aController = [[MainViewController alloc] initWithNibName:@"MainView" bundle:nil]; // Our main controller MainViewController *aController = nil; // Is this OS 3.2.0+ ? #if __IPHONE_OS_VERSION_MAX_ALLOWED >= 30200 if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) // It's an iPad, let's set the MainView to our MainView-iPad aController = [[MainViewController alloc] initWithNibName:@"MainView-iPad" bundle:nil]; else // This is a 3.2.0+ but not an iPad (for future, when iPhone/iPod Touch runs with same OS than iPad) aController = [[MainViewController alloc] initWithNibName:@"MainView" bundle:nil]; #else // It's an iPhone/iPod Touch (OS < 3.2.0) aController = [[MainViewController alloc] initWithNibName:@"MainView" bundle:nil]; #endif // Let's continue our default code self.mainViewController = aController; [aController release]; mainViewController.view.frame = [UIScreen mainScreen].applicationFrame; [window addSubview:[mainViewController view]]; [window makeKeyAndVisible]; return YES; } on my target info I have iPhone/iPad My question is, how should I build the app? Use Base SDK iPhone Simulator 3.1.3 iPhone Simulator 3.2 my Active Configuration is Distribution and Active Architecture is arm6 Can anyone that already published app into iTunes Connect explain me the settings? P.S. I followed the Developer Guideline on Building and Installing your Development Application that is found on Creating and Downloading Development Provisioning Profiles but does not say anything regarding this, as I did exactly and the app was rejected.

    Read the article

  • "error creating files compilation failed" when publishing for iOS on Flash CS5.5

    - by user1662660
    I'm currently developing an app using Flash CS5.5 for iOS. All of my .ipa files have been created and tested with no problems so far. Until now. I'm using Windows. It only started today and happens to every file I try to publish. I get the error "error creating files. compilation failed..." That is the only piece of information I am given about the error. All of my certificates and provisioning profiles are legit and still valid. Has anyone had any experience with this before? Since posting the question I have found a resolution that I will post for anyone who encounters the same problem. In this case I was unable to publish my .ipa file as I was opening the document (and Flash) from the start menu in Windows 7. To fix this instance of the issue I simply opened Flash first and then opened my document and I was able to publish.

    Read the article

  • How to fix codesign when it says user cancelled the operation? (And I didn't)

    - by eipipuz
    I'm compiling an iPhone app meant to be Distributed. It's my first app so I followed the "iPhone Provisioning Profiles" instructions. Unfortunately it fails with this: CodeSign build/*_*_.app cd "/Users/videojuegos/Documents/*_*_" setenv IGNORE_CODESIGN_ALLOCATE_RADAR_7181968 /Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/codesign_allocate setenv PATH "/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin" /usr/bin/codesign -f -s "iPhone Distribution: ******" "--resource-rules=/Users/videojuegos/Documents/*_*_/build/*_*_.app/ResourceRules.plist" --entitlements "/Users/videojuegos/Documents/*_*_/build/Unity-iPhone.build/Distribution-iphoneos/Unity-iPhone.build/*_*_.xcent" "/Users/videojuegos/Documents/*_*_/build/*_*_.app" /Users/videojuegos/Documents/*_*_/build/*_*_.app: The operation was cancelled by the user. Command /usr/bin/codesign failed with exit code 1 I thought Keychain wasn't allowing Codesign to work but as far as I can tell that isn't the case. I also attempted running these commands from a terminal and it failed with this message: Users/videojuegos/Documents/*_*_/build/Unity-iPhone.build/Distribution-iphoneos/Unity-iPhone.build/*_*_.xcent: cannot read entitlement data I have made the xcode setting from scratch three times. Googled it. No results. I don't have any idea what else to try. Any suggestions?
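
    Two quick checks that sometimes clear this kind of error up, assuming the signing identity lives in the login keychain: make sure the keychain is unlocked for the build session, and confirm codesign can actually see a valid signing identity:

      security unlock-keychain ~/Library/Keychains/login.keychain
      security find-identity -v -p codesigning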

    Read the article

  • register device at run time

    - by user177893
    In the App ID section of the Program Portal, locate the App ID you wish to use with the Apple Push Notification service. Only App IDs with a specific bundle ID can be used with the APNs. You cannot use a “wild-card” application ID. You must see “Available” under the Apple Push Notification service column to register this App ID and configure a certificate for this App ID. Click the ‘Configure’ link next to your desired App ID. In the Configure App ID page, check the Enable Push Notification Services box and click the Configure button. Clicking this button launches the APNs Assistant, which guides you through the next series of steps that create your App ID specific Client SSL certificate. Download the Client SSL certificate file to your download location. Navigate to that location and double-click the certificate file (which has an extension of cer) to install it in your keychain. When you are finished, click Done in the APNS Assistant. Double-clicking the file launches Keychain Access. Make sure you install the certificate in your login keychain on the computer you are using for provider development. The APNs SSL certificate should be installed on your notification server. When you finish these steps you are returned to the Configure App ID page of the iPhone Dev Center portal. The certificate should be badged with a green circle and the label “Enabled”. To complete the APNs set-up process, you will need to create a new provisioning profile containing your APNs-enabled App ID. Is it possible to do these steps through code?

    Read the article

  • Big problems on iPhone ad hoc build -

    - by phil swenson
    No matter what I do I can't get my ad hoc provisioning profile to work. In Organizer, I always get "A valid signing identity matching this profile cannot be found in your keychain" for my adhoc profile. I have my distribution cert installed in my login keychain. I dragged the adhoc mobileprovision file to XCode... that's pretty much all there is to it, right? I searched around and found suggestions like re-creating the cert/profile. Did that, same thing. Also make sure your login keychain is default. It is. Even tried a different computer. Same result. In XCode AdHoc target I don't have a distribution target to pick. This all used to work, but obviously I messed something up..... Perhaps my process is just wrong (it's been months since I did this). Does someone have a step by step list of how to set up for adhoc distribution?

    Read the article

  • How to install previously-archived apps from xcode organizer to my iphone

    - by Ben Clayton
    Hi all. Xcode keeps an archive of all the versions of my apps that I've submitted to the app store in the 'archived applications' section. I assumed using this I could install an old version of an app to my device, in order to reproduce any problems my client may have had with that particular version. However, when I try to do this I get an error: 'this executable was signed with invalid entitlements, the entitlements specified in your applications code signing entitlements do not match those specified in your provisioning profile' The original app was signed using our App Store distribution certificate, and I use the Organizer interface to re-sign it using our Developer profile: I select the archived app, select the version I want to test, click 'Share', select 'iPhone Developer' next to identity, save to disk (which saves the ipa file), and then copy the ipa to the device using the little + button you see next to 'Applications' on the screen you get when you select the connected device. Then I get the error, and the app isn't installed. Is there something obvious I'm doing wrong here? Or is there a different process to re-install an archived app to my device? Thanks,

    Read the article

  • Uploading Binary iPhone App "The signature was invalid" again again and again...

    - by user338386
    Hello! I'm going crazy! I'm trying to upload the binary of my first application but I have always the same error! "The binary you uploaded was invalid. The signature was invalid, or it was not signed with an Apple submission certificate." I did everything, EVERYTHING!! I created the request for the certificate, used it for both developer and distribution certificate, created the provisioning profile (12 times!!!) always cleaning my keychain and my Xcode deleting the old certificates and profiles.. I reboot the machine, reboot Xcode, the log is correct, but... I can't upload my app!!!! Checked if my iPhone is connected (i tried with iPhone disconneted too). I checked the certificate in both my project settings "Distribuition" Configuration (duplicate of "Release" configuration) and in my target settings. Reveal in finder, compress the app and sent the zip... I tried with Application Loader and iTunes connect online.. but nothing! NOTHING!! I've spent 8 hours! And again i can't have my app uploaded!!! I'm really going crazy! Can anyone help me pleeease? Thx!

    Read the article

  • iPhone - Problem with in-app purchases

    - by Satyam svv
    I've created an iPhone app with in-app purchase. Now I'm in the testing phase. I created the provisioning profile com.satyam.testapp. In iTunes Connect I created the application and uploaded the images, screenshots, description etc. I also created two IDs for in-app purchase. One is com.satyam.testapp.book1 and the other one is com.satyam.testapp.book5. I also created a test account for verifying my in-app purchases. Using com.satyam.testapp I created a developer test profile and am using the same in my developed application. I logged out of the iTunes App Store account on my iPhone. Now I started running my application on my iPhone. It's saying that there are no items to purchase, but it's not even asking me for credentials where I have to enter the test account username and password... How do I debug it? Here's my delegate: - (void)productsRequest:(SKProductsRequest *)request didReceiveResponse:(SKProductsResponse *)response { NSArray *myProduct = [[NSArray alloc] initWithArray:response.products]; for(int i=0;i<[myProduct count];i++) { SKProduct *product = [myProduct objectAtIndex:i]; NSLog(@"Name: %@ - Price: %f",[product localizedTitle],[[product price] doubleValue]); NSLog(@"Product identifier: %@", [product productIdentifier]); } for(NSString *invalidProduct in response.invalidProductIdentifiers) NSLog(@"Problem in iTunes connect configuration for product: %@", invalidProduct); [request autorelease]; [myProduct release]; }
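
    For reference, a minimal sketch of kicking off the products request that feeds the delegate above (the identifiers are taken from the question; everything else is assumed):

      // build the request with the product identifiers configured in iTunes Connect
      NSSet *identifiers = [NSSet setWithObjects:@"com.satyam.testapp.book1",
                                                 @"com.satyam.testapp.book5", nil];
      SKProductsRequest *request = [[SKProductsRequest alloc] initWithProductIdentifiers:identifiers];
      request.delegate = self;   // the object implementing SKProductsRequestDelegate above
      [request start];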

    Read the article

  • Help with why my app crashed?

    - by Moshe
    I'm writing an iPad app that is a "kiosk" app. The iPad should be hanging on the wall and the app should just run. I did a test, starting the app last night (Friday, December 31) and letting it run. This morning, when I woke up, it was not running. I just checked the iPad's console and I can't figure out why it crashed. The iPad was plugged in, so the battery is not the issue. I did disable the idleTimer in my application delegate. The app was seen running as late as midnight last night. I would like to note that my app acts as a Bluetooth server through Game Kit and a large portion of the console output is occupied by Bluetooth status messages. When I opened the iPad, the app was paused and there was a system alert which prompted me to check an "Expiring Provisioning Profile". I tapped "dismiss" and the alert went away. The app crashed about a second after I dismissed the system alert. Any ideas how I can diagnose this problem? Why would my app crash? Here is my iPad's Console log, as copied from Xcode's organizer. Edit: A bit of Googling led me to this site which says that alert views cause the app to lose focus. Could that be involved? What can I do to fix the problem? EDIT2: My Crash log describes the situation as: Application Specific Information: appname failed to resume in time Elapsed total CPU time (seconds): 10.010 (user 8.070, system 1.940), 100% CPU Elapsed application CPU time (seconds): 9.470, 95% CPU

    Read the article

  • CoreMidi _MIDINetworkNotificationContactsDidChange symbol not found

    - by Domestic Cat
    I'm getting the following error after a crash in an iPad app that uses CoreMIDI (The * are to blank out the app name): Dyld Error Message: Symbol not found: _MIDINetworkNotificationContactsDidChange Referenced from: /var/mobile/Applications/8F08B78E-929D-4C5A-9F02-08FD5743C17F/***.app/*** Expected in: /System/Library/Frameworks/CoreMIDI.framework/CoreMIDI in /var/mobile/Applications/8F08B78E-929D-4C5A-9F02-08FD5743C17F/***.app/*** Dyld Version: 179.4 When the app launches, I listen for MIDI Network Sessions using [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(sessionDidChange:) name:MIDINetworkNotificationSessionDidChange object:nil]; [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(sessionDidChange:) name:MIDINetworkNotificationContactsDidChange object:nil]; Which seems to be what is causing the crash. This is after I call session = [MIDINetworkSession defaultSession]; session.enabled = YES; session.connectionPolicy = MIDINetworkConnectionPolicy_Anyone; MIDIClientCreate(CFSTR("MidiManager"), midiNotifyProc, (void*)self, &midiClientRef); This kind of looks like CoreMIDI library has not been included in the build. Problem is, it IS included in the build as a required framework. (And the deployment target is set to 4.2). I can run the build fine on my iPad and have been testing extensively with other users' iPads also with no problems whatsoever. Also, this is an update to an existing app that has had several updates already with no problems. I just double checked my deployment build and the framework is definitely included, and I just installed that build onto my iPad (with a different provisioning profile from the store) and it works fine also. What could be happening? Could it be that Xcode just did a bad build for the one I sent to Apple, or am I missing something obvious? Could I change the MIDINetworkNotificationSessionDidChange notification symbol to a literal string (@"MIDINetworkNotificationSessionDidChange") to fix things for the mean time? Thanks for any help!

    Read the article

  • Serialization of Entity Framework Models with .NET WCF Rest Service

    - by Chris Phillips
    I'm trying to put together a very simple REST-style interface for communicating with our partners. An example object in the API is a partner, which we'd like to have serialized like this: <partner> <id>ID</id> <name>NAME</name> </partner> This is fairly simple to achieve using the .NET 4.0 WCF REST template if we simply declare a partner class as: public class Partner { public int Id {get; set;} public string Name {get; set;} } But when I use the Entity Framework to define and store Partner objects, the resulting serialization looks something like this: <Partner p1:Id="NCNameString" p1:Ref="NCNameString" xmlns:p1="http://schemas.microsoft.com/2003/10/Serialization/" xmlns="http://schemas.datacontract.org/2004/07/TheTradeDesk.AdPlatform.Provisioning"> <EntityKey p1:Id="NCNameString" p1:Ref="NCNameString" xmlns="http://schemas.datacontract.org/2004/07/System.Data.Objects.DataClasses"> <EntityContainerName xmlns="http://schemas.datacontract.org/2004/07/System.Data">String content</EntityContainerName> <EntityKeyValues xmlns="http://schemas.datacontract.org/2004/07/System.Data"> ... This XML is obviously unacceptable for use as an external API. What are suggested mechanisms for using EF for the data store but maintaining a simple XML serialization interface?
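
    One common way around this is to keep the EF entity out of the service contract entirely and return a small data-transfer object shaped like the desired XML; a hedged sketch (type and member names are assumptions, only the Id/Name properties come from the question):

      [DataContract(Name = "partner", Namespace = "")]
      public class PartnerDto
      {
          [DataMember(Name = "id", Order = 1)]
          public int Id { get; set; }

          [DataMember(Name = "name", Order = 2)]
          public string Name { get; set; }

          // map from the EF entity at the service boundary
          public static PartnerDto FromEntity(Partner p)
          {
              return new PartnerDto { Id = p.Id, Name = p.Name };
          }
      }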

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19  | Next Page >