Search Results

Search found 5929 results on 238 pages for 'node webkit'.


  • How to handle the base tag target attribute in an iPhone UIWebView to open a new window

    - by user217428
    When links are supposed to open a new window, the iPhone UIWebView won't trigger an event when the user taps them, so we had to use JavaScript to do some trickery on the target attribute of the links. I can handle the 'a' tag to open in the '_self' window with this trick without problem, but when I do the same with the 'base' tag it doesn't work. I believe the base target is set by the JavaScript, but the base tag is in the head, which may be handled by the UIWebView before my JavaScript executes, so the target change may not be reflected in the WebKit engine. Could someone please give some suggestion so I can open the link in the same UIWebView?

    The following is the sample HTML opened in the UIWebView:

        <html>
        <head>
        <base target='_blank'>
        </head>
        <body>
        <a href='http://google.ca'>google</a>
        </body>
        </html>

    The following is the code executed in (void)webViewDidFinishLoad:(UIWebView *)webView:

        static NSString* js = @""
            "function bkModifyBaseTargets()"
            "{"
            "var allBases = window.document.getElementsByTagName('base');"
            "if (allBases)"
            "{"
            "for (var i = 0; i < allBases.length; i++)"
            "{"
            "base = allBases[i];"
            "target = base.getAttribute('target');"
            "if (target)"
            "{"
            "base.setAttribute('target', '_self');"
            "}"
            "}"
            "}"
            "}";
        [webView stringByEvaluatingJavaScriptFromString: js];
        [webView stringByEvaluatingJavaScriptFromString: @"bkModifyBaseTargets()"];


  • Correct XML serialization and deserialization of "mixed" types in .NET

    - by Stefan
    My current task involves writing a class library for processing HL7 CDA files. These HL7 CDA files are XML files with a defined XML schema, so I used xsd.exe to generate .NET classes for XML serialization and deserialization. The XML Schema contains various types which contain the mixed="true" attribute, specifying that an XML node of this type may contain normal text mixed with other XML nodes. The relevant part of the XML schema for one of these types looks like this:

        <xs:complexType name="StrucDoc.Paragraph" mixed="true">
          <xs:sequence>
            <xs:element name="caption" type="StrucDoc.Caption" minOccurs="0"/>
            <xs:choice minOccurs="0" maxOccurs="unbounded">
              <xs:element name="br" type="StrucDoc.Br"/>
              <xs:element name="sub" type="StrucDoc.Sub"/>
              <xs:element name="sup" type="StrucDoc.Sup"/>
              <!-- ...other possible nodes... -->
            </xs:choice>
          </xs:sequence>
          <xs:attribute name="ID" type="xs:ID"/>
          <!-- ...other attributes... -->
        </xs:complexType>

    The generated code for this type looks like this:

        /// <remarks/>
        [System.CodeDom.Compiler.GeneratedCodeAttribute("xsd", "2.0.50727.3038")]
        [System.SerializableAttribute()]
        [System.Diagnostics.DebuggerStepThroughAttribute()]
        [System.ComponentModel.DesignerCategoryAttribute("code")]
        [System.Xml.Serialization.XmlTypeAttribute(TypeName="StrucDoc.Paragraph", Namespace="urn:hl7-org:v3")]
        public partial class StrucDocParagraph
        {
            private StrucDocCaption captionField;
            private object[] itemsField;
            private string[] textField;
            private string idField;
            // ...fields for other attributes...

            /// <remarks/>
            public StrucDocCaption caption
            {
                get { return this.captionField; }
                set { this.captionField = value; }
            }

            /// <remarks/>
            [System.Xml.Serialization.XmlElementAttribute("br", typeof(StrucDocBr))]
            [System.Xml.Serialization.XmlElementAttribute("sub", typeof(StrucDocSub))]
            [System.Xml.Serialization.XmlElementAttribute("sup", typeof(StrucDocSup))]
            // ...other possible nodes...
            public object[] Items
            {
                get { return this.itemsField; }
                set { this.itemsField = value; }
            }

            /// <remarks/>
            [System.Xml.Serialization.XmlTextAttribute()]
            public string[] Text
            {
                get { return this.textField; }
                set { this.textField = value; }
            }

            /// <remarks/>
            [System.Xml.Serialization.XmlAttributeAttribute(DataType="ID")]
            public string ID
            {
                get { return this.idField; }
                set { this.idField = value; }
            }

            // ...properties for other attributes...
        }

    If I deserialize an XML element where the paragraph node looks like this:

        <paragraph>first line<br /><br />third line</paragraph>

    the result is that the item and text arrays are read like this:

        itemsField = new object[] { new StrucDocBr(), new StrucDocBr(), };
        textField = new string[] { "first line", "third line", };

    From this there is no possible way to determine the exact order of the text and the other nodes. If I serialize this again, the result looks exactly like this:

        <paragraph>
          <br />
          <br />first linethird line
        </paragraph>

    The default serializer just serializes the items first and then the text. I tried implementing IXmlSerializable on the StrucDocParagraph class so that I could control the deserialization and serialization of the content, but it's rather complex since there are so many classes involved and I didn't come to a solution yet because I don't know if the effort pays off. Is there some kind of easy workaround to this problem, or is it even possible by doing custom serialization via IXmlSerializable? Or should I just use XmlDocument or XmlReader/XmlWriter to process these documents?
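
    For illustration only (not taken from the question): a minimal sketch of the XmlDocument route mentioned at the end, showing that a DOM-style API keeps the text nodes and element nodes of a mixed-content element in document order - exactly the information the generated Items/Text arrays lose. The class and variable names here are made up.

        // Sketch: walking mixed content in document order with XmlDocument.
        using System;
        using System.Xml;

        class MixedContentSketch
        {
            static void Main()
            {
                var doc = new XmlDocument();
                doc.LoadXml("<paragraph>first line<br /><br />third line</paragraph>");

                // ChildNodes preserves the original ordering of text and elements.
                foreach (XmlNode child in doc.DocumentElement.ChildNodes)
                {
                    if (child.NodeType == XmlNodeType.Text)
                        Console.WriteLine("text:    " + child.Value);
                    else if (child.NodeType == XmlNodeType.Element)
                        Console.WriteLine("element: <" + child.Name + " />");
                }
                // Prints the text "first line", two <br /> elements, then "third line".
            }
        }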


  • OpenVZ multiple networks on CTs

    - by picca
    I have a Hardware Node (HN) which has 2 physical interfaces (eth0, eth1). I'm playing with OpenVZ and want to let my containers (CTs) have access to both of those interfaces. I'm using the basic configuration - venet. CTs are fine accessing eth0 (the public interface), but I can't get CTs to access eth1 (the private network). I tried:

        # on HN
        vzctl set 101 --ipadd 192.168.1.101 --save
        vzctl enter 101
        ping 192.168.1.2    # no response here
        ifconfig            # on CT

    ifconfig on the CT returns lo (127.0.0.1), venet0 (127.0.0.1), venet0:0 (95.168.xxx.xxx) and venet0:1 (192.168.1.101).

    I believe the main problem is that all packets flow through eth0 on the HN (figured out using tcpdump), so the problem might be in the routes on the HN. Or is my logic here all wrong? I just need access to both interfaces (networks) on the HN from the CTs. Nothing complicated.


  • Virtual Machine loses network connectivity on Hyper-V

    - by Chris W
    We're running a number of VMs on a 6-node failover cluster of blades using Hyper-V. We have an intermittent issue (every few days, at different times - not a fixed frequency) of VMs losing network connectivity. Console access to the VM suggests all is fine, and the underlying blade has normal connectivity. To resolve the problem we either have to restart the VM or, more usually, we do a live migration to another blade, which brings connectivity back, and we then migrate it back to the original blade. I've had 3 instances of this happen with a specific VM running on a particular blade; however, it has happened once with a different VM running on a different blade. All VMs and blades have the same basic setup and are running Windows 2008 R2. Any ideas where I should be looking to diagnose the possible causes of this problem, as the event logs provide no help?


  • What ports to open in an AWS security group?

    - by HarrisonJackson
    I am building the backend for a turn-based game. My experience is mostly with a LAMP stack; I've dabbled in nginx on a Node side project. I just read Scaling PHP Applications by Stephen Corona of Twitpic. He recommends an nginx server over Apache. He says that his Ubuntu machine has ports 32768-61000 open. On AWS, do I need to modify my security group to allow access to those ports? How do I ensure nginx is taking full advantage of this configuration?


  • Sharing information between nodes in Beowulf Cluster

    - by Alejandro Sazo
    I am setting up a Beowulf cluster and I've been reading that it might be necessary to make the home directory of the cluster users shared between the nodes (assuming these users are local to each machine). The other option is to leave each user with its own home directory and leave the communication up to the master node. Another idea that came up was to use a single LDAP user logged on to each machine in the cluster, which keeps the idea of the shared home between nodes (but it is only one home for one user). Which approach is better for this kind of cluster? A sketch of the shared-home option is given below.

    Edit: The cluster is running openmpi and it will support cuda and opencl
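
    Purely as an illustration (assumptions: the master exports /home over NFS and the compute nodes sit on 192.168.1.0/24 - neither detail comes from the question), the shared-home option usually boils down to two configuration lines:

        # /etc/exports on the master node
        /home  192.168.1.0/24(rw,sync,no_subtree_check)

        # /etc/fstab on each compute node ("master" is a placeholder hostname)
        master:/home  /home  nfs  defaults  0  0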


  • ADExplorer, how to search with "distinguishedName contains" condition?

    - by Jimm Chen
    I'm using ADExplorer 1.42 from Microsoft. I'm not very versed in this program, so please kindly help me out with a search-related problem. I right-click on a node (e.g., CN=NlscanStaff) and select Search Container...; with the default search attributes, I can see all objects inside NlscanStaff listed as results. Note that there is a CN=CHJTEST object listed. Now, my question is: how do I search for CHJTEST specifically? I tried this search condition:

        Attribute : distinguishedName
        Relation  : contains
        Value     : CN=CHJTEST

    I click Add, then Search, but there is no result. Can someone tell me what's going wrong? Thanks.


  • Why use multiple partitions on a RHEL server?

    - by Jakobud
    I'm about to reformat and reinstall CentOS onto an old server. The server runs on a modest 30-node small business network and has a variety of responsibilities including MySQL, a Samba share, DHCPd and SVN/Trac. The old sysadmin had this server set up with almost a dozen different partitions for various things. I'm trying to understand what the advantages of multiple partitions are as opposed to just one filesystem at /. Speed? Flexibility? Security? It seems like if you misjudge the necessary size for any given partition and it ends up filling up too fast, it requires a sysadmin to go in and expand the partition, etc. It seems like it would be easier if everything were just one flat / filesystem. But I'm sure there are some advantages I'm not aware of. The server is currently running a handful of HDDs raided to ~2TB (RAID 0).


  • Adding lines to /etc/profile with puppet?

    - by miku
    I use puppet to install a current JDK and tomcat:

        package { [ "openjdk-6-jdk", "openjdk-6-doc", "openjdk-6-jre",
                    "tomcat6", "tomcat6-admin", "tomcat6-common",
                    "tomcat6-docs", "tomcat6-user" ]:
          ensure => present,
        }

    Now I'd like to add

        JAVA_HOME="/usr/lib/java"
        export JAVA_HOME

    to /etc/profile, just to get this out of the way. I haven't found a straightforward answer in the docs yet. Is there a recommended way to do this? In general, how do I tell puppet to place this file there or modify that file? I'm using puppet for a single node (in standalone mode) just to try it out and to keep a log of the server setup.
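
    One possible sketch (an assumption, not something taken from the question or the Puppet docs): instead of editing /etc/profile in place, manage a small snippet under /etc/profile.d, which login shells source automatically on most distributions, including Debian/Ubuntu. The file name is arbitrary.

        # Hypothetical example: drop the JAVA_HOME export into /etc/profile.d
        file { '/etc/profile.d/java_home.sh':
          ensure  => file,
          owner   => 'root',
          group   => 'root',
          mode    => '0644',
          content => "JAVA_HOME=\"/usr/lib/java\"\nexport JAVA_HOME\n",
        }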


  • What is the crappiest network build you've ever seen?

    - by Ivan Petrushev
    This is only about networks you have seen personally, not heard about from others or seen in pictures on the web. Cables hanging from the ceiling lamps? Cables going through culverts and other tubes? Switches and other equipment in the cleaner's closet? The cleaning lady's rags hanging to dry from the cables? The key to the main node door possessed only by the janitor (or some other non-tech and completely non-network-related guy)? Switches powered by foreign power adapters (cheaper, and providing unspecified voltage or amperage)? All of this was in my old dormitory. Tell us about your bad experience.


  • 32bit domU on 64bit dom0

    - by ModuleC
    I'm using a 64bit CentOS dom0 with:

        2.6.18-164.15.1.el5xen #1 SMP Wed Mar 17 12:04:23 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux

    Recently I migrated some 32bit CentOS domUs to this node. According to the specs, 32bit domUs should work with a 64bit dom0. The domUs are paravirtualized, and everything works except the iptables limit match. Running csf on a domU returns the following messages in dmesg:

        ip_tables: (C) 2000-2006 Netfilter Core Team
        Netfilter messages via NETLINK v0.30.
        ip_conntrack version 2.4 (2080 buckets, 16640 max) - 304 bytes per conntrack
        ip_tables: limit match: invalid size 40 != 28
        ip_tables: limit match: invalid size 40 != 28
        ip_tables: limit match: invalid size 40 != 28
        ip_tables: limit match: invalid size 40 != 28
        ip_tables: limit match: invalid size 40 != 28
        ip_tables: limit match: invalid size 40 != 28
        ip_tables: limit match: invalid size 40 != 28

    Doing lsmod on both dom0 and domU lists all the required iptables modules as loaded. I've found this http://www.mail-archive.com/[email protected]/msg189433.html but didn't find anything for CentOS on this issue. Am I missing something?


  • How to change from our own Internal/External DNS to an outsourced service like DNS Made Easy?

    - by Joakim
    Our current setup is a co-located Linux box with an OpenVZ kernel, a handful of virtual containers for www, mail etc., and one container running Bind9 with a split-views configuration serving external and internal DNS. The HW node runs a Shorewall firewall and all containers use private IPs. The box (and DNS) basically handles web and mail for a handful of domains, and it works well, but we still think it would be a good idea to outsource the public DNS. Now to my question...

    Although I am fairly comfortable with the server stuff and DNS, I'm far from a pro and basically need confirmation that I am thinking in the right direction: I just move the content of our external view (with zone files) to the external service, keep the internal view (or actually remove the view), update the info at my registrar with the new external provider's name servers, and wait for propagation - or have I missed something? Maybe someone else here already runs something similar and can share some experiences? I found this question, which at least confirms it can be done.


  • hiera_include equivalent for resource types

    - by quickshiftin
    I'm using the yumrepo built-in type. I can get a basic integration to hiera working:

        yumrepo { hiera('yumrepo::name') :
          metadata_expire => hiera('yumrepo::metadata_expire'),
          descr           => hiera('yumrepo::descr'),
          gpgcheck        => hiera('yumrepo::gpgcheck'),
          http_caching    => hiera('yumrepo::http_caching'),
          baseurl         => hiera('yumrepo::baseurl'),
          enabled         => hiera('yumrepo::enabled'),
        }

    If I try to remove that definition and instead go for hiera_include('classes'), here's what I've got in the corresponding yaml backend:

        classes:
          - "yumrepo"
        yumrepo::metadata_expire: 0
        yumrepo::descr: "custom repository"
        yumrepo::gpgcheck: 0
        yumrepo::http_caching: none
        yumrepo::baseurl: "http://myserver/custom-repo/$basearch"
        yumrepo::enabled: 1

    I get this error on an agent:

        Error 400 on SERVER: Could not find class yumrepo

    I guess you can't get away from some sort of minimal node declaration with hiera and resource types? Maybe hiera_hash is the way to go? I gave this a shot, but it produces a syntax error:

        yumrepo { 'hnav-development':
          hiera_hash('yumrepo')
        }
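
    As a hedged sketch of one possible way around this (not from the question): hiera_include can only declare classes, so the yumrepo resource has to live inside a class that the node includes; that class can then pull its data from hiera. The class name and hiera key below are invented for illustration.

        # Hypothetical wrapper class so hiera_include('classes') has something to find
        class site::yumrepos {
          $repos = hiera_hash('site::yumrepos::repos', {})
          create_resources('yumrepo', $repos)
        }

    The matching YAML would then list site::yumrepos under classes and put the per-repo parameters in a hash under site::yumrepos::repos.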


  • Apache is running; however, it reports that it is not, and it will not restart.

    - by solo
    Apache is running; however, it reports that it is not, and it will not restart.

        # /etc/init.d/httpd status
        httpd.worker is stopped

        # /usr/sbin/lsof -iTCP:80
        COMMAND   PID  USER   FD  TYPE DEVICE SIZE NODE NAME
        httpd.wor 1169 root   3u  IPv6 2974        TCP  *:http (LISTEN)
        httpd.wor 1211 daemon 3u  IPv6 2974        TCP  *:http (LISTEN)
        httpd.wor 1213 daemon 3u  IPv6 2974        TCP  *:http (LISTEN)
        httpd.wor 1215 daemon 3u  IPv6 2974        TCP  *:http (LISTEN)
        httpd.wor 1352 daemon 3u  IPv6 2974        TCP  *:http (LISTEN)

        # /etc/init.d/httpd restart
        Stopping httpd:                                            [FAILED]
        Starting httpd: [Wed Mar 24 10:33:51 2010] [warn] module proxy_ajp_module is already loaded, skipping
        (98)Address already in use: make_sock: could not bind to address [::]:80
        (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80
        no listening sockets available, shutting down
        Unable to open logs
                                                                   [FAILED]

    OS: Linux
    DISTRO: CentOS 5

    Restarting the server didn't help, nor did killing apache and starting it. Any idea what is causing this inconsistency?


  • Performance in backpropagation algorithm

    - by Taban
    I've written a Matlab program for the standard backpropagation algorithm. It is my homework and I should not use the Matlab toolbox, so I wrote the entire code by myself. This link helped me with the backpropagation algorithm. I have a data set of 40 random numbers and the initial weights are random. As output, I want to see a diagram that shows the performance. I used mse and the plot function to see the performance for 20 epochs, but the result is this: I heard that performance should go up through backpropagation, so I want to know whether there is a problem with my code or whether this result is normal because of local minima. This is my code:

        Hidden_node=inputdlg('Enter the number of Hidden nodes');
        a=0.5; %initialize learning rate
        hiddenn=str2num(Hidden_node{1,1});
        randn('seed',0);
        %creating data set
        s=2;
        N=10;
        m=[5 -5 5 5;-5 -5 5 -5];
        S = s*eye(2);
        [l,c] = size(m);
        x = [];
        % Creating the training set
        for i = 1:c
        x = [x mvnrnd(m(:,i)',S,N)'];
        end
        % target value
        toutput=[ones(1,N) zeros(1,N) ones(1,N) zeros(1,N)];
        for epoch=1:20; %number of epochs
        for kk=1:40; %number of patterns
        %initial weights of hidden layer
        for ii=1 : 2;
        for jj=1 :hiddenn;
        whidden{ii,jj}=rand(1);
        end
        end
        %initial the wights of output layer
        for ii=1 : hiddenn;
        woutput{ii,1}=rand(1);
        end
        for ii=1:hiddenn;
        x1=x(1,kk);
        x2=x(2,kk);
        w1=whidden{1,ii};
        w2=whidden{2,ii};
        activation{1,ii}=(x1(1,1)*w1(1,1))+(x2(1,1)*w2(1,1));
        end
        %calculate output of hidden nodes
        for ii=1:hiddenn;
        hidden_to_out{1,ii}=logsig(activation{1,ii});
        end
        activation_O{1,1}=0;
        for jj=1:hiddenn;
        activation_O{1,1} = activation_O{1,1}+(hidden_to_out{1,jj}*woutput{jj,1});
        end
        %calculate output
        out{1,1}=logsig(activation_O{1,1});
        out_for_plot(1,kk)= out{1,ii};
        %calculate error for output node
        delta_out{1,1}=(toutput(1,kk)-out{1,1});
        %update weight of output node
        for ii=1:hiddenn;
        woutput{ii,jj}=woutput{ii,jj}+delta_out{1,jj}*hidden_to_out{1,ii}*dlogsig(activation_O{1,jj},logsig(activation_O{1,jj}))*a;
        end
        %calculate error of hidden nodes
        for ii=1:hiddenn;
        delta_hidden{1,ii}=woutput{ii,1}*delta_out{1,1};
        end
        %update weight of hidden nodes
        for ii=1:hiddenn;
        for jj=1:2;
        whidden{jj,ii}= whidden{jj,ii}+(delta_hidden{1,ii}*dlogsig(activation{1,ii},logsig(activation{1,ii}))*x(jj,kk)*a);
        end
        end
        a=a/(1.1); %decrease learning rate
        end
        %calculate performance
        e=toutput(1,kk)-out_for_plot(1,1);
        perf(1,epoch)=mse(e);
        end
        plot(perf);

    Thanks a lot.


  • What database is easy to maintain and manage in a cluster?

    - by Sanoj
    I'm looking for a database (DBMS) that is easy to scale out. I would like to have high availability, so I need a multi-master cluster where the data is replicated to two or more physical computers. I would also like to be able to start with one node (no replication) and then scale out to more nodes as needed, without a reinstallation or downtime. I would like a DBMS that is easy to maintain and manage. It should be easy to add nodes, remove nodes, take live backups and monitor the use of resources. It doesn't have to be a relational database system, so NoSQL is okay. And I would like a free version so I can test it on a small scale and compare it with alternatives. What alternatives do I have?


  • Does heartbeat v3 support the same resource agent types as Pacemaker?

    - by Emre He
    As we know, Pacemaker supports three types of Resource Agents (http://www.linux-ha.org/wiki/Resource_Agents):

        - LSB Resource Agents
        - OCF Resource Agents
        - legacy Heartbeat Resource Agents

    Does heartbeat v3 support all three of these resource agent types, or does it only support LSB and legacy heartbeat resource agents? We have only a virtual IP and one service that needs to switch over in the HA cluster, so we decided not to involve Pacemaker, which is how we came to this question: for example, we cannot monitor the application service with heartbeat; heartbeat can only handle starting it on the active node.

    Thanks,
    Emre


  • In Puppet, how would I secure a password variable (in this case a MySQL password)?

    - by Beaming Mel-Bin
    I am using Puppet to provision MySQL with a parameterised class:

        class mysql::server( $password ) {
          package { 'mysql-server': ensure => installed }
          package { 'mysql': ensure => installed }

          service { 'mysqld':
            enable  => true,
            ensure  => running,
            require => Package['mysql-server'],
          }

          exec { 'set-mysql-password':
            unless  => "mysqladmin -uroot -p$password status",
            path    => ['/bin', '/usr/bin'],
            command => "mysqladmin -uroot password $password",
            require => Service['mysqld'],
          }
        }

    How can I protect $password? Currently, I removed the default world-readable permission from the node definition file and explicitly gave puppet read permission via ACL. I'm assuming others have come across a similar situation, so perhaps there's a better practice.
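
    Not an answer from the thread, just a sketch of a related pattern: the literal password can at least be kept out of the node definition file by looking it up from hiera at declaration time, so only the hiera data file needs the tightened permissions. The node name and hiera key are placeholders, and this only moves the secret rather than encrypting it.

        # Hypothetical node declaration pulling the password from hiera
        node 'db01.example.com' {
          class { 'mysql::server':
            password => hiera('mysql_root_password'),
          }
        }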


  • Can't connect to SQL Server Management Express 2012

    - by Rare-Man
    I installed SQL Server Management Express 2012, but when I try to connect in the SQL Server Management Studio environment, I get this error:

        TITLE: Connect to Server
        Cannot connect to ..
        ADDITIONAL INFORMATION:
        A network-related or instance-specific error occurred while establishing a connection to SQL Server.
        The server was not found or was not accessible. Verify that the instance name is correct and that
        SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 -
        Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 2)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft%20SQL%20Server&EvtSrc=MSSQLServer&EvtID=2&LinkId=20476
        The system cannot find the file specified
        BUTTONS: OK

    Also, during installation I didn't have an option to select a cluster. In my SQL Server Configuration Manager, the SQL Server services list is empty. And when I tried Remove a Failover Cluster Node, this error happened: http://oi57.tinypic.com/2lrvat.jpg


  • Configuring HAProxy with memcache with failover

    - by Lawrie Matthews
    I'm configuring a new set of servers for an existing Wordpress site, and it's been requested that memcache be made available and more resilient. The idea proposed is to have HAProxy send requests to one of the two servers; if that memcache instance is inaccessible, it should switch to the second, but it should not switch back to the first if it comes back up, unless the second is then unavailable. This doesn't appear to be a particularly common use case and I've not found much along these lines, except possibly setting up the first node with an enormous rise value, such as:

        server server1 10.112.58.16:11211 check inter 5s fall 3 rise 99999999
        server server2 10.112.58.19:11211 check backup

    This fails over as expected when server1 is unavailable. It won't ever fall back to server1, though, even if server2 goes offline. Can this be made to work?


  • DB2 users and groups

    - by Arun Srini
    I just want to know everyone's experience and take on managing users/authentication on a multi-node DB2 cluster with user groups. I have 17 apps in production (project-based company, only 2 online apps) and some 30 users in 7 groups:

        prodsel  - group that has select privilege on all tables
        produpdt - update group on selected tables (as required by the apps)
        proddel  - delete
        prodins  - insert permissions for the group

    Now, when an app uses a certain user (called app1user) and needs select and insert privileges on a table, what my company does is:

        1. grant select and insert to prodsel and prodins respectively
        2. add the user to those two groups

    This creates a one-to-many relationship between the user and privileges, and app1user also gets select on the other tables granted to the prodsel group. I know this is wrong. Before I explain, I need to know how this is done elsewhere. Please share your experiences, even if you use other databases that use OS-level authentication.


  • How should we load the MFMailComposeViewController in cocos2d?

    - by srikanth rongali
    I am writing an app using cocos2d. I have written the method below for the selector goToFirstScreen:. The view is in landscape mode. I need to send an email, so I need to launch the MFMailComposeViewController, and I need it in portrait mode. But the control is not entering the viewDidLoad of the mailME class. The problem is in the goToFirstScreen: method, but I do not see where I am wrong.

        -(void)goToFirstScreen:(id)sender {
            NSLog(@"goToFirstScreen: ");
            CCScene *Scene = [CCScene node];
            CCLayer *Layer = [mailME node];
            [Scene addChild:Layer];
            [[CCDirector sharedDirector] setAnimationInterval:1.0/60];
            [[CCDirector sharedDirector] pushScene: Scene];
        }

    This is my mailME class to launch the mail controller:

        #import <UIKit/UIKit.h>
        #import <MessageUI/MessageUI.h>
        #import <MessageUI/MFMailComposeViewController.h>
        #import "cocos2d.h"

        @interface mailME : CCLayer <MFMailComposeViewControllerDelegate> {
            UIViewController *mailComposer;
        }
        -(void)displayComposerSheet;
        -(void)launchMailAppOnDevice;
        @end

        #import "mailME.h"

        @implementation mailME

        -(void)viewDidLoad {
            NSLog(@"Enetrd in to mail");
            Class mailClass = (NSClassFromString(@"MFMailComposeViewController"));
            if (mailClass != nil) {
                if ([mailClass canSendMail]) {
                    [self displayComposerSheet];
                } else {
                    [self launchMailAppOnDevice];
                }
            } else {
                [self launchMailAppOnDevice];
            }
        }

        -(void)displayComposerSheet {
            CCDirector *director = [CCDirector sharedDirector];
            [director pause];
            [director stopAnimation];
            [director.openGLView setUserInteractionEnabled:NO];
            mailComposer = [[UIViewController alloc] init];
            [mailComposer setView:[[CCDirector sharedDirector] openGLView]];
            [mailComposer setModalTransitionStyle:UIModalTransitionStyleCoverVertical];
            MFMailComposeViewController *picker = [[MFMailComposeViewController alloc] init];
            picker.mailComposeDelegate = self;
            [picker setSubject:@"Hello!"];
            NSArray *toRecipients = [NSArray arrayWithObject:@"[email protected]"];
            [picker setToRecipients:toRecipients];
            NSString *emailBody = @"It is not working!";
            [picker setMessageBody:emailBody isHTML:YES];
            [mailComposer presentModalViewController:picker animated:NO];
            [picker release];
        }

        - (void)mailComposeController:(MFMailComposeViewController*)controller didFinishWithResult:(MFMailComposeResult)result error:(NSError*)error {
            switch (result) {
                case MFMailComposeResultCancelled:
                    break;
                case MFMailComposeResultSaved:
                    break;
                case MFMailComposeResultSent:
                    break;
                case MFMailComposeResultFailed:
                    break;
                default:
                    break;
            }
            [mailComposer dismissModalViewControllerAnimated:NO];
            [[UIApplication sharedApplication] setStatusBarOrientation:CCDeviceOrientationLandscapeLeft animated:NO];
            CCDirector *director = [CCDirector sharedDirector];
            [director.openGLView setUserInteractionEnabled:YES];
            [director startAnimation];
            [director resume];
            [mailComposer.view.superview removeFromSuperview];
        }

        -(void)launchMailAppOnDevice {
            NSString *recipients = @"mailto:[email protected]?&subject=Hello!";
            NSString *body = @"&body=It is not working";
            NSString *email = [NSString stringWithFormat:@"%@%@", recipients, body];
            email = [email stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding];
            [[UIApplication sharedApplication] openURL:[NSURL URLWithString:email]];
        }

        - (void)dealloc {
            [super dealloc];
        }
        @end


  • Android beginner: understanding MotionEvent actions

    - by Dave
    I am having trouble getting my activity to return a MotionEvent.ACTION_UP. Probably a beginner's error. In LogCat, I'm only seeing the ACTION_MOVE event (which is an int value of 3). I also see the X/Y coordinates. No ACTION_DOWN and no ACTION_UP. I looked everywhere for a solution. I found one question on a forum that seems to be the same as my issue, but no solution is proposed: http://groups.google.com/group/android-developers/browse_thread/thread/9a9c23e40f02c134/bf12b89561f204ad?lnk=gst&q=ACTION_UP#bf12b89561f204ad

    Here's my code:

        import android.app.Activity;
        import android.os.Bundle;
        import android.util.Log;
        import android.view.MotionEvent;
        import android.webkit.WebView;

        public class Brand extends Activity {
            public WebView webview;
            public float currentXPosition;
            public float currentYPosition;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                webview = new WebView(this);
                setContentView(webview);
                webview.loadUrl("file:///android_asset/Brand.html");
            }

            @Override
            public boolean onTouchEvent(MotionEvent me) {
                int action = me.getAction();
                currentXPosition = me.getX();
                currentYPosition = me.getY();
                Log.v("MotionEvent", "Action = " + action);
                Log.v("MotionEvent", "X = " + currentXPosition + "Y = " + currentYPosition);
                if (action == MotionEvent.ACTION_MOVE) {
                    // do something
                }
                if (action == MotionEvent.ACTION_UP) {
                    // do something
                }
                return true;
            }
        }
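
    A small sketch for context (not from the original post, and the diagnosis is an assumption): if the WebView is consuming the down/up events before the Activity's onTouchEvent sees them, overriding dispatchTouchEvent in the same Activity lets you log every action while still passing the event on to the WebView.

        // Hypothetical addition to the Brand activity above
        @Override
        public boolean dispatchTouchEvent(MotionEvent me) {
            Log.v("MotionEvent", "dispatched action = " + me.getAction());
            return super.dispatchTouchEvent(me);  // let the WebView handle the event as usual
        }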


  • VPN Connection causes DNS to use wrong DNS server

    - by Bryan
    I have a Windows 7 PC on our company network (which is a member of our Active Directory). Everything works fine until I open a VPN connection to a customer's site. When I do connect, I lose network access to shares on the network, including directories such as 'Application Data' that we have a folder redirection policy for. As you can imagine, this makes working on the PC very difficult, as desktop shortcuts stop working and software stops working properly due to having 'Application Data' pulled from under it.

    Our network is routed (10.58.5.0/24), with other local subnets existing within the scope of 10.58.0.0/16. The remote network is on 192.168.0.0/24. I've tracked the issue down to being DNS related. As soon as I open the VPN tunnel, all my DNS traffic goes via the remote network, which explains the loss of local resources, but my question is: how can I force local DNS queries to go to our local DNS servers rather than our customer's?

    The output of ipconfig /all when not connected to the VPN is below:

        Windows IP Configuration
           Host Name . . . . . . . . . . . . : 7k5xy4j
           Primary Dns Suffix  . . . . . . . : mydomain.local
           Node Type . . . . . . . . . . . . : Hybrid
           IP Routing Enabled. . . . . . . . : No
           WINS Proxy Enabled. . . . . . . . : No
           DNS Suffix Search List. . . . . . : mydomain.local

        Ethernet adapter Local Area Connection:
           Connection-specific DNS Suffix  . : mydomain.local
           Description . . . . . . . . . . . : Broadcom NetLink (TM) Gigabit Ethernet
           Physical Address. . . . . . . . . : F0-4D-A2-DB-3B-CA
           DHCP Enabled. . . . . . . . . . . : Yes
           Autoconfiguration Enabled . . . . : Yes
           Link-local IPv6 Address . . . . . : fe80::9457:c5e0:6f10:b298%10(Preferred)
           IPv4 Address. . . . . . . . . . . : 10.58.5.89(Preferred)
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Lease Obtained. . . . . . . . . . : 31 January 2012 15:55:47
           Lease Expires . . . . . . . . . . : 10 February 2012 10:11:30
           Default Gateway . . . . . . . . . : 10.58.5.1
           DHCP Server . . . . . . . . . . . : 10.58.3.32
           DHCPv6 IAID . . . . . . . . . . . : 250629538
           DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-14-AC-76-2D-F0-4D-A2-DB-3B-CA
           DNS Servers . . . . . . . . . . . : 10.58.3.32
                                               10.58.3.33
           NetBIOS over Tcpip. . . . . . . . : Enabled

    This is the output of the same command with the VPN tunnel connected:

        Windows IP Configuration
           Host Name . . . . . . . . . . . . : 7k5xy4j
           Primary Dns Suffix  . . . . . . . : mydomain.local
           Node Type . . . . . . . . . . . . : Hybrid
           IP Routing Enabled. . . . . . . . : No
           WINS Proxy Enabled. . . . . . . . : No
           DNS Suffix Search List. . . . . . : mydomain.local

        PPP adapter Customer Domain:
           Connection-specific DNS Suffix  . : customerdomain.com
           Description . . . . . . . . . . . : CustomerDomain
           Physical Address. . . . . . . . . :
           DHCP Enabled. . . . . . . . . . . : No
           Autoconfiguration Enabled . . . . : Yes
           IPv4 Address. . . . . . . . . . . : 192.168.0.85(Preferred)
           Subnet Mask . . . . . . . . . . . : 255.255.255.255
           Default Gateway . . . . . . . . . :
           DNS Servers . . . . . . . . . . . : 192.168.0.16
                                               192.168.0.17
           Primary WINS Server . . . . . . . : 192.168.0.17
           NetBIOS over Tcpip. . . . . . . . : Disabled

        Ethernet adapter Local Area Connection:
           Connection-specific DNS Suffix  . : mydomain.local
           Description . . . . . . . . . . . : Broadcom NetLink (TM) Gigabit Ethernet
           Physical Address. . . . . . . . . : F0-4D-A2-DB-3B-CA
           DHCP Enabled. . . . . . . . . . . : Yes
           Autoconfiguration Enabled . . . . : Yes
           Link-local IPv6 Address . . . . . : fe80::9457:c5e0:6f10:b298%10(Preferred)
           IPv4 Address. . . . . . . . . . . : 10.58.5.89(Preferred)
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Lease Obtained. . . . . . . . . . : 31 January 2012 15:55:47
           Lease Expires . . . . . . . . . . : 10 February 2012 10:11:30
           Default Gateway . . . . . . . . . : 10.58.5.1
           DHCP Server . . . . . . . . . . . : 10.58.3.32
           DHCPv6 IAID . . . . . . . . . . . : 250629538
           DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-14-AC-76-2D-F0-4D-A2-DB-3B-CA
           DNS Servers . . . . . . . . . . . : 10.58.3.32
                                               10.58.3.33
           NetBIOS over Tcpip. . . . . . . . : Enabled

    Routing table:

        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0        10.58.5.1      10.58.5.89      20
                10.58.5.0    255.255.255.0          On-link      10.58.5.89     276
               10.58.5.89  255.255.255.255          On-link      10.58.5.89     276
              10.58.5.255  255.255.255.255          On-link      10.58.5.89     276
            91.194.153.42  255.255.255.255        10.58.5.1      10.58.5.89      21
                127.0.0.0        255.0.0.0          On-link       127.0.0.1     306
                127.0.0.1  255.255.255.255          On-link       127.0.0.1     306
          127.255.255.255  255.255.255.255          On-link       127.0.0.1     306
              192.168.0.0    255.255.255.0     192.168.0.95    192.168.0.85      21
             192.168.0.85  255.255.255.255          On-link    192.168.0.85     276
                224.0.0.0        240.0.0.0          On-link       127.0.0.1     306
                224.0.0.0        240.0.0.0          On-link      10.58.5.89     276
                224.0.0.0        240.0.0.0          On-link    192.168.0.85     276
          255.255.255.255  255.255.255.255          On-link       127.0.0.1     306
          255.255.255.255  255.255.255.255          On-link      10.58.5.89     276
          255.255.255.255  255.255.255.255          On-link    192.168.0.85     276

    The binding order for the interfaces is as follows:

    I've not configured the VPN tunnel to use the default gateway at the remote end, and network comms to nodes on both networks are fine (i.e. I can ping any node on our network or the remote network). I've modified the PPTP connection properties to use the DNS servers 10.58.3.32 followed by 192.168.0.16, yet the query still goes to 192.168.0.16.

    Edit: The local resources that disappear are hosted on domain DFS roots, which might (or might not) be relevant.


  • Set attribute to all child elements via xsl:choose

    - by Camal
    Hi, assuming I have the following XML file:

        <?xml version="1.0" encoding="ISO-8859-1" ?>
        <MyCarShop>
          <Car gender="Boy">
            <Door>Lamborghini</Door>
            <Key>Skull</Key>
          </Car>
          <Car gender="Girl">
            <Door>Normal</Door>
            <Key>Princess</Key>
          </Car>
        </MyCarShop>

    I want to perform a transformation so the XML looks like this:

        <?xml version="1.0" encoding="ISO-8859-1" ?>
        <MyCarShop>
          <Car gender="Boy">
            <Door color="blue">Lamborghini</Door>
            <Key color="blue">Skull</Key>
          </Car>
          <Car gender="Girl">
            <Door color="red">Normal</Door>
            <Key color="red">Princess</Key>
          </Car>
        </MyCarShop>

    So I want to add a color attribute to each subelement of Car depending on the gender information. I came up with this XSLT, but it doesn't work:

        <?xml version="1.0" encoding="utf-8"?>
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:msxsl="urn:schemas-microsoft-com:xslt"
            exclude-result-prefixes="msxsl">
          <xsl:output method="xml" indent="yes"/>

          <!--<xsl:template match="@* | node()">
            <xsl:copy>
              <xsl:apply-templates select="@* | node()"/>
            </xsl:copy>
          </xsl:template>-->

          <xsl:template match="/">
            <xsl:element name="MyCarShop">
              <xsl:attribute name="version">1.0</xsl:attribute>
              <xsl:apply-templates/>
            </xsl:element>
          </xsl:template>

          <xsl:template match="Car">
            <xsl:element name="Car">
              <xsl:apply-templates/>
            </xsl:element>
          </xsl:template>

          <xsl:template match="Door">
            <xsl:element name="Door">
              <xsl:attribute name="ViewSideIndicator">
                <xsl:choose>
                  <xsl:when test="gender = 'Boy' ">Front</xsl:when>
                  <xsl:when test="gender = 'Girl ">Front</xsl:when>
                </xsl:choose>
              </xsl:attribute>
            </xsl:element>
          </xsl:template>

          <xsl:template match="Key">
            <xsl:element name="Key">
              <xsl:apply-templates/>
            </xsl:element>
          </xsl:template>
        </xsl:stylesheet>

    Does anybody know what might be wrong? Thanks again!

