Search Results

Search found 295 results on 12 pages for 'ang mo'.


  • Paypal "Subscribe" button: Is it possible to let the subscriber set the amount?

    - by Šime Vidas
    I'm setting up a recurring payment option on my website and would like to offer two options. Option 1 (for individuals): a fixed $6/mo subscription. Option 2 (for organizations): a subscription where the amount is set by the subscriber. PayPal's "Subscribe" button does not seem to allow the second option: when I leave the "Amount" field of the second option empty, I get an error. So, is this not possible? Do all subscription options require fixed amounts?
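
    For reference, PayPal's classic Subscribe button is an HTML form posted to PayPal, and the recurring amount travels in the a3 field, which is why a button with a blank Amount is rejected. A minimal sketch of the fixed $6/mo option built from PHP (the seller address and item name are placeholders, not taken from the question):

        <?php
        // Sketch of a classic PayPal Website Payments Standard subscription button.
        // a3 carries the recurring amount (required), p3/t3 = every 1 month, src=1 = recurring.
        $fields = [
            'cmd'           => '_xclick-subscriptions',
            'business'      => 'seller@example.com',   // placeholder seller address
            'item_name'     => 'Individual plan',      // placeholder item name
            'currency_code' => 'USD',
            'a3'            => '6.00',                 // fixed $6/mo option
            'p3'            => '1',
            't3'            => 'M',
            'src'           => '1',
        ];

        echo '<form action="https://www.paypal.com/cgi-bin/webscr" method="post">';
        foreach ($fields as $name => $value) {
            printf('<input type="hidden" name="%s" value="%s">', $name, htmlspecialchars($value));
        }
        echo '<input type="submit" value="Subscribe">';
        echo '</form>';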

    Read the article

  • Flash Website Design in Online Business

    Since Adobe acquired Macromedia, the use of Flash in websites has increased significantly. Flash presents information in a more engaging manner, enhancing the visual value of a website. Non-stop mo... [Author: Alan Smith - Web Design and Development - May 27, 2010]

    Read the article

  • Fancy a laugh

    - by simonsabin
    I'm growing my first ever moustache for Movember: http://uk.movember.com/ You can see my mo space and pictures of the thing growing on my lip here: http://uk.movember.com/mospace/6154809 If you would like to support me then please do make a donation and make this worth it: https://www.movember.com/uk/donate/payment/member_id/6154809/ We're not even halfway through the month and I'm not sure how I'll get to the end. Please contribute to keep me motivated not to slip with the razor. If anyone knows...(read more)

    Read the article

  • No Properties path set - looking in classpath

    - by Will
    For whatever reason my project has decided it cannot find my transaction.properties file. It is located in the : src/main/resource However it looks in looks in target/classes/ The file also resides yet throws the errors(see below) These all seem to stem from the whole in the init of code I have no acces to which is always fun. Anyone have any idea how to get past the whole: Using init file: /target/classes/transactions.properties com.atomikos.icatch.SysException: Error in init: Error during checkpointing at com.atomikos.icatch.imp.TransactionServiceImp.init(TransactionServiceImp.java:728) EDIT: The errors are mainly pointing at the atomikos path. I'll be honest I'm at a total loss as to what is actually happening under the hood so. It's rather melting. The two files are the same so it shouldn't really matter which file it uses, however I can view the first error line reference. public synchronized void init ( Properties properties ) throws SysException { Stack errors = new Stack (); this.properties_ = properties; try { recoverymanager_.init (); } catch ( LogException le ) { errors.push ( le ); throw new SysException ( "Error in init: " + le.getMessage (), errors ); } recoverCoordinators (); //initialized is now set in recover() //initialized_ = true; shuttingDown_ = false; control_ = new LogControlImp ( this ); // call recovery already, to make sure that the // RMI participants can start inquiring and replay recover (); notifyListeners ( true, false ); } Full error printout: Using init file: /target/classes/transactions.properties com.atomikos.icatch.SysException: Error in init: Error during checkpointing at com.atomikos.icatch.imp.TransactionServiceImp.init(TransactionServiceImp.java:728) at com.atomikos.icatch.imp.BaseTransactionManager.init(BaseTransactionManager.java:217) at com.atomikos.icatch.standalone.StandAloneTransactionManager.init(StandAloneTransactionManager.java:104) at com.atomikos.icatch.standalone.UserTransactionServiceImp.init(UserTransactionServiceImp.java:307) at com.atomikos.icatch.config.UserTransactionServiceImp.init(UserTransactionServiceImp.java:413) at com.atomikos.icatch.jta.UserTransactionManager.checkSetup(UserTransactionManager.java:90) at com.atomikos.icatch.jta.UserTransactionManager.init(UserTransactionManager.java:140) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeCustomInitMethod(AbstractAutowireCapableBeanFactory.java:1544) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1485) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1417) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:291) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at 
org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:288) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:190) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:580) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:895) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:425) at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:139) at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:93) at com.citi.eq.mo.dcc.server.Main.main(Main.java:32) Nested exception is: com.atomikos.persistence.LogException: Error during checkpointing at com.atomikos.persistence.imp.FileLogStream.writeCheckpoint(FileLogStream.java:229) at com.atomikos.persistence.imp.StreamObjectLog.init(StreamObjectLog.java:185) at com.atomikos.persistence.imp.StateRecoveryManagerImp.init(StateRecoveryManagerImp.java:71) at com.atomikos.icatch.imp.TransactionServiceImp.init(TransactionServiceImp.java:725) at com.atomikos.icatch.imp.BaseTransactionManager.init(BaseTransactionManager.java:217) at com.atomikos.icatch.standalone.StandAloneTransactionManager.init(StandAloneTransactionManager.java:104) at com.atomikos.icatch.standalone.UserTransactionServiceImp.init(UserTransactionServiceImp.java:307) at com.atomikos.icatch.config.UserTransactionServiceImp.init(UserTransactionServiceImp.java:413) at com.atomikos.icatch.jta.UserTransactionManager.checkSetup(UserTransactionManager.java:90) at com.atomikos.icatch.jta.UserTransactionManager.init(UserTransactionManager.java:140) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeCustomInitMethod(AbstractAutowireCapableBeanFactory.java:1544) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1485) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1417) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:291) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:288) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:190) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:580) at 
org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:895) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:425) at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:139) at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:93) at com.citi.eq.mo.dcc.server.Main.main(Main.java:32) 08/05/2011 14:55:59.998 [main] [] [INFO ] [o.s.b.f.s.DefaultListableBeanFactory] Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@164dbd5: defining beans [gfiPropertyConfigurerCommon,org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,org.springframework.context.annotation.internalPersistenceAnnotationProcessor,ZtsListenerContainer,ztsMessageListener,dccMessageHandler,dccToRioPublisher,rioJmsTemplate,dccMessageTransformer,ztsFixtoRioTransformer,dateManager,ztsDropCopyConverterContextFactory,ZtsBlockListenerContainer,ztsblockdropCopyConverterContextFactory,ZasListenerContainer,zasMessageListener,zastoRIOMessageTransformer,zasDropCopyConverterContextFactory,ztsToDccJndiTemplate,ztsQcf,ztsBlockToDccJndiTemplate,ztsBlockQcf,zasToDccJndiTemplate,zasQcf,rioJndiTemplate,rioTcf,rioDestinationResolver,URO.ZTSTRADES.1_Producer,mbeanServer,jmxExporter,rules-execution-server-engine,rio-object,trade-validator-context,trade-validator,validation-rules-helper,javaxTransactionManager,javaxUserTransaction,springPlatformTransactionManager,org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,org.springframework.scheduling.annotation.internalAsyncAnnotationProcessor,org.springframework.scheduling.annotation.internalScheduledAnnotationProcessor]; root of factory hierarchy 08/05/2011 14:56:00.013 [main] [] [INFO ] [o.s.jmx.export.MBeanExporter] Unregistering JMX-exposed beans on shutdown Exception in thread "main" org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'javaxTransactionManager' defined in class path resource [eq-mo-dcc-server-context.xml]: Invocation of init method failed; nested exception is com.atomikos.icatch.SysException: Error in init(): Error in init: Error during checkpointing at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1420) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:291) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:288) at 
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:190) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:580) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:895) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:425) at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:139) at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:93) at com.citi.eq.mo.dcc.server.Main.main(Main.java:32) Caused by: com.atomikos.icatch.SysException: Error in init(): Error in init: Error during checkpointing at com.atomikos.icatch.standalone.UserTransactionServiceImp.init(UserTransactionServiceImp.java:374) at com.atomikos.icatch.config.UserTransactionServiceImp.init(UserTransactionServiceImp.java:413) at com.atomikos.icatch.jta.UserTransactionManager.checkSetup(UserTransactionManager.java:90) at com.atomikos.icatch.jta.UserTransactionManager.init(UserTransactionManager.java:140) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeCustomInitMethod(AbstractAutowireCapableBeanFactory.java:1544) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1485) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1417) ... 12 more Caused by: com.atomikos.icatch.SysException: Error in init: Error during checkpointing at com.atomikos.icatch.imp.TransactionServiceImp.init(TransactionServiceImp.java:728) at com.atomikos.icatch.imp.BaseTransactionManager.init(BaseTransactionManager.java:217) at com.atomikos.icatch.standalone.StandAloneTransactionManager.init(StandAloneTransactionManager.java:104) at com.atomikos.icatch.standalone.UserTransactionServiceImp.init(UserTransactionServiceImp.java:307) ... 22 more

    Read the article

  • Problem Solving: Algorithm Required Urgently, Please Help

    - by user616417
    Problem Solving: I've been working on something since last week. I am stuck at a point, where I want to find the minimum number of airplanes required to carry out a flight schedule given below. Plz, try out the brainstorming, i need the algorithm really badly, i'm also short of time. Thank u in advance. The Schedule---- Flight #,From,To,Departure,Arrival,Days,Via 6E 204,Agartala,Delhi,10:15:00,13:55:00,Daily,Kolkata 6E 360,Agartala,Imphal,13:50:00,14:35:00,Mo Th Sa, 6E 204,Agartala,Kolkata,10:15:00,11:00:00,Daily, 6E 360,Agartala,Kolkata,13:50:00,16:15:00,Mo Th Sa,Imphal 6E 362,Agartala,Kolkata,15:25:00,16:15:00,Tu We Fr Su, 6E 153,Ahmedabad,Bangalore,17:00:00,19:00:00,Daily, 6E 212,Ahmedabad,Chennai,9:00:00,12:55:00,Daily,Mumbai 6E 154,Ahmedabad,Delhi,12:30:00,14:00:00,Daily, 6E 211,Ahmedabad,Jaipur,19:10:00,20:20:00,Daily, 6E 410,Ahmedabad,Kolkata,15:00:00,17:30:00,Daily, 6E 212,Ahmedabad,Mumbai,9:00:00,10:10:00,Daily, 6E 409,Ahmedabad,Pune,14:25:00,15:40:00,Ex Sat, 6E 154,Bangalore,Ahmedabad,10:00:00,12:00:00,Daily, 6E 277,Bangalore,Chennai,15:35:00,16:25:00,Daily, 6E 132,Bangalore,Delhi,6:00:00,8:25:00,Daily, 6E 102,Bangalore,Delhi,9:50:00,13:45:00,Ex Sat,Pune 6E 154,Bangalore,Delhi,10:00:00,14:00:00,Daily,Ahmedabad 6E 104,Bangalore,Delhi,11:30:00,14:10:00,Sat, 6E 122,Bangalore,Delhi,17:20:00,20:00:00,Daily, 6E 108,Bangalore,Delhi,19:20:00,23:10:00,Sat,Pune 6E 106,Bangalore,Delhi,19:30:00,22:00:00,Ex Sat, 6E 275,Bangalore,Goa,12:15:00,13:15:00,Daily, 6E 351,Bangalore,Hyderabad,8:25:00,9:25:00,Daily, 6E 152,Bangalore,Hyderabad,19:10:00,20:10:00,Ex Sat, 6E 152,Bangalore,Hyderabad,19:30:00,20:35:00,Sat, 6E 152,Bangalore,Jaipur,19:10:00,22:30:00,Ex Sat,Hyderabad 6E 152,Bangalore,Jaipur,19:30:00,22:30:00,Sat,Hyderabad 6E 351,Bangalore,Kolkata,8:25:00,11:55:00,Daily,Hyderabad 6E 277,Bangalore,Kolkata,15:35:00,19:15:00,Daily,Chennai 6E 402,Bangalore,Mumbai,6:05:00,7:45:00,Daily, 6E 275,Bangalore,Mumbai,12:15:00,14:45:00,Daily,Goa 6E 414,Bangalore,Mumbai,12:45:00,14:20:00,Daily, 6E 412,Bangalore,Mumbai,21:20:00,23:20:00,Daily, 6E 102,Bangalore,Pune,9:50:00,11:10:00,Ex Sat, 6E 108,Bangalore,Pune,19:20:00,20:40:00,Sat, 6E 258,Bhubaneshwar,Delhi,18:55:00,20:55:00,Daily, 6E 257,Bhubaneshwar,Hyderabad,10:40:00,12:05:00,Daily, 6E 257,Bhubaneshwar,Mumbai,10:40:00,13:50:00,Daily,Hyderabad 6E 211,Chennai,Ahmedabad,15:10:00,18:40:00,Daily,Mumbai 6E 275,Chennai,Bangalore,10:50:00,11:40:00,Daily, 6E 302,Chennai,Delhi,11:35:00,15:20:00,Daily,Hyderabad 6E 282,Chennai,Delhi,19:45:00,22:30:00,Daily, 6E 275,Chennai,Goa,10:50:00,13:15:00,Daily,Bangalore 6E 302,Chennai,Hyderabad,11:35:00,12:40:00,Daily, 6E 211,Chennai,Jaipur,15:10:00,20:20:00,Daily,Mumbai/Ahmedabad 6E 523,Chennai,Kolkata,8:20:00,10:30:00,Daily, 6E 277,Chennai,Kolkata,16:55:00,19:15:00,Daily, 6E 211,Chennai,Mumbai,15:10:00,16:50:00,Daily, 6E 524,Chennai,Pune,21:15:00,23:00:00,Daily, 6E 273,Delhi,Agartala,6:15:00,9:45:00,Daily,Kolkata 6E 153,Delhi,Ahmedabad,14:45:00,16:30:00,Daily, 6E 101,Delhi,Bangalore,6:30:00,9:10:00,Ex Sat, 6E 103,Delhi,Bangalore,6:45:00,10:40:00,Sat,Pune 6E 121,Delhi,Bangalore,9:30:00,12:10:00,Daily, 6E 105,Delhi,Bangalore,14:20:00,18:30:00,Ex Sat,Pune 6E 153,Delhi,Bangalore,14:45:00,19:00:00,Daily,Ahmedabad 6E 107,Delhi,Bangalore,15:55:00,18:40:00,Sat, 6E 131,Delhi,Bangalore,20:45:00,23:15:00,Daily, 6E 257,Delhi,Bhubaneshwar,8:10:00,10:10:00,Daily, 6E 301,Delhi,Chennai,7:00:00,11:05:00,Daily,Hyderabad 6E 283,Delhi,Chennai,16:30:00,19:05:00,Daily, 6E 181,Delhi,Goa,9:15:00,13:35:00,Daily,Mumbai 6E 
333,Delhi,Goa,11:45:00,14:15:00,Daily, 6E 201,Delhi,Guwahati,5:30:00,7:50:00,Daily, 6E 301,Delhi,Hyderabad,7:00:00,9:00:00,Daily, 6E 257,Delhi,Hyderabad,8:10:00,12:05:00,Daily,Bhubaneshwar 6E 305,Delhi,Hyderabad,14:00:00,15:55:00,Daily, 6E 307,Delhi,Hyderabad,21:00:00,22:55:00,Daily, 6E 201,Delhi,Imphal,5:30:00,9:10:00,Daily,Guwahati 6E 305,Delhi,Kochi,14:00:00,18:25:00,Daily,Hyderabad 6E 273,Delhi,Kolkata,6:15:00,8:20:00,Daily, 6E 203,Delhi,Kolkata,15:00:00,17:05:00,Daily, 6E 209,Delhi,Kolkata,18:30:00,20:45:00,Daily, 6E 183,Delhi,Mumbai,6:45:00,8:35:00,Daily, 6E 181,Delhi,Mumbai,9:15:00,11:35:00,Daily, 6E 481,Delhi,Mumbai,10:50:00,13:50:00,Daily,Vadodara 6E 189,Delhi,Mumbai,14:45:00,16:50:00,Daily, 6E 187,Delhi,Mumbai,17:50:00,19:50:00,Daily, 6E 185,Delhi,Mumbai,20:15:00,22:20:00,Daily, 6E 135,Delhi,Nagpur,8:55:00,10:40:00,Ex Sat, 6E 103,Delhi,Pune,6:45:00,8:45:00,Sat, 6E 135,Delhi,Pune,8:55:00,12:30:00,Ex Sat,Nagpur 6E 105,Delhi,Pune,14:20:00,16:30:00,Ex Sat, 6E 481,Delhi,Vadodara,10:50:00,12:20:00,Daily, 6E 277,Goa,Bangalore,14:05:00,15:00:00,Daily, 6E 277,Goa,Chennai,14:05:00,16:25:00,Daily,Bangalore 6E 334,Goa,Delhi,14:45:00,17:10:00,Daily, 6E 277,Goa,Kolkata,14:05:00,19:15:00,Daily,Bangalore/Chennai 6E 275,Goa,Mumbai,13:45:00,14:45:00,Daily, 6E 202,Guwahati,Delhi,11:00:00,13:25:00,Daily, 6E 201,Guwahati,Imphal,8:25:00,9:10:00,Daily, 6E 208,Guwahati,Jaipur,12:40:00,16:55:00,Daily,Kolkata 6E 208,Guwahati,Kolkata,12:40:00,14:00:00,Daily, 6E 322,Guwahati,Kolkata,15:30:00,16:50:00,Daily, 6E 322,Guwahati,Mumbai,15:30:00,20:20:00,Daily,Kolkata 6E 151,Hyderabad,Bangalore,8:20:00,9:20:00,Daily, 6E 352,Hyderabad,Bangalore,19:40:00,20:40:00,Daily, 6E 258,Hyderabad,Bhubaneshwar,16:40:00,18:20:00,Daily, 6E 301,Hyderabad,Chennai,9:50:00,11:05:00,Daily, 6E 308,Hyderabad,Delhi,6:10:00,8:00:00,Daily, 6E 302,Hyderabad,Delhi,13:10:00,15:20:00,Daily, 6E 258,Hyderabad,Delhi,16:40:00,20:55:00,Daily,Bhubaneshwar 6E 306,Hyderabad,Delhi,21:00:00,23:05:00,Daily, 6E 152,Hyderabad,Jaipur,20:50:00,22:30:00,Ex Sat, 6E 152,Hyderabad,Jaipur,21:10:00,22:30:00,Sat, 6E 305,Hyderabad,Kochi,16:45:00,18:25:00,Daily, 6E 351,Hyderabad,Kolkata,9:55:00,11:55:00,Daily, 6E 257,Hyderabad,Mumbai,12:35:00,13:50:00,Daily, 6E 362,Imphal,Agartala,14:15:00,14:55:00,Tu We Fr Su, 6E 202,Imphal,Delhi,9:40:00,13:25:00,Daily,Guwahati 6E 202,Imphal,Guwahati,9:40:00,10:25:00,Daily, 6E 362,Imphal,Kolkata,14:15:00,16:15:00,Tu We Fr Su,Agartala 6E 360,Imphal,Kolkata,15:05:00,16:15:00,Mo Th Sa, 6E 212,Jaipur,Ahmedabad,7:30:00,8:35:00,Daily, 6E 151,Jaipur,Bangalore,6:00:00,9:20:00,Daily,Hyderabad 6E 212,Jaipur,Chennai,7:30:00,12:55:00,Daily,Mumbai/Ahmedabad 6E 207,Jaipur,Guwahati,8:20:00,12:10:00,Daily,Kolkata 6E 151,Jaipur,Hyderabad,6:00:00,7:40:00,Daily, 6E 207,Jaipur,Kolkata,8:20:00,10:10:00,Daily, 6E 323,Jaipur,Kolkata,17:35:00,23:00:00,Daily,Mumbai 6E 212,Jaipur,Mumbai,7:30:00,10:10:00,Daily,Ahmedabad 6E 323,Jaipur,Mumbai,17:35:00,19:15:00,Daily, 6E 306,Kochi,Delhi,19:00:00,23:05:00,Daily,Hyderabad 6E 306,Kochi,Hyderabad,19:00:00,20:30:00,Daily, 6E 273,Kolkata,Agartala,8:50:00,9:45:00,Daily, 6E 360,Kolkata,Agartala,12:30:00,13:20:00,Mo Th Sa, 6E 362,Kolkata,Agartala,12:30:00,14:55:00,TuWeFrSu,Imphal 6E 409,Kolkata,Ahmedabad,11:10:00,13:50:00,Daily, 6E 275,Kolkata,Bangalore,7:30:00,11:40:00,Daily,Chennai 6E 352,Kolkata,Bangalore,16:50:00,20:40:00,Daily,Hyderabad 6E 275,Kolkata,Chennai,7:30:00,9:50:00,Daily, 6E 524,Kolkata,Chennai,18:15:00,20:25:00,Daily, 6E 210,Kolkata,Delhi,7:45:00,10:05:00,Daily, 6E 
204,Kolkata,Delhi,11:40:00,13:55:00,Daily, 6E 274,Kolkata,Delhi,19:45:00,22:10:00,Daily, 6E 275,Kolkata,Goa,7:30:00,13:15:00,Daily,Chennai/Bangalore 6E 207,Kolkata,Guwahati,10:50:00,12:10:00,Daily, 6E 321,Kolkata,Guwahati,13:00:00,14:20:00,Daily, 6E 352,Kolkata,Hyderabad,16:50:00,19:00:00,Daily, 6E 362,Kolkata,Imphal,12:30:00,13:45:00,Tu We Fr Su, 6E 360,Kolkata,Imphal,12:30:00,14:35:00,MoThSa,Agartala 6E 208,Kolkata,Jaipur,14:35:00,16:55:00,Daily, 6E 320,Kolkata,Mumbai,6:00:00,8:30:00,Daily, 6E 322,Kolkata,Mumbai,17:35:00,20:20:00,Daily, 6E 404,Kolkata,Mumbai,18:35:00,21:55:00,Daily,Nagpur 6E 404,Kolkata,Nagpur,18:35:00,20:05:00,Daily, 6E 409,Kolkata,Pune,11:10:00,15:40:00,Ex Sat,Ahmedabad 6E 524,Kolkata,Pune,18:15:00,23:00:00,Daily,Chennai 6E 211,Mumbai,Ahmedabad,17:40:00,18:40:00,Daily, 6E 411,Mumbai,Bangalore,6:20:00,7:50:00,Daily, 6E 413,Mumbai,Bangalore,15:00:00,16:40:00,Daily, 6E 415,Mumbai,Bangalore,21:05:00,22:40:00,Daily, 6E 258,Mumbai,Bhubaneshwar,14:30:00,18:20:00,Daily,Hyderabad 6E 212,Mumbai,Chennai,11:00:00,12:55:00,Daily, 6E 184,Mumbai,Delhi,6:15:00,8:15:00,Daily, 6E 180,Mumbai,Delhi,8:25:00,10:35:00,Daily, 6E 482,Mumbai,Delhi,9:25:00,12:35:00,Daily,Vadodara 6E 188,Mumbai,Delhi,14:25:00,16:35:00,Daily, 6E 186,Mumbai,Delhi,17:50:00,19:55:00,Daily, 6E 182,Mumbai,Delhi,21:15:00,23:20:00,Daily, 6E 181,Mumbai,Goa,12:35:00,13:35:00,Daily, 6E 321,Mumbai,Guwahati,9:20:00,14:20:00,Daily,Kolkata 6E 258,Mumbai,Hyderabad,14:30:00,16:00:00,Daily, 6E 207,Mumbai,Jaipur,5:55:00,7:40:00,Daily, 6E 211,Mumbai,Jaipur,17:40:00,20:20:00,Daily,Ahmedabad 6E 207,Mumbai,Kolkata,5:55:00,10:10:00,Daily,Jaipur 6E 321,Mumbai,Kolkata,9:20:00,12:00:00,Daily, 6E 403,Mumbai,Kolkata,15:35:00,18:50:00,Daily,Nagpur 6E 323,Mumbai,Kolkata,20:05:00,23:00:00,Daily, 6E 403,Mumbai,Nagpur,15:35:00,16:50:00,Daily, 6E 482,Mumbai,Vadodara,9:25:00,10:25:00,Daily, 6E 136,Nagpur,Delhi,18:10:00,19:40:00,Ex Sat, 6E 403,Nagpur,Kolkata,17:20:00,18:50:00,Daily, 6E 404,Nagpur,Mumbai,20:35:00,21:55:00,Daily, 6E 135,Nagpur,Pune,11:20:00,12:30:00,Ex Sat, 6E 410,Pune,Ahmedabad,13:10:00,14:30:00,Ex Sat, 6E 103,Pune,Bangalore,9:15:00,10:40:00,Sat, 6E 105,Pune,Bangalore,17:00:00,18:30:00,Ex Sat, 6E 523,Pune,Chennai,5:55:00,7:40:00,Daily, 6E 102,Pune,Delhi,11:45:00,13:45:00,Ex Sat, 6E 136,Pune,Delhi,16:15:00,19:40:00,Ex Sat,Nagpur 6E 108,Pune,Delhi,21:10:00,23:10:00,Sat, 6E 523,Pune,Kolkata,5:55:00,10:30:00,Daily,Chennai 6E 410,Pune,Kolkata,13:10:00,17:30:00,Ex Sat,Ahmedabad 6E 136,Pune,Nagpur,16:15:00,17:40:00,Ex Sat, 6E 482,Vadodara,Delhi,10:55:00,12:35:00,Daily, 6E 481,Vadodara,Mumbai,12:50:00,13:50:00,Daily,
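
    A quick lower bound on the fleet size, assuming for simplicity that every flight operates daily and ignoring aircraft repositioning, is the peak number of flights airborne at the same moment; the exact answer additionally needs a step that chains together legs a single aircraft can fly back to back. A minimal PHP sketch of the lower-bound count (the sample array is a hypothetical hand-parsed subset of the schedule above):

        <?php
        // Lower bound on aircraft required: the peak number of simultaneously airborne
        // flights. Each flight is [departure, arrival] in minutes after midnight.
        function minPlanesLowerBound(array $flights): int
        {
            $events = [];
            foreach ($flights as [$dep, $arr]) {
                $events[] = [$dep, +1];  // take-off: one more aircraft in the air
                $events[] = [$arr, -1];  // landing: one aircraft freed
            }
            // Sort by time; on ties, count landings before take-offs.
            usort($events, function ($a, $b) { return $a <=> $b; });

            $inAir = 0;
            $peak  = 0;
            foreach ($events as [, $delta]) {
                $inAir += $delta;
                $peak   = max($peak, $inAir);
            }
            return $peak;
        }

        // Hand-parsed subset: 6E 204 (10:15-13:55), 6E 360 (13:50-14:35), 6E 204 leg (10:15-11:00).
        $sample = [[615, 835], [830, 875], [615, 660]];
        echo minPlanesLowerBound($sample);  // 2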

    Read the article

  • Entry lvl. COBOL Control Breaks

    - by Kyle Benzle
    I'm working in COBOL with a double control break to print a hospital record. The input is one record per line, with, hospital info first, then patient info. There are multiple records per hospital, and multiple services per patient. The idea is, using a double control break, to print one hospital name, then all the patients from that hospital. Then print the patient name just once for all services, like the below. I'm having trouble with my output, and am hoping someone can help me get it in order. I am using AccuCobol to compile experts-exchange does not allow .cob and .dat so the extentions were changed to .txt The files are: the .cob lab5b.cob the input / output: lab5bin.dat, lab5bout.dat The assignment: http://www.cse.ohio-state.edu/~sgomori/314/lab5.html Hospital Number: 001 Hospital Name: Mount Carmel 00001 Griese, Brian Ear Infection 08/24/1999 300.00 Diaper Rash 09/05/1999 25.00 Frontal Labotomy 09/25/1999 25,000.00 Rear Labotomy 09/26/1999 25,000.00 Central Labotomy 09/28/1999 24,999.99 The total amount owed for this patient is: $.......... (End of Hospital) The total amount owed for this hospital is: $......... enter code here IDENTIFICATION DIVISION. PROGRAM-ID. LAB5B. ENVIRONMENT DIVISION. INPUT-OUTPUT SECTION. FILE-CONTROL. SELECT FILE-IN ASSIGN TO 'lab5bin.dat' ORGANIZATION IS LINE SEQUENTIAL. SELECT FILE-OUT ASSIGN TO 'lab5bout.dat' ORGANIZATION IS LINE SEQUENTIAL. DATA DIVISION. FILE SECTION. FD FILE-IN. 01 HOSPITAL-RECORD-IN. 05 HOSPITAL-NUMBER-IN PIC 999. 05 HOSPITAL-NAME-IN PIC X(20). 05 PATIENT-NUMBER-IN PIC 99999. 05 PATIENT-NAME-IN PIC X(20). 05 SERVICE-IN PIC X(30). 05 DATE-IN PIC 9(8). 05 OWED-IN PIC 9(7)V99. FD FILE-OUT. 01 REPORT-REC-OUT PIC X(100). WORKING-STORAGE SECTION. 01 WS-WORK-AREAS. 05 WS-HOLD-HOSPITAL-NUM PIC 999 VALUE ZEROS. 05 WS-HOLD-PATIENT-NUM PIC 99999 VALUE ZEROS. 05 ARE-THERE-MORE-RECORDS PIC XXX VALUE 'YES'. 88 MORE-RECORDS VALUE 'YES'. 88 NO-MORE-RECORDS VALUE 'NO '. 05 FIRST-RECORD PIC XXX VALUE 'YES'. 05 WS-PATIENT-TOTAL PIC 9(9)V99 VALUE ZEROS. 05 WS-HOSPITAL-TOTAL PIC 9(9)V99 VALUE ZEROS. 05 WS-PAGE-CTR PIC 99 VALUE ZEROS. 01 WS-DATE. 05 WS-YR PIC 9999. 05 WS-MO PIC 99. 05 WS-DAY PIC 99. 01 HL-HEADING1. 05 PIC X(49) VALUE SPACES. 05 PIC X(14) VALUE 'OHIO INSURANCE'. 05 PIC X(7) VALUE SPACES. 05 HL-PAGE PIC Z9. 05 PIC X(14) VALUE SPACES. 05 HL-DATE. 10 HL-MO PIC 99. 10 PIC X VALUE '/'. 10 HL-DAY PIC 99. 10 PIC X VALUE '/'. 10 HL-YR PIC X VALUE '/'. 01 HL-HEADING2. 05 PIC XXXXXXXXXX VALUE 'HOSPITAL: '. 05 HL-HOSPITAL PIC 999. 01 HL-HEADING3. 05 PIC X(7) VALUE "Patient". 05 PIC X(3) VALUE SPACES. 05 PIC X(7) VALUE "Patient". 05 PIC X(39) VALUE SPACES. 05 PIC X(7) VALUE "Date of". 05 PIC X(3) VALUE SPACES. 05 PIC X(6) VALUE "Amount". 01 HL-HEADING4. 05 PIC X(6) VALUE "Number". 05 PIC X(4) VALUE SPACES. 05 PIC X(4) VALUE "Name". 05 PIC X(18) VALUE SPACES. 05 PIC X(10) VALUE "Service". 05 PIC X(14) VALUE SPACES. 05 PIC X(8) VALUE "Service". 05 PIC X(2) VALUE SPACES. 05 PIC X(5) VALUE "Owed". 01 DL-PATIENT-LINE. 05 PIC X(28) VALUE SPACES. 05 DL-PATIENT-NUMBER PIC XXXXX. 05 PIC X(21) VALUE SPACES. 05 DL-PATIENT-TOTAL PIC $$$,$$$,$$9.99. 01 DL-HOSPITAL-LINE. 05 PIC X(47) VALUE SPACES. 05 PIC X(16) VALUE 'HOSPITAL TOTAL: '. 05 DL-HOSPITAL-TOTAL PIC $$$,$$$,$$9.99. PROCEDURE DIVISION. 100-MAIN-MODULE. PERFORM 600-INITIALIZATION-RTN PERFORM UNTIL NO-MORE-RECORDS READ FILE-IN AT END MOVE 'NO ' TO ARE-THERE-MORE-RECORDS NOT AT END PERFORM 200-DETAIL-RTN END-READ END-PERFORM PERFORM 400-HOSPITAL-BREAK PERFORM 700-END-OF-JOB-RTN STOP RUN. 
200-DETAIL-RTN. EVALUATE TRUE WHEN FIRST-RECORD = 'YES' MOVE PATIENT-NUMBER-IN TO WS-HOLD-PATIENT-NUM MOVE HOSPITAL-NUMBER-IN TO WS-HOLD-HOSPITAL-NUM PERFORM 500-HEADING-RTN MOVE 'NO ' TO FIRST-RECORD WHEN HOSPITAL-NUMBER-IN NOT = WS-HOLD-HOSPITAL-NUM PERFORM 400-HOSPITAL-BREAK WHEN PATIENT-NUMBER-IN NOT = WS-HOLD-PATIENT-NUM PERFORM 300-PATIENT-BREAK END-EVALUATE ADD OWED-IN TO WS-PATIENT-TOTAL. 300-PATIENT-BREAK. MOVE WS-PATIENT-TOTAL TO DL-PATIENT-TOTAL MOVE WS-HOLD-PATIENT-NUM TO DL-PATIENT-NUMBER WRITE REPORT-REC-OUT FROM DL-PATIENT-LINE AFTER ADVANCING 2 LINES ADD WS-PATIENT-TOTAL TO WS-HOSPITAL-TOTAL IF MORE-RECORDS MOVE ZEROS TO WS-PATIENT-TOTAL MOVE PATIENT-NUMBER-IN TO WS-HOLD-PATIENT-NUM END-IF. 400-HOSPITAL-BREAK. PERFORM 300-PATIENT-BREAK MOVE WS-HOSPITAL-TOTAL TO DL-HOSPITAL-TOTAL WRITE REPORT-REC-OUT FROM DL-HOSPITAL-LINE AFTER ADVANCING 2 LINES IF MORE-RECORDS MOVE ZEROS TO WS-HOSPITAL-TOTAL MOVE HOSPITAL-NUMBER-IN TO WS-HOLD-HOSPITAL-NUM PERFORM 500-HEADING-RTN END-IF. 500-HEADING-RTN. ADD 1 TO WS-PAGE-CTR MOVE WS-PAGE-CTR TO HL-PAGE MOVE WS-HOLD-HOSPITAL-NUM TO HL-HOSPITAL WRITE REPORT-REC-OUT FROM HL-HEADING1 AFTER ADVANCING PAGE WRITE REPORT-REC-OUT FROM HL-HEADING2 AFTER ADVANCING 2 LINES. WRITE REPORT-REC-OUT FROM HL-HEADING3 AFTER ADVANCING 2 LINES. 600-INITIALIZATION-RTN. OPEN INPUT FILE-IN OUTPUT FILE-OUT *159 ACCEPT WS-DATE FROM DATE YYYYMMDD MOVE WS-YR TO HL-YR MOVE WS-MO TO HL-MO MOVE WS-DAY TO HL-DAY. 700-END-OF-JOB-RTN. CLOSE FILE-IN FILE-OUT.

    Read the article

  • Very simple shopping cart, remove button

    - by Kynian
    Im writing sales software that will be walking through a set of pages and on certain pages there are items listed to sell and when you click buy it basically just passes a hidden variable to the next page to be set as a session variable, and then when you get to the end it call gets reported to a database. However my employer wanted me to include a shopping cart, and this shopping cart should display the item name, sku, and price of whatever you're buying, as well as a remove button so the person doing the script doesnt need to go back through the entire thing to remove one item. At the moment I have the cart set to display everything, which was fairly simple. but I cant figure out how to get the remove button to work. Here is the code for the shopping cart: $total = 0; //TEST CODE: $_SESSION['itemname-addon'] = "Test addon"; $_SESSION ['price-addon'] = 10.00; $_SESSION ['sku-addon'] = "1234h"; $_SESSION['itemname-addon1'] = "Test addon1"; $_SESSION ['price-addon1'] = 99.90; $_SESSION ['sku-addon1'] = "1111"; $_SESSION['itemname-addon2'] = "Test addon2"; $_SESSION ['price-addon2'] = 19.10; $_SESSION ['sku-addon2'] = "123"; //end test code $items = Array ( "0"=> Array ( "name" => $_SESSION['itemname-mo'], "price" => $_SESSION ['price-mo'], "sku" => $_SESSION ['sku-mo'] ), "1" => Array ( "name" => $_SESSION['itemname-addon'], "price" => $_SESSION ['price-addon'], "sku" => $_SESSION ['sku-addon'] ), "2" => Array ( "name" => $_SESSION['itemname-addon1'], "price" => $_SESSION ['price-addon1'], "sku" => $_SESSION ['sku-addon1'] ), "3" => Array ( "name" => $_SESSION['itemname-addon2'], "price" => $_SESSION ['price-addon2'], "sku" => $_SESSION ['sku-addon2'] ) ); $a_length = count($items); for($x = 0; $x<$a_length; $x++){ $total +=$items[$x]['price']; } $formattedtotal = number_format($total,2,'.',''); for($i = 0; $i < $a_length; $i++){ $name = $items[$i]['name']; $price = $items[$i]['price']; $sku = $items[$i]['sku']; displaycart($name,$price,$sku); } echo "<br /> <b>Sub Total:</b> $$formattedtotal"; function displaycart($name,$price,$sku){ if($name != null || $price != null || $sku != null){ if ($name == "no sale" || $price == "no sale" || $sku == "no sale"){ echo ""; } else{ $formattedprice = number_format($price,2,'.',''); echo "$name: $$formattedprice ($sku)"; echo "<form action=\"\" method=\"post\">"; echo "<button type=\"submit\" />Remove</button><br />"; echo "</form>"; } } } So at this point Im not sure where to go from here for the remove button. Any suggestions would be appreciated.
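
    One common approach (a sketch, not the original code) is to make each Remove button post the suffix of the session keys for its item, and to unset those keys before the cart is rebuilt; the key names follow the itemname-/price-/sku- pattern used above:

        <?php
        // Sketch: handle the Remove button by posting the session-key suffix of the
        // item ('mo', 'addon', 'addon1', ...) and clearing those keys on the next request.
        session_start();

        if (isset($_POST['remove'])) {
            $suffix = $_POST['remove'];                      // e.g. 'addon1'
            foreach (['itemname-', 'price-', 'sku-'] as $prefix) {
                unset($_SESSION[$prefix . $suffix]);
            }
        }

        // Emit the suffix with each button so the handler above knows what to drop.
        function removeButton(string $suffix): string
        {
            $safe = htmlspecialchars($suffix, ENT_QUOTES);
            return '<form action="" method="post">'
                 . '<button type="submit" name="remove" value="' . $safe . '">Remove</button>'
                 . '</form>';
        }

    Since displaycart() above already skips entries whose fields are empty, the removed item then simply drops out of the listing.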

    Read the article

  • Zend_Translate scan translation files

    - by nute
    I've trying to use Zend_Translate from Zend Framework I am using "POEdit" to generate "gettext" translation files. My files are under /www/mysite.com/webapps/includes/locale (this path is in my include path). I have: pictures.en.mo pictures.en.po (I plan on having pictures.es.mo soon) It all works fine if I manually do addTranslation() for each file. However I want to use the automatic file scanning method. I tried both of those: <?php /*Localization*/ require_once 'Zend/Translate.php'; require_once 'Zend/Locale.php'; define('LOCALE','/www/mysite.com/webapps/includes/locale'); if(!empty($_GET['locale'])){ $locale = new Zend_Locale($_GET['locale']); } else{ $locale = new Zend_Locale(); } $translate = new Zend_Translate('gettext', LOCALE, null, array('scan' => Zend_Translate::LOCALE_FILENAME)); if ( $translate->isAvailable( $locale->getLanguage() ) ){ $translate->setLocale($locale); } else{ $translate->setLocale('en'); } And this: <?php /*Localization*/ require_once 'Zend/Translate.php'; require_once 'Zend/Locale.php'; define('LOCALE','/www/mysite.com/webapps/includes/locale'); if(!empty($_GET['locale'])){ $locale = new Zend_Locale($_GET['locale']); } else{ $locale = new Zend_Locale(); } $translate = new Zend_Translate('gettext', LOCALE); if ( $translate->isAvailable( $locale->getLanguage() ) ){ $translate->setLocale($locale); } else{ $translate->setLocale('en'); } In both cases, I get a Notice: No translation for the language 'en' available. in /www/mysite.com/webapps/includes/Zend/Translate/Adapter.php on line 411 It also worked if I tried to do directory scanning.

    Read the article

  • In .NET, what are the differences between EventLog and ManagementObject for retrieving logs from a remote server?

    - by Mitesh Patel
    I have found out following two ways for getting Application Event log entries from remote server. 1. Using EventLog object string logType = "Application"; EventLog ev = new EventLog(logType,"rspl200"); EventLogEntryCollection evColl = ev.Entries 2. Using ManagementObjectSearcher object ConnectionOptions co = new ConnectionOptions(); co.Username = "testA"; co.Password = "testA"; ManagementScope scope = new ManagementScope(@"\" + "machineName"+ @"\root\cimv2", co); scope.Connect(); SelectQuery query = new SelectQuery(@"select * from Win32_NtLogEvent"); EnumerationOptions opt = new EnumerationOptions(); opt.BlockSize = 1000; using (ManagementObjectSearcher searcher = new ManagementObjectSearcher(scope, query,opt)) { foreach (ManagementObject mo in searcher.Get()) { // write down log entries Console.Writeline(mo["EventCode"]); } } I can easily get remote eventlog using method #1 (Using EventLog object) without any security access denied exception. But using method #2 (Using ManagementObjectSearcher object) i get access denied exception. Actually I want remote event log (only application and also latest log not all application logs) to be displayed in treeview like below - ServerName - Logs + Error + Information + Warning Can anybody help me in this to find out best way from this or any other? Also the main thing is that user who reads remote logs may be in different domain than server. Thanks Mitesh Patel

    Read the article

  • Problem splitting text with PHP

    - by moustafa
    Hi Team, I need to split the below contents by using the mobile number and email id. If the mobile number and email id found the content is splitted into some blocks. I am using the preg_match_all to check the mobile number and email id. For example : The Given Content: WANTED B'ful Non W'kingEdu Fair Girl for MBA 28/175 H'some Fair Mangal Gotra Boy Well estd. B'ness HI Income Status family Email: [email protected] PQM4 h'some Convent Educated B.techIIT, MS/28/5'9" Wkg in US frm repu fmlyseek b'ful,tall g i r l . Mo:09893029129 Em:[email protected] 5' 2" to 5' 5" b'ful QLFD girl for h a n d s o m e 5' 8" BPT (I) MPT G.Medalist (AUS) wkg in Australia Physiotherapist boy from V.respectable fmly of Surat / DeUli. Tel: 09825147614. EmaU: [email protected] The Desired output should be: 1st block: WANTED B'ful Non W'kingEdu Fair Girl for MBA 28/175 H'some Fair Mangal Gotra Boy Well estd. B'ness HI Income Status family Email: [email protected] 2nd block: PQM4 h'some Convent Educated B.techIIT, MS/28/5'9" Wkg in US frm repu fmlyseek b'ful,tall g i r l . Mo:09893029129 Em:[email protected] 3rd block: 5' 2" to 5' 5" b'ful QLFD girl for h a n d s o m e 5' 8" BPT (I) MPT G.Medalist (AUS) wkg in Australia Physiotherapist boy from V.respectable fmly of Surat / DeUli. Tel: 09825147614. EmaU: [email protected] I do no how to split it.I need some logics. When i am using preg_match_all when ever a mobile number found i inserted a new line and split the blocks.But email id is splitted independently.
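
    Since each block in the sample ends with an e-mail address, one way (a sketch, assuming that pattern holds) is to split right after every address with preg_split and PREG_SPLIT_DELIM_CAPTURE, then reattach each address to the text that preceded it:

        <?php
        // Sketch: split the raw text into blocks, assuming every block ends with an
        // e-mail address. Phone numbers then stay inside their blocks automatically.
        $text = file_get_contents('ads.txt');   // hypothetical file holding the raw content

        $emailRe = '/([\w.+-]+@[\w.-]+\.[A-Za-z]{2,})\s*/';
        $parts   = preg_split($emailRe, $text, -1, PREG_SPLIT_DELIM_CAPTURE | PREG_SPLIT_NO_EMPTY);

        $blocks = [];
        for ($i = 0; $i + 1 < count($parts); $i += 2) {
            $blocks[] = trim($parts[$i]) . ' ' . $parts[$i + 1];   // block text + its e-mail
        }

        foreach ($blocks as $n => $block) {
            echo 'Block ' . ($n + 1) . ":\n" . $block . "\n\n";
        }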

    Read the article

  • PHP gettext on Windows

    - by Axsuul
    There's some tutorials out there for gettext (w/ Poedit)... unfortunately, it's mostly for a UNIX environment. And even more unfortunate is that I am running my WAMP server on Windows XP (but I am developing for a UNIX environment) and none of the tutorials can get gettext working properly for me. From the man page (http://us3.php.net/manual/en/book.gettext.php), it appears that it's a different process on a Windows environment. I've tried out some of the solutions in the comments but I still can't get it to work! Please, I've spent many hours on this, hopefully someone can point me in the right direction to get this thing to work! (and I'm sure there are others out there who share my frustration). So far with my setup, I'm only getting output "Hello World!" whereas I should be getting the translated string. Here is my setup/code so far: <?php // test.php if (!defined('LC_MESSAGES')) { define('LC_MESSAGES', 6); } $locale = "deu_DEU"; // apparently the locales are different on a WINDOWS platform putenv("LC_ALL=$locale"); setlocale(LC_ALL, $locale); bindtextdomain("greetings", ".\locale"); textdomain("greetings"); echo _("Hello World"); ?> Folder structure root: C:\Program Files\WampServer 2\www test.php: C:\Program Files\WampServer 2\www\site .po: C:\Program Files\WampServer 2\www\site\locale\deu_DEU\LC_MESSAGES\greetings.po .mo: C:\Program Files\WampServer 2\www\site\locale\deu_DEU\LC_MESSAGES\greetings.mo Please advise! Thanks for your time :)
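
    One variant worth trying (a sketch: an absolute path to the locale folder and an explicit catalog codeset, two common gettext gotchas):

        <?php
        // Sketch: same setup as above, but binding the domain with an absolute path and
        // pinning the .mo catalog's codeset explicitly.
        $locale = 'deu_DEU';
        putenv("LC_ALL=$locale");
        setlocale(LC_ALL, $locale);

        $domain = 'greetings';
        bindtextdomain($domain, __DIR__ . DIRECTORY_SEPARATOR . 'locale');
        bind_textdomain_codeset($domain, 'UTF-8');
        textdomain($domain);

        echo gettext('Hello World');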

    Read the article

  • Why is a CoreData forceFetch required after a delete on the iPad but not the iPhone?

    - by alyoshak
    When the following code is run on the iPhone the count of fetched objects after the delete is one less than before the delete. But on the iPad the count remains the same. This inconsistency was causing a crash on the iPad because elsewhere in the code, soon after the delete, fetchedObjects is called and the calling code, trusting the count, attempts access to the just-deleted object's properties, resulting in a NSObjectInaccessibleException error (see below). A fix has been to use that commented-out call to performFetch, which when executed makes the second call to fetchObjects yield the same result as on the iPhone without it. My question is: Why is the iPad producing different results than the iPhone? This is the second of these differences that I've discovered and posted recently. -(NSError*)deleteObject:(NSManagedObject*)mo; { NSLog(@"\n\nNum objects in store before delete: %i\n\n", [[self.fetchedResultsController fetchedObjects] count]); [self.managedObjectContext deleteObject:mo]; // Save the context. NSError *error = nil; if (![self.managedObjectContext save:&error]) { } // [self.fetchedResultsController performFetch:&error]; // force a fetch NSLog(@"\n\nNum objects in store after delete (and save): %i\n\n", [[self.fetchedResultsController fetchedObjects] count]); return error; } (The full NSObjectInaccessibleException is: "Terminating app due to uncaught exception 'NSObjectInaccessibleException', reason: 'CoreData could not fulfill a fault for '0x1dcf90 <x-coredata://DC02B10D-555A-4AB8-8BC4-F419C4982794/Blueprint/p"

    Read the article

  • typeahead.js remote with subset matching

    - by rebelde
    Instead of returning to the server after each additional letter is typed, I want it to only go to the server once, get all matching words, and filter the downloaded data after that. We are having trouble making this work. We are successfully using "remote" to wait until two letters are typed, but we can't get it to stop going to the server as additional letters are typed. Steps: 1. After two letters are typed, retrieve all matching words that start with those two letters. 2. When a third and additional letters are typed, don't go to the server again, just filter from the previous list that was sent. An example: "mo" is typed in. All 100 words that start with "mo" are returned. (Only 10 are shown.) "mor" - now with a third letter, we don't go back to the server. We just find the 20 words that match from within the previous set of words. Can anybody make this work? In real life (using YUI2), we do this and then go back to the server if somebody types in a space after the word. At that point, we know to retrieve additional words. Thanks!

    Read the article

  • CRM 2011 - Set/Retrieve work hours programmatically

    - by Philip Rich
    I am attempting to retrieve a resources work hours to perform some logic I require. I understand that the CRM scheduling engine is a little clunky around such things, but I assumed that I would be able to find out how the working hours were stored in the DB eventually... So a resource has associated calendars and those calendars have associated calendar rules and inner calendars etc. It is possible to look at the start/end and frequency of aforementioned calendar rules and query their codes to work out whether a resource is 'working' during a given period. However, I have not been able to find the actual working hours, the 9-5 shall we say in any field in the DB. I even tried some SQL profiling while I was creating a new schedule for a resource via the UI, but the results don't show any work hours passing to SQL. For those with the patience the intercepted SQL statement is below:- EXEC Sp_executesql N'update [CalendarRuleBase] set [ModifiedBy]=@ModifiedBy0, [EffectiveIntervalEnd]=@EffectiveIntervalEnd0, [Description]=@Description0, [ModifiedOn]=@ModifiedOn0, [GroupDesignator]=@GroupDesignator0, [IsSelected]=@IsSelected0, [InnerCalendarId]=@InnerCalendarId0, [TimeZoneCode]=@TimeZoneCode0, [CalendarId]=@CalendarId0, [IsVaried]=@IsVaried0, [Rank]=@Rank0, [ModifiedOnBehalfBy]=NULL, [Duration]=@Duration0, [StartTime]=@StartTime0, [Pattern]=@Pattern0 where ([CalendarRuleId] = @CalendarRuleId0)', N'@ModifiedBy0 uniqueidentifier,@EffectiveIntervalEnd0 datetime,@Description0 ntext,@ModifiedOn0 datetime,@GroupDesignator0 ntext,@IsSelected0 bit,@InnerCalendarId0 uniqueidentifier,@TimeZoneCode0 int,@CalendarId0 uniqueidentifier,@IsVaried0 bit,@Rank0 int,@Duration0 int,@StartTime0 datetime,@Pattern0 ntext,@CalendarRuleId0 uniqueidentifier', @ModifiedBy0='EB04662A-5B38-E111-9889-00155D79A113', @EffectiveIntervalEnd0='2012-01-13 00:00:00', @Description0=N'Weekly Single Rule', @ModifiedOn0='2012-03-12 16:02:08', @GroupDesignator0=N'FC5769FC-4DE9-445d-8F4E-6E9869E60857', @IsSelected0=1, @InnerCalendarId0='3C806E79-7A49-4E8D-B97E-5ED26700EB14', @TimeZoneCode0=85, @CalendarId0='E48B1ABF-329F-425F-85DA-3FFCBB77F885', @IsVaried0=0, @Rank0=2, @Duration0=1440, @StartTime0='2000-01-01 00:00:00', @Pattern0=N'FREQ=WEEKLY;INTERVAL=1;BYDAY=SU,MO,TU,WE,TH,FR,SA', @CalendarRuleId0='0A00DFCF-7D0A-4EE3-91B3-DADFCC33781D' The key parts in the statement are the setting of the pattern:- @Pattern0=N'FREQ=WEEKLY;INTERVAL=1;BYDAY=SU,MO,TU,WE,TH,FR,SA' However, as mentioned, no indication of the work hours set. Am I thinking about this incorrectly or is CRM doing something interesting around these work hours? Any thoughts greatly appreciated, thanks.

    Read the article

  • How to do i18n and create a Windows Installer for Haskell programs?

    - by Aufheben
    I'm considering using Haskell to develop for a little commercial project. The program must be internationalized (to Simplified Chinese, to be specific), and my customer requests that it should be delivered in a one-click Windows Installer form. So basically these are the two problems I'm facing now: I18n of Haskell programs: the method described in Internationalization of Haskell programs did work (partially) if I change the command of executing the program from LOCALE=zh_CN.UTF-8 ./Main to LANG=zh_CN.UTF-8 ./Main (I'm working on Ubuntu 10.10), however, the Chinese output is garbled, and I've no idea why is that. Distribution on Windows: I'm used to work under Linux and build & package my Haskell programs using Cabal, but what is the most natural way to create a one-click Windows Installer from a cabalized Haskell package? Is the package bamse the right way to go? ------ Details for the first problem ------ What I did was: $ hgettext -k __ -o messages.pot Main.hs $ msginit --input=messages.pot --locale=zh_CN.UTF-8 (Edit the zh_CN.po file, adding Chinese translation) $ mkdir -p zh_CN/LC_MESSAGES $ msgfmt --output-file=zh_CN/LC_MESSAGES/hello.mo zh_CN.po $ ghc --make Main.hs $ LANG=zh_CN.UTF-8 ./Main And the output was like: This indicates gettext is actually working, but for some reason the generated zh_CN.mo file is broken (my guess). I'm pretty sure my zh_CN.po file is encoded in UTF-8. Plus, aside from using System.IO.putStrLn, I also tried System.IO.UTF8.putStrLn to output the string, which didn't work either.

    Read the article

  • How to separate a date in PHP

    - by user225269
    I want to be able to separate the birthday from the mysql data into day, year and month. Using the 3 textbox in html. How do I separate it? I'm trying to think of what can I do with the code below to show the result that I want: Here's the html form with the php code: $idnum = mysql_real_escape_string($_POST['idnum']); mysql_select_db("school", $con); $result = mysql_query("SELECT * FROM student WHERE IDNO='$idnum'"); $month = mysql_real_escape_string($_POST['mm']); ?> <?php while ( $row = mysql_fetch_array($result) ) { ?> <tr> <td width="30" height="35"><font size="2">Month:</td> <td width="30"><input name="mo" type="text" id="mo" onkeypress="return handleEnter(this, event)" value="<?php echo $month = explode("-",$row['BIRTHDAY']);?>"> As you can see the column is the mysql database is called BIRTHDAY. With this format: YYYY-MM-DD How do I do it. So that the data from the single column will be divided into three parts? Please help thanks,
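
    Since BIRTHDAY comes back from MySQL as YYYY-MM-DD, one way (a sketch reusing the field names above; the yy/dd input names are illustrative) is to explode the string once and echo each piece into its own textbox:

        <?php
        // Sketch: split 'YYYY-MM-DD' once, then reuse the three pieces in the form.
        list($year, $month, $day) = explode('-', $row['BIRTHDAY']);
        ?>
        <td><input name="yy" type="text" value="<?php echo htmlspecialchars($year); ?>"></td>
        <td><input name="mo" type="text" value="<?php echo htmlspecialchars($month); ?>"></td>
        <td><input name="dd" type="text" value="<?php echo htmlspecialchars($day); ?>"></td>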

    Read the article

  • Get the wireless adapter frequency-band mode in Windows 7

    - by Petr
    I have a dual-band 802.11n router operating in both frequency bands with the same SSID. My Windows 7 laptop has a dual-band wireless adapter. Is there a way to find out which frequency band is currently used by the client? Also, is there a way to set the preferred band? Thanks. EDIT1: The router is a Linksys E4200 (V2) and I essentially want to keep the 2.4GHz band in mixed mode so that legacy 802.11g devices can connect, while allocating the 5.0GHz band to 802.11n devices. The laptop's adapter (Intel Centrino Ultimate-N 6300 ANG) can operate in both 802.11n modes. EDIT2: Related topic

    Read the article

  • Restore the boot sector from one hard disc to another

    - by giang.asl.8
    I have Win7 on my old Seagate HDD. Recently I installed a new SSD and set up Win8 on it, so I have a boot menu to choose whether to start Win7 or Win8. Now, when I try to remove the old drive (the Seagate), I can't boot into Windows any more; I just get a blinking underscore on the boot screen, forever and ever. I guess the reason is that the boot sector, or boot table (or something like that), was installed on the old HDD. So can someone show me how to boot into my Win8 without reinstalling the old HDD?

    Read the article

  • Wifi channel interference

    - by artfulrobot
    In my neighbourhood there are: 11 wifi signals on channel 1, 2 on channel 4 (including mine at the mo), 8 on channel 6, and 6 on channel 11. According to the diagram on Wikipedia, mine on channel 4 will suffer interference from both channel 1 and channel 6, so a total of 20 other networks(!). So would I be better off joining channel 11, even though my network would then be in direct competition with the 6 others? I suppose the question is: which is worse, direct interference (from networks on the same channel) from 6 networks, or fringe interference from many more networks?

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use. The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot 'cheat' and effectively gain extra memory by spilling to tempdb pages that reside in memory. Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall.

If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant. The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources. More importantly, there are specific problems at the point where the eight partial results are combined, but I'll cover that in a future post.

CHAR(3999) Performance Summary:
  - 5 seconds elapsed time
  - 243MB memory grant
  - 195MB tempdb usage
  - 192MB estimated sort set
  - 25,043 logical reads
  - Sort Warning

Test 2 – VARCHAR(MAX)

We'll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column:

DECLARE @read BIGINT, @write BIGINT;

SELECT @read = SUM(num_of_bytes_read),
       @write = SUM(num_of_bytes_written)
FROM tempdb.sys.database_files AS DBF
JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
    ON FS.file_id = DBF.file_id
WHERE DBF.type_desc = 'ROWS';

SET STATISTICS IO ON;

SELECT TOP (150)
    TM.id,
    TM.padding
FROM dbo.TestMAX AS TM
ORDER BY NEWID()
OPTION (MAXDOP 1);

SET STATISTICS IO OFF;

SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
       tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
       internal_use_MB =
       (
           SELECT internal_objects_alloc_page_count / 128.0
           FROM sys.dm_db_task_space_usage
           WHERE session_id = @@SPID
       )
FROM tempdb.sys.database_files AS DBF
JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
    ON FS.file_id = DBF.file_id
WHERE DBF.type_desc = 'ROWS';

This time the query takes around 8 seconds to complete (3 seconds longer than Test 1). Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB. The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file. Don't draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case. In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting.

MAX Performance Summary:
  - 8 seconds elapsed time
  - 245MB memory grant
  - 391MB tempdb usage
  - 193MB estimated sort set
  - 25,043 logical reads
  - Sort Warning

Test 3 – TEXT

The same test again, but using the deprecated TEXT data type for the padding column:

DECLARE @read BIGINT, @write BIGINT;

SELECT @read = SUM(num_of_bytes_read),
       @write = SUM(num_of_bytes_written)
FROM tempdb.sys.database_files AS DBF
JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
    ON FS.file_id = DBF.file_id
WHERE DBF.type_desc = 'ROWS';

SET STATISTICS IO ON;

SELECT TOP (150)
    TT.id,
    TT.padding
FROM dbo.TestTEXT AS TT
ORDER BY NEWID()
OPTION (MAXDOP 1, RECOMPILE);

SET STATISTICS IO OFF;

SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
       tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
       internal_use_MB =
       (
           SELECT internal_objects_alloc_page_count / 128.0
           FROM sys.dm_db_task_space_usage
           WHERE session_id = @@SPID
       )
FROM tempdb.sys.database_files AS DBF
JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
    ON FS.file_id = DBF.file_id
WHERE DBF.type_desc = 'ROWS';

This time the query runs in 500ms. If you look at the metrics we have been checking so far, it's not hard to understand why:

TEXT Performance Summary:
  - 0.5 seconds elapsed time
  - 9MB memory grant
  - 5MB tempdb usage
  - 5MB estimated sort set
  - 207 logical reads
  - 596 LOB logical reads
  - Sort Warning

SQL Server's memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn't matter very much. Why is the data size so much smaller? The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here?

TEXT versus MAX Storage

The answer lies in how columns of the TEXT data type are stored. By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output. You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data.

SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead. SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don't really matter right now).

OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows. The question that normally arises at this point is: why doesn't SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)?

The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes. The TEXT type, by contrast, is stored off-row by default, unless the 'text in row' table option is set suitably and there is room on the page. There is an analogous (but opposite) setting to control the storage of MAX data – the 'large value types out of row' table option. By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row. SQL Server Books Online has good coverage of both options in the topic In Row Data.

The MAXOOR Table

The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage. You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let's look at that option now.
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages:

CREATE TABLE dbo.TestMAXOOR
(
    id      INTEGER IDENTITY (1,1) NOT NULL,
    padding VARCHAR(MAX) NOT NULL,
    CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id)
);

EXECUTE sys.sp_tableoption
    @TableNamePattern = N'dbo.TestMAXOOR',
    @OptionName = 'large value types out of row',
    @OptionValue = 'true';

SELECT large_value_types_out_of_row
FROM sys.tables
WHERE [schema_id] = SCHEMA_ID(N'dbo')
AND name = N'TestMAXOOR';

INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) (padding)
SELECT SPACE(0)
FROM dbo.TestCHAR
ORDER BY id;

UPDATE TM WITH (TABLOCK)
SET padding.WRITE (TC.padding, NULL, NULL)
FROM dbo.TestMAXOOR AS TM
JOIN dbo.TestCHAR AS TC
    ON TC.id = TM.id;

EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR';

CHECKPOINT;

Test 4 – MAXOOR

We can now re-run our test on the MAXOOR (MAX out of row) table:

DECLARE @read BIGINT, @write BIGINT;

SELECT @read = SUM(num_of_bytes_read),
       @write = SUM(num_of_bytes_written)
FROM tempdb.sys.database_files AS DBF
JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
    ON FS.file_id = DBF.file_id
WHERE DBF.type_desc = 'ROWS';

SET STATISTICS IO ON;

SELECT TOP (150)
    MO.id,
    MO.padding
FROM dbo.TestMAXOOR AS MO
ORDER BY NEWID()
OPTION (MAXDOP 1, RECOMPILE);

SET STATISTICS IO OFF;

SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
       tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
       internal_use_MB =
       (
           SELECT internal_objects_alloc_page_count / 128.0
           FROM sys.dm_db_task_space_usage
           WHERE session_id = @@SPID
       )
FROM tempdb.sys.database_files AS DBF
JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
    ON FS.file_id = DBF.file_id
WHERE DBF.type_desc = 'ROWS';

MAXOOR Performance Summary:
  - 0.3 seconds elapsed time
  - 245MB memory grant
  - 0MB tempdb usage
  - 193MB estimated sort set
  - 207 logical reads
  - 446 LOB logical reads
  - No sort warning

The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query). SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort.

The Hidden Problem

There is still a huge problem with this query though – it requires a 245MB memory grant. No wonder the sort doesn't spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers. Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row).

The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting 'large value types out of row'. Why? Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row; data only moves when it is added or actually changed. SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages. This is an annoying limitation, and one which I hope will be addressed in a future version of the product.

So why should we worry about this? Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need. 245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily? That's 32,000 8KB pages that might be put to much better use.

The Solution

The answer is not to use the TEXT data type for the padding column. That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal. I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator. Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order.

The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted. We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need. The following script implements that idea for all four tables:

SET STATISTICS IO ON;

WITH TestTable AS
(
    SELECT * FROM dbo.TestCHAR
),
TopKeys AS
(
    SELECT TOP (150) id
    FROM TestTable
    ORDER BY NEWID()
)
SELECT T1.id, T1.padding
FROM TestTable AS T1
WHERE T1.id = ANY (SELECT id FROM TopKeys)
OPTION (MAXDOP 1);

WITH TestTable AS
(
    SELECT * FROM dbo.TestMAX
),
TopKeys AS
(
    SELECT TOP (150) id
    FROM TestTable
    ORDER BY NEWID()
)
SELECT T1.id, T1.padding
FROM TestTable AS T1
WHERE T1.id IN (SELECT id FROM TopKeys)
OPTION (MAXDOP 1);

WITH TestTable AS
(
    SELECT * FROM dbo.TestTEXT
),
TopKeys AS
(
    SELECT TOP (150) id
    FROM TestTable
    ORDER BY NEWID()
)
SELECT T1.id, T1.padding
FROM TestTable AS T1
WHERE T1.id IN (SELECT id FROM TopKeys)
OPTION (MAXDOP 1);

WITH TestTable AS
(
    SELECT * FROM dbo.TestMAXOOR
),
TopKeys AS
(
    SELECT TOP (150) id
    FROM TestTable
    ORDER BY NEWID()
)
SELECT T1.id, T1.padding
FROM TestTable AS T1
WHERE T1.id IN (SELECT id FROM TopKeys)
OPTION (MAXDOP 1);

SET STATISTICS IO OFF;

All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb. The small remaining inefficiency is in reading the id column values from the clustered primary key index. As a clustered index, it contains all the in-row data at its leaf. The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead. The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure. This difference is reflected in the number of logical page reads performed by the four queries:

Table 'TestCHAR'   logical reads 25511  lob logical reads   0
Table 'TestMAX'    logical reads 25511  lob logical reads   0
Table 'TestTEXT'   logical reads   412  lob logical reads 597
Table 'TestMAXOOR' logical reads   413  lob logical reads 446

We can increase the density of the id values by creating a separate nonclustered index on the id column only. This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data.
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id);
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id);
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id);
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id);

The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data. The logical reads with the new indexes in place are:

Table 'TestCHAR'   logical reads 835  lob logical reads   0
Table 'TestMAX'    logical reads 835  lob logical reads   0
Table 'TestTEXT'   logical reads 686  lob logical reads 597
Table 'TestMAXOOR' logical reads 686  lob logical reads 448

With the new index in place, all four queries use the same query plan.

Performance Summary:
  - 0.3 seconds elapsed time
  - 6MB memory grant
  - 0MB tempdb usage
  - 1MB sort set
  - 835 logical reads (CHAR, MAX)
  - 686 logical reads (TEXT, MAXOOR)
  - 597 LOB logical reads (TEXT)
  - 448 LOB logical reads (MAXOOR)
  - No sort warning

I'll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea.

Conclusion

This post is not about tuning queries that access columns containing big strings. It isn't about the internal differences between TEXT and MAX data types either. It isn't even about the cool use of UPDATE .WRITE used in the MAXOOR table load. No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn't that bad, and the original query plan looks reasonable at first glance. Perhaps the NEWID() function would have been blamed for 'just being slow' – who knows. 5 seconds isn't awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is! If ten sessions ran that query at the same time in production, that's 2.5GB of memory usage and 2GB hitting tempdb. Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that's not the point either.

The point of this post is that a basic understanding of execution plans is not enough. Tuning for logical reads and adding covering indexes is not enough. If you want to produce high-quality, scalable TSQL that won't get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on. The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances.

By the way, the examples in this post were written for SQL Server 2008. They will run on 2005 and demonstrate the same principles, but you won't get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator. Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query), do it before you head out for lunch…

This post is dedicated to the people of Christchurch, New Zealand.

© 2011 Paul White
email: @[email protected]
twitter: @SQL_Kiwi

    Read the article

  • Google Developers SXSW LEGO Rumble

    Google Developers SXSW LEGO Rumble The Google Developers LEGO® MINDSTORMS® rumble returns to SXSW this year on an even more epic scale. After teams spend the day building LEGO race bots controlled by Android, the bots will compete in the ultimate showdown to determine the victors. We'll be broadcasting the main event live, with multiple camera angles, slow-mo replay, interviews with the teams, and commentary from judges and attendees to give you an insider pass to all the action. You won't want to miss this showdown. More information can be found at: www.google.com From: GoogleDevelopers Views: 11238 182 ratings Time: 01:37:01 More in Entertainment

    Read the article

  • Find only physical network adapters with WMI Win32_NetworkAdapter class

    - by Mladen Prajdic
    WMI is the Windows Management Instrumentation infrastructure for managing data and machines. We can access it by using WQL (the WMI query language, or "SQL for WMI"). One thing to remember from the WQL link is that it doesn't support ORDER BY. This means that when you do SELECT * FROM wmiObject, the returned order of the objects is not guaranteed. It can return adapters in a different order based on the logged-in user, the permissions of that user, etc. This is not documented anywhere that I've looked and is derived just from my observations.

To get network adapters we have to query the Win32_NetworkAdapter class. This returns all network adapters that Windows detects, real and virtual ones; however, it only supplies IPv4 data. I've tried various methods of combining properties that are common on all systems since Windows XP.

The first thing to do is remove all the virtual adapters (like tunneling, WAN miniports, etc.) created by Microsoft. We do this by adding WHERE Manufacturer != 'Microsoft' to our WMI query. This greatly narrows the number of adapters we have to work with. Just on my machine it went from 20 adapters to 5. What was left was one real physical Realtek LAN adapter, two virtual adapters installed by VMware and two virtual adapters installed by VirtualBox.

If you read the Win32_NetworkAdapter help page you'll notice that there's an AdapterType property that enumerates various adapter types like LAN or Wireless, and an AdapterTypeID property that gives you the same information as AdapterType, only in integer form. The dirty little secret is that these two properties don't work. They are both hardcoded: AdapterTypeID to "0" and AdapterType to "Ethernet 802.3". The only exceptions I've seen so far are adapters that have no values at all for the two properties, the "RAS Async Adapter" that has AdapterType = "Wide Area Network" and AdapterTypeID = "3", and various tunneling adapters that have AdapterType = "Tunnel" and AdapterTypeID = "15". In the help docs there isn't even a value for 15. So this property was of no help.

The next property to give hope is NetConnectionId. This is the name of the network connection as it appears in Control Panel -> Network Connections. The problem is that this value is localized into various languages and can have different names for different connections, and we haven't even started talking about eliminating virtual adapters. Like the previous one, this property was of no help.

The next two properties I checked were ConfigManagerErrorCode and NetConnectionStatus, in hopes of finding disabled and disconnected adapters. If an adapter is enabled but disconnected, ConfigManagerErrorCode = 0 with a different NetConnectionStatus. If the adapter is disabled, it reports ConfigManagerErrorCode = 22. This looked like a win: by using (ConfigManagerErrorCode = 0 OR ConfigManagerErrorCode = 22) in our condition we get enabled (connected and disconnected) adapters. The problem with all of the above properties is that none of them filter out the virtual adapters installed by virtualization software like VMware and VirtualBox.

The last property to give hope is PNPDeviceID. There's an interesting observation about physical and virtual adapters with this property: every virtual adapter's PNPDeviceID starts with "ROOT\", even the VMware and VirtualBox ones. There were some really, really old physical adapters that had a PNPDeviceID starting with "ROOT\", but those were from the pre-Windows XP era AFAIK.
Since my minimum system to check was Windows XP SP2, I didn't have to worry about those. The only virtual adapter I've seen that doesn't have a PNPDeviceID starting with "ROOT\" is the RAS Async Adapter for Wide Area Network, but because it is made by Microsoft we've already eliminated it with the first condition on the manufacturer. Using the PNPDeviceID has so far proven to be really effective, and I've tested it on over 20 different computers of various configurations, from Windows XP laptops with wireless and Bluetooth cards to virtualized Windows 2008 R2 servers. So far it has always worked as expected. I would appreciate you letting me know if you find a configuration where it doesn't work. Let's see some C# code showing how to do this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Management;   // requires a reference to System.Management.dll

ManagementObjectSearcher mos = null;

// WHERE Manufacturer != 'Microsoft' removes all of the Microsoft-provided virtual adapters
// like tunneling, miniports, and Wide Area Network adapters.
mos = new ManagementObjectSearcher(@"SELECT * FROM Win32_NetworkAdapter WHERE Manufacturer != 'Microsoft'");

// Trying the ConfigManagerErrorCode and NetConnectionStatus variations proved to still not be enough,
// and it returns adapters installed by virtualization software like VMware and VirtualBox.
// ConfigManagerErrorCode = 0  -> device is working properly (covers enabled and/or disconnected devices).
// ConfigManagerErrorCode = 22 AND NetConnectionStatus = 0 -> device is disabled and disconnected.
// Some virtual devices report ConfigManagerErrorCode = 22 (disabled) with some NetConnectionStatus other than 0.
mos = new ManagementObjectSearcher(@"SELECT * FROM Win32_NetworkAdapter WHERE Manufacturer != 'Microsoft' AND (ConfigManagerErrorCode = 0 OR (ConfigManagerErrorCode = 22 AND NetConnectionStatus = 0))");

// Final solution: filter on the Manufacturer and on PNPDeviceID not starting with "ROOT\".
// Physical devices have a PNPDeviceID starting with "PCI\" or something else besides "ROOT\".
mos = new ManagementObjectSearcher(@"SELECT * FROM Win32_NetworkAdapter WHERE Manufacturer != 'Microsoft' AND NOT PNPDeviceID LIKE 'ROOT\\%'");

// Get the physical adapters and sort them by their index.
// This is needed because they're not sorted by default.
IList<ManagementObject> managementObjectList = mos.Get()
    .Cast<ManagementObject>()
    .OrderBy(p => Convert.ToUInt32(p.Properties["Index"].Value))
    .ToList();

// Show all the properties of every physical adapter found.
foreach (ManagementObject mo in managementObjectList)
{
    foreach (PropertyData pd in mo.Properties)
        Console.WriteLine(pd.Name + ": " + (pd.Value ?? "N/A"));
}

That's it. Hope this helps you in some way.

    Read the article

  • Cannot resolve the collation conflict between "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS"

    - by Patrick Olurotimi Ige
    I was writing a stored proc for a report and I needed some data from another server, so I added a linked server to connect to this new DB server. When I do a select like the one below, it's all fine:

select a, b, c from Server.DatabaseName.dbo.table

But when I use the table in a join I get the error "Cannot resolve the collation conflict between "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation." I did check the collation set on the two databases and it was actually the same, and I had no idea why I was getting the error. I later found out that you can explicitly tell SQL Server which collation to use with COLLATE. Just rewrite your join predicate like this (a fuller sketch follows below):

on a.name COLLATE Latin1_General_CI_AS = eaobjname

Hope that helps and saves your precious time.
Patrick
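To make the pattern a bit more concrete, here is a minimal sketch of a cross-server join with an explicit collation on the join predicate. The server, table and column names (LinkedSrv, RemoteDb, dbo.RemoteObjects, dbo.LocalReport) are made up for illustration; only the COLLATE clause matters:

SELECT L.a,
       L.b,
       L.c
FROM dbo.LocalReport AS L
JOIN LinkedSrv.RemoteDb.dbo.RemoteObjects AS R
    -- Give the comparison an explicit collation so the two sides no longer conflict.
    ON L.name COLLATE Latin1_General_CI_AS = R.eaobjname;

If you would rather not hard-code a collation name, COLLATE DATABASE_DEFAULT on the local column is an alternative; it coerces the expression to the current database's default collation.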

    Read the article

  • Tablets will be taxed under the private copying levy, with a fee proportional to the devices' storage capacity

    Tablets will be taxed under the private copying levy, with a fee proportional to the devices' storage capacity. Update of 20.12.2010 by Katleen. While, for now, the television licence fee will not be extended to tablets (see the previous news item), they will nevertheless be hit by a new levy: the private copying tax. The Commission copie privée has just voted this measure, which will be applied during January or February 2011, after its publication in the Journal Officiel. The private copying fee will be proportional to the device's storage capacity, ranging from 0.09 euros (128 MB) to 12 euros (40 to 64 GB)...

    Read the article
