Search Results

Search found 2557 results on 103 pages for 'commons exec'.

Page 80/103

  • How to deal with a job that stops and cannot continue unless brought to the foreground?

    - by Vi
    Recent example: mountlo (using UML):

      vi@vi-notebook:~/b$ mountlo -m 16 -d /dev/uba1 /home/vi/mnt/usb -t vfat -o iocharset=utf8,allow_other&
      [1] 32561
      vi@vi-notebook:~/b$ Checking that ptrace can change system call numbers...OK
      Checking syscall emulation patch for ptrace...OK
      Checking advanced syscall emulation patch for ptrace...OK
      Checking PROT_EXEC mmap in /tmp...OK
      Checking for the skas3 patch in the host:
        - /proc/mm...not found
        - PTRACE_FAULTINFO...not found
        - PTRACE_LDT...not found
      UML running in SKAS0 mode
      [1]+ Stopped    mountlo -m 16 -d /dev/uba1 /home/vi/mnt/usb -t vfat -o iocharset=utf8,allow_other
      vi@vi-notebook:~/b$ bg
      [1]+ mountlo -m 16 -d /dev/uba1 /home/vi/mnt/usb -t vfat -o iocharset=utf8,allow_other &
      [1]+ Stopped    mountlo -m 16 -d /dev/uba1 /home/vi/mnt/usb -t vfat -o iocharset=utf8,allow_other
      vi@vi-notebook:~/b$ bg
      (the same bg / Stopped cycle repeats twice more)
      vi@vi-notebook:~/b$ fg
      mountlo -m 16 -d /dev/uba1 /home/vi/mnt/usb -t vfat -o iocharset=utf8,allow_other
      Linux version 2.6.15 (miko@dorka) (gcc version 3.3.5 (Debian 1:3.3.5-13)) #1 Mon Feb 27 13:27:52 CET 2006
      (normal output)
      ...
      vi@vi-notebook:~/b$ socat - exec:'mountlo -m 16 -d /dev/uba1 /home/vi/mnt/usb -t vfat -o iocharset=utf8\,allow_other',pty,ctty
      fusermount: waitpid: No child processes

    This also happens with Gimp (when it runs its plug-ins): parts of Gimp started by `gimp q.jpg&' freeze and cannot continue unless sent "killall -CONT" or brought to the foreground. Is it a bug? How do I reliably start things in the background?
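    A common cause of this pattern is the background job trying to read from (or write to) the terminal and being stopped with SIGTTIN/SIGTTOU. As a sketch of a workaround, not a confirmed fix for mountlo specifically, detaching the job from the tty entirely often makes the stops go away (log path is illustrative):

      setsid mountlo -m 16 -d /dev/uba1 /home/vi/mnt/usb -t vfat \
          -o iocharset=utf8,allow_other < /dev/null >> /tmp/mountlo.log 2>&1 &

    setsid(1) gives the process its own session, so it no longer has a controlling terminal that can suspend it, and the redirects keep any stray read or write from touching the tty.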

    Read the article

  • Strange Jmeter connection refuse on Tomcat

    - by Tommy
    I tried different settings in JMeter and Tomcat. With the JMeter thread count at 1~200, Tomcat is okay; at 300, after serving a few requests Tomcat starts to output errors. Here is the error shown in JMeter:

      java.net.ConnectException: Connection refused: connect
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.PlainSocketImpl.doConnect(Unknown Source)
        at java.net.PlainSocketImpl.connectToAddress(Unknown Source)
        at java.net.PlainSocketImpl.connect(Unknown Source)
        at java.net.SocksSocketImpl.connect(Unknown Source)
        at java.net.Socket.connect(Unknown Source)
        at java.net.Socket.connect(Unknown Source)
        at sun.net.NetworkClient.doConnect(Unknown Source)
        at sun.net.www.http.HttpClient.openServer(Unknown Source)
        at sun.net.www.http.HttpClient.openServer(Unknown Source)
        at sun.net.www.http.HttpClient.<init>(Unknown Source)
        at sun.net.www.http.HttpClient.New(Unknown Source)
        at sun.net.www.http.HttpClient.New(Unknown Source)
        at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(Unknown Source)
        at sun.net.www.protocol.http.HttpURLConnection.plainConnect(Unknown Source)
        at sun.net.www.protocol.http.HttpURLConnection.connect(Unknown Source)
        at org.apache.jmeter.protocol.http.sampler.HTTPJavaImpl.sample(HTTPJavaImpl.java:483)
        at org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy.sample(HTTPSamplerProxy.java:62)
        at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1018)
        at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1004)
        at org.apache.jmeter.threads.JMeterThread.process_sampler(JMeterThread.java:411)
        at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:297)
        at java.lang.Thread.run(Unknown Source)

    My Tomcat server.xml in Eclipse:

      <!--The connectors can use a shared executor, you can define one or more named thread pools-->
      <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
                maxThreads="2000" minSpareThreads="250" acceptCount="2000"/>
      <Connector executor="tomcatThreadPool" URIEncoding="UTF-8" connectionTimeout="20000"
                 port="8080" protocol="HTTP/1.1" redirectPort="8443" />

    Any idea why this is happening? How do I check that this server.xml is actually being used? It is a JSF2 application, if that helps. Thanks in advance.
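    On the side question of verifying which server.xml is in effect: Eclipse typically launches Tomcat with its own catalina.base directory, so a sketch of a check on a Unix-like host (commands illustrative) is:

      ps -ef | grep '[c]atalina.base'     # the -Dcatalina.base JVM argument shows the active instance
      grep -n 'Executor\|Connector' "$CATALINA_BASE"/conf/server.xml   # $CATALINA_BASE = path from the previous command

    If the Executor element is missing from that file, the pool settings shown above are not the ones the running Tomcat is using.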

    Read the article

  • org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'transactionManager'

    - by BilalFromParis
    When I add this code to my Spring configuration file beans-hibernate.xml:

      <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
          <property name="sessionFactory" ref="sessionFactory" />
      </bean>

    it doesn't work, and I don't know why. Can someone help me please? My DAO class is:

      public class CourseDaoImpl implements CourseDao {

          private SessionFactory sessionFactory;

          public void setSessionFactory(SessionFactory sessionFactory) {
              this.sessionFactory = sessionFactory;
          }

          @Transactional
          public void store(Course course) {
              sessionFactory.getCurrentSession().saveOrUpdate(course);
          }

          @Transactional
          public void delete(Long courseId) {
              Course course = (Course) sessionFactory.getCurrentSession().get(Course.class, courseId);
              sessionFactory.getCurrentSession().delete(course);
          }

          @Transactional(readOnly = true)
          public Course findById(Long courseId) {
              return (Course) sessionFactory.getCurrentSession().get(Course.class, courseId);
          }

          @Transactional
          public List<Course> findAll() {
              Query query = sessionFactory.getCurrentSession().createQuery("FROM Course");
              return (List<Course>) query.list();
          }
      }

    but:

      juil. 04, 2012 3:38:18 AM org.springframework.context.support.AbstractApplicationContext prepareRefresh Infos: Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@6ba8fb1b: startup date [Wed Jul 04 03:38:18 CEST 2012]; root of context hierarchy
      juil. 04, 2012 3:38:18 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions Infos: Loading XML bean definitions from class path resource [beans-hibernate.xml]
      juil. 04, 2012 3:38:19 AM org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons Infos: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@5a7fed46: defining beans [org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,sessionFactory,transactionManager,courseDao]; root of factory hierarchy
      juil. 04, 2012 3:38:19 AM org.hibernate.annotations.common.Version INFO: HCANN000001: Hibernate Commons Annotations {4.0.1.Final}
      juil. 04, 2012 3:38:19 AM org.hibernate.Version logVersion INFO: HHH000412: Hibernate Core {4.1.3.Final}
      juil. 04, 2012 3:38:19 AM org.hibernate.cfg.Environment INFO: HHH000206: hibernate.properties not found
      juil. 04, 2012 3:38:19 AM org.hibernate.cfg.Environment buildBytecodeProvider INFO: HHH000021: Bytecode provider name : javassist
      juil. 04, 2012 3:38:19 AM org.hibernate.service.jdbc.connections.internal.DriverManagerConnectionProviderImpl configure INFO: HHH000402: Using Hibernate built-in connection pool (not for production use!)
      juil. 04, 2012 3:38:19 AM org.hibernate.service.jdbc.connections.internal.DriverManagerConnectionProviderImpl configure INFO: HHH000115: Hibernate connection pool size: 20
      juil. 04, 2012 3:38:19 AM org.hibernate.service.jdbc.connections.internal.DriverManagerConnectionProviderImpl configure INFO: HHH000006: Autocommit mode: false
      juil. 04, 2012 3:38:19 AM org.hibernate.service.jdbc.connections.internal.DriverManagerConnectionProviderImpl configure INFO: HHH000401: using driver [org.hibernate.dialect.PostgreSQLDialect] at URL [jdbc:postgresql://localhost:5432/spring]
      juil. 04, 2012 3:38:19 AM org.hibernate.service.jdbc.connections.internal.DriverManagerConnectionProviderImpl configure INFO: HHH000046: Connection properties: {user=Bilal, password=**}
      juil. 04, 2012 3:38:19 AM org.hibernate.dialect.Dialect INFO: HHH000400: Using dialect: org.hibernate.dialect.PostgreSQLDialect
      juil. 04, 2012 3:38:19 AM org.hibernate.engine.jdbc.internal.LobCreatorBuilder useContextualLobCreation INFO: HHH000423: Disabling contextual LOB creation as JDBC driver reported JDBC version [3] less than 4
      juil. 04, 2012 3:38:19 AM org.hibernate.engine.transaction.internal.TransactionFactoryInitiator initiateService INFO: HHH000399: Using default transaction strategy (direct JDBC transactions)
      juil. 04, 2012 3:38:19 AM org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory INFO: HHH000397: Using ASTQueryTranslatorFactory
      juil. 04, 2012 3:38:19 AM org.hibernate.tool.hbm2ddl.SchemaUpdate execute INFO: HHH000228: Running hbm2ddl schema update
      juil. 04, 2012 3:38:19 AM org.hibernate.tool.hbm2ddl.SchemaUpdate execute INFO: HHH000102: Fetching database metadata
      juil. 04, 2012 3:38:19 AM org.hibernate.tool.hbm2ddl.SchemaUpdate execute INFO: HHH000396: Updating schema
      juil. 04, 2012 3:38:19 AM org.hibernate.tool.hbm2ddl.TableMetadata INFO: HHH000261: Table found: public.course
      juil. 04, 2012 3:38:19 AM org.hibernate.tool.hbm2ddl.TableMetadata INFO: HHH000037: Columns: [fee, id, title, end_date, begin_date]
      juil. 04, 2012 3:38:19 AM org.hibernate.tool.hbm2ddl.TableMetadata INFO: HHH000108: Foreign keys: []
      juil. 04, 2012 3:38:19 AM org.hibernate.tool.hbm2ddl.TableMetadata INFO: HHH000126: Indexes: [course_pkey]
      juil. 04, 2012 3:38:19 AM org.hibernate.tool.hbm2ddl.SchemaUpdate execute INFO: HHH000232: Schema update complete
      juil. 04, 2012 3:38:19 AM org.springframework.beans.factory.support.DefaultSingletonBeanRegistry destroySingletons Infos: Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@5a7fed46: defining beans [org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,sessionFactory,transactionManager,courseDao]; root of factory hierarchy
      juil. 04, 2012 3:38:19 AM org.hibernate.service.jdbc.connections.internal.DriverManagerConnectionProviderImpl stop INFO: HHH000030: Cleaning up connection pool [jdbc:postgresql://localhost:5432/spring]

      Exception in thread "main" org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'transactionManager' defined in class path resource [beans-hibernate.xml]: Invocation of init method failed; nested exception is java.lang.NoClassDefFoundError: org/hibernate/engine/SessionFactoryImplementor
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1455)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
        at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:294)
        at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:225)
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:291)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193)
        at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:585)
        at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:913)
        at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:464)
        at org.springframework.context.support.ClassPathXmlApplicationContext.(ClassPathXmlApplicationContext.java:139)
        at org.springframework.context.support.ClassPathXmlApplicationContext.(ClassPathXmlApplicationContext.java:83)
        at com.boutaya.bill.main.Main.main(Main.java:14)
      Caused by: java.lang.NoClassDefFoundError: org/hibernate/engine/SessionFactoryImplementor
        at org.springframework.orm.hibernate3.SessionFactoryUtils.getDataSource(SessionFactoryUtils.java:123)
        at org.springframework.orm.hibernate3.HibernateTransactionManager.afterPropertiesSet(HibernateTransactionManager.java:411)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1514)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1452)
        ... 12 more
      Caused by: java.lang.ClassNotFoundException: org.hibernate.engine.SessionFactoryImplementor
        at java.net.URLClassLoader$1.run(Unknown Source)
        at java.net.URLClassLoader$1.run(Unknown Source)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(Unknown Source)
        at java.lang.ClassLoader.loadClass(Unknown Source)
        at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
        at java.lang.ClassLoader.loadClass(Unknown Source)
        ... 16 more

    I think the problem is my use of the class org.springframework.orm.hibernate3.HibernateTransactionManager?
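    The stack trace and the startup log together suggest a version mismatch rather than a problem in the XML: the log reports Hibernate Core 4.1.3, but org.springframework.orm.hibernate3.HibernateTransactionManager is compiled against Hibernate 3, whose org.hibernate.engine.SessionFactoryImplementor moved to a different package in Hibernate 4 (Spring 3.1 added an orm.hibernate4 package for this reason). A quick way to see which jar, if any, still provides the class the hibernate3 support expects (a sketch; adjust the lib path to your project):

      for j in lib/*.jar; do
          unzip -l "$j" 2>/dev/null | grep -q 'org/hibernate/engine/SessionFactoryImplementor.class' && echo "$j"
      done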

    Read the article

  • How to repair a damaged transaction log file for Exchange 2003

    - by Markus Larsson
    Hi! Yesterday we had a power failure and the UPS did not work (it had worked perfectly before). Everything seemed to be OK when I started all the servers again, except for the mail: when I try to mount the store I get the following message: "The database files in this store are corrupted."

    Server: Exchange 2003 running on a Small Business Server
    Latest full backup: one week old
    Backup program: Backup Exec 9.0

    This is what I have done:
    1. Copied every file in the MDBDATA folder (edb, stm, log).
    2. Ran eseutil /d on priv1.edb.
    3. Ran eseutil /p on priv1.edb (took seven hours).
    4. Ran isinteg -fix -test alltests; here it breaks down. Isinteg fails with the following error: "Isinteg cannot initiate verification process. Please review the log file for more information." The problem is that no log file is created.
    5. Giving up on this route, I decided to do a restore from the backup. It fails with the errors: "Unable to read the header of logfile E00.log. Error -501" and "Information Store (5976) Callback function call ErrESECBRestoreComplete ended with error 0xC80001F5. The log file is damaged."

    My conclusion is that E00.log is damaged, so how can I repair it so that I can restore the database? Or should I give up and try some other route?
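    For reference, a possible diagnostic pass with the built-in ESE tools, run against the copies made in step 1 rather than the originals (log prefix E00 and file names taken from the post):

      eseutil /ml E00          checks the integrity of the E00*.log sequence and reports which log is damaged
      eseutil /mh priv1.edb    dumps the database header, showing clean vs. dirty shutdown state
      eseutil /r E00           attempts soft recovery by replaying the logs into the database

    One caveat: after a hard repair (/p, step 3), replaying older logs is generally not supported, which argues for pursuing the restore route rather than further repair of the live files.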

    Read the article

  • Solaris SMF to Upstart on RHEL6

    - by aaa90210
    I am planning a migration from Solaris/x86 to RHEL6. Part of this migration will be migrating services from SMF to the RHEL6 equivalent, which appears to be Upstart. While init.d scripts still seem to be supported, I want to take advantage of a more sophisticated init daemon, especially for features like job supervision (restarting etc.). I would like to gather some thoughts on a few points:

    1. Is Upstart an adequate job supervisor, i.e. does it preclude the need for stand-alone managers like daemontools/supervise?
    2. Upstart scripts seem very bare-bones compared to a typical init.d script. If I were porting an init.d script to Upstart, is it OK to just "exec /etc/init.d/myjob start"? This includes RHEL-installed programs like httpd.
    3. Does Upstart do anything in regard to pid files, and what are its expectations regarding the forking model of the process? (See the sketch below.)
    4. Are there any straightforward guides to the process-management aspect of Upstart, and by that I mean the conditions around controlling restarting? E.g. how many times to restart the process before it goes into a maintenance state, or ignoring errors/core dumps in child processes of the supervised process.

    Any other relevant ideas or guides would be appreciated. TIA
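    On points 3 and 4, a minimal Upstart job illustrating the supervision stanzas (job name and daemon path are hypothetical):

      # /etc/init/myjob.conf
      description "example supervised daemon"
      start on runlevel [2345]
      stop on runlevel [016]
      respawn
      respawn limit 10 5    # stop respawning after 10 failures within 5 seconds
      expect fork           # declare the forking model so Upstart tracks the correct pid
      exec /usr/sbin/mydaemon

    Upstart tracks the pid itself via the expect stanza rather than reading pid files, and it has no maintenance state in the SMF sense: once the respawn limit is hit, the job simply stays stopped.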

    Read the article

  • Remote Debug Windows Azure Cloud Service

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/11/02/remote-debug-windows-azure-cloud-service.aspx

    On the 22nd of October Microsoft announced the new Windows Azure SDK 2.2. It introduced a lot of cool features, but the one that stood out most is remote debug support for Windows Azure Cloud Service (a.k.a. WACS).

    Live Debug is a Nightmare for Cloud Applications

    When we develop against a public cloud, debugging might be the most difficult task, especially after the application has been deployed. To minimize the debugging effort, Microsoft provided a local emulator for cloud service and storage when the Windows Azure platform was announced. Using the local emulator, developers can run their application on the local machine with almost the same behavior as on Windows Azure, and debug it easily and quickly. But once we deploy our application to Azure, we have to debug through logs and the diagnostic monitor, which is very inefficient. Visual Studio 2012 introduced a new feature named "anonymous remote debug", which allows any workstation under any user to attach to a remote process. This is less secure compared with authenticated remote debugging, but much easier and simpler to use. Now, with Windows Azure SDK 2.2, we can attach from our local machine to our application running in Windows Azure, and it's very easy.

    How to Use the Remote Debugger

    First, create a new Windows Azure Cloud project in Visual Studio, select ASP.NET Web Role, and create an ASP.NET WebForm application. Then right-click on the cloud project and select "Publish". In the publish dialog, make sure the application will be built in debug mode, since a .NET assembly cannot be debugged in release mode. I enabled Remote Desktop because I will log into the virtual machine later in this post; it is NOT necessary for remote debugging. Select the "advanced settings" tab and make sure "Enable Remote Debugger for all roles" is checked. In WACS, a cloud service can have one or more roles, and each role can have one or more instances. The remote debugger will be enabled for all roles and all instances if checked; currently there is no way to specify which role(s) and instance(s) to enable. Finally, click the "Publish" button. In the Windows Azure activity window in Visual Studio we can find some information about the remote debugger.

    Attaching to the remote process is easy. Open the "Server Explorer" window in Visual Studio, expand the "Cloud Services" node, find the cloud service, role, and instance we just published and want to debug, right-click on the instance, and select "Attach Debugger". After a while (depending on how fast our Internet connection to the Windows Azure data center is), Visual Studio switches to debug mode. Let's add a breakpoint in the default web page's form-load function and refresh the page in the browser to see what happens: the application stops at the breakpoint, and the call stack and watch features are all available to use. Now hit F5 to continue, and back in the browser the page renders successfully.

    What's Under the Hood

    The remote debugger is a WACS plugin. When we check "enable remote debugger" in the publish dialog, Visual Studio adds two cloud configuration settings to the CSCFG file. Since they are appended at deployment time, they do not appear in our project's CSCFG file, but they can be found by opening the publish package.

    At the same time, Visual Studio generates a certificate and includes it in the package for the remote debugger. In the Azure management portal we can find a certificate under our application that was created and uploaded by the remote debugger plugin. Since I enabled Remote Desktop, there are two certificates in the screenshot below; the other one is for the remote debugger. When our application is deployed, the Windows Azure system opens the related ports for the remote debugger; as shown below, two new ports were opened on my application. Finally, in our WACS virtual machine, the Windows Azure system copies and starts the remote debug component matching the version of Visual Studio we are using. Our application can then be debugged remotely through the Visual Studio remote debugger. Below is the task manager on the virtual machine of my WACS application.

    Summary

    In this post I demonstrated one of the features introduced in Windows Azure SDK 2.2: the remote debugger. It allows us to attach from the local machine to our application once it has been deployed to a Windows Azure virtual machine. The remote debugger is powerful and easy to use, but it brings more security risk, and since it is only available for debug builds, performance will be worse than a release build. Hence we should only use this feature for staging tests and bug fixes (publishing a beta version to the Azure staging slot), rather than in production.

    Hope this helps, Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • NetApp NDMP backup with BE 2010 R2 works, restore fails

    - by uuwe
    Hi, I'm having some issues with a new Backup Exec 2010 R2 installation. I configured a NetApp FAS2020 as an NDMP device and want to back up files from the NAS to a tape drive connected to my backup server. I set up ndmpd according to this document (http://www.symantec.com/business/support/index?page=content&id=TECH48957) and created a separate backup user (http://filers.blogspot.com/2006/09/setting-veritas-netbackup-with-non.html).

    Backup works perfectly, but restoring any file gives me an authentication failed error. The NDMP device has a "global" NDMP user configured in the device tab (tried this with the newly created ndmpd backup user and the NetApp root), and I can also configure separate resource credentials in the BE restore job. I have tried setting the same accounts for the "global" NDMP device and the restore credentials, and have also tried setting different accounts for them.

    NDMP debug level is at 5, and this is what shows up in /etc/messages. The session is closed immediately after it has been granted:

      16:12:07 PST [Java_Thread:info]: ndmpdserver: ndmpd.access allowed for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000
      16:12:07 PST [Java_Thread:info]: Ndmpd51: ndmpd session closed successfully for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000

    Running Wireshark on the backup server doesn't produce much: it shows a SYN, SYN/ACK, then an NDMP CONNECT_CLOSE request from the backup server.

    The resource credentials for the restore job behave very oddly. If I enter NDMP credentials and do "Test All", it fails. If I use my regular domain backup account, it succeeds. There are no failed or successful logons in the NetApp NDMP log, and tracing this check shows that it doesn't even connect to the NAS. This makes me think this is more likely flaky BE behaviour than a misconfiguration of the NAS.

    Here is the options ndmp output:

      FAS2020-1> options ndmp
      ndmpd.access                 all
      ndmpd.authtype               challenge
      ndmpd.connectlog.enabled     on
      ndmpd.enable                 on
      ndmpd.ignore_ctime.enabled   off
      ndmpd.offset_map.enable      on
      ndmpd.password_length        16
      ndmpd.preferred_interface    disable
      ndmpd.tcpnodelay.enable      off
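    One thing worth checking, given that ndmpd.authtype is challenge: on 7-mode ONTAP, an NDMP client must authenticate with the NDMP-specific password the filer generates for the account, not the account's normal login password. A sketch (user name illustrative):

      FAS2020-1> ndmpd password backupuser

    The string that command prints is what belongs in the Backup Exec restore credentials.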

    Read the article

  • VMware vSphere 4.1 and BackupExec 2010

    - by Josh
    I'm sure a common problem for most shops is backups: their size, and the window in which you have to back up the data.

    What we are working with:
    - VMware vSphere 4.1 cluster
    - PS4000XV EqualLogic storage array (1.6 TB volume dedicated to backup-to-disk)
    - Physical backup server with a single LTO4 drive
    - Backup Exec 2010 R3 with the following agents: Exchange, SQL, Active Directory, VMware
    - Dual gigabit MPIO connections between all devices (storage array, backup server, VM hosts)

    What we would like to accomplish: an efficient backup-to-disk-to-tape solution where all of our VMs are backed up to the storage array first, and then, once completely backed up to the array, are replicated to tape. In the event we needed to recover, we would be able to do so directly from tape.

    Where we are currently: of the several ways I have set up the jobs in Backup Exec 2010 R3, the backup jobs all queue up at the same time; as soon as a job finishes backing up to disk, it starts the same job to tape, but pulling from the original source instead of the designated B2D location. I understand that I could create a job that backs up the "Backup to Disk" folder to tape, but in the event of a restoration I would first need to stage the data in the B2D folder before I could restore the VM.

    I would really like to hear from individuals in similar situations. Any and all comments and critiques are appreciated.

    Read the article

  • script calling script as other user

    - by viktor tron
    Using CentOS, I want to run a script as user 'training' as a system service. I use daemontools to monitor the process, which needs a launcher script that is run as root:

      #!/bin/bash
      exec >> /var/log/training_service.log 2>&1
      setuidgid training training_command

    This last line is not good enough, since training_command needs the training user's environment to be set.

      su - training -c 'training_command'

    gives 'standard in must be a tty', as su makes sure a tty is present to potentially accept a password. I know I could make this disappear by modifying /etc/sudoers, a la "Bash & 'su' script giving an error 'standard in must be a tty'", but I am reluctant and unsure of the consequences.

      runuser - training -c 'training_command'

    gives "runuser: cannot set groups: Connection refused". I found no explanation of, or resolution to, this message.

    I am stuck. Is this something so hard to achieve? I appreciate all insight and guidance to best practice.
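    One possible direction, sketched under the assumption that "environment" here means variables like HOME and PATH rather than a full login shell: keep setuidgid (so no tty is involved) and construct the environment by hand in the launcher:

      #!/bin/bash
      exec >> /var/log/training_service.log 2>&1
      # build the pieces of the training user's environment the command needs
      export HOME=/home/training USER=training LOGNAME=training
      export PATH=/usr/local/bin:/usr/bin:/bin
      exec setuidgid training training_command

    If training_command really needs the user's shell initialization files, an alternative sketch is "exec setuidgid training bash -lc 'training_command'", at the cost of whatever the profile scripts do.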

    Read the article

  • How do you handle data archiving?

    - by 20th Century Boy
    Backups are one thing, but long-term archival is another. For example, you might be required to store emails for 7 years, or keep all project data indefinitely. I used to save archives to tape, but then I've had tapes get destroyed (drives rip the tape out). So... write to 2 tapes, I hear you say. Is that what others do? Have 2 (or more) tapes of the same data for redundancy? But then the other issue is that tapes usually cannot be read by a different backup software vendor's product. E.g. if you go from ARCserve to Backup Exec to Commvault over 10 years, you would need to keep all 3 systems so that you could restore old data. Likewise for hardware: old tapes might not be barcoded, or might not be compatible with the new library, etc. So do you keep old tape hardware AND old software just in case you might need to restore a 10-year-old file? Or, when you move to a new backup system, do you migrate all archived data to the new system and re-archive it onto new tapes? That could be a huge job. Any thoughts?

    Read the article

  • How can I have puppet deploy ssh keys for virtual users?

    - by Pheezy
    I am trying to get Puppet to assign authorized SSH keys for virtual users, but I keep getting the following error:

      err: Could not retrieve catalog: Could not parse for environment production: Syntax error at 'user'; expected '}' at /etc/puppet/modules/users/manifests/ssh_authorized_keys.pp:9

    I believe my configuration is correct (listed below), but is there a syntax error or scoping issue I am missing? I would simply like to assign users to nodes and have those users automagically have their SSH keys installed. Is there maybe a better way to do this and I'm just overthinking it?

      # /etc/puppet/modules/users/virtual.pp
      class user::virtual {
          @user { "user":
              home       => "/home/user",
              ensure     => "present",
              groups     => ["root","wheel"],
              uid        => "8001",
              password   => "SCRAMBLED",
              comment    => "User",
              shell      => "/bin/bash",
              managehome => "true",
          }

      # /etc/puppet/modules/users/manifests/ssh_authorized_keys.pp
      ssh_authorized_key { "user":
          ensure => "present",
          type   => "ssh-dss",
          key    => "AAAAB....",
          user   => "user",
      }

      # /etc/puppet/modules/users/init.pp
      import "users.pp"
      import "ssh_authorized_keys.pp"

      class user::ops inherits user::virtual {
          realize(
              User["user"],
          )
      }

      # /etc/puppet/manifests/modules.pp
      import "sudo"
      import "users"

      # /etc/puppet/manifests/nodes.pp
      node basenode {
          include sudo
      }
      node 'testbox' inherits basenode {
          include user::ops
      }

      # /etc/puppet/manifests/site.pp
      import "modules"
      import "nodes"

      # The filebucket option allows for file backups to the server
      filebucket { main: server => 'puppet' }

      # Set global defaults - including backing up all files to the main filebucket and adds a global path
      File { backup => main }
      Exec { path => "/usr/bin:/usr/sbin/:/bin:/sbin" }

    Read the article

  • Running Upstart user jobs on startup

    - by dgel
    I am running Ubuntu Server 11.04. I have created an Upstart user job as described here. I have the following file at /home/myuser/.init/sensors.conf:

      start on started mysql
      stop on stopping mysql
      chdir /home/myuser/mydir/project
      exec /home/myuser/mydir/env/bin/python /home/myuser/mydir/project/manage.py sensors
      respawn
      respawn limit 10 90

    As myuser I can start, stop, and reload the job fine; it works perfectly:

      $ start sensors
      sensors start/running, process 1332
      $ stop sensors
      sensors stop/waiting

    The problem is that the job does not start automatically at boot when mysql starts. After a fresh boot, mysql is running but my sensors job is not. What's strange is that although the job doesn't begin at boot, if I use sudo to restart mysql, it does indeed start my job. The following commands are run as myuser from a fresh startup:

      $ status sensors
      sensors stop/waiting
      $ sudo restart mysql
      mysql start/running, process 1209
      $ status sensors
      sensors start/running, process 1229

    The documentation for Upstart user jobs is pretty limited. What is the correct technique to have a user job start automatically at system startup? I know I could just throw something into rc.local to start it, or move my sensors.conf to /etc/init, but I'm curious whether there is a way to do it using just Upstart.
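    One approach that sidesteps the user-job limitation is to make it a system job under /etc/init that drops privileges itself; a sketch (Upstart on 11.04 predates the native setuid stanza, so su does the user switching here):

      # /etc/init/sensors.conf
      start on started mysql
      stop on stopping mysql
      respawn
      respawn limit 10 90
      exec su -s /bin/sh -c 'cd /home/myuser/mydir/project && exec /home/myuser/mydir/env/bin/python manage.py sensors' myuser

    The tradeoff is that the job definition leaves your home directory, but boot-time "start on" events then behave like any other system job.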

    Read the article

  • Tell Tomcat to drop requests instead of dying "All threads (150) are currently busy"

    - by Nicolas Raoul
    My Tomcat 6.0.26 sometimes dies saying:

      SEVERE: All threads (150) are currently busy, waiting. Increase maxThreads (150) or check the servlet status

    ... then Tomcat shuts down, and users can't access the webapp until I restart Tomcat manually. Some of the threads do take a long time to execute; that is by design, not a thread-gone-wild problem. I know I could increase maxThreads, but that is not a viable solution, because the server might receive even more requests.

    QUESTION: Instead of dying, can I tell Tomcat to just drop requests when maxThreads is reached and the AJP/1.3 backlog is full?

    Below is my server.xml in any case:

      <?xml version='1.0' encoding='utf-8'?>
      <Server port="8005" shutdown="SHUTDOWN">
        <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
        <Listener className="org.apache.catalina.core.JasperListener" />
        <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
        <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
        <GlobalNamingResources>
          <Resource name="UserDatabase" auth="Container"
                    type="org.apache.catalina.UserDatabase"
                    description="User database that can be updated and saved"
                    factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
                    pathname="conf/tomcat-users.xml" />
        </GlobalNamingResources>
        <Service name="Catalina">
          <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" minSpareThreads="100"/>
          <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
          <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" enableLookups="false"
                     useBodyEncodingForURI="true" backlog="150" maxThreads="150"
                     executor="tomcatThreadPool" keepAliveTimeout="5000" connectionTimeout="300000" />
          <Engine name="Catalina" defaultHost="localhost" jvmRoute="ecm1">
            <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
            <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true"
                  xmlValidation="false" xmlNamespaceAware="false">
            </Host>
          </Engine>
        </Service>
      </Server>
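    If the goal is for excess connections to be refused at the kernel's listen queue instead of piling onto busy threads, the relevant Connector attribute on the HTTP side is acceptCount (the listen backlog); a sketch with an illustrative value:

      <Connector port="8080" protocol="HTTP/1.1"
                 connectionTimeout="20000" redirectPort="8443"
                 acceptCount="100" />

    Once maxThreads workers are busy and acceptCount connections are queued, further connection attempts are refused rather than accepted and left waiting.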

    Read the article

  • Bash script to run a clamscan on Ubuntu- how to use return values properly?

    - by Marius
    I'm trying to put together a simple script that will scan my home directory with clamscan and give me a warning if any viruses were found. What I have so far is:

      #!/usr/bin/env bash
      clamscan -l ~/.ClamScan/$(date +"%a%b%d") -ir /home
      RETVAL=$?
      [ $RETVAL -eq 0 ] && notify-send 'clamscan finished. No viruses found'
      [ $RETVAL -eq 1 ] && notify-send 'clamscan found a virus' && touch ~/Desktop/VirusFound
      [ $RETVAL -eq 2 ] && notify-send 'clamscan encountered errors. Check the logs' && touch ~/Desktop/ClamscanError
      find ~/.ClamScan/* -mtime +7 -exec rm {} \;

    However, I'm unsure about a couple of things. I'm always wary of using rm: as far as I can tell, the find command I've got should be deleting any log files that are more than a week old. I'm also not entirely sure how the return-value testing works. I've got a manual that briefly covers bash, which says that the meaning of "$?" is "match one character", and I'm not entirely sure how that grabs the return value. Should I be using -eq or = for testing the return value? From what I can tell, -eq tests strings and = tests numerals, but I'm not sure what the type of the return value is.
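    For what it's worth, a short sketch of the two semantics in question: "$?" is the exit status of the last command (the "match one character" definition in the manual describes the ? glob pattern, a different context), and -eq is the integer comparison while = is the string comparison, the reverse of the guess above, so -eq is the right choice for an exit status:

      status=$?                       # exit status of the previous command, an integer 0-255
      if [ "$status" -eq 1 ]; then    # numeric test; [ "$status" = "1" ] would compare strings
          notify-send 'clamscan found a virus'
      fi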

    Read the article

  • List symlinks in specific relative directories

    - by Clinton Blackmore
    I have a server that shares out user home folders over the network. Each user has a Cache folder. Sometimes a symlink is used to redirect this folder to the hard drive of whichever machine they are using (and sometimes that doesn't work, and they have a broken symlink [which is a matter for another day]). I'm trying to find out which users have symlinks and which don't. Within the shared folder, to get to the Cache folder you would substitute folders like so: $GRADE/$USERNAME/Library/Caches

    Right now I'm searching to see which users have symlinks and which do not. I've come up with:

      cd /path/to/shared/home/folders
      sudo find . -name "Caches" -exec ls -ld {} \;

    and get results like this:

      lrwxr-xr-x@  1 name0 ES_Students  27 Jan 18 11:05 ./CES_Grade_03/name0/Library/Caches -> /tmp/name0/Library/Caches
      drwx------  11 name1 ES_Students 374 Dec  8 15:44 ./CES_Grade_03/name1/Library/Caches
      lrwxr-xr-x@  1 name2 ES_Students  27 Feb 23 14:27 ./CES_Grade_03/name2/Library/Caches -> /tmp/name2/Library/Caches
      drwx------  17 name3 ES_Students 578 Jan 25 11:13 ./CES_Grade_03/name3/Library/Caches
      drwx------  12 name4 ES_Students 408 Mar 22 13:09 ./CES_Grade_03/name4/Library/Caches

    but it nags at me that there must be a better way. Yes, it is good enough, and a one-off task, but I want to know how to do it right! Surely I should be able to do something like:

      cd /path/to/shared/home/folders
      sudo ls -ld **/**/Library/Caches

    I'm afraid I don't know the proper syntax, or whether there is a recursive folder-replacing wildcard format in bash, and my google-fu failed me. So, how do I properly formulate the search?
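    Since the path has a fixed depth here, plain globbing may already be enough, and find can separate the symlinks from the real directories; a sketch:

      ls -ld */*/Library/Caches                              # both kinds, via ordinary globs
      find . -maxdepth 4 -path '*/Library/Caches' -type l    # only the symlinked Caches
      find . -maxdepth 4 -path '*/Library/Caches' -type d    # only the real directories

    (The recursive ** wildcard does exist in bash 4 behind shopt -s globstar, but it is not needed for a two-level layout like $GRADE/$USERNAME.)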

    Read the article

  • Increasing Java's heap space in the Tomcat startup script

    - by Ankur
    I want to increase my heap size when using Tomcat. I was told to add this line:

      export CATALINA_OPTS=-Xms16m -Xmx256m;

    into the startup.sh script. I did so (at the beginning) but got the error:

      export: 24: -Xmx256m: bad variable name

    Where am I supposed to add it? Am I doing something else wrong?

      export CATALINA_OPTS=-Xms16m -Xmx256m;

      # Better OS/400 detection: see Bugzilla 31132
      os400=false
      darwin=false
      case "`uname`" in
      CYGWIN*) cygwin=true;;
      OS400*) os400=true;;
      Darwin*) darwin=true;;
      esac

      # resolve links - $0 may be a softlink
      PRG="$0"
      while [ -h "$PRG" ] ; do
        ls=`ls -ld "$PRG"`
        link=`expr "$ls" : '.*-> \(.*\)$'`
        if expr "$link" : '/.*' > /dev/null; then
          PRG="$link"
        else
          PRG=`dirname "$PRG"`/"$link"
        fi
      done

      PRGDIR=`dirname "$PRG"`
      EXECUTABLE=catalina.sh

      # Check that target executable exists
      if $os400; then
        # -x will Only work on the os400 if the files are:
        # 1. owned by the user
        # 2. owned by the PRIMARY group of the user
        # this will not work if the user belongs in secondary groups
        eval
      else
        if [ ! -x "$PRGDIR"/"$EXECUTABLE" ]; then
          echo "Cannot find $PRGDIR/$EXECUTABLE"
          echo "This file is needed to run this program"
          exit 1
        fi
      fi

      exec "$PRGDIR"/"$EXECUTABLE" start "$@"
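    For reference, a sketch of the likely fix: with the space unquoted, the shell parses -Xmx256m as a second variable name to export, hence "bad variable name". Quoting the value fixes that, and catalina.sh also reads an optional bin/setenv.sh, which survives Tomcat upgrades better than editing startup.sh:

      # bin/setenv.sh
      export CATALINA_OPTS="-Xms16m -Xmx256m"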

    Read the article

  • VMware Workstation downloads as a txt file?

    - by George Mauer
    I just went to the VMware website because I want to try Workstation over VirtualBox. I signed up for a Workstation trial and clicked download on the 64-bit Linux version. What downloaded is a 320-megabyte txt file: VMware-Workstation-Full-8.0.2-591240.x86_64.txt

    What gives? Is anyone familiar with this pattern of delivering software? How do I run it? Here is the beginning of that file:

      #!/usr/bin/env bash
      #
      # VMware Installer Launcher
      #
      # This is the executable stub to check if the VMware Installer Service
      # is installed and if so, launch it. If it is not installed, the
      # attached payload is extracted, the VMIS is installed, and the VMIS
      # is launched to install the bundle as normal.

      # Architecture this bundle was built for (x86 or x64)
      ARCH=x64

      if [ -z "$BASH" ]; then
         # $- expands to the current options so things like -x get passed through
         if [ ! -z "$-" ]; then
            opts="-$-"
         fi
         # dash flips out of $opts is quoted, so don't.
         exec /usr/bin/env bash $opts "$0" "$@"
         echo "Unable to restart with bash shell"
         exit 1
      fi

      set -e

      ETCDIR=/etc/vmware-installer
      OLDETCDIR="/etc/vmware"

      ### Offsets ###
      # These are offsets that are later used relative to EOF.
      FOOTER_SIZE=52

      # This won't work with non-GNU stat.
      FILE_SIZE=`stat --format "%s" "$0"`

      offset=$(($FILE_SIZE - 4))
      MAGIC_OFFSET=$offset
      offset=$(($offset - 4))
      CHECKSUM_OFFSET=$offset
      offset=$(($offset - 4))
      VERSION_OFFSET=$offset
      offset=$(($offset - 4))
      PREPAYLOAD_OFFSET=$offset
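    Judging from that stub, the download is VMware's normal self-extracting shell installer served with the wrong extension; a likely way to run it, using the file name from above:

      mv VMware-Workstation-Full-8.0.2-591240.x86_64.txt VMware-Workstation-Full-8.0.2-591240.x86_64.bundle
      chmod +x VMware-Workstation-Full-8.0.2-591240.x86_64.bundle
      sudo ./VMware-Workstation-Full-8.0.2-591240.x86_64.bundle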

    Read the article

  • How to create RPM for 32-bit arch from a 64-bit arch server?

    - by Gnanam
    Our production server runs CentOS 5, 64-bit arch. Because no RPM is currently available for the latest SQLite version (v3.7.3), I created an RPM using rpmbuild for the very first time, following the instructions given here. I was able to successfully create an RPM for the 64-bit (x86_64) architecture, but I am not able to create one for the 32-bit (i386) architecture. It failed with the following errors:

      ...
      + ./configure --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu --target=i386-redhat-linux-gnu --program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/usr/com --mandir=/usr/share/man --infodir=/usr/share/info --enable-threadsafe
      checking for a BSD-compatible install... /usr/bin/install -c
      checking whether build environment is sane... yes
      checking for gawk... gawk
      checking whether make sets $(MAKE)... yes
      checking for style of include used by make... GNU
      checking for x86_64-redhat-linux-gnu-gcc... no
      checking for gcc... gcc
      checking for C compiler default output file name... configure: error: C compiler cannot create executables
      See `config.log' for more details.
      error: Bad exit status from /var/tmp/rpm-tmp.73141 (%build)

      RPM build errors:
          Bad exit status from /var/tmp/rpm-tmp.73141 (%build)

    This is the command I called:

      rpmbuild --target i386 -ba sqlite.spec

    My question is: how do I create an RPM for a 32-bit arch from a 64-bit arch server?
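    Note that the configure line above still shows --build and --host as x86_64, so gcc is being asked to produce 64-bit objects for an i386 target. One route that often works, sketched under the assumption that the 32-bit toolchain packages are available:

      yum install glibc-devel.i386 libgcc.i386     # 32-bit C library and gcc support on CentOS 5
      setarch i386 rpmbuild --target i386 -ba sqlite.spec

    setarch i386 runs the build in a 32-bit personality, so uname and configure detect an i386 build host rather than x86_64.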

    Read the article

  • apache with php fastcgi keeps going down

    - by Josh Nankin
    I have an apache2 server configured with MPM worker and PHP FastCGI. Lately the Apache logs have been telling me that MaxClients is being reached frequently, even though it's already pretty high. My server is now constantly going down, and I see a bunch of lines like this in the log:

      [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: comm with (dynamic) server "/var/local/fcgi/php-cgi-wrapper.fcgi" aborted: (first read) idle timeout (20 sec)
      [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: incomplete headers (0 bytes) received from server "/var/local/fcgi/php-cgi-wrapper.fcgi"

    I can see that my php-cgi processes are pretty large (about 70 MB on average).

    Here's my Apache configuration for MPM worker:

      KeepAlive ON
      KeepAliveTimeout 2
      <IfModule mpm_worker_module>
          StartServers          5
          MinSpareThreads      10
          MaxSpareThreads      10
          ThreadLimit          64
          ThreadsPerChild      10
          MaxClients           20
          MaxRequestsPerChild 2000
      </IfModule>

    Here's my FastCGI Apache configuration:

      <IfModule mod_fastcgi.c>
          # One shared PHP-managed fastcgi for all sites
          Alias /fcgi /var/local/fcgi
          # IMPORTANT: without this we get more than one instance
          # of our wrapper, which itself spawns 20 PHP processes, so
          # that would be Bad (tm)
          FastCgiConfig -idle-timeout 20 -maxClassProcesses 1
          <Directory /var/local/fcgi>
              # Use the + so we don't clobber other options that
              # may be needed. You might want FollowSymLinks here
              Options +ExecCGI
          </Directory>
          AddType application/x-httpd-php5 .php
          AddHandler fastcgi-script .fcgi
          Action application/x-httpd-php5 /fcgi/php-cgi-wrapper.fcgi
      </IfModule>

    Here's my FastCGI wrapper:

      #!/bin/sh
      PHPRC="/etc/php5/apache2"
      export PHPRC
      PHP_FCGI_CHILDREN=8
      export PHP_FCGI_CHILDREN
      exec /usr/bin/php-cgi

    Any help would be very much appreciated!
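    Two wrapper-level adjustments often help when php-cgi children grow to ~70 MB each: spawn fewer children, and cap requests per child so leaked memory is reclaimed when the child is recycled. A sketch with illustrative values:

      #!/bin/sh
      PHPRC="/etc/php5/apache2"
      export PHPRC
      PHP_FCGI_CHILDREN=4           # 4 children x ~70 MB is a smaller steady-state footprint
      export PHP_FCGI_CHILDREN
      PHP_FCGI_MAX_REQUESTS=200     # recycle each child after 200 requests (php-cgi's default is 500)
      export PHP_FCGI_MAX_REQUESTS
      exec /usr/bin/php-cgi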

    Read the article

  • dd oflag=direct 5x faster

    - by César
    I have CentOS 6.2 on a server with these specs:

    - 2x CPU, 16-core AMD Opteron 6282 SE
    - 64 GB RAM
    - RAID controller H700, 1 GB NV cache: 2 HD 74 GB SAS 15k rpm, RAID1, stripe 16k (OS, CentOS 6.2) = sda; 4 HD 146 GB SAS 15k rpm, RAID10, stripe 16k (ext4, bs 4096, no barriers) = sdb -> /vol01
    - RAID controller H800, 1 GB NV cache: MD1200 with 12 HD 300 GB SAS 15k rpm, RAID10, stripe 256k (for DB, Postgres 8.3.18; ext4, bs 4096, stride 64, stripe-width 384, no barriers) = sdc -> /vol02

    I'm benchmarking IO speed with dd, and I see that on the 12-disk RAID10, if I run:

      dd if=/dev/zero of=DD bs=8M count=10000 oflag=direct
      10000+0 records in
      10000+0 records out
      83886080000 bytes (84 GB) copied, 126,03 s, 666 MB/s

    but if I remove the "oflag=direct" option, I get about 80 MB/s. The read benchmark results are similar:

      dd of=/dev/null if=DD bs=8M count=10000 iflag=direct
      10000+0 records in
      10000+0 records out
      83886080000 bytes (84 GB) copied, 79,5918 s, 1,1 GB/s

    If I remove iflag=direct, I get 150 MB/s. I don't understand these huge differences; on other machines I don't see this behavior. Could I have some kernel parameter misconfigured? Thanks!
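    A sketch of a fairer buffered-versus-direct comparison, on the assumption that page-cache behavior rather than the controller explains the gap (buffered writes are throttled by the writeback thresholds, and buffered reads depend on readahead):

      sync; echo 3 > /proc/sys/vm/drop_caches            # flush the page cache before the buffered read test
      dd if=DD of=/dev/null bs=8M count=10000
      sysctl vm.dirty_ratio vm.dirty_background_ratio    # writeback thresholds that throttle buffered writes
      blockdev --getra /dev/sdc                          # current readahead setting for the RAID10 volume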

    Read the article

  • Perl wrapper to start daemon leaves zombie when run by cron

    - by leonstr
    I've got a Perl script to start a process as a daemon, but when I call it from cron I'm left with a defunct process. I've stripped this down to a minimal script, with 'tail' as a placeholder for the daemon:

      use POSIX "setsid";

      $SIG{CHLD} = 'IGNORE';
      my $pid = fork();
      exit(0) if ($pid > 0);

      (setsid() != -1) || die "Can't start a new session: $!";

      open (STDIN, '/dev/null') or die ("Cannot read /dev/null: $!\n");
      my $logout = "logger -t test";
      open (STDOUT, "|$logout") or die ("Cannot pipe stdout to $logout: $!\n");
      open (STDERR, "|$logout") or die ("Cannot pipe stderr to $logout: $!\n");

      my $cmd = "tail -f";
      exec($cmd);
      exit(1);

    I run this with cron and end up with:

      root 18616 18615 0 11:40 ? 00:00:00 [test.pl] <defunct>
      root 18617     1 0 11:40 ? 00:00:00 tail -f
      root 18618 18617 0 11:40 ? 00:00:00 logger -t test
      root 18619 18617 0 11:40 ? 00:00:00 logger -t test

    As far as I can tell, it's the piping to logger that it doesn't like; if I send STDOUT and STDERR to /dev/null the problem doesn't occur. Am I doing something wrong, or is this just not possible? (CentOS 5.8)

    Thanks, leonstr

    Read the article

  • BASH Wildcard Expansion

    - by Aaron Copley
    I'm not really sure how to phrase this, and maybe that's why I can't find anything, but I want to reuse the values enumerated by a wildcard in a command. Is this possible?

    Scenario:

      $ ls /dir
      1 2 3

    Contents of /dir are directories 1, 2, and 3.

      $ cp /dir/*/file .

    results in file being copied from /dir/1, /dir/2, and /dir/3 to here. What I would like to do is copy the files to a new destination name based on the wildcard expansion:

      $ cp /dir/*/file ???-file

    would result in /dir/*/file being copied to 1-file, 2-file, and 3-file. What I can't figure out is the ??? portion, to tell bash I want to use the wildcard-expanded values. Using the wildcard in the target nets a cp error: "cp: target `*-file' is not a directory". Is there something else in bash that can be used here? The find command has {} to use with -exec, which is similar to what I am looking for above.
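    The usual idiom is a loop, which makes each matched component available as a variable; a sketch:

      for d in /dir/*/; do
          n=$(basename "$d")         # 1, 2, 3, ...
          cp "$d/file" "./${n}-file"
      done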

    Read the article

  • How to set JS source directory in apache2?

    - by highBandWidth
    I am trying to run a very basic webserver for development/debugging. The static HTML seems to be delivered correctly, but it seems that the JavaScript libraries are not being delivered to the browser. The page HTML says something like:

      <html>
      <head>
      <script type='text/javascript' src="/lib/json.js"></script>
      ...

    Now, I have set up a link for /lib/ in my httpd.conf as:

      ScriptAlias /lib/ "/SomeFolder/lib/"

    When I do this, it can't fetch the files, because this is what I see in my Apache error log:

      ... [error] [client ::1] client denied by server configuration: /SomeFolder/lib/json.js, referer: http://localhost/SomeSite

    It seems that Apache is not allowing access to the folder, so I add this to httpd.conf:

      <Directory "/SomeFolder/lib/">
          Allow from all
      </Directory>

    After this, browsing the page still does not run the JS; instead I see the following error in my Apache error log:

      [error] [client ::1] (13)Permission denied: exec of '/SomeFolder/lib/json.js' failed, referer: http://localhost/SomeSite

    So now it seems that Apache is trying to run the JS files on the server like a CGI script or something, but I have not made that folder a cgi-bin folder. The only lines where SomeFolder is mentioned by name are these lines in httpd.conf:

      ScriptAlias /lib/ "/SomeFolder/lib/"
      <Directory "/SomeFolder/lib/">
          Allow from all
      </Directory>
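    For what it's worth, a sketch of the likely fix: ScriptAlias both maps the URL and marks everything under it as CGI, which is exactly what the "exec of '/SomeFolder/lib/json.js' failed" error describes. A plain Alias serves the files instead (Apache 2.2 access syntax, matching the config above):

      Alias /lib/ "/SomeFolder/lib/"
      <Directory "/SomeFolder/lib/">
          Order allow,deny
          Allow from all
      </Directory>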

    Read the article

  • Deployment and Ownership issues

    - by kylemac
    As an extreme newbie, I am having difficulty managing ownership and permissions on my first box. What I can't figure out is how to deploy using one user, call him deploy, while operating my PHP application as the www-data user.

    Currently, I know my server runs as www-data through this function:

      <?php echo(exec("whoami")); ?>

    but I am having to chown between deploy and www-data every time I deploy. There has got to be an easier way to deploy with one user and still run as www-data.

    EDIT: Here is the output from ls -l on the folder in question. You will see user deploy and group www-pub; the group is from an attempt to add the two different users to a new group and chown one of them, in the hope that they both would have the permissions (newb alert):

      drwxrwxr-x 4 deploy www-pub 4096 Mar 7 01:41 example.com

    I am using Capistrano for deployment under the user deploy; then, once it's done, I chown to www-data, because otherwise I can't use PHP to manipulate files. I am also unsure how to even change which user Apache runs as.
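    A common pattern, sketched with a hypothetical docroot path: put both users in one group (the www-pub group from the listing above could serve), give the group write access, and mark directories setgid so files Capistrano creates inherit the group:

      sudo usermod -a -G www-pub www-data
      sudo usermod -a -G www-pub deploy
      sudo chgrp -R www-pub /var/www/example.com
      sudo chmod -R g+rw /var/www/example.com
      sudo find /var/www/example.com -type d -exec chmod g+s {} \;

    (Which user Apache runs as is set by the User/Group directives in apache2.conf, or by APACHE_RUN_USER in /etc/apache2/envvars on Debian-style layouts.)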

    Read the article

  • Telerik is First to Announce Support for Microsoft Silverlight Analytics Framework

    Yesterday at the MIX 10 conference Microsoft announced the Microsoft Silverlight Analytics Framework Beta. The Silverlight Analytics Framework (SAF) is a new open-source framework that allows designers and developers to integrate web analytics into Silverlight applications in a consistent manner. Supporting out-of-browser and offline scenarios, Microsoft built this framework in conjunction with a number of web analytics services and control vendors to support multiple analytics services simultaneously without degrading application performance. Because the SAF is enabled as a set of behaviors in Microsoft Expression Blend, designers and developers can visually instrument their designs and configure A/B testing rapidly without writing any code.

    Telerik is proud to be the first control vendor to support the Silverlight Analytics Framework. RadControls for Silverlight can be used with the framework out of the box. The suite offers Silverlight Analytics Framework handlers and behaviors, helping developers fine-tune the values sent to the analytics providers. Because the analytics framework uses the Managed Extensibility Framework (MEF) for composition, you don't need to change the way you use the controls to benefit from the Telerik handlers; just add a reference to the Telerik assemblies that contain the handlers. Here is the code that you need to declare to use RadTreeView:

      <UserControl x:Class="Telerik.SLAF.MainPage"
          xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
          xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
          xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity"
          xmlns:ga="clr-namespace:Google.WebAnalytics;assembly=Google.WebAnalytics"
          xmlns:sa="clr-namespace:Microsoft.WebAnalytics.Behaviors;assembly=Microsoft.WebAnalytics.Behaviors"
          xmlns:ic="clr-namespace:Microsoft.Expression.Interactivity.Core;assembly=Microsoft.Expression.Interactions"
          xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
          xmlns:telerikNavigation="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls.Navigation">
          <Grid x:Name="LayoutRoot">
              <i:Interaction.Behaviors>
                  <ga:GoogleAnalytics ProfileId="--Your GA ProfileId" Category="Demo" />
              </i:Interaction.Behaviors>
              <telerikNavigation:RadTreeView>
                  <i:Interaction.Triggers>
                      <i:EventTrigger EventName="SelectionChanged">
                          <sa:TrackAction />
                      </i:EventTrigger>
                  </i:Interaction.Triggers>
                  <telerikNavigation:RadTreeViewItem Header="Item1" />
                  <telerikNavigation:RadTreeViewItem Header="Item2" />
                  <telerikNavigation:RadTreeViewItem Header="Item3" />
              </telerikNavigation:RadTreeView>
          </Grid>
      </UserControl>

    Download the Telerik Microsoft Silverlight Analytics Framework Handlers and the sample project. This is our first Beta release; please drop us a line with any feedback you have, or even better, if you are at MIX10, come visit us at the booth in the "Commons" hall so we can discuss it in person.

    Read the article
