Search Results

Search found 1492 results on 60 pages for 'tim mahy'.

Page 19/60

  • ArchBeat Link-o-Rama for August 2, 2013

    - by OTN ArchBeat
    Podcast: Data Warehousing and Oracle Data Integrator - Part 2 Part two of the discussion about Data Warehousing and Oracle Data Integrator focuses on how data warehousing is changing and the forces driving that change. Panelists for this discussion are Uli Bethke, Oracle ACE Director Cameron Lackpour, Oracle ACE Director (and guest producer) Gurcan Orhan, and Michael Rainey.
    Case Management In-Depth: Cases & Case Activities Part 1 – Activity Scope | Mark Foster FMW solution architect Mark Foster kicks off a new series with a look at the decisions made on the scope of BPM process case activities.
    Video: Quick Intro to WebLogic Maven Plugin 12.1.2 | Mark Nelson This YouTube video by FMW solution architect Mark Nelson offers a quick introduction to the basics of installing and using the new Oracle WebLogic 12.1.2 Maven Plugin.
    Running the Managed Coherence Servers Example in WebLogic Server 12c | Tim Middleton FMW solution architect Tim Middleton shares the technical details on the new Managed Coherence Servers feature and outlines how you can run the sample application available with a WebLogic Server 12.1.2 install.
    What’s wrong with how we develop and deliver SOA Applications today? | Mark Nelson "When we arrive at the go-live day, we have a lot of fear and uncertainty," says solution architect Mark Nelson of the typical SOA practice. "We have no idea if the system is going to work in production. We have never tested it under a production-like load, and we have not really tested it for performance, longevity, etc."
    OTN Latin America Tour 2013 | Kai Yu Oracle ACE Director Kai Yu shares the session abstracts from his participation in the 2013 Oracle Technology Network Latin America conference tour, which made its way through OUG conferences in Ecuador, Guatemala, Panama, and Costa Rica.
    Webcast: Latest Security Innovations in Oracle Database 12c Oracle Database 12c includes more new security capabilities than any other release in Oracle history! In this webcast Roxana Bradescu (Director, Oracle Database Security Product Management) will discuss these capabilities and answer your questions. (Registration required.)
    Thought for the Day "The main goal in life career-wise should always be to try to get paid to simply be yourself." — Kevin Smith (Born August 2, 1970) Source: brainyquote.com

    Read the article

  • Nordics OTN ACE Tour 2013 - Recap

    - by Mike Dietrich
    The Nordics OTN ACE Tour 2013 with stops in Stockholm, Ballerup/Copenhagen and Oslo is over. A very intense week with plenty of excellent presentations from Lonneke Dikmans, Sten Vesterli, Tim Hall and others. I'm always impressed by how much those people know and how well they present. It's such a great learning experience. And there's always some time to talk about weird things apart from the Oracle cosmos. So thanks a lot, folks - it was a pleasure to travel with you. And many many thanks also to the people from ORCAN, DOUG and OUGN. Everything worked out so well. And thanks for the great gifts, the dinners, everything!!! Of course a special thanks to all the people who went to my presentations. Hope you've enjoyed it - and sorry for any overtiming. But as Tim said yesterday in the shuttle bus back to the airport: "45 min slots don't work out at all". The final slide set about "Different Ways to Upgrade, Migrate and Consolidate into Oracle Database 12c including Oracle Multitenant, New Features and other stuff" can be downloaded via this link. Hope to see you all again soon - and let me know once you have successfully upgraded to Oracle Database 12c or in case you'd like to become one of our Upgrade Reference Customers. Cheers - Mike PS: One thing I couldn't really understand - why is that thing below not labeled simply GRAPE JUICE??? And who's honestly drinking that?

    Read the article

  • javax.validation.ConstraintViolationException: validation failed for classes during update time for groups

    - by Tim
    Hello all! I have a Java / Spring MVC 3 application using Hibernate and a MySQL database. In my controller, I have this source code:

      Set<ConstraintViolation<Person>> failures = validator.validate(p);
      if (failures.isEmpty()) {
          Project project = this.projectService.findProjectById(projectid);
          Person newPerson = this.personService.addPerson(p);
          Set<Person> persons = this.personService.getAllPersonsByProjectId(projectid);
          persons.add(newPerson);
          project.setPersons(persons);
          Set<ConstraintViolation<Project>> failures1 = validator.validate(project);
          if (!failures1.isEmpty()) {
              System.out.println("ERROR");
          } else {
              System.out.println("NO ERROR");
          }
          this.projectService.updateProject(project);
          return Collections.singletonMap("person", newPerson);
      }

    Project and Person have a many-to-many relation annotated with @ManyToMany, and Project is the mapping owner. The new Person is added, but on the line with this.projectService.updateProject(project); I get an error. What updateProject does is this, in a Hibernate DAO implementation:

      public void updateProject(Project p) {
          SessionFactory sessionFactory = HibernateUtil.getSessionFactory();
          Session sess = sessionFactory.getCurrentSession();
          Transaction tx = sess.beginTransaction();
          sess.update(p);
          tx.commit();
      }

    It fails on the line tx.commit();. My check with if (!failures1.isEmpty()) tells me that there are no errors in my project. So what's wrong here? And why is there a validation of my Project at all? I did not call a validation method... so why is there an org.hibernate.cfg.beanvalidation.BeanValidationEventListener.validate()? I hope someone can help me fix this! Best Regards, Tim. Here is the full error stack trace:

    13.01.2011 00:06:36 org.apache.catalina.core.ApplicationDispatcher invoke SERVE: Servlet.service() for servlet project3 threw exception javax.validation.ConstraintViolationException: validation failed for classes [com.mydomain.myproject.domain.Person] during update time for groups [javax.validation.groups.Default, ] at org.hibernate.cfg.beanvalidation.BeanValidationEventListener.validate(BeanValidationEventListener.java:155) at org.hibernate.cfg.beanvalidation.BeanValidationEventListener.onPreUpdate(BeanValidationEventListener.java:102) at org.hibernate.action.EntityUpdateAction.preUpdate(EntityUpdateAction.java:235) at org.hibernate.action.EntityUpdateAction.execute(EntityUpdateAction.java:86) at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:273) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:265) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:185) at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321) at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51) at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1216) at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:383) at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:133) at com.mydomain.myproject.dao.impl.ProjectDaoImplHibernate.updateProject(ProjectDaoImplHibernate.java:44) at com.mydomain.myproject.service.impl.ProjectServiceImpl.updateProject(ProjectServiceImpl.java:39) at com.mydomain.myproject.controller.ProjectPersonController.addPerson(ProjectPersonController.java:189) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.invokeHandlerMethod(HandlerMethodInvoker.java:176) at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.invokeHandlerMethod(AnnotationMethodHandlerAdapter.java:426) at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.handle(AnnotationMethodHandlerAdapter.java:414) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:790) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:719) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:644) at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:560) at javax.servlet.http.HttpServlet.service(HttpServlet.java:637) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:646) at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:436) at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:374) at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:302) at org.tuckey.web.filters.urlrewrite.NormalRewrittenUrl.doRewrite(NormalRewrittenUrl.java:195) at org.tuckey.web.filters.urlrewrite.RuleChain.handleRewrite(RuleChain.java:159) at org.tuckey.web.filters.urlrewrite.RuleChain.doRules(RuleChain.java:141) at org.tuckey.web.filters.urlrewrite.UrlRewriter.processRequest(UrlRewriter.java:90) at org.tuckey.web.filters.urlrewrite.UrlRewriteFilter.doFilter(UrlRewriteFilter.java:417) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:88) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:857) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) at java.lang.Thread.run(Thread.java:619) 13.01.2011 00:06:36 org.apache.catalina.core.StandardWrapperValve invoke SERVE: Servlet.service() for servlet default threw exception 
javax.validation.ConstraintViolationException: validation failed for classes [com.mydomain.myproject.domain.Person] during update time for groups [javax.validation.groups.Default, ] at org.hibernate.cfg.beanvalidation.BeanValidationEventListener.validate(BeanValidationEventListener.java:155) at org.hibernate.cfg.beanvalidation.BeanValidationEventListener.onPreUpdate(BeanValidationEventListener.java:102) at org.hibernate.action.EntityUpdateAction.preUpdate(EntityUpdateAction.java:235) at org.hibernate.action.EntityUpdateAction.execute(EntityUpdateAction.java:86) at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:273) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:265) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:185) at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321) at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51) at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1216) at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:383) at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:133) at com.mydomain.myproject.dao.impl.ProjectDaoImplHibernate.updateProject(ProjectDaoImplHibernate.java:44) at com.mydomain.myproject.service.impl.ProjectServiceImpl.updateProject(ProjectServiceImpl.java:39) at com.mydomain.myproject.controller.ProjectPersonController.addPerson(ProjectPersonController.java:189) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.invokeHandlerMethod(HandlerMethodInvoker.java:176) at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.invokeHandlerMethod(AnnotationMethodHandlerAdapter.java:426) at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.handle(AnnotationMethodHandlerAdapter.java:414) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:790) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:719) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:644) at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:560) at javax.servlet.http.HttpServlet.service(HttpServlet.java:637) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:646) at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:436) at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:374) at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:302) at org.tuckey.web.filters.urlrewrite.NormalRewrittenUrl.doRewrite(NormalRewrittenUrl.java:195) at org.tuckey.web.filters.urlrewrite.RuleChain.handleRewrite(RuleChain.java:159) at org.tuckey.web.filters.urlrewrite.RuleChain.doRules(RuleChain.java:141) at 
org.tuckey.web.filters.urlrewrite.UrlRewriter.processRequest(UrlRewriter.java:90) at org.tuckey.web.filters.urlrewrite.UrlRewriteFilter.doFilter(UrlRewriteFilter.java:417) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:88) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:857) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) at java.lang.Thread.run(Thread.java:619) UPDATE Before updating the Project where the error occurs, I add a person which have this annotated: @NotNull @Size(min = 1, max = 255) @Pattern(regexp="(?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|\"(?:[\\x01-\\x08\\x0b\\x0c\\x0e-\\x1f\\x21\\x23-\\x5b\\x5d-\\x7f]|\\\\[\\x01-\\x09\\x0b\\x0c\\x0e-\\x7f])*\")@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\\[(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?|[a-z0-9-]*[a-z0-9]:(?:[\\x01-\\x08\\x0b\\x0c\\x0e-\\x1f\\x21-\\x5a\\x53-\\x7f]|\\\\[\\x01-\\x09\\x0b\\x0c\\x0e-\\x7f])+)\\])", message="{my.email.error.message}") private String email; Without the @Pattern no error... So, what's wrong here? UPDATE-2: I use Hibernate 3.6.0.Final and I have these in my Maven pom.xml: <!-- JSR 303 with Hibernate Validator --> <dependency> <groupId>javax.validation</groupId> <artifactId>validation-api</artifactId> <version>1.0.0.GA</version> </dependency> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-validator</artifactId> <version>4.1.0.Final</version> </dependency>

    Read the article

  • And the Winners of Fusion Middleware Innovation Awards in Data Integration are…

    - by Irem Radzik
    At OpenWorld, we announced the winners of the Fusion Middleware Innovation Awards 2012. Raymond James and Morrison Supermarkets were selected for the data integration category for their innovative use of Oracle’s data integration products and the great results they have achieved. In this blog I would like to briefly introduce you to these award-winning projects. Raymond James is a diversified financial services company, which provides financial planning, wealth management, investment banking, and asset management. They are using Oracle GoldenGate and Oracle Data Integrator to feed their operational data store (ODS), which supports application services across the enterprise. A major requirement for their project was low data latency, as key decisions are made based on the data in the ODS. They were able to fulfill this requirement due to Oracle Data Integrator’s integrated solution with Oracle GoldenGate. Oracle GoldenGate captures changed data from different systems, including Oracle Database, HP NonStop and Microsoft SQL Server, into a single data store on SQL Server 2008. Oracle Data Integrator provides data transformations for the ODS. Leveraging ODI’s integration with GoldenGate, Raymond James now sees a 9-second median latency (from source commit to ODS target commit). The ODS solution delivers high-quality, accurate data for consuming applications such as Raymond James’ next-generation client and portfolio management systems as well as real-time operational reporting. It enables timely information for making better decisions. There are more benefits Raymond James achieved with this implementation of Oracle’s data integration solution. The software developers and architects of this solution, Tim Garrod and Ryan Fonnett, told us during their presentation at OpenWorld that they also reduced application complexity significantly while improving developer productivity through trusted operational services. They were able to utilize CDC to generate alerts for business users and for applications (for example, for cache hydration mechanisms). One cool innovation example among many in this project is that, using ODI's flexible architecture, Tim and Ryan could build 24/7 self-healing processes. And these processes have hardly ever failed; the integration processes fix errors themselves. Pretty amazing, and a great solution for environments that need such reliability and availability. (You can see Tim and Ryan’s photo with the Innovation Award above.) The other winner this year in the data integration category, Morrison Supermarkets, is the UK’s 4th largest grocery retailer. The company has been migrating all their legacy applications on to a new-world application set based on Oracle and consolidating all BI on to a single Oracle platform.
The company recently implemented Oracle Exadata as the data warehouse engine and uses Oracle Business Intelligence EE. Their goal with deploying GoldenGate and ODI was to provide BI data to the enterprise in a way that also supports operational decision-making requirements from a wide range of Oracle-based ERP applications such as E-Business Suite, PeopleSoft, and the Oracle Retail Suite. They use GoldenGate’s log-based change data capture capabilities and Oracle Data Integrator to populate the Oracle Retail Data Model. The electronic point of sale (EPOS) integration solution they built processes over 80 million transactions/day at busy periods in near real time (15 mins). It provides valuable insight to Retail and Commercial teams for both intra-day and historical trend analysis. As I mentioned in yesterday’s blog, the right data integration platform can transform the business. Here is another example: the point-of-sale integration enabled the grocery chain to optimize its stock management, leading to another award: Morrisons won the Grocer 33 award in 2012 - beating all other major UK supermarkets in product availability. Congratulations, Morrisons, on another award! Celebrating the innovation and the success of our customers with Oracle’s data integration products was definitely a highlight of Oracle OpenWorld for me. I look forward to hearing more from Raymond James, Morrisons, and the other customers that presented their data integration projects at OpenWorld, on how they are creating more value for their organizations.

    Read the article

  • Silverlight Tree View with Multiple Levels

    - by psheriff
    There are many examples of the Silverlight Tree View that you will find on the web, however, most of them only show you how to go to two levels. What if you have more than two levels? This is where understanding exactly how the Hierarchical Data Templates works is vital. In this blog post, I am going to break down how these templates work so you can really understand what is going on underneath the hood. To start, let’s look at the typical two-level Silverlight Tree View that has been hard coded with the values shown below: <sdk:TreeView>  <sdk:TreeViewItem Header="Managers">    <TextBlock Text="Michael" />    <TextBlock Text="Paul" />  </sdk:TreeViewItem>  <sdk:TreeViewItem Header="Supervisors">    <TextBlock Text="John" />    <TextBlock Text="Tim" />    <TextBlock Text="David" />  </sdk:TreeViewItem></sdk:TreeView> Figure 1 shows you how this tree view looks when you run the Silverlight application. Figure 1: A hard-coded, two level Tree View. Next, let’s create three classes to mimic the hard-coded Tree View shown above. First, you need an Employee class and an EmployeeType class. The Employee class simply has one property called Name. The constructor is created to accept a “name” argument that you can use to set the Name property when you create an Employee object. public class Employee{  public Employee(string name)  {    Name = name;  }   public string Name { get; set; }} Finally you create an EmployeeType class. This class has one property called EmpType and contains a generic List<> collection of Employee objects. The property that holds the collection is called Employees. public class EmployeeType{  public EmployeeType(string empType)  {    EmpType = empType;    Employees = new List<Employee>();  }   public string EmpType { get; set; }  public List<Employee> Employees { get; set; }} Finally we have a collection class called EmployeeTypes created using the generic List<> class. It is in the constructor for this class where you will build the collection of EmployeeTypes and fill it with Employee objects: public class EmployeeTypes : List<EmployeeType>{  public EmployeeTypes()  {    EmployeeType type;            type = new EmployeeType("Manager");    type.Employees.Add(new Employee("Michael"));    type.Employees.Add(new Employee("Paul"));    this.Add(type);     type = new EmployeeType("Project Managers");    type.Employees.Add(new Employee("Tim"));    type.Employees.Add(new Employee("John"));    type.Employees.Add(new Employee("David"));    this.Add(type);  }} You now have a data hierarchy in memory (Figure 2) which is what the Tree View control expects to receive as its data source. Figure 2: A hierachial data structure of Employee Types containing a collection of Employee objects. To connect up this hierarchy of data to your Tree View you create an instance of the EmployeeTypes class in XAML as shown in line 13 of Figure 3. The key assigned to this object is “empTypes”. This key is used as the source of data to the entire Tree View by setting the ItemsSource property as shown in Figure 3, Callout #1. Figure 3: You need to start from the bottom up when laying out your templates for a Tree View. The ItemsSource property of the Tree View control is used as the data source in the Hierarchical Data Template with the key of employeeTypeTemplate. In this case there is only one Hierarchical Data Template, so any data you wish to display within that template comes from the collection of Employee Types. The TextBlock control in line 20 uses the EmpType property of the EmployeeType class. 
You specify the name of the Hierarchical Data Template to use in the ItemTemplate property of the Tree View (Callout #2). For the second (and last) level of the Tree View control you use a normal <DataTemplate> with the name of employeeTemplate (line 14). The Hierarchical Data Template in lines 17-21 sets its ItemTemplate property to the key name of employeeTemplate (Line 19 connects to Line 14). The source of the data for the <DataTemplate> needs to be a property of the EmployeeTypes collection used in the Hierarchical Data Template. In this case that is the Employees property. In the Employees property there is a “Name” property of the Employee class that is used to display the employee name in the second level of the Tree View (Line 15). What is important here is that your lowest level in your Tree View is expressed in a <DataTemplate> and should be listed first in your Resources section. The next level up in your Tree View should be a <HierarchicalDataTemplate> which has its ItemTemplate property set to the key name of the <DataTemplate> and the ItemsSource property set to the data you wish to display in the <DataTemplate>. The Tree View control should have its ItemsSource property set to the data you wish to display in the <HierarchicalDataTemplate> and its ItemTemplate property set to the key name of the <HierarchicalDataTemplate> object. It is in this way that you get the Tree View to display all levels of your hierarchical data structure. Three Levels in a Tree View Now let’s expand upon this concept and use three levels in our Tree View (Figure 4). This Tree View shows that you now have EmployeeTypes at the top of the tree, followed by a small set of employees that themselves manage employees. This means that the EmployeeType class has a collection of Employee objects. Each Employee class has a collection of Employee objects as well. Figure 4: When using 3 levels in your TreeView you will have 2 Hierarchical Data Templates and 1 Data Template. The EmployeeType class has not changed at all from our previous example. However, the Employee class now has one additional property as shown below: public class Employee{  public Employee(string name)  {    Name = name;    ManagedEmployees = new List<Employee>();  }   public string Name { get; set; }  public List<Employee> ManagedEmployees { get; set; }} The next thing that changes in our code is the EmployeeTypes class. The constructor now needs additional code to create a list of managed employees. Below is the new code. public class EmployeeTypes : List<EmployeeType>{  public EmployeeTypes()  {    EmployeeType type;    Employee emp;    Employee managed;     type = new EmployeeType("Manager");    emp = new Employee("Michael");    managed = new Employee("John");    emp.ManagedEmployees.Add(managed);    managed = new Employee("Tim");    emp.ManagedEmployees.Add(managed);    type.Employees.Add(emp);     emp = new Employee("Paul");    managed = new Employee("Michael");    emp.ManagedEmployees.Add(managed);    managed = new Employee("Sara");    emp.ManagedEmployees.Add(managed);    type.Employees.Add(emp);    this.Add(type);     type = new EmployeeType("Project Managers");    type.Employees.Add(new Employee("Tim"));    type.Employees.Add(new Employee("John"));    type.Employees.Add(new Employee("David"));    this.Add(type);  }} Now that you have all of the data built in your classes, you are now ready to hook up this three-level structure to your Tree View. Figure 5 shows the complete XAML needed to hook up your three-level Tree View. 
You can see in the XAML that there are now two Hierarchical Data Templates and one Data Template. Again you list the Data Template first since that is the lowest level in your Tree View. The next Hierarchical Data Template listed is the next level up from the lowest level, and finally you have a Hierarchical Data Template for the first level in your tree. You need to work your way from the bottom up when creating your Tree View hierarchy. XAML is processed from the top down, so if you attempt to reference a XAML key name that is below where you are referencing it from, you will get a runtime error. Figure 5: For three levels in a Tree View you will need two Hierarchical Data Templates and one Data Template. Each Hierarchical Data Template uses the previous template as its ItemTemplate. The ItemsSource of each Hierarchical Data Template is used to feed the data to the previous template. This is probably the most confusing part about working with the Tree View control. You are expecting the content of the current Hierarchical Data Template to use the properties set in the ItemsSource property of that template. But you need to look to the template lower down in the XAML to see the source of the data as shown in Figure 6. Figure 6: The properties you use within the Content of a template come from the ItemsSource of the next template in the resources section. Summary Understanding how to put together your hierarchy in a Tree View is simple once you understand that you need to work from the bottom up. Start with the bottom node in your Tree View and determine what that will look like and where the data will come from. You then build the next Hierarchical Data Template to feed the data to the previous template you created. You keep doing this for each level in your Tree View until you get to the last level. The data for that last Hierarchical Data Template comes from the ItemsSource in the Tree View itself. NOTE: You can download the sample code for this article by visiting my website at http://www.pdsa.com/downloads. Select “Tips & Tricks”, then select “Silverlight TreeView with Multiple Levels” from the drop down list.

    Read the article

  • Rebuilding array on 3ware 9690SA-8I

    - by Tim Jones
    I have a RAID10 (8x1TB) array on a 3ware 9690 card running on an Ubuntu 11.10 server. There was a kernel update, so I scheduled a reboot, after which the array was inaccessible. I checked the status: a drive had died in the array, but the controller had thrown the entire array into an 'inoperable' state instead of simply degraded (what's the point of the RAID now? ;-). After taking out the 'dead' drive I ran a quick test and found it completely functional, without a bad sector to be found. I tried to put the drive back in, but the array still marks the disk as degraded (is it remembering the serial number or something?) and the entire array as inoperable... So I swapped it out for a known working drive (not the same capacity but higher, so it should still work) and initiated a rebuild with the new drive as a replacement. This fails instantly with the error "(0x0B:0x0033): Unit busy : Failed to start Rebuild on Unit 0". The unit shouldn't be busy, as it is not mounted (the card itself is listed with lshw but the array it provides is not). I'm pretty much at an impasse now. I don't understand how a single drive failure on a RAID10 can make the entire array inaccessible; degraded I could understand, but inaccessible? I don't think the controller is at fault, as prior to the reboot it was completely functional.

    Read the article

  • Query total page count via SNMP HP Laserjet

    - by Tim
    I was asked to get hold of the total page counts for the 100+ printers we have at work. All of them are HP LaserJet or Business Inkjet printers of some description, and the vast majority are connected via some form of HP JetDirect network card/switch. After many hours of typing in IP addresses and copying and pasting the relevant figure into Excel, I have now been asked to do this on a weekly basis. This led me to think there must be an easier way; as an IT professional, I can surely work out some time-saving method to solve this issue. Suffice it to say I do not feel very professional now, after a day or so of trying to make SNMP work for me! From what I understand, the first thing is to enable SNMP on the printer. Done. Next I would need something to query the SNMP bit. I decided to go open source and free, and someone here recommended net-snmp as a decent tool (I would like to have just added the printers as nodes in SolarWinds, but we are somewhat tight on licences apparently). Next I need the name of the MIB. For this I believe the HP-LASERJET-COMMON-MIB has the correct information in it. Downloaded this and added it to net-snmp. Now I need the OID, which I believe after much scouring is printed-media-simplex-count (we have no duplex printers, that we are interested in at least). Running the following command yields the following demoralising output:

      snmpget -v 2c -c public 10.168.5.1 HP-LASERJET-COMMON-MIB:.1.3.6.1.2.1.1.16.1.1.1

    (The OID was derived from running snmptranslate -IR -On printed-media-simplex-count, which itself prints:

      Unlinked OID in HP-LASERJET-COMMON-MIB: hp ::= { enterprises 11 }
      Undefined identifier: enterprises near line 3 of C:/usr/share/snmp/mibs/HP-LASERJET-COMMON-MIB..txt
      .1.3.6.1.2.1.1.16.1.1.1
    )

    The snmpget output:

      Unlinked OID in HP-LASERJET-COMMON-MIB: hp ::= { enterprises 11 }
      Undefined identifier: enterprises near line 3 of C:/usr/share/snmp/mibs/HP-LASERJET-COMMON-MIB..txt
      HP-LASERJET-COMMON-MIB:.1.3.6.1.2.1.1.16.1.1.1:

    Am I barking up the wrong tree completely with this? My aim was to script it all to output to a file for all the IP addresses of the printers and then plonk that in Excel for my lords and masters to digest at their leisure. I have a feeling I am using either the wrong MIB or the wrong OID from said MIB (or both). Does anyone have any pointers on this for me? Or should I give up and go back to navigating each printer's web page individually (hoping not)?
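    For what it's worth, rather than fighting the HP-specific MIB, the lifetime page counter in the standard Printer MIB, prtMarkerLifeCount (OID .1.3.6.1.2.1.43.10.2.1.4.1.1), should be answered by most JetDirect-attached printers. The sketch below only illustrates the scripted approach and is not a tested solution: it assumes net-snmp's snmpget is on the PATH, a read community of public, and a printers.txt file with one IP address per line.

      import csv
      import subprocess

      # Lifetime page count from the standard Printer MIB (prtMarkerLifeCount).
      # Assumptions: the printers expose this OID and use the 'public' community.
      OID = ".1.3.6.1.2.1.43.10.2.1.4.1.1"
      COMMUNITY = "public"

      def page_count(ip):
          """Return the total page count reported by one printer, or None on failure."""
          cmd = ["snmpget", "-v", "2c", "-c", COMMUNITY, "-Ovq", ip, OID]
          try:
              out = subprocess.check_output(cmd, timeout=10)
          except (subprocess.CalledProcessError, subprocess.TimeoutExpired, OSError):
              return None
          return out.decode().strip()

      with open("printers.txt") as printers, open("page_counts.csv", "w", newline="") as report:
          writer = csv.writer(report)
          writer.writerow(["ip", "total_pages"])
          for line in printers:
              ip = line.strip()
              if ip:
                  writer.writerow([ip, page_count(ip)])

    The resulting CSV drops straight into Excel, and the script can be scheduled weekly from Task Scheduler or cron.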

    Read the article

  • Duplicity Full Backup Lifetime and Efficiency

    - by Tim Lytle
    I'm trying to work up a backup strategy for some clients, and am leaning towards duplicity for remote backup (we already use rdiff-backup for internal/on-location backups). Is it reasonable to want a full backup every so often? Since duplicity increments forward, each incremental backup relies on the previous increment, and all of them rely heavily on the last full backup. Should that become corrupt, bad things happen. A related question: does duplicity test the incremental backups for consistency? Assuming I do want a full backup every so often, how efficiently does duplicity create that full backup? Can/does it check file signatures and copy unchanged data from previous full backups/increments, basically creating a new 'full' archive by transferring new/changed data and merging existing unchanged data? Right now my concern is that running a full backup periodically is needed, but the consistently large bandwidth use of full backups will make this unreasonable for some clients.
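    As a rough sketch of the kind of schedule being described (the retention numbers and paths here are assumptions, not duplicity defaults): duplicity's --full-if-older-than option forces a new full chain periodically, verify re-checks an archive against the source, and remove-all-but-n-full prunes old chains. A minimal wrapper might look like this:

      import subprocess

      SOURCE = "/srv/data"  # hypothetical local path
      TARGET = "sftp://backup@example.com//backups/client1"  # hypothetical remote URL

      def run(args):
          print("+", " ".join(args))
          subprocess.check_call(args)

      # Start a fresh full chain once a month, otherwise do an incremental.
      run(["duplicity", "--full-if-older-than", "1M", SOURCE, TARGET])

      # Compare the newest backup against the source files, to catch bad chains early.
      run(["duplicity", "verify", TARGET, SOURCE])

      # Keep only the two most recent full chains and their increments.
      run(["duplicity", "remove-all-but-n-full", "2", "--force", TARGET])

    As far as I know, duplicity does not build a "synthetic" full from earlier chains, so each new full re-transfers the complete data set; the bandwidth cost the question worries about is inherent to the tool.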

    Read the article

  • DNS slow after losing DNS Server

    - by Tim
    We have set up a small Windows Server 2008 R2 network with a domain controller which is also acting as the DNS server for the network (we opted to install DNS when setting up the domain). This network isn't connected to the Internet in any way, so all machines have been configured to use the IP address of the domain controller as their primary DNS and no secondary DNS server has been configured. If we shut down or unplug the network cable from the domain controller, DNS lookups become quite slow and the performance of the network suffers. For example, running a ping command using a hostname takes around 5-6 seconds to resolve the name. I presume this is because it is looking for the DNS, then falling back to some other method of resolving the names as the DNS server is now gone. All the machines have static IP addresses so we are considering just putting all entries in the HOSTS file of each machine. However, it would be nice to have a centralised DNS in case we one day change the IP of one of the machines. Is there a better way to speed this up?
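    If the HOSTS-file route wins out, one low-tech way to keep it centralised is to maintain a single list of names and addresses and push it to every machine, so a future IP change only needs editing in one place. The sketch below assumes a shared hosts.csv of hostname,address pairs and the default Windows hosts path, and it needs administrative rights to run; configuring a secondary DNS server on a second box would of course be the cleaner fix.

      import csv

      HOSTS_PATH = r"C:\Windows\System32\drivers\etc\hosts"
      MARKER = "# --- managed entries below ---"

      # Read the centrally maintained list (hostname,ip per line).
      with open("hosts.csv") as f:
          entries = [row for row in csv.reader(f) if len(row) == 2]

      # Keep everything above the marker, then rewrite the managed block.
      with open(HOSTS_PATH) as f:
          kept = f.read().split(MARKER)[0].rstrip()

      with open(HOSTS_PATH, "w") as f:
          f.write(kept + "\n" + MARKER + "\n")
          for name, ip in entries:
              f.write(ip + "\t" + name + "\n")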

    Read the article

  • Nginx, memcached and cakephp: memcached module always misses cache

    - by Tim
    I've got a simple nginx configuration:

      server {
          server_name localhost;
          root /var/www/webroot;

          location / {
              set $memcached_key $uri;
              index index.php index.html;
              try_files $uri $uri/ @cache;
          }

          location @cache {
              memcached_pass localhost:11211;
              default_type text/html;
              error_page 404 @fallback;
          }

          location @fallback {
              try_files $uri $uri/ /index.php?url=$uri&$args;
          }

          location ~ \.php$ {
              fastcgi_param MEM_KEY $memcached_key;
              include /etc/nginx/fastcgi.conf;
              fastcgi_index index.php;
              fastcgi_intercept_errors on;
              fastcgi_pass unix:/var/run/php5-fpm.sock;
          }
      }

    I've got a CakePHP helper that saves the view into memcached using the MEM_KEY parameter. I have tested it and it's working; however, nginx always ends up in the @fallback location. How can I go about troubleshooting this behavior? What could the problem be?
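    One way to narrow this down is to check, over memcached's text protocol, whether a key exactly matching what nginx will ask for ($uri, e.g. /posts/index) actually exists. The key shown here is hypothetical; the point is that the key the CakePHP helper writes must match $uri byte for byte (leading slash, no query string), or the memcached_pass lookup will always miss and fall through to @fallback.

      import socket

      def memcached_get(key, host="127.0.0.1", port=11211):
          """Fetch one key using memcached's plain text protocol."""
          with socket.create_connection((host, port), timeout=2) as conn:
              conn.sendall(f"get {key}\r\n".encode())
              data = b""
              while not data.endswith(b"END\r\n"):
                  chunk = conn.recv(4096)
                  if not chunk:
                      break
                  data += chunk
          return data.decode(errors="replace")

      # Hypothetical URI; use one the helper should already have cached.
      print(memcached_get("/posts/index"))

    If the reply is just "END", the mismatch is on the CakePHP side (wrong key, or the value never gets written); if a value comes back, the problem is more likely on the nginx side.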

    Read the article

  • Unable to install Windows Installer 4.5 on Windows Server 2008

    - by Tim Trout
    One of the prerequisites for installing SQL Server 2008 R2 Express is Windows Installer 4.5. I have a couple of WS2008 machines I'm prepping, so I downloaded the appropriate version of the file (Windows6.0-KB942288-v2-x86.msu) to our file server and tried installing it on both machines. On both machines I get the cryptic 0x80070003 error: "the system cannot find the file specified", but it does not show which file it cannot find. I don't get this when I try to install the Windows XP version on one of my XP machines. Any clues as to what I might be missing or doing wrong? One TechNet help forum suggested I try installing the "System Update Readiness Tool", but this installer also fails with the same error code.

    Read the article

  • Is the Intel Core i5 mobile processor better than the Intel Core i7 mobile processor?

    - by Tim Barnaski
    I'm shopping for a new laptop from Dell (going to install Ubuntu on it) and I'm currently trying to sort out which processor I want. Based purely on the clock speed, it would seem that the core i5 is better than the core i7. The Intel® Core™ i5-430M has a 2.26GHz clock speed, while the Intel® Core™ i7-720QM Quad Core Processor only has a 1.6GHz clock speed. Would this not indicate that the i5 is faster than the i7?

    Read the article

  • pfSense Firewall or Linsys/Cisco router for small offices

    - by Tim Meers
    I'm about to start switching some networks around for multiple small offices. Each office has about 10 to 15 users and 10 to 15 computers, and each office has a spread of generic routers and access points. The routers vary from being used as actual routers to just being access points for wireless. Nothing formal has really ever been implemented for any of the 10 offices. What I'm wanting is to set up a pfSense box for each office to configure things like traffic shaping (for VoIP QoS), URL filtering, DHCP, static routing, and multiple VLANs. I'll then use some of the existing hardware for wireless, and maybe even integrate the wireless right into the firewall depending on the office layout. So my question: would it be better to do a full-blown firewall box, or to buy a new business-class or high-end consumer-class Linksys router to do the URL filtering, QoS and DHCP? Each option would allow for remote access and VPN for remote maintenance, and each would only cost a nominal amount of money for something decent, i.e. under $250.

    Read the article

  • Google Talk Video Chat with Gmail and 3rd Party App

    - by Tim Lytle
    Google Talk video chat only works in Gmail (because that makes sense, right?); however, other IM apps (I'm using Empathy specifically) support video chat over Google Talk. But can you use video chat outside your 'client'? Can someone on Gmail video chat with someone on Empathy? I'm having problems doing that, and suspect it may be because of an older (but certainly not that old) version of Empathy. But before I start troubleshooting I just wanted to make sure it's possible, because I believe at one time it wasn't. Correct?

    Read the article

  • SQL 2000: Intermittent Error 7399 with OLE DB Provider for Microsoft Jet

    - by Tim Lara
    I am using SQL Server 2000 on Windows Server 2003 SP2 and have set up a linked server to point at an Access 97 database using the OLE DB Provider 4.0 for Microsoft Jet. The problem I am having sounds almost exactly like the one described in this Microsoft KB article, except that the error I am getting is intermittent: http://support.microsoft.com/kb/814398 The SQL Server is running under the Local System account (which I don't have authority to change), and the Access 97 .mdb file that the linked server points to is on a Win XP Pro machine on the same LAN as the SQL Server machine, inside of a shared folder with permissions set to "Everyone" and "Full Control". Now, if the linked server connection never worked, it would make more sense that the problem is merely a permissions issue with the Local System account as the KB article above suggests, but the maddening thing is that sometimes the connection works just fine. When it fails, the error message is always the same: Error 7399: OLE DB provider 'Microsoft.Jet.OLEDB.4.0' reported an error. [OLE/DB provider returned message: Unspecified error] OLE DB error trace [OLE/DB Provider 'Microsoft.Jet.OLEDB.4.0' IDBInitialize::Initialize returned 0x80004005: ]. Also, not only does the linked server setup occasionally work just fine on this one particular SQL Server, what is supposed to be exactly the same setup on 25 other servers works just fine EVERY TIME! Obviously, something in the non-working setup must not be exactly the same, but I'm having trouble figuring out where to look for the differences since the error message SQL Server returns is so vague. I know our sysadmins have had numerous issues with Active Directory replication across our domain, so my best guess is that there is some sort of odd group policy corruption going on, but I thought I'd ask here to see if I might be overlooking something more straightforward. Any ideas on how to further isolate the error would be greatly appreciated! For the record, here is a list of things I've already tried: Rebooting the SQL Server machine. Fixes the issue temporarily, then the error returns within a minute or two of startup. (This is why I suspect a rogue group policy that is slow to apply fouling things up.) Importing all database objects from the Access 97 mdb into a new, clean mdb file. Makes no difference. Moving the Access 97 mdb file to a local directory on the SQL Server machine instead of accessing it via a share on the Win XP Pro LAN machine. This works, but does not solve the problem because the mdb needs to be on the client machine for performance reasons and the ability to work "stand alone". Plus, the same shared folder access works fine on all other servers / clients on my network. Compared all the SQL Server, Windows Server, etc versions to a known working setup and everything appears to be the same.

    Read the article

  • GConf error and gnome does not load properly in RHEL 5.3

    - by Tim
    Hello, I am using Red Hat Enterprise Linux 5.3. I created a user oracle on the system using the following command:

      useradd -g oinstall -G dba,oper -d /home/oracle oracle

    Now, when I try to log in as the user oracle, GNOME does not load properly and I get popup error messages like the following:

      GConf error: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP for ORBit, or you have NFS locks due to a system crash. (Details - 1: IOR file '/tmp/gconfd-cheetahman/lock/ior' not opened successfully, no gconfd located: Permission denied 2: IOR file /tmp/gconfd-cheetahman/lock/ior not opened successfully, no gconfd located: Permission denied)

    Any way to fix this? Thank you.
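    The usual culprits for this particular GConf message are stale per-user lock directories under /tmp (note that the error references /tmp/gconfd-cheetahman, i.e. a different user's directory) or a home directory the new oracle user cannot write to. A quick read-only check, written here as a Python 3 sketch (the equivalent ls -ld commands work just as well on a box whose system Python is older):

      import glob
      import os
      import pwd

      def owner(path):
          """Name of the user that owns a filesystem path."""
          return pwd.getpwuid(os.stat(path).st_uid).pw_name

      # Stale gconfd/ORBit state in /tmp owned by another user commonly causes
      # the "IOR file not opened successfully ... Permission denied" error.
      for path in glob.glob("/tmp/gconfd-*") + glob.glob("/tmp/orbit-*"):
          print(path, "owned by", owner(path))

      home = os.path.expanduser("~oracle")
      mode = oct(os.stat(home).st_mode & 0o777)
      print(home, "owned by", owner(home), "mode", mode)

    If /tmp/gconfd-oracle or the home directory turns out to be owned by someone else, correcting the ownership (or removing the stale /tmp directories and logging in again) is the usual remedy.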

    Read the article

  • MySQL InnoDB Corruption after power outage, possible to recover?

    - by Tim Hackett
    Hey Guys, I recently started trying to get Redmine up and running after a power outage that seems to have corrupted our InnoDB database in MySQL. Redmine had an extensive set of documentation that I would like to get even if redmine isn't able to run. The service fails on startup. I have tried inserting innodb_force_recovery = 4 per the documentation from the url in the error log. (also tried 1 thru 6 as I have backed up all directories after the corruption) I have verified through "mysqld-nt --print-defaults" that it is starting with the recovery option in the params. The machine is running Windows Server 2003 SP2, Xeon E5335 with 2GB RAM, MySQL is not mirrored to another machine, nor is the machine a mirror. I do not have any backups because the previous person did not set them up. Here is the error log: InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 100308 14:50:01 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 100308 14:50:02 InnoDB: Error: page 7 log sequence number 0 935521175 InnoDB: is in the future! Current system log sequence number 0 933419020. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: for more information. 100308 14:50:02 InnoDB: Error: page 2 log sequence number 0 935517607 InnoDB: is in the future! Current system log sequence number 0 933419020. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: for more information. 100308 14:50:02 InnoDB: Error: page 11 log sequence number 0 935517607 InnoDB: is in the future! Current system log sequence number 0 933419020. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: for more information. 100308 14:50:02 InnoDB: Error: page 5 log sequence number 0 972973045 InnoDB: is in the future! Current system log sequence number 0 933419020. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: for more information. 100308 14:50:02 InnoDB: Error: page 6 log sequence number 0 972984051 InnoDB: is in the future! Current system log sequence number 0 933419020. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: for more information. 100308 14:50:02 InnoDB: Error: page 1577 log sequence number 0 972737368 InnoDB: is in the future! Current system log sequence number 0 933419020. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: for more information. InnoDB: Error: trying to access page number 4294965119 in space 0, InnoDB: space name .\ibdata1, InnoDB: which is outside the tablespace bounds. 
InnoDB: Byte offset 0, len 16384, i/o type 10. InnoDB: If you get this error at mysqld startup, please check that InnoDB: your my.cnf matches the ibdata files that you have in the InnoDB: MySQL server. 100308 14:50:02InnoDB: Assertion failure in thread 960 in file .\fil\fil0fil.c line 3959 InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: about forcing recovery. 100308 14:50:02 [ERROR] mysqld-nt: Got signal 11. Aborting! 100308 14:50:02 [ERROR] Aborting 100308 14:50:02 [Note] mysqld-nt: Shutdown complete
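    If mysqld stays up at some innodb_force_recovery level (the higher levels put InnoDB into a read-only mode), the usual salvage path is to dump everything while it runs, then rebuild the InnoDB files and reload. A rough sketch of the dump step; the mysqldump path, the credentials, and the assumption that the Redmine schema is named redmine are all placeholders to adjust:

      import subprocess

      MYSQLDUMP = r"C:\Program Files\MySQL\MySQL Server 5.0\bin\mysqldump.exe"  # placeholder path
      USER = "root"
      PASSWORD = "changeme"  # placeholder credential
      DATABASES = ["redmine"]  # assumed schema name

      for db in DATABASES:
          dump_file = db + ".sql"
          with open(dump_file, "wb") as out:
              # With innodb_force_recovery set the server may refuse writes,
              # but mysqldump only needs to read.
              subprocess.check_call(
                  [MYSQLDUMP, "-u", USER, f"--password={PASSWORD}", "--databases", db],
                  stdout=out,
              )
          print("dumped", db, "to", dump_file)

    Once good dumps exist, the standard procedure is to stop MySQL, move the ibdata and ib_logfile files aside, remove innodb_force_recovery from my.cnf/my.ini, let InnoDB create a clean tablespace, and reload the dumps. Whether the dump succeeds at all depends on how badly the pages are damaged.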

    Read the article

  • MOSS 2007 SP2 - Error while provisioning SSP.

    - by Tim
    This is a brand new installation. I am trying to provision the first SSP for MOSS and I keep getting the following error: (Provisioning failed: A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.)) The error also keeps appearing in the Application event log as event ID 7888. Google searches tell me it's a connection issue between the SharePoint server and the SQL Server; however, this is a production SQL Server which has several databases on it that do not experience any problems. These databases also include the Central Admin and Web Application databases for SharePoint, which are all working fine. Any guidance would be greatly appreciated. Thanks in advance.

    Read the article

  • Splwow64 with TS Easy Print

    - by Tim Brigham
    I have an application (Sage MIP Fund Accounting) which exports data to Excel. In this process it uses an internal print driver. Since we upgraded from 2008 to 2008 R2, this export process causes system hangs. This has been isolated down to the splwow64 executable hanging while the Excel document is building. If I kill the splwow64 executable, things function properly (I just can't print the document once it is completed). This only occurs while using printer redirection with the Remote Desktop Easy Print function; if I remove the printer redirection, things work exactly as expected. I've spent the last couple of hours looking at hotfixes and driver upgrades, since this appears to be a problem specifically with how the Remote Desktop Easy Print printer is functioning. Is anyone aware of a hotfix which would be applicable in this situation? I don't want to grab every hotfix for redirected printing and start throwing them out there.

    Read the article

  • How do you organise the cables in your racks?

    - by Tim
    I'm migrating my current half-size rack to a full-size rack and want to take the opportunity to reorganize and sort our spaghetti-hell of ethernet cables. What system do you use for organising your cables? Do you use any tracking software? Do you physically label the cables? What are you identifying when you label each end: MAC address? Port number? Asset number? What do you use to label them? I was looking at a hand-held labeler, but the wrap-around laser printer sheets might work. The Brady ID PAL seems good, but it's pricey. Ideas?

    Read the article

  • Invoking 'Hotkey Mode' in IO Gear's GCS632U KVM Switch

    - by Tim Visher
    A while ago I changed the switch port key to Left Ctrl on my IOGEAR USB switch, and I'd like to change it back to Scroll Lock. When I did this, I have some memory of believing that I had found an error in the documentation for the switch regarding how to enter Hotkey Mode. Per the instructions in the manual (PDF), I'm supposed to be able to enter Hotkey Mode by holding Scroll Lock for two seconds, adding Minus on the keypad for one second, and then releasing Minus first and, within a second, releasing Scroll Lock. Ignoring the strangeness and fragility of this process, I'm looking for confirmation that it indeed works for anyone else. I can't remember why I thought it was wrong, but I clearly remember that I did (I even had a blog post that I was going to write about it, but it got lost in the sands of time). As an aside, I'd be interested to know if there is any way to force a reset of the switch without entering Hotkey Mode, as that would do exactly what I'm trying to do and I wouldn't have to mess with Hotkey Mode at all. Thanks in advance!

    Read the article

  • What backup utilities are available for Trixbox CE?

    - by Tim Post
    I'm running Trixbox CE (v2.6.2.3) and I need to make a full system backup of all configurations, recordings, databases, etc. in a manner that can be easily restored on a brand new installation of the same version of Trixbox CE. I started to write something myself using rsync; however, I feel like I'm probably missing some things and needlessly copying other things. What are my options?
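    For what it's worth, a home-grown script usually only needs to cover the handful of locations Trixbox/FreePBX and Asterisk actually use, plus dumps of the MySQL databases. The paths and database names below are the ones commonly found on a Trixbox CE install, but they are assumptions to verify against the actual box before trusting this sketch:

      import datetime
      import subprocess
      import tarfile

      # Directories that typically hold Asterisk/FreePBX config, voicemail,
      # recordings and the web GUI on Trixbox CE (verify on your install).
      PATHS = [
          "/etc/asterisk",
          "/var/lib/asterisk",
          "/var/spool/asterisk",
          "/var/www/html",
          "/etc/trixbox",
      ]
      # Databases commonly used by FreePBX and the CDR store (verify names).
      DATABASES = ["asterisk", "asteriskcdrdb"]

      stamp = datetime.date.today().isoformat()

      # Dump each database first so the SQL ends up inside the archive.
      # Add --password=... if the MySQL root account requires one.
      for db in DATABASES:
          with open(f"/tmp/{db}-{stamp}.sql", "wb") as out:
              subprocess.check_call(["mysqldump", "-u", "root", db], stdout=out)

      with tarfile.open(f"/tmp/trixbox-backup-{stamp}.tar.gz", "w:gz") as tar:
          for path in PATHS:
              tar.add(path)
          for db in DATABASES:
              tar.add(f"/tmp/{db}-{stamp}.sql")

    The resulting tarball can be shipped off-box with the existing rsync job; restoring would roughly be a matter of reinstalling the same Trixbox version, extracting the archive over the fresh install and reloading the SQL dumps.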

    Read the article

  • Modify registry for Internet Connection Sharing?

    - by Tim
    My OS is Windows XP. Quoted from "How to Change the IP Range for the Internet Connection Sharing DHCP service":

      1. Use Registry Editor to modify the data value of the IntranetInfo value in the following registry key: Hkey_Local_Machine\System\CurrentControlSet\Services\ICSharing\Settings\General. The first number listed is the internal IP address of the Connection Sharing host; the second number is the subnet IP address, separated by a comma. Enter the first IP address of the new range followed by the subnet mask, separated by a comma (for example, 169.254.0.1,255.255.0.0).
      2. Modify the data value of the Start value in the following registry key: Hkey_Local_Machine\System\CurrentControlSet\Services\ICSharing\Addressing\Settings. Change the value to the second address of the selected IP range. This address cannot be the same as or a lower value than the IP address used for the IntranetInfo key.
      3. Modify the data value for the Stop value in the same registry key. Enter the last IP address of the selected IP range.

    My registry does not have Hkey_Local_Machine\System\CurrentControlSet\Services\ICSharing at all, and I don't know how to follow the above three steps on my machine. Can someone guide me through it step by step?

    Read the article
