Search Results

Search found 5414 results on 217 pages for 'otn j master'.

  • SCO Unix - finding and pulling data

    - by lxlxlxl
    We recently acquired a company that was running a legacy application on a SCO Unix box. I can tell that the app is served over HTTP, but its reporting and data export functions are limited, and I'd like to find a master DB or data source on the box that maybe I can migrate into something more useful. Any ideas how I could find that? Would there be something in the Apache config or the startup scripts that points to a master DB? Do Unix applications have standard data source locations? I'm not sure I'm wording this right, so apologies in advance.
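
    One way to start hunting is to grep the web tier's configuration and the system startup scripts for connection strings or data directories. A rough sketch, with the caveat that the paths are assumptions (SCO boxes vary, and an old SCO grep may lack -r, in which case loop over files with find):

        # Look for database hints in the web server configuration (path is an assumption)
        grep -i "db\|dsn\|data" /usr/local/apache/conf/*.conf

        # Check the startup scripts for database daemons the app depends on
        grep -il "informix\|oracle\|mysql\|ctree" /etc/rc2.d/*

        # Find large, recently modified files that look like data stores
        find / -size +100000 -mtime -7 -print 2>/dev/null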

  • Kloxo setup error

    - by ron
    Hi, I've just purchased an unmanaged VPS from 2host, and their support told me to install Kloxo. However, I got the following error in step 1:

        root@vpshostingtips:~# wget http://download.lxlabs.com/download/kloxo/production/kloxo-install-master.sh
        --2010-05-06 04:17:04--  http://download.lxlabs.com/download/kloxo/production/kloxo-install-master.sh
        Resolving download.lxlabs.com... failed: Temporary failure in name resolution.
        wget: unable to resolve host address `download.lxlabs.com'

    Can somebody tell me the reason for the error? Sorry, I'm so new to VPS.
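
    The failure is DNS resolution on the VPS itself, not anything Kloxo-specific: the box cannot resolve download.lxlabs.com. A sketch of the usual fix, assuming the provider didn't populate the resolver config (the 8.8.8.8 nameserver is just an example):

        # See what resolvers the VPS currently has
        cat /etc/resolv.conf

        # Add a working nameserver (example: Google public DNS)
        echo "nameserver 8.8.8.8" >> /etc/resolv.conf

        # Verify resolution, then retry the installer download
        ping -c 2 download.lxlabs.com
        wget http://download.lxlabs.com/download/kloxo/production/kloxo-install-master.sh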

  • Server 2003 Functional Domain DFS Replication Problem (Files being moved to conflicted folder for no reason)

    - by Az
    We have 2 Windows 2003 servers configured with a DFS namespace, and we are running into problems with the redirected profiles we have set up. Basically, one server is the FSMO master for all roles, and another DC is the DFS namespace primary server. We have profile redirection set up using the \dfsnamespace\userprofile pattern. The FSMO master DC locks up occasionally (don't ask :), and when it does and we bring it back up, all of the user profiles hosted on the DFS namespace get overwritten when a user logs in: the current profile gets moved to the conflicted and deleted items folder. This strikes me as really odd, considering the whole point of using DFS was to provide some redundancy in case one server went down. Can anyone help? Thanks in advance! -Nate

  • Highly Available Web Application (LAMP)

    - by Anthony Rizzo
    I work for a small company that provides a web application for thousands of users. Earlier this year we had a single server hosted in one location. We recently acquired another server in a different location, with the hope of one day making it a redundant failover machine. I understand what to do with the MySQL replication (I plan on using a master-master replication setup, with rsync to sync the scripts and files), but I am at a standstill about how to configure the failover. Ideally I would like the two machines to accept requests, like round-robin DNS, but if one machine goes down I do not want requests to go to that machine. All of the solutions I have come across assume high availability of servers in the same location; these servers are in two completely different locations with different public IP addresses. Any help would be great. Thanks

  • Mirroring MySQL server with different configuration

    - by HTF
    I have to migrate a MySQL server to a different data centre, so I would like to create another MySQL slave server in the new DC and then promote it to master later on. I previously used LVM snapshots and Percona XtraBackup for this purpose, but this time I've optimized the MySQL configuration file in a way that prevents me from using those methods.

    Old server (backup):

        innodb_log_file_size = 256M
        innodb_log_files_in_group = 3

    New server (restore):

        innodb_log_file_size = 512M
        innodb_log_files_in_group = 2

    The XtraBackup script and LVM snapshots copy the whole directory structure, so the MySQL server won't start because the InnoDB logs have a different size. Is there any solution that avoids downtime in this case? I can't use mysqldump, as there are around 8000 databases and I would have to take the server down for a couple of hours. I was also thinking of using the old settings with XtraBackup and then changing them once the new server is promoted to master. That means less downtime, but I'm not sure if it will work? Thank you
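
    For what it's worth, the log-size mismatch itself can be fixed with a short controlled restart rather than hours of downtime: restore with the old settings, then resize the redo logs. A sketch, assuming an older 5.x server where the ib_logfile files must be removed by hand (MySQL 5.6.8+ resizes them automatically on restart):

        # 1. Restore the XtraBackup/LVM copy using the OLD settings
        #    (innodb_log_file_size=256M, innodb_log_files_in_group=3)

        # 2. Shut down cleanly so the redo logs are fully checkpointed
        mysql -e "SET GLOBAL innodb_fast_shutdown = 0;"
        /etc/init.d/mysql stop

        # 3. Move the old redo logs aside and switch my.cnf to the new
        #    values (512M, 2 files)
        mv /var/lib/mysql/ib_logfile* /tmp/

        # 4. Start MySQL; it recreates the redo logs at the new size
        /etc/init.d/mysql start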

  • Setting up Dependencies for Nagios

    - by hfranco
    I'm trying to set up dependencies for a router and several servers. What I want is to make the router the master host, so that if the router fails, none of the services on the servers behind it alert. Unfortunately this is easier said than done. Is there an easy way to set up service dependencies for all services on a server against a master host (my router)? Nagios has some documentation, but adding a separate service dependency definition for every single service would be extremely time-consuming. http://nagios.sourceforge.net/docs/3_0/objectdefinitions.html#servicedependency
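
    For the router-outage case specifically, host 'parents' may be a better fit than per-service dependencies: declare the router as each server's parent, and Nagios will mark the hosts behind it UNREACHABLE rather than DOWN, a state that can be excluded from notifications. A sketch, with placeholder names:

        define host {
            use        generic-host
            host_name  webserver01      ; placeholder
            address    10.0.0.11        ; placeholder
            parents    core-router      ; if the router is down, this host
                                        ; goes UNREACHABLE, not DOWN
        }

    Service dependencies are still the tool for suppressing alerts based on another service's state, but parents cover the "whole site behind a dead router" scenario with one line per host.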

  • Update server version for postgres 9.1.2

    - by Nai
    I'm trying to run a PostGIS SQL script and I'm running into the following error. Am I correct to say that updating my server version will fix it? If so, how can I go about updating it? I'm on Mac OS X Lion and installed Postgres via brew. Apparently I have an older version installed, 9.1.2, but installing PostGIS pulled Postgres 9.2.1 onto my system. How can I point my Postgres server at the new one?

        nai@nyc /usr/local/share/postgis (git::master) $ psql -d template_postgis -f postgis.sql
        SET
        BEGIN
        psql:postgis.sql:49: ERROR:  incompatible library "/usr/local/Cellar/postgresql/9.2.1/lib/postgis-2.0.so": version mismatch
        DETAIL:  Server is version 9.1, library is version 9.2.

        nai@nyc /usr/local/share/postgis (git::master) $ psql
        psql (9.2.1, server 9.1.2)
        WARNING: psql version 9.2, server version 9.1.
                 Some psql features might not work.
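
    A sketch of the two usual ways out, assuming a standard Homebrew layout (and noting that brew switch was still available in Homebrew at the time; it has since been removed):

        # Option A: keep serving 9.1 and make the client and libraries match it
        brew switch postgresql 9.1.2

        # Option B: migrate the data directory to 9.2.1 with pg_upgrade
        pg_ctl -D /usr/local/var/postgres stop
        initdb /usr/local/var/postgres9.2
        pg_upgrade -b /usr/local/Cellar/postgresql/9.1.2/bin \
                   -B /usr/local/Cellar/postgresql/9.2.1/bin \
                   -d /usr/local/var/postgres \
                   -D /usr/local/var/postgres9.2

    Either way, the point is that psql, the PostGIS library and the running server must all come from the same major version.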

  • 389 DS Architecture for Multiple Sites

    - by Kyle Flavin
    I'm looking to deploy 389 Directory Server in my environment to replace an existing iPlanet installation. I would be using it primarily to store user account data for authentication purposes. I have two physically separate data centres that I would like to share the same directory tree. My initial thinking is to set up 389 DS as follows:

    - a master/consumer pair in data centre A
    - a master/consumer pair in data centre B
    - a replication agreement between the two masters, to mirror the directory tree in both environments

    Does this sound like a reasonable approach? Is there a better way to do it (e.g. four masters)? Is there documentation on best practices for setting up 389 DS in situations such as this? Thanks.

  • What is the effect on LVM snapshot size when a file block is rewritten with its original contents?

    - by NevilleDNZ
    I'm exploring using LVM snapshots to take off-site incremental archives from a snapshot "master" file system. In essence: copy across only the files on the "master" that have changed since the last incremental copy to the "archive", then snapshot the "archive" to retain the increment. I am a bit puzzled about the block usage behaviour of the archive's own incremental snapshot. I'm expecting that LVM is not smart enough to know that the "file block" is actually unchanged, and that a new copy will be allocated and written for the fresh "archive" file system. Can anyone confirm this, or point me to a document/page that gives some hints? BTW: the OS disk cache, the hard disk's physical cache and the disk itself likewise don't need to do any actual "disk writes", as the "disk block" is equally unchanged. Any pointers to discussion of this style of optimisation would also be interesting.
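
    Your expectation matches how LVM snapshots behave: copy-on-write triggers on the first write to each chunk, regardless of whether the new bytes equal the old ones. One way to confirm it empirically is to rewrite a file in place and watch the snapshot's allocation grow; a sketch with placeholder volume and file names:

        # Snapshot the archive volume
        lvcreate --size 1G --snapshot --name archive-snap /dev/vg0/archive

        # Rewrite an existing file with identical contents
        cp /mnt/archive/somefile /tmp/somefile
        cp /tmp/somefile /mnt/archive/somefile
        sync

        # "Allocated to snapshot" rises even though no byte changed
        lvdisplay /dev/vg0/archive-snap | grep -i allocated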

  • "remote file operation failed" on Hudson

    - by Aveen
    I am running a Windows slave for Hudson 1.337 (Linux master). When running a project on the Windows node, it fails with the following message:

        Building remotely on winTestSlave
        Checking out a fresh workspace because there's no workspace at C:\hudson\***\ejb
        remote file operation failed

    It did work yesterday, and I have not upgraded Hudson or changed its configuration (or the slave's configuration) in any way. I establish the connection between the slave and the master by running the following command in a Cygwin prompt on the slave:

        java -jar slave.jar -jnlpUrl http://myserver/computer/winTestSlave/slave-agent.jnlp

    I saw the issue http://issues.hudson-ci.org/browse/HUDSON-5374 and applied its work-around, but that did not work. I also tried a newer version of slave.jar (version 1.356), but that did not work either. Does anyone have any idea how to fix this? I really cannot find more information anywhere else!

  • OS X server 10.6 - how to restore default groups?

    - by Zoran Simic
    I've set up my OS X server as an Open Directory master first, then (experimenting) changed it to a standalone server, then set it back as an Open Directory master again. Now all the default groups I saw before are gone (Domain Administrators, Domain Users, etc.). Do you know how to restore these groups? Note that the groups are gone only from the Workgroup Manager UI; they do seem to still exist otherwise. id -G gives the usual list of groups. If I create an account and make its primary group 'staff', Workgroup Manager shows all the inherited groups properly (but not on the main list). If I create an account and associate it with a new group I just created, then the account has no inherited groups...

  • Session persistence between multiple Rails / Unicorn servers with Redis as session_store on AWS

    - by d_ethier
    I've got 2 nginx EC2 instances pointing to 2 Unicorn EC2 instances in a round-robin load-balanced configuration, with the two nginx instances behind the Elastic Load Balancer. Both Unicorn instances have a Redis session_store configured, in a master/slave configuration with an Elastic IP attached to the master. I've tried configuring session stickiness on the load balancer, but sessions are lost on each page refresh. I'm using the redis-store gem for the session_store configuration and Redis support. Anyone have any ideas as to why this is not working?

  • trigger script on postfix delivery errors

    - by edovino
    I'm trying to get Postfix to run a script on soft (4xx) and hard (5xx) delivery errors, but I'm not sure where to start. If I understand things correctly, I could insert (pipe-based) filters in the master.cf file, there's a whole 'milter' infrastructure available, and finally I suppose I could simply grep through the mail.info logs. So, any advice? Should I go the 'handle it via master.cf' route, and if so, which daemon should I intercept - 'bounce'? The grep-the-logs route is probably simplest, but I can't help feeling that there is a better way. Any advice appreciated!
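
    If a copy of each failure event is enough (rather than a true inline hook), Postfix's notify_classes machinery may cover it without touching master.cf: the bounce daemon mails a notice for each bounced (5xx) or delayed (4xx) message, and an alias can pipe those notices into a script. A minimal sketch; the handle-bounce.sh script and the bounce-watch address are hypothetical:

        # Send notices for hard bounces and soft delays to one address
        postconf -e "notify_classes = bounce, delay"
        postconf -e "bounce_notice_recipient = bounce-watch@localhost"
        postconf -e "delay_notice_recipient = bounce-watch@localhost"
        postfix reload

        # Pipe that mailbox into a script via /etc/aliases
        echo 'bounce-watch: "|/usr/local/bin/handle-bounce.sh"' >> /etc/aliases
        newaliases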

  • PC cannot see Clonezilla server when using PXE Boot

    - by r2b2
    Hi, I have about 15 workstations, all with the same hardware specs, so I figured Clonezilla is perfect for cloning them. I followed everything this guide says: cloning groups of school computers. The guide makes it all look easy, but when I tried setting up Clonezilla Server Edition on my laptop (Kubuntu 10.10) and configured the master computer for PXE boot, the master computer could not see the Clonezilla server. It just shows 'DHCP...', which suggests it is trying to obtain an IP address but failing. Please help. Thanks!
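
    'DHCP...' with no address means the client's PXE broadcast never gets an offer, so the first checks belong on the server side. A rough diagnostic sketch, assuming the DRBL-based Clonezilla SE setup from that guide (the interface name is an example):

        # Is a DHCP server actually listening on the lab-facing interface?
        sudo netstat -ulnp | grep ':67 '

        # Re-run the DRBL network configuration if the NIC or addresses changed
        sudo drblpush -i

        # Watch for the client's DISCOVER and the server's OFFER during PXE boot
        sudo tcpdump -i eth0 port 67 or port 68

    A common culprit is the DHCP service being bound to the wrong interface, e.g. the laptop's wireless NIC instead of the wired one facing the workstations.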

  • JPA2 adding referential constraint to table complicates criteria query with lazy fetch, need advice

    - by Quaternion
    Following is a lot of writing for what I feel is a pretty simple issue. The root of the issue is my ignorance; I'm not looking so much for code as for advice. Table: Ininvhst (Inventory-schema inventory history), column ihtran (inventory history transfer code). Using an old entity mapping I have:

        @Basic(optional = false)
        @Column(name = "IHTRAN")
        private String ihtran;

    ihtran is really a foreign key to table Intrnmst ("Inventory Transfer Master", which contains a list of "transfer codes"). This was not expressed in the database, so I placed a referential constraint on Ininvhst; regenerating the JPA2 entity classes produced:

        @JoinColumn(name = "IHTRAN", referencedColumnName = "TMCODE", nullable = false)
        @ManyToOne(optional = false)
        private Intrnmst intrnmst;

    Previously I was using JPA2 to select the records (Ininvhst entities) from the Ininvhst table where "ihtran" was one of a set of values. I used in.value() to do this; here is a snippet:

        cq = cb.createQuery(Ininvhst.class);
        ...
        In<String> in = cb.in(transactionType); // get in expression for transaction types
        for (String s : transactionTypes) {     // has a value
            in = in.value(s); // check if the strings we are looking for exist in the transfer master
        }
        predicateList.add(in);

    My issue is that Ininvhst used to contain a string called ihtran, but now it contains an Intrnmst... so I now need a path expression:

        this.predicateList = new ArrayList<Predicate>();
        if (transactionTypes != null && transactionTypes.size() > 0) {     // list of strings has some values
            Path<Intrnmst> intrnmst = root.get(Ininvhst_.intrnmst);        // get transfer master from Ininvhst
            Path<String> transactionType = intrnmst.get(Intrnmst_.tmcode); // get transaction types from transfer master
            In<String> in = cb.in(transactionType); // get in expression for transaction types
            for (String s : transactionTypes) {     // has a value
                in = in.value(s); // check if the strings we are looking for exist in the transfer master
            }
            predicateList.add(in);
        }

    Can I add ihtran back into the entity alongside a join column, so that both reference "IHTRAN"? Or should I use a projection to somehow return Ininvhst along with the ihtran string, which is now part of the Intrnmst entity? Or should I use a projection to return Ininvhst and somehow limit Intrnmst to just the ihtran string? Further information: I am using the resulting list of selected Ininvhst objects in a web application, where the class containing the list is transformed into a JSON object. There are probably quite a few serialization methods that would navigate the object graph; the problem is that my current fetch strategy is lazy, so serialization hits the join entity (Intrnmst intrnmst) when no entity manager is available. At this point I have prevented the join column from serializing, but now I am missing a critical piece of data. I think I've said too much, but not knowing enough, I don't know what you JPA experts need. What I would like is for my original entity to have both the plain string and the association mapped to the same column (ihtran), but if this isn't possible or advisable, I want to hear what I should do and why. Pseudo-code/English is more than fine.

  • MySQL Cluster virtual IP

    - by user225995
    I am new to MySQL Cluster, and the cluster and versions are not my choice. I set up four machines: two of them are management nodes, and two are data/SQL nodes (ndbd and mysqld). I also integrated a master/slave configuration with MySQL Utilities. Everything is working fine. MySQL version 5.6.17, NDB 7.3.5, servers Ubuntu 14.04. There will not be many transactions; the only important thing is HA. Everything must be duplicated. My problem is the virtual IP. Since I have only one farm, which has a master/slave configuration, how can I do it without a proxy? If I must use a proxy, which proxy is better?
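
    Without a proxy, the usual building block for a floating address between two MySQL front ends is VRRP via keepalived: both nodes share a virtual IP, and the backup takes it over when the active node fails. A minimal sketch of /etc/keepalived/keepalived.conf on the active node; the addresses, interface and router id are placeholders:

        vrrp_instance MYSQL_VIP {
            state MASTER            # use BACKUP plus a lower priority on the peer
            interface eth0
            virtual_router_id 51
            priority 100
            advert_int 1
            virtual_ipaddress {
                192.168.0.100/24    # clients connect to this address
            }
        }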

  • Clonezilla, nLite and PCs with different hardware specs

    - by r2b2
    I used Clonezilla to create and restore images from a master computer to other workstations with the same specs. The problem now is that there are new computers whose hardware specs are different from the ones I maintained (mostly the video card is different). Is it possible to create a customized Windows XP installer using nLite and integrate all potential video card and motherboard drivers? If I then use this nLite ISO to install on a master computer, which I will later clone and restore to other workstations, will Windows XP still pick up the correct driver set? During XP installation, does the installer transfer all the drivers to the computer's hard disk? Thanks! Ryan

  • Data capture from other sheets into a Summary sheet

    - by Hemant
    I have an Excel workbook with a Summary sheet, a Pending sheet and a Master sheet. My requirement is below; I am trying to develop macro/VBA logic in Excel for it.

    • I want to control this workbook from the Summary sheet.

      o Generate Fault Summary:
        - I have the logic set up, but it doesn't warn if the sheet name already exists, so I need to add that check.
        - When we press the Fault Report Summary command button, it should copy the Master sheet, name the copy from cell "A6", and hide the Master sheet. When another month name is selected, it should generate the sheet for that month name.

      o Generate Toll System Uptime:
        - When I select the sheet name and the week and press the Enter command button, it should fetch the result from that sheet. Each sheet carries its month in cell B2.
        - The formulas to calculate uptime week-wise are:

            Week-01: =(1680-SUMIFS(L5:L23,B5:B23,">="&B2,B5:B23,"<="&(B2+6)))/1680
            Week-02: =(1680-SUMIFS(L5:L23,B5:B23,">="&(B2+7),B5:B23,"<="&(B2+13)))/1680
            Week-03: =(1680-SUMIFS(L5:L23,B5:B23,">="&(B2+14),B5:B23,"<="&(B2+20)))/1680
            Week-04: =(1680-SUMIFS(L5:L23,B5:B23,">="&(B2+21),B5:B23,"<="&(B2+27)))/1680
            Month:   =(1680-SUMIFS(L5:L23,B5:B23,">="&B2,B5:B23,"<="&(DATE(YEAR(B2),1+MONTH(B2),1)-1)))/1680

        - The result should appear in the Summary sheet at cell B18.

      o Pending Fault Report Summary:
        - Reports are segregated by status: a fault with OPEN status is pending; a fault with CLOSE status is closed.
        - Any fault with OPEN status in any month sheet (Jan-13, Feb-13, Mar-13, etc.) should also appear in the Pending sheet, in ascending date order.
        - When a fault's status changes to CLOSE, it should move to the month sheet matching its creation date and no longer appear in the Pending sheet.
        - Each fault has a reported date, and we monitor all faults by reported date.
        - When we press the Update Fault Report Summary command button, it should update according to the logic above.
        - Sometimes we export the pending fault report, so date pickers should be available for choosing the start and end dates. Pressing the Export command button should export the pending fault report, savable as Excel or PDF.

  • Is it possible to reference a LinkButton outside an UpdatePanel as the update trigger?

    - by Selase
    I have a page based on a master page, and as such I can only see the content placeholders I used in the master page showing up in the .aspx pages based on it. The source code is shown below:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Site.Master" AutoEventWireup="true" CodeBehind="CaseAdmin.aspx.cs" Inherits="Prototype4.CaseAdmin" %>
        <%@ PreviousPageType VirtualPath="~/Account/Login.aspx" %>
        <asp:Content ID="Content1" ContentPlaceHolderID="HeadContent" runat="server">
        </asp:Content>
        <asp:Content ID="CaseRightNews" ContentPlaceHolderID="RightNewsItem" runat="server">
        </asp:Content>
        <asp:Content ID="CaseLeftNav" ContentPlaceHolderID="LeftNavigation" runat="server">
            <div style="margin-top:20px; margin-bottom:20px;">
                <p class="actionButton">
                    <asp:LinkButton ID="OpenCaseLinkButton" runat="server" onclick="OpenCaseLinkButton_Click">Open Case</asp:LinkButton>
                </p>
                <p class="actionButton">
                    <asp:LinkButton ID="RegisterExhibitLinkButton" runat="server" onclick="RegisterExhibitLinkButton_Click">Register Exhibit</asp:LinkButton>
                </p>
            </div>
        </asp:Content>
        <asp:Content ID="CaseMainContnt" ContentPlaceHolderID="MainContent" runat="server">
            <asp:ScriptManager ID="ScriptManager" runat="server" />
            <asp:UpdatePanel ID="CaseMainCntntUpdatePanel" UpdateMode="Conditional" runat="server">
                <Triggers>
                    <asp:AsyncPostBackTrigger ControlID="" EventName="Click"/>
                </Triggers>
                <ContentTemplate>
                    <%--Some text here to inform the user to click on the Open Case button to display the open case form--%>
                </ContentTemplate>
            </asp:UpdatePanel>
            <asp:UpdatePanel ID="UpdatePanel1" UpdateMode="Conditional" runat="server">
                <Triggers>
                    <asp:AsyncPostBackTrigger ControlID="" EventName="Click"/>
                </Triggers>
                <ContentTemplate>
                    <%--Some text here to inform users to click on the Add Exhibit button to display the add exhibit form--%>
                </ContentTemplate>
            </asp:UpdatePanel>
        </asp:Content>

    The section of the page I wish to change upon update is the main content of the page. For this reason I placed the UpdatePanel inside the MainContent placeholder, since it can't sit outside a content placeholder. However, the buttons I wish to use as the triggers that fire the update are in another content placeholder (CaseLeftNav). How can I get those buttons to act as the triggers while changing only what appears in the main content area? Also, I tried getting the UpdatePanel to work just so I could see if it updates well, and it turned out really badly: I added some LinkButtons in the ContentTemplate area and used them as triggers for testing, and the changes took over the entire page instead of just appearing in the content area. I actually just want to load a form created in another page into the main content area... I seriously need help with this. Every little bit of help, detail and information is dearly appreciated; thanks so much in advance.

  • Creating an IP alias on a bonded interface, i.e. bond0:1

    - by bobothechimp
    System: HP ProLiant DL360 G5 running CentOS 5.4. The bonded interface has been working fine for a long time. I just went to add an alias the way I always have on a regular interface, and on first check it works (pinging on the local box), but it is not accessible from outside (iptables is turned off). In addition, with this setup the normal network response started to decline, hanging for around a minute before I could get a prompt on login. Here are my config files:

        [root network-scripts]# cat ifcfg-eth0
        DEVICE=eth0
        BOOTPROTO=none
        ONBOOT=yes
        MASTER=bond0
        SLAVE=yes
        USERCTL=no

        [root network-scripts]# cat ifcfg-eth1
        DEVICE=eth1
        BOOTPROTO=none
        ONBOOT=yes
        MASTER=bond0
        SLAVE=yes
        USERCTL=no

        [root network-scripts]# cat ifcfg-bond0
        DEVICE=bond0
        BONDING_OPTS="mode=1 miimon=100"
        BOOTPROTO=none
        ONBOOT=yes
        NETWORK=10.2.1.0
        NETMASK=255.255.255.0
        IPADDR=10.2.1.11
        USERCTL=no

        [root network-scripts]# cat ifcfg-bond0:1
        DEVICE=bond0:1
        BOOTPROTO=static
        ONBOOT=yes
        NETWORK=10.2.1.0
        NETMASK=255.255.255.0
        IPADDR=10.2.1.12
        USERCTL=no

    Any thoughts?

  • Which hardware to VM ratio for Build-Server virtualization?

    - by Martin
    Let's start by saying that I'm a total noob with regard to server virtualization. That is, I use VMs often during development, but they're simple desktop machines for me. Now to my problem: we have two (physical) build servers, one master, one slave, running Jenkins to do daily tasks and build (Visual C++ builds) our release packages. As such, these machines are critical to our company, because we do lots of releases, and without a controlled environment to create them, we can't ship fixes. (And currently there's no proper backup of these machines in place, because they do not hold any data as such; it just would be a major pain to set them up again should they go bust. But setting up a backup that I'd know would work in case of HW failure would be even more pain, so we have skipped that until now.) Therefore (and for scaling purposes) we would like to go virtual with these machines. Outsourcing to the cloud is not an option, not at all, so we'll have to use on-premises hardware and VM hosts. Each build server (master or slave) is a fully configured (installs, licenses, shares in the master's case, ...) Windows Server box. I would now ideally like to just convert the two existing physical nodes to VM images and run them, later adding more VM slave instances as clones of the existing ones. And here begin my questions: Should I go for one VM per hardware box, or for a single piece of hardware running multiple VMs? The latter would mean a single point of failure hardware-wise, which doesn't seem like a good idea... or? Since we're doing C++ compilation with Visual Studio, I assume that during a build the hardware (processor cores plus disk) will be fully utilized, so going with more than one build node per machine doesn't seem to make much sense? Regarding hardware options, does it make any difference which VM software we use (VMware, MS, VirtualBox, ...)? (We're using Windows exclusively for our builds.) Regarding budget: we have a normal small-company (20 developers) budget for this. ;-) That is, if it's going to cost a few k$, it's going to cost. If it's free, all the better. I strongly prefer solutions without multi-k$ maintenance costs per year.

  • Samba / smbd on CentOS 6.5

    - by Satalink
    I've installed Samba4 and have the smb.conf file as follows:

        [global]
        workgroup = WORKGROUP
        server string = Samba Server
        realm = REXIALO.COM
        netbios name = REXIALO.COM
        security = user
        map to guest = Bad Password
        bind interfaces only = no
        interfaces = lo venet0
        log file = /var/log/samba/samba.log
        max log size = 1000

        [webroot]
        path = /usr/local/apache/htdocs
        comment = Example.com webroot directory
        read only = No

    I can connect from the same server with smbclient. Localhost:

        # smbclient -L localhost -U root
        Enter root's password:
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 4.1.11]

            Sharename       Type      Comment
            ---------       ----      -------
            webroot         Disk      RexiAlo webroot directory
            IPC$            IPC       IPC Service (RexiAlo Samba Server)

            Server               Comment
            ---------            -------

            Workgroup            Master
            ---------            -------

    Network:

        # smbclient -L rexialo.com -U
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 4.1.11]

            Sharename       Type      Comment
            ---------       ----      -------
            webroot         Disk      RexiAlo webroot directory
            IPC$            IPC       IPC Service (RexiAlo Samba Server)

            Server               Comment
            ---------            -------

            Workgroup            Master
            ---------            -------

    The problem is when I try to map to the smb webroot from Windows 7: it asks for user/pass, then just times out and prompts for credentials again. The samba.log file does not show any activity other than the startup of the smbd process. Any help would be appreciated.
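
    With security = user, Windows must present credentials for an account whose password hash smbd knows, and map to guest = Bad Password silently downgrades a wrong password to guest, which from Windows 7 often looks like the endless credential prompt described. A hedged sketch of things worth trying (someuser is a placeholder for an existing Unix account):

        # Give an existing Unix user a Samba password
        smbpasswd -a someuser

        # Confirm the share is reachable as that user and the config is sane
        smbclient //localhost/webroot -U someuser
        testparm -s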

  • Cross join problem query

    - by user66121
    I have the following table structure:

        HUB_DETAILS (Master)
            Branch_ID
            Branch_Name

        VTRCheckList (Master)
            CLid
            CLName

        VTRCheckListDetails (Detail)
            CLid
            Branch_ID
            VTRValue
            vtrRespDate

    When I run the following query, it does come back with all the checklist names along with all branch names, but it shows the value in every branch, when in fact only one branch has data in the given date range. It should show 0 if there is no data for a checklist in the respective branch.

        SELECT VTRCheckList.CLName, Hub_Details.BranchName,
               SUM(CAST(VTRCheckListDetails.VtrValue AS int)) AS 'Total'
        FROM VTRCheckListDetails
        INNER JOIN VTRCheckList ON VTRCheckListDetails.CLid = VTRCheckList.CLid
        CROSS JOIN Hub_Details
        WHERE CONVERT(date, VTRCheckListDetails.vtrRespDate, 105) >= CONVERT(date, '01-01-2011', 105)
          AND CONVERT(date, VTRCheckListDetails.vtrRespDate, 105) <= CONVERT(date, '30-01-2011', 105)
        GROUP BY VTRCheckList.CLName, Hub_Details.BranchName

  • How do you create a SQL query in Excel 2007 with a dynamic date range?

    - by Jordan
    I am trying to create a reporting spreadsheet that can print reports for a given time period. The query below works, but when I try to use a '?' parameter in place of the date, I get an error after selecting a cell containing my date. If I use single quotes ('?'), I get a conversion-from-string-to-date/time failure; if I don't (?), I get a syntax error near @p1. Eventually I will need either a start and end date, or a formula adding a month or shift to the starting date/time, to filter the data down to the important information. The query was built in Microsoft Query.

        SELECT FloatTable.DateAndTime, TagTable.TagName
        FROM master.dbo.FloatTable FloatTable, master.dbo.TagTable TagTable
        WHERE FloatTable.TagIndex = TagTable.TagIndex
          AND ((FloatTable.DateAndTime={ts '2012-06-01 00:00:00'}))

    Any assistance would be much appreciated. Thanks in advance.

  • Puppet: get real-time status of catalog evaluation and post to a remote server

    - by txworking
    According to this article, http://docs.puppetlabs.com/guides/puppet_internals.html, there are four phases when the puppet agent gets a catalog from the master:

        resource generation -> relationships -> evaluation -> reporting

    Reporting: as the transaction progresses, it collects logs and metrics on what it does. At the end of evaluation, it turns this information into a report, which it sends to the server (if requested). So at the end of evaluation the puppet agent generates a report and sends it to the master. Is there a way to get real-time status of the evaluation phase and post it to a remote log collector? Glad for any suggestions.
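
    Reporting is inherently end-of-run rather than real-time, but the stock http report processor will at least get each finished report to a remote collector with no custom code; anything truly live would need a custom report processor or log tailing. A sketch of the relevant puppet.conf settings (the collector URL is a placeholder):

        # puppet.conf on the agent
        [agent]
            report = true

        # puppet.conf on the master
        [master]
            reports = http
            reporturl = http://logcollector.example.com:8080/reports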
