Search Results

Search found 18014 results on 721 pages for 'build automation'.

Page 75/721 | < Previous Page | 71 72 73 74 75 76 77 78 79 80 81 82  | Next Page >

  • How to troubleshoot errors with TeamCity

    - by Tomas Lycken
    I'm following this guide to set up a small environment for source control and automated builds - mostly for learning what it is and how it works, but also for use in those of my hobby projects that I believe will actually be useful some day. However, at the step where he commits and builds, I fail to get a success status in the TeamCity history log. I keep getting the error described in the stack trace below. I have verified with Windows Explorer that the solution file it can't find is actually there, so I really don't know what to do. How do I fix/troubleshoot this?

        [15:16:06]: Checking for changes
        [15:16:08]: Clearing temporary directory: C:\Program Files\JetBrains\BuildAgent\temp\buildTmp
        [15:16:08]: Checkout directory: C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588
        [15:16:08]: Updating sources: server side checkout...
        [15:16:08]: [Updating sources: server side checkout...] Building incremental patch for VCS root: DemoProjects
        [15:16:09]: [Updating sources: server side checkout...] Repository sources transferred
        [15:16:09]: [Updating sources: server side checkout...] Updating C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588
        [15:16:10]: Start process: "c:\Program Files\JetBrains\BuildAgent\bin\..\plugins\dotnetPlugin\bin\JetBrains.BuildServer.MsBuildBootstrap.exe" "/workdir:C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588" /msbuildPath:C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe
        [15:16:10]: in: C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588
        [15:16:11]: TeamCity MSBuild bootstrap v5.1 Copyright (C) JetBrains s.r.o.
        [15:16:11]: Application failed with internal error:
        [15:16:11]: Failed to find project file at path: C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588\Nehemia\trunk\Nehemiah.sln
        [15:16:11]: System.Exception: Failed to find project file at path: C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588\Nehemia\trunk\Nehemiah.sln
        [15:16:11]:    at JetBrains.BuildServer.MSBuildBootstrap.Impl.MSBuildBootstrapFactory.Create(IClientRunArgs args) in c:\Agent\work\6223f0c8b1d45aaa\src\MSBuildBootstrap.Core\src\Impl\MSBuildBootstrapFactory.cs:line 25
        [15:16:11]:    at JetBrains.BuildServer.MSBuildBootstrap.Program.Run(String[] _args) in c:\Agent\work\6223f0c8b1d45aaa\src\MSBuildBootstrap\src\Program.cs:line 66
        [15:16:11]: Process exited with code -11
        [15:16:11]: Build finished

    Read the article

  • Programmatically convert *.odt file to MS Word *.doc file using an OpenOffice.org basic macro

    - by Chen Levy
    I am trying to build a reStructuredText to MS Word document tool-chain, so I will be able to save only the rst sources in version control. So far I have rst2odt.py to convert reStructuredText to OpenOffice.org Writer format. Next I want to use the most recent OpenOffice.org (currently 3.1), which does a pretty decent job of generating a Word 97/2000/XP document, so I wrote the macro:

        sub ConvertToWord(file as string)
            rem ----------------------------------------------------------------------
            rem define variables
            dim document as object
            dim dispatcher as object
            rem ----------------------------------------------------------------------
            rem get access to the document
            document = ThisComponent.CurrentController.Frame
            dispatcher = createUnoService("com.sun.star.frame.DispatchHelper")
            rem ----------------------------------------------------------------------
            dim odf(1) as new com.sun.star.beans.PropertyValue
            odf(0).Name = "URL"
            odf(0).Value = "file://" + file + ".odt"
            odf(1).Name = "FilterName"
            odf(1).Value = "MS Word 97"
            dispatcher.executeDispatch(document, ".uno:Open", "", 0, odf())
            rem ----------------------------------------------------------------------
            dim doc(1) as new com.sun.star.beans.PropertyValue
            doc(0).Name = "URL"
            doc(0).Value = "file://" + file + ".doc"
            doc(1).Name = "FilterName"
            doc(1).Value = "MS Word 97"
            dispatcher.executeDispatch(document, ".uno:SaveAs", "", 0, doc())
        end sub

    But when I execute it:

        soffice "macro:///Standard.Module1.ConvertToWord(/path/to/odt_file_wo_ext)"

    I get a "BASIC runtime error. Property or method not found." message on the line:

        document = ThisComponent.CurrentController.Frame

    And when I comment that line out, the above invocation completes without error but does nothing. I guess I need to somehow set the value of document to a newly created instance, but I don't know how to do it. Or am I going about this in a completely backward way? P.S. I will consider JODConverter as a fallback, because I try to minimize my dependencies.
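    When a macro is launched straight from the soffice command line there is no document open yet, so ThisComponent has nothing to point at. A minimal, untested sketch of the usual workaround is to load the file through the Desktop service instead of dispatching .uno:Open (the Hidden flag and the file paths are assumptions, not taken from the question):

        sub ConvertToWord(file as string)
            dim desktop as object
            dim doc as object
            dim loadArgs(0) as new com.sun.star.beans.PropertyValue
            dim saveArgs(0) as new com.sun.star.beans.PropertyValue

            rem open the .odt without showing a window
            loadArgs(0).Name = "Hidden"
            loadArgs(0).Value = True
            desktop = createUnoService("com.sun.star.frame.Desktop")
            doc = desktop.loadComponentFromURL("file://" & file & ".odt", "_blank", 0, loadArgs())

            rem write it back out through the Word 97 export filter
            saveArgs(0).Name = "FilterName"
            saveArgs(0).Value = "MS Word 97"
            doc.storeToURL("file://" & file & ".doc", saveArgs())
            doc.close(False)
        end sub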

    Read the article

  • Sublime Text 2 Keyboard shortcut to open file in Chrome/firefox in windows

    - by samdroid
    I followed the instructions for Windows 7 to set up Chrome. No luck!

        {
            "cmd": ["C:\Program Files (x86)\Google\Chrome\Application", "$C:\Users\gmu\Desktop\June_15_2012"]
        }

    After entering the file location/path, in what format should I save it? I am a noobie, sorry to ask this question. Anything helps! If I press F7 I get the following message:

        Error trying to parse build system: Invalid escape in C:\Users\gmu\AppData\Roaming\Sublime Text2\Packages\User\Chrome.sublime-build:2:9

    Thanks
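    The "Invalid escape" error comes from the unescaped backslashes: a .sublime-build file is JSON, so every backslash in a Windows path has to be doubled, and the command should point at the chrome.exe binary rather than its folder. A sketch of what the file (saved as Chrome.sublime-build under Packages/User, the location already shown in the error) might look like; the exact Chrome path is an assumption, and $file is Sublime's variable for the file currently open:

        {
            "cmd": ["C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe", "$file"]
        }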

    Read the article

  • NAnt doesn't recognize patternset type

    - by veljkoz
    I've downloaded the new version of NAnt (the 0.91 Alpha 1 release) and it doesn't seem to recognize the patternset, as in:

        <?xml version="1.0" encoding="UTF-8" ?>
        <project name="Testing project" default="testMe">
            <patternset id="build.files">
                <include name="*.dll" />
            </patternset>
            <target name="testMe">
                <echo message="hi" />
            </target>
        </project>

    The error I get when running nant /f:mytest.build is:

        Invalid element <patternset>. Unknown task or datatype.

    Am I missing something?
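    For what it's worth, <patternset> is an Ant datatype rather than a NAnt one. If the goal is a reusable list of files, a rough NAnt equivalent is a named <fileset> that other tasks pull in with refid; a sketch (the <copy> target is only an illustration, not from the question):

        <fileset id="build.files">
            <include name="*.dll" />
        </fileset>

        <target name="copy-libs">
            <copy todir="output">
                <fileset refid="build.files" />
            </copy>
        </target>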

    Read the article

  • How to share code with continuous integration

    - by alchemical
    I've just started working in a continuous integration environment (TeamCity). I understand the basic idea of not getting so abstracted out in your code that you are never able to build it to test functionality, etc. However, when there is deep coding going on, occasionally it will take me several days to get buildable code--but in the interim other team members may need to see my code. If I check the code in, it breaks the build. However, if I don't check it in, my team members are unable to see the most recent work. I'm wondering how this situation is best dealt with.

    Read the article

  • Best CPUs for speeding up compiling times of C++ w/ DistGCC

    - by Jay
    I'm putting together a distributed build farm with DistGCC to speed up our team's compile times and just looking for thoughts on which processors to use in the hosts. Are we going to get a noticeable decrease in time using 8 cores vs. 4 hyperthreaded cores? Big difference in time between i7 and Xeon? etc, etc. Just need advice from people who've put together kick-a build clusters. We've got a majority of the normal things to speed up builds in place (pre-compiled headers, ccache, local gigabit connections between them, tons of RAM, etc) so please just give advice on the best processor to use. And money is a factor, but anything's doable if the performance increase is noticeable. Thanks. Jay

    Read the article

  • Xcode is not building the Binary

    - by Stephen Furlani
    Hello, Xcode is doing something bizarre which I at one point in time fixed, but now for the life of me I can't figure out what's wrong. Xcode is building my project fine - no errors on a clean-all build. All my product names and info.plists agree, and all the settings appear to be correct. I've only got the one build configuration (I always delete all of them except when I have to actually release something - way too many invisible problems with these things). Except that it is not generating binaries for my code. Eh wot? I have recently checked the code out on a new computer, and I checked all the paths and everything exists where it should. Any help is appreciated. It is not throwing any errors, yet it produces neither the binary for the .app nor the .plugin (project.app/Contents/MacOS/ contains nothing). Thanks!!! -Stephen

    Read the article

  • In TFS, is there a maximum amount of workspaces which can be used for a user?

    - by Gerrie Schenck
    I'm currently in the process of creating a bunch of new build scripts for our platform. Things went okay until I encountered the following error:

        D:\TFS\WorkingDir\BuildType\TFSBuild.proj(173,5): error MSB4018: Microsoft.TeamFoundation.VersionControl.Client.WorkspaceNotFoundException: TF14061: The workspace BUILDMACHINENAME_9;BUILDMACHINENAME\TFSService does not exist.

    When I take a look at the list of workspaces (with Team Foundation Sidekicks) I see there are a bunch of BUILDMACHINENAME_xxx workspaces, where xxx is a number ranging from 1 to 8. What I'm thinking is that TFS reaches some kind of limit (10 probably) on the number of workspaces it can create for a certain owner, and thus fails to create a workspace for the build automatically. Can this be the case? Anyone else encountered this?
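    If stale workspaces are the culprit, one workaround often suggested is simply deleting the build machine's old workspaces before assuming a hard limit. A sketch using the tf command line (the server URL is a placeholder, and option names vary between TFS versions, so treat this as an outline rather than exact syntax):

        tf workspaces /owner:TFSService /computer:BUILDMACHINENAME /server:http://tfsserver:8080
        tf workspace /delete BUILDMACHINENAME_1;BUILDMACHINENAME\TFSService /server:http://tfsserver:8080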

    Read the article

  • Visual Studio "Any CPU" target

    - by galets
    I have some confusion related to the .NET platform build options in VS 2008. Does anyone have a clear understanding of what the "Any CPU" compilation target is and what sort of files it generates? I examined the output executables of an "Any CPU" build and found that they are (who would not see that coming!) x86 executables. So, is there any difference between targeting an executable at x86 vs "Any CPU"? Another thing that I noticed is that managed C++ projects do not have this platform as an option. I'm wondering why that is. Does that mean that my suspicion about "Any CPU" executables being plain 32-bit ones is right?
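    One way to see the difference is to inspect the assembly with the corflags tool from the Windows SDK: an Any CPU build is still a PE32 file (which is why it looks like an x86 executable), but its 32-bit-required flag is clear, so the CLR loads it as a 64-bit process on a 64-bit OS, whereas an x86 build has the flag set. A sketch of what the output might look like (field values are illustrative, not from the question):

        > corflags MyApp.exe
        Version   : v2.0.50727
        CLR Header: 2.5
        PE        : PE32
        CorFlags  : 1
        ILONLY    : 1
        32BIT     : 0        (0 for Any CPU, 1 for x86-only)
        Signed    : 0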

    Read the article

  • Building a specific piece of Android platform?

    - by Chrisc
    Hi, I have been trying to build only the "/libcore" directory of the Android platform. When I try mmm libcore I end up with the following output:

        ============================================
        PLATFORM_VERSION_CODENAME=REL
        PLATFORM_VERSION=2.1-update1
        TARGET_PRODUCT=generic
        TARGET_BUILD_VARIANT=eng
        TARGET_SIMULATOR=false
        TARGET_BUILD_TYPE=release
        TARGET_ARCH=arm
        HOST_ARCH=x86
        HOST_OS=linux
        HOST_BUILD_TYPE=release
        BUILD_ID=ECLAIR
        ============================================
        make: Entering directory `/home/chris/android/platform'
        target Prebuilt: (out/target/product/generic/system/etc/security/cacerts.bks)
        host Prebuilt: run-core-tests-on-ri (out/host/linux-x86/obj/EXECUTABLES/run-core-tests-on-ri_intermediates/run-core-tests-on-ri)
        target Prebuilt: run-core-tests (out/target/product/generic/obj/EXECUTABLES/run-core-tests_intermediates/run-core-tests)
        Copy: out/target/product/generic/system/etc/apns-conf.xml
        Copying: out/target/common/obj/JAVA_LIBRARIES/core_intermediates/classes-full-debug.jar
        Copying: out/target/common/obj/JAVA_LIBRARIES/core-tests_intermediates/classes-full-debug.jar
        /bin/bash: jar: command not found
        make: *** [out/host/common/core-tests.jar] Error 127
        make: *** Deleting file `out/host/common/core-tests.jar'
        make: Leaving directory `/home/chris/android/platform'

    Does anyone have any suggestions on what Error 127 is, or another way I can go about building "libcore" without having to build the entire platform again? Thanks, Chris
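    Exit code 127 from make means the shell could not find a command - here it is jar, which ships with the JDK, so the build host is missing a JDK on its PATH rather than anything libcore-specific. A sketch of the usual fix on an Ubuntu host (the package name and JDK path are assumptions; Eclair-era builds expected a Java 5 JDK):

        # check whether jar is visible to the shell
        which jar

        # if not, install a JDK and put its bin directory on the PATH
        sudo apt-get install sun-java5-jdk
        export PATH=$PATH:/usr/lib/jvm/java-1.5.0-sun/bin

        # then retry the single-directory build
        . build/envsetup.sh && mmm libcore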

    Read the article

  • SCons: How to use the same builders for multiple variants (release/debug) of a program

    - by OK
    The SCons User Guide describes using Multiple Construction Environments to build multiple versions of a single program and gives the following example:

        opt = Environment(CCFLAGS = '-O2')
        dbg = Environment(CCFLAGS = '-g')
        o = opt.Object('foo-opt', 'foo.c')
        opt.Program(o)
        d = dbg.Object('foo-dbg', 'foo.c')
        dbg.Program(d)

    Instead of manually assigning different names to the objects compiled with different environments, VariantDir() / variant_dir sounds like a better solution... But if I place the Program() builder inside the SConscript:

        Import('env')
        env.Program('foo.c')

    how can I export different environments to the same SConscript file?

        opt = Environment(CCFLAGS = '-O2')
        dbg = Environment(CCFLAGS = '-g')
        SConscript('SConscript', 'opt', variant_dir='release')  # 'opt' --> 'env'???
        SConscript('SConscript', 'dbg', variant_dir='debug')    # 'dbg' --> 'env'???

    Unfortunately the discussion in the SCons Wiki does not bring more insight into this topic. Thanks for your input!
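    A sketch of one way this is commonly handled (untested): SConscript() takes an exports argument that can be a dictionary, so each variant can hand its own construction environment to the shared SConscript under the name env:

        # SConstruct
        opt = Environment(CCFLAGS='-O2')
        dbg = Environment(CCFLAGS='-g')

        for variant_env, out_dir in [(opt, 'release'), (dbg, 'debug')]:
            SConscript('SConscript',
                       exports={'env': variant_env},   # visible as 'env' inside the SConscript
                       variant_dir=out_dir,
                       duplicate=0)

        # SConscript (shared by both variants), unchanged:
        #   Import('env')
        #   env.Program('foo.c')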

    Read the article

  • Problem with building with csc task in Ant

    - by Wing C. Chen
    I have an ant build target using csc:

        <target name="compile">
            <echo>Starting compiling ServiceLauncher</echo>
            <csc optimize="true"
                 debug="true"
                 warnLevel="1"
                 unsafe="false"
                 targetType="exe"
                 failonerror="true"
                 incremental="false"
                 mainClass="ServiceLauncher.Launcher"
                 srcdir="ServiceLauncher/Launcher/"
                 outputfile="ServiceLauncher.exe">
                <reference file="libs/log4net.dll"/>
                <define name="RELEASE"/>
            </csc>
        </target>

    When I run it, the following exception comes up:

        csc failed: java.io.IOException: Cannot run program "csc": CreateProcess error=2, The system cannot find the file specified

    However, when I manually add in an empty ServiceLauncher.exe, it runs without the exception but never correctly builds the .exe file. How can I correctly build this .NET project "ServiceLauncher"?
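    CreateProcess error=2 means Windows could not find csc.exe itself: the task shells out to the C# compiler, which lives in the .NET Framework directory and is usually not on the PATH. A sketch of one way around it, assuming a framework installed at the usual location (the version directory is an assumption), is to put that directory on the PATH in the shell that launches Ant:

        rem add the .NET Framework compiler directory to PATH before invoking Ant
        set PATH=%PATH%;C:\Windows\Microsoft.NET\Framework\v3.5
        ant compile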

    Read the article

  • Redis version on Cloudbees is out of date?

    - by Alan Krueger
    I'm setting up an OSS build on CloudBees with /usr/sbin/redis-server being started as one of the build tasks:

        + /usr/sbin/redis-server
        [204] 04 Nov 03:52:58 # Warning: no config file specified, using the default config. In order to specify a config file use 'redis-server /path/to/redis.conf'
        [204] 04 Nov 03:52:58 * Server started, Redis version 2.0.3

    The Redis site (http://redis.io/download) shows 2.6.2 to be the current version and 2.4.17 as "legacy". On the extended downloads page, version 2.0.3 is deprecated. Am I launching the wrong server executable, or are there plans to support a more recent version of Redis?
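    If the pre-installed binary is the problem, one common workaround is to build a private copy of Redis inside the build's own workspace instead of relying on /usr/sbin/redis-server. A rough sketch (the version and download URL are assumptions):

        # fetch, build, and launch a local Redis just for this build
        curl -O http://download.redis.io/releases/redis-2.6.2.tar.gz
        tar xzf redis-2.6.2.tar.gz
        (cd redis-2.6.2 && make)
        ./redis-2.6.2/src/redis-server --daemonize yes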

    Read the article

  • Disable system sleep during long builds

    - by Paul Alexander
    From time to time I need to run a full build of the entire tool chain for our software on my development machine. To save on power I've got my dev machine set to go to sleep after 20 minutes of inactivity. Building the full tool chain can take up to an hour and I'll often just go to lunch. However, if I forget to disable sleep I can return to a sleeping machine with the build only partially complete. What I'm looking for is a way to automatically disable sleep while MSBuild is running. Does anyone know of a simple way of doing this?
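    One approach that comes up often is to run the build through a small wrapper that calls the Win32 SetThreadExecutionState API, which keeps the machine awake only while the wrapper (and therefore the build) is alive. A rough C# sketch rather than a finished tool - the command passed on the command line, e.g. "StayAwake.exe msbuild Full.proj", is an assumption:

        using System;
        using System.Diagnostics;
        using System.Runtime.InteropServices;

        class StayAwake
        {
            [Flags]
            enum ExecutionState : uint
            {
                Continuous     = 0x80000000,
                SystemRequired = 0x00000001
            }

            [DllImport("kernel32.dll")]
            static extern ExecutionState SetThreadExecutionState(ExecutionState flags);

            static void Main(string[] args)
            {
                // Tell Windows not to sleep for as long as this process is running.
                SetThreadExecutionState(ExecutionState.Continuous | ExecutionState.SystemRequired);
                try
                {
                    // Run whatever build command was passed in.
                    var psi = new ProcessStartInfo(args[0], string.Join(" ", args, 1, args.Length - 1))
                    {
                        UseShellExecute = false
                    };
                    using (var build = Process.Start(psi))
                    {
                        build.WaitForExit();
                    }
                }
                finally
                {
                    // Restore normal power management.
                    SetThreadExecutionState(ExecutionState.Continuous);
                }
            }
        }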

    Read the article

  • How do I tell nant to only call csc when there are cs files in to compile?

    - by rob_g
    In my NAnt script I have a compile target that calls csc. Currently it fails because no inputs are specified:

        <target name="compile">
            <csc target="library" output="${umbraco.bin.dir}\Mammoth.${project::get-name()}.dll">
                <sources>
                    <include name="project/*.cs" />
                </sources>
                <references>
                </references>
            </csc>
        </target>

    How do I tell NAnt to not execute the csc task if there are no CS files? I read about the 'if' attribute but am unsure what expression to use with it, as ${file::exists('*.cs')} does not work. The build script is a template for Umbraco (a CMS) projects and may or may not ever have .cs source files in the project. Ideally I would like to not have developers need to remember to modify the NAnt script to include the compile task when .cs files are added to the project (or exclude it when all .cs files are removed).
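    file::exists() only tests a literal path, so it cannot answer "are there any .cs files". A sketch of one workaround (untested; the flag property and target names are placeholders): let a <foreach> over the same pattern flip a flag property, then guard the compile target with the if attribute:

        <target name="check-sources">
            <property name="has.cs.files" value="false" />
            <foreach item="File" property="cs.file">
                <in>
                    <items>
                        <include name="project/*.cs" />
                    </items>
                </in>
                <do>
                    <property name="has.cs.files" value="true" />
                </do>
            </foreach>
        </target>

        <target name="compile" depends="check-sources" if="${has.cs.files == 'true'}">
            <csc target="library" output="${umbraco.bin.dir}\Mammoth.${project::get-name()}.dll">
                <sources>
                    <include name="project/*.cs" />
                </sources>
            </csc>
        </target>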

    Read the article

  • Re-execute target when specified as dependency to multiple rules

    - by andrew
    I have the following GNU makefile:

        .PHONY: a b c d

        a: b c
        b: d
        c: d
        d:
            echo HI

    I would like the target 'd' to be run twice -- since it is specified as a dependency by both b & c. Unfortunately, the target 'd' will be executed only once. The output of running make will simply be 'HI', instead of 'HI HI'. How can I fix this? Thanks!

    To clarify, the goal is something like this:

        subdirs = a b c

        build: x y

        x: target=build
        x: $(subdirs)

        y: target=prepare
        y: $(subdirs)

        $(subdirs):
            $(MAKE) -f $@/makefile $(target)
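    Make runs each target at most once per invocation, so d genuinely cannot fire twice under that makefile. A sketch of one common workaround for the "run every subdir once per action" goal (untested; names mirror the ones in the question, and recipe lines must begin with a tab): give each subdirectory a distinct per-action target via static pattern rules, so 'a' is built once for build and once for prepare:

        subdirs := a b c

        .PHONY: build prepare $(addsuffix .build,$(subdirs)) $(addsuffix .prepare,$(subdirs))

        build: $(addsuffix .build,$(subdirs))
        prepare: $(addsuffix .prepare,$(subdirs))

        # static pattern rules: $* is the stem, i.e. the subdirectory name
        $(addsuffix .build,$(subdirs)): %.build:
            $(MAKE) -f $*/makefile build

        $(addsuffix .prepare,$(subdirs)): %.prepare:
            $(MAKE) -f $*/makefile prepare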

    Read the article

  • C++ - Resources in static library question

    - by HardCoder1986
    Hello! This isn't a duplicate of http://stackoverflow.com/questions/531502/vc-resources-in-a-static-library because it didn't help :) I have a static library with TWO .rc files in its project. When I build my project using the Debug configuration, I get the following error (MSVS2008):

        fatal error LNK1241: resource file res_yyy.res already specified

    Note that this happens only in Debug; the Release library builds without any trouble. The command line on the Resources page in the project configuration looks the same for every build:

        /fo"...(Path here)/Debug/project_name.res"
        /fo"...(Path here)/Release/project_name.res"

    and I can't understand what the trouble is. Any ideas?

    UPDATE: I don't know why this happens, but when I turn the "Use Link-Time Code Generation" option on, the problem goes away. Could somebody explain why this happens? I feel like the MS compiler is doing something really strange here. Thanks.

    Read the article

  • Tips on how to deploy C++ code to work every where

    - by User1
    I'm not talking about making portable code. This is more a question of distribution. I have a medium-sized project. It has several dependencies on common libraries (e.g. openssl, zlib, etc). It compiles fine on my machine and now it's time to give it to the world. Essentially build engineering at its finest. I want to make installers for Windows, Linux, MacOSX, etc. I want to make a downloadable tarball that will make the code work with a ./configure and a make (probably via autoconf). It would be icing on the cake to have a make option that would build the installers... maybe even cross-compile so a Windows installer could be built on Linux. What is the best strategy? Where can I expect to spend the most time? Should the prime focus be autoconf, or are there other tools that can help?
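    For comparison with the autoconf route, here is a rough sketch of what the same goals can look like with CMake plus CPack, which drive tarballs and native installers from one description (the project name, sources, and generator choices are placeholders, not a recommendation of specific settings):

        cmake_minimum_required(VERSION 3.10)
        project(myproject CXX)

        find_package(OpenSSL REQUIRED)
        find_package(ZLIB REQUIRED)

        add_executable(myapp src/main.cpp)
        target_link_libraries(myapp PRIVATE OpenSSL::SSL ZLIB::ZLIB)

        install(TARGETS myapp RUNTIME DESTINATION bin)

        # CPack turns the install rules into packages:
        #   cpack -G TGZ        -> tarball
        #   cpack -G NSIS       -> Windows installer (requires NSIS)
        #   cpack -G DEB        -> Debian package
        #   cpack -G DragNDrop  -> macOS disk image
        set(CPACK_PACKAGE_NAME "myproject")
        include(CPack)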

    Read the article

  • Can I ask ANT to look into .classpath for external jars?

    - by kunjaan
    Right now I have

        <!-- Classpath declaration -->
        <path id="project.classpath">
            <fileset dir="${lib.dir}">
                <include name="**/*.jar" />
                <include name="**/*.zip" />
            </fileset>
        </path>

        <!-- Compile Java source -->
        <target name="compile" depends="clean">
            <mkdir dir="${build.dir}" />
            <javac srcdir="${src.java.dir}" destdir="${build.dir}" nowarn="on">
                <classpath refid="project.classpath" />
            </javac>
        </target>

    Is there some way I can tell Ant to look into Eclipse's .classpath and figure out the external jars?

    Read the article

  • Build a gem with native extension (Gem::Installer::ExtensionBuildError)

    - by Arnaud Leymet
    I have the following configuration:

        uname -a       : Linux 2.6.24.2 i686 GNU/Linux (Ubuntu)
        ruby -v        : ruby 1.9.0 (2007-12-25 revision 14709) [i486-linux]
        rails -v       : Rails 3.0.0.beta3
        gem -v         : 1.3.5
        rake --version : rake, version 0.8.7
        make -v        : GNU Make 3.81

        gem env:
          RUBYGEMS VERSION: 1.3.5
          RUBY VERSION: 1.9.0 (2007-12-25 patchlevel 0) [i486-linux]
          INSTALLATION DIRECTORY: /usr/lib/ruby1.9/gems/1.9.0
          RUBY EXECUTABLE: /usr/bin/ruby1.9
          EXECUTABLE DIRECTORY: /usr/bin
          RUBYGEMS PLATFORMS:
            ruby
            x86-linux
          GEM PATHS:
            /usr/lib/ruby1.9/gems/1.9.0
            /root/.gem/ruby/1.9.0
          GEM CONFIGURATION:
            :update_sources = true
            :verbose = true
            :benchmark = false
            :backtrace = false
            :bulk_threshold = 1000
          REMOTE SOURCES:
            http://gems.rubyforge.org/

    And when I try this simple command:

        gem install nokogiri

    here is what I get:

        # gem install nokogiri
        Building native extensions. This could take a while...
        ERROR: Error installing nokogiri:
               ERROR: Failed to build gem native extension.

        /usr/bin/ruby1.9 extconf.rb
        checking for iconv.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes
        checking for libxml/parser.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes
        checking for libxslt/xslt.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes
        checking for libexslt/exslt.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes
        checking for xmlParseDoc() in -lxml2... yes
        checking for xsltParseStylesheetDoc() in -lxslt... yes
        checking for exsltFuncRegister() in -lexslt... yes
        checking for xmlRelaxNGSetParserStructuredErrors()... yes
        checking for xmlRelaxNGSetParserStructuredErrors()... yes
        checking for xmlRelaxNGSetValidStructuredErrors()... yes
        checking for xmlSchemaSetValidStructuredErrors()... yes
        checking for xmlSchemaSetParserStructuredErrors()... yes
        creating Makefile

        make
        cc -I. -I/usr/include/libxml2 -I/usr/include -I/usr/include/ruby-1.9.0/i486-linux -I/usr/include/ruby-1.9.0 -I. -DHAVE_XMLRELAXNGSETPARSERSTRUCTUREDERRORS -DHAVE_XMLRELAXNGSETPARSERSTRUCTUREDERRORS -DHAVE_XMLRELAXNGSETVALIDSTRUCTUREDERRORS -DHAVE_XMLSCHEMASETVALIDSTRUCTUREDERRORS -DHAVE_XMLSCHEMASETPARSERSTRUCTUREDERRORS -I/opt/local/include/ -I/opt/local/include/libxml2 -I/opt/local/include -D_FILE_OFFSET_BITS=64 -fPIC -fno-strict-aliasing -g -fPIC -g -DXP_UNIX -O3 -Wall -Wcast-qual -Wwrite-strings -Wconversion -Wmissing-noreturn -Winline -o xml_document_fragment.o -c xml_document_fragment.c
        In file included from ./nokogiri.h:75,
                         from ./xml_document_fragment.h:4,
                         from xml_document_fragment.c:1:
        ./xml_document.h:5:16: error: st.h: No such file or directory
        make: *** [xml_document_fragment.o] Error 1

        Gem files will remain installed in /usr/lib/ruby1.9/gems/1.9.0/gems/nokogiri-1.4.1 for inspection.
        Results logged to /usr/lib/ruby1.9/gems/1.9.0/gems/nokogiri-1.4.1/ext/nokogiri/gem_make.out

    The "gem_make.out" file contains the exact same information as described above. If I try with another gem:

        gem install gherkin

    here is what I get:

        # gem install gherkin
        Building native extensions. This could take a while...
        ERROR: Error installing gherkin:
               ERROR: Failed to build gem native extension.

        /usr/bin/ruby1.9 extconf.rb
        checking for main() in -lc... yes
        creating Makefile

        make
        cc -I. -I/usr/include/ruby-1.9.0/i486-linux -I/usr/include/ruby-1.9.0 -I. -D_FILE_OFFSET_BITS=64 -fPIC -fno-strict-aliasing -g -fPIC -o gherkin_lexer_ar.o -c gherkin_lexer_ar.c
        /Users/aslakhellesoy/scm/gherkin/tasks/../ragel/i18n/ar.c.rl:11:16: error: re.h: No such file or directory
        make: *** [gherkin_lexer_ar.o] Error 1

        Gem files will remain installed in /usr/lib/ruby1.9/gems/1.9.0/gems/gherkin-1.0.30 for inspection.
        Results logged to /usr/lib/ruby1.9/gems/1.9.0/gems/gherkin-1.0.30/ext/gherkin_lexer_ar/gem_make.out

    In fact, whenever I try to install a gem with a native extension, I get the same type of error. Would that ring a bell to anyone?
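    Both failures show the same symptom: the compiler cannot find Ruby's own extension headers (st.h, re.h), which points at the ruby 1.9.0 install rather than at nokogiri or gherkin - 1.9.0 was a development release and many native gems of that era only supported 1.8.x or >= 1.9.1. A sketch of the usual checks on Ubuntu (package names are assumptions and differ between releases):

        # see what Ruby headers are actually installed
        ls /usr/include/ruby-1.9.0/

        # install the matching -dev package if it is missing (name varies: ruby1.9-dev, ruby1.9.1-dev, ...)
        sudo apt-get install ruby1.9-dev

        # or switch to a supported interpreter (1.8.7 or >= 1.9.1) and retry
        gem install nokogiri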

    Read the article

  • SQL Server 2008: Using Multiple dts Ranges to Build a Set of Dates

    - by raoulcousins
    I'm trying to build a query for a medical database that counts the number of patients that were on at least one medication from a class of medications (the medications listed below in the FAST_MEDS CTE) and had either:

    1) a diagnosis of myopathy (the list of diagnoses in the FAST_DX CTE), or
    2) a CPK lab value above 1000 (the lab value in the FAST_LABS CTE),

    and this diagnosis or lab happened AFTER a patient was on a statin. The query I've included below does that under the assumption that once a patient is on a statin, they're on a statin forever. The first CTE collects the ids of patients that were on a statin along with the first date of their diagnosis, the second those with a diagnosis, and the third those with a high lab value. After this I count those that match the above criteria. What I would like to do is drop the assumption that once a patient is on a statin, they're on it for life. The table edw_dm.patient_medications has columns called start_dts and end_dts. This table has one row for each prescription written, with start_dts and end_dts denoting the start and end date of the prescription. End_dts could be null, which I'll take to mean that the patient is currently on this medication (it could be a missing record, but I can't do anything about this). If a patient is on two different statins, the start and end dates can overlap, and there may be multiple records of the same medication for a patient, as in a record showing 3-11-2000 to 4-5-2003 and another for the same patient showing 5-6-2007 to 7-8-2009. I would like to use these two columns to build a query where I'm only counting the patients that had a lab value or diagnosis done during a time when they were already on a statin, or in the first n (say 3) months after they stopped taking a statin. I'm really not sure how to go about rewriting the first CTE to get this information and how to do the comparison after the CTEs are built. I know this is a vague question, but I'm really stumped. Any ideas? As always, thank you in advance.
    Here's the current query:

        WITH FAST_MEDS AS
        (
            select distinct statins.mrd_pt_id, min(year(statins.order_dts)) as statin_yr
            from edw_dm.patient_medications as statins
            inner join mrd.medications as mrd
                on statins.mrd_med_id = mrd.mrd_med_id
            WHERE mrd.generic_nm in
            (
                'Lovastatin (9664708500)', 'lovastatin-niacin', 'Lovastatin/Niacin', 'Lovastatin',
                'Simvastatin (9678583966)', 'ezetimibe-simvastatin', 'niacin-simvastatin',
                'ezetimibe/Simvastatin', 'Niacin/Simvastatin', 'Simvastatin',
                'Aspirin Buffered-Pravastatin', 'aspirin-pravastatin', 'Aspirin/Pravastatin', 'Pravastatin',
                'amlodipine-atorvastatin', 'Amlodipine/atorvastatin', 'atorvastatin',
                'fluvastatin', 'rosuvastatin'
            )
            and YEAR(statins.order_dts) IS NOT NULL
            and statins.mrd_pt_id IS NOT NULL
            group by statins.mrd_pt_id
        )
        select * into #meds from FAST_MEDS;

        --return patients who had a diagnosis in the list and the year that
        --diagnosis was given
        with FAST_DX AS
        (
            SELECT pd.mrd_pt_id, YEAR(pd.init_noted_dts) as init_yr
            FROM edw_dm.patient_diagnoses as pd
            inner join mrd.diagnoses as mrd
                on pd.mrd_dx_id = mrd.mrd_dx_id
                and mrd.icd9_cd in ('728.89','729.1','710.4','728.3','729.0','728.81','781.0','791.3')
        )
        select * into #dx from FAST_DX;

        --return patients who had a high cpk value along with the year the cpk
        --value was taken
        with FAST_LABS AS
        (
            SELECT pl.mrd_pt_id, YEAR(pl.order_dts) as lab_yr
            FROM edw_dm.patient_labs as pl
            inner join mrd.labs as mrd
                on pl.mrd_lab_id = mrd.mrd_lab_id
                and mrd.lab_nm = 'CK (CPK)'
            WHERE pl.lab_val between 1000 AND 999998
        )
        select * into #labs from FAST_LABS;

        -- count the number of patients who had a lab value or a medication
        -- value taken sometime AFTER their initial statin diagnosis
        select count(distinct p.mrd_pt_id) as ct
        from mrd.patient_demographics as p
        join #meds as m
            on p.mrd_pt_id = m.mrd_pt_id
        AND
        (
            EXISTS (
                SELECT 'A'
                FROM #labs l
                WHERE p.mrd_pt_id = l.mrd_pt_id
                  and l.lab_yr >= m.statin_yr
            )
            OR EXISTS (
                SELECT 'A'
                FROM #dx d
                WHERE p.mrd_pt_id = d.mrd_pt_id
                  AND d.init_yr >= m.statin_yr
            )
        )
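    A rough sketch of how the lab check could use the prescription windows directly rather than a per-patient statin year (untested; only the CPK branch is shown and the diagnosis branch would be analogous; the 3-month grace period and treating a NULL end_dts as "still on the drug" are the assumptions stated in the question, and the statin list is elided):

        SELECT COUNT(DISTINCT p.mrd_pt_id) AS ct
        FROM mrd.patient_demographics AS p
        WHERE EXISTS (
            SELECT 1
            FROM edw_dm.patient_medications AS pm
            INNER JOIN mrd.medications AS med
                ON pm.mrd_med_id = med.mrd_med_id
            INNER JOIN edw_dm.patient_labs AS pl
                ON pl.mrd_pt_id = pm.mrd_pt_id
            INNER JOIN mrd.labs AS ml
                ON pl.mrd_lab_id = ml.mrd_lab_id
            WHERE pm.mrd_pt_id = p.mrd_pt_id
              AND med.generic_nm IN (/* same statin list as FAST_MEDS */ 'Lovastatin', 'Simvastatin')
              AND ml.lab_nm = 'CK (CPK)'
              AND pl.lab_val BETWEEN 1000 AND 999998
              -- lab must fall inside the prescription window,
              -- or within 3 months after it ended (NULL end_dts = still active)
              AND pl.order_dts >= pm.start_dts
              AND pl.order_dts <= DATEADD(month, 3, COALESCE(pm.end_dts, GETDATE()))
        )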

    Read the article
