Search Results

Search found 4231 results on 170 pages for 'pre buildevent'.


  • Question about Reporting and Data Warehousing Software bundled with SQL Server 2005

    - by anonymous user
    We currently use SQL Server 2005 Enterprise for our fairly large application, which has its roots in pre-SQL Server 7.0 days. The tables are normalized and designed mainly around the application. The developers mostly have a legacy SQL Server mindset, using only the subset of T-SQL that existed back in 7.0 and none of the newer language features or the tools bundled with 2005.

    We're currently trying to build on-demand reports with some crappy third-party software, and will eventually try to build a data warehouse using more of the same crappy third-party software (name removed to protect the guilty; don't ask, I will not tell). The rationale was that we didn't want to spend more money to buy additional software from Microsoft (this was not my decision and I had no input, but it is my problem now). From what I can tell, though, Enterprise already includes all of these tools, or am I missing something?

    What comes bundled with SQL Server 2005 Enterprise as far as reporting and data warehousing? Will we need to purchase anything else? Is there actually anything else that can be purchased from Microsoft in this regard?


  • function taking in an input image and a variable kernel size

    - by drifterOcean19
    I have this filtering function that takes an input image, performs convolution using a given kernel, and returns the resulting image. However, I can't work out how to make it take different kernel sizes. For example, instead of the pre-defined 3x3 kernel in the code below, it could take a 5x5 or 7x7 kernel, and the user could then choose the kernel/filter they want (depending on the intended effect). I can't get my head around it; I'm quite new to MATLAB.

        function [newImg] = kernelFunc(imgB)
            img = imread(imgB);
            figure, imshow(img);
            img2 = zeros(size(img) + 2);
            newImg = zeros(size(img));
            % copy the image into a buffer with a 1-pixel zero border
            for rgb = 1:3
                for x = 1:size(img, 1)
                    for y = 1:size(img, 2)
                        img2(x+1, y+1, rgb) = img(x, y, rgb);
                    end
                end
            end
            % slide a 3x3 window over every pixel
            for rgb = 1:3
                for i = 1:size(img2, 1) - 2
                    for j = 1:size(img2, 2) - 2
                        window = zeros(9, 1);
                        inc = 1;
                        for x = 1:3
                            for y = 1:3
                                window(inc) = img2(i+x-1, j+y-1, rgb);
                                inc = inc + 1;
                            end
                        end
                        kernel = [1;2;1;2;4;2;1;2;1] / 16;
                        med = window .* kernel;
                        disp(med);
                        med = sum(med);
                        med = floor(med);
                        newImg(i, j, rgb) = med;
                    end
                end
            end
            newImg = uint8(newImg);
            figure, imshow(newImg);
        end
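
    A minimal sketch of the generalization, shown here in Python/NumPy rather than MATLAB (the function and variable names are illustrative): pad by floor(k/2) on each side, where k is the kernel's side length, and let the window loops run over the kernel size instead of a hard-coded 3.

        import numpy as np

        def apply_kernel(img, kernel):
            """Convolve an H x W x 3 image with an arbitrary square kernel (sketch)."""
            k = kernel.shape[0]          # kernel side length, e.g. 3, 5, or 7
            pad = k // 2                 # border needed so the window fits at the edges
            out = np.zeros_like(img, dtype=float)
            padded = np.pad(img.astype(float),
                            ((pad, pad), (pad, pad), (0, 0)), mode="constant")
            for c in range(img.shape[2]):
                for i in range(img.shape[0]):
                    for j in range(img.shape[1]):
                        window = padded[i:i + k, j:j + k, c]
                        out[i, j, c] = np.floor((window * kernel).sum())
            return out.astype(np.uint8)

        gauss3 = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0
        # filtered = apply_kernel(image, gauss3)  # any k x k kernel drops in unchanged

    The only places the kernel size appears are the padding amount and the window slice, so a 5x5 or 7x7 kernel needs no other change.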


  • Using `<List>` when dealing with pointers in C#.

    - by Gorchestopher H
    How can I add an item to a list if that item is essentially a pointer, and avoid changing every item in my list to the newest instance of that item? Here's what I mean: I am doing image processing, and there is a chance that I will need to deal with images that come in faster than I can process them (for a short period of time). After this "burst" of images I can rely on the fact that I process faster than the average image rate, and will eventually catch up. So what I want to do is put my images into a List when I acquire them; then, if my processing thread isn't busy, I can take an image from that list and hand it over. My worry is that since I add the image "Image1" to the list and then fill "Image1" with a new image (during the next acquisition), I will be replacing the image stored in the list with the new image as well (as the image variable is really just a pointer). My code looks a little like this:

        while (!exitcondition)
        {
            if (ImageAvailable())
            {
                Image1 = AcquireImage();
                ImgList.Add(Image1);
            }
            if (ImgList.Count > 0)
            {
                ProcessEngine.NewImage(ImgList[0]);
                ImgList.RemoveAt(0);
            }
        }

    Given the above, how can I ensure that:

    - I don't replace all items in the list every time Image1 is modified;
    - I don't need to pre-declare a number of images in order to do this kind of processing;
    - I don't create a memory-devouring monster.

    Any advice is greatly appreciated.
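
    The underlying distinction, sketched in Python (where variables are references too): a list stores references, so re-binding the variable to a fresh object each iteration is safe, while mutating the one shared object is not.

        frames = []
        buf = bytearray(4)             # one shared, mutable "image" buffer

        for i in range(3):
            buf[0] = i                 # mutating the same object every iteration
            frames.append(buf)         # every list entry aliases that one object

        print([f[0] for f in frames])  # [2, 2, 2] - all entries "changed"

        frames = []
        for i in range(3):
            buf = bytearray(4)         # fresh object per iteration (re-binding)
            buf[0] = i
            frames.append(buf)

        print([f[0] for f in frames])  # [0, 1, 2] - entries stay distinct

    So as long as AcquireImage() returns a newly allocated image on each call, rather than refilling one shared buffer, the list entries stay independent and nothing needs to be pre-declared.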


  • Best way to dynamically get column names from oracle tables

    - by MNC
    Hi,

    We are using an extractor application that exports data from the database to CSV files. Based on a condition variable it extracts data from different tables, and for some conditions we have to use UNION ALL as the data has to be extracted from more than one table; to satisfy the UNION ALL we pad with NULLs to match the number of columns. Right now all the queries in the system are pre-built based on the condition variable. The problem is that whenever the table projection changes (a new column added, an existing column modified, a column dropped) we have to change the application code by hand.

    Can you please give some suggestions on how to extract the column names dynamically, so that changes in the table structure do not require changes in the code? My concern is the condition that decides which table to query. It works like this: if the condition is A, load from TableX; if the condition is B, load from TableA and TableY. We must know which table to get data from; once we know the table, querying the column names from the data dictionary is straightforward. But there is one more wrinkle: some columns need to be excluded, and these columns are different for each table.

    I am trying to solve the problem only for dynamically generating the column list, but my manager asked for a solution at the conceptual level rather than a point fix. This is a very big system, with providers and consumers constantly loading and consuming data, so he wants a solution that generalizes. So what is the best way to store the condition, table name, and excluded columns? One way is storing them in the database. Are there other ways, and if so, which is best? I have to present at least a couple of ideas before finalizing.

    Thanks,
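
    One way to make the column list dynamic, sketched in Python against Oracle's data dictionary (the driver choice, table names, and the exclusion config are all hypothetical here; the exclusions could equally live in a control table):

        import cx_Oracle  # assumption: the cx_Oracle driver; any Oracle client works the same way

        # hypothetical config: table -> columns to leave out of the projection
        EXCLUDED = {"TABLEX": {"AUDIT_TS"}, "TABLEY": {"INTERNAL_ID"}}

        def column_list(conn, table):
            """Return the current projection for a table, minus its excluded columns."""
            cur = conn.cursor()
            cur.execute(
                "SELECT column_name FROM user_tab_columns "
                "WHERE table_name = :t ORDER BY column_id",
                t=table.upper(),
            )
            cols = [name for (name,) in cur]
            return [c for c in cols if c not in EXCLUDED.get(table.upper(), set())]

        def select_for(conn, table):
            return "SELECT " + ", ".join(column_list(conn, table)) + " FROM " + table

    Storing the (condition, table, excluded columns) triples in a control table keeps the mapping queryable and versionable, which arguably fits the "conceptual level" requirement better than hard-coding it in the extractor.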


  • Building a custom Linux Live CD

    - by Mike Heinz
    Can anyone point me to a good tutorial on creating a bootable Linux CD from scratch? I need help with a fairly specialized problem: my firm sells an expansion card that requires custom firmware. Currently we use an extremely old live CD image of RH7.2 that we update with current firmware. Manufacturing puts the cards in a machine, boots off the CD, the CD writes the firmware, then they power off and pull the cards. Because of this cycle, it's essential that the CD boot and shut down as quickly as possible. The problem is that with the next generation of cards, I have to update the CD to a 2.6 kernel. It's easy enough to acquire a pre-existing live CD, but those are all designed for showing off Linux on the desktop, which means they take forever to boot. Can anyone fix me up with a current how-to?

    Update: Just as a final note for anyone reading this later: the tool I ended up using was "livecd-creator". My reason for choosing it was that it is available for Red Hat-based distributions like CentOS, Fedora and RHEL, which are all distributions my company already supports. In addition, while the project is very poorly documented, it is extremely customizable. I was able to create a minimal live CD and edit the boot sequence so that it booted directly into the firmware updater instead of a bash shell. The whole job would have taken only an hour or two if there had been a README explaining the configuration file!


  • Flex 3 - Image cache

    - by BS_C3
    Hello Community. I'm building an image cache following this method: http://www.brandondement.com/blog/2009/08/18/creating-an-image-cache-with-actionscript-3/

    I copied the two AS classes, renaming them CachedImage and CachedImageMap. The thing is that I don't want to store each image after it is loaded the first time, but while the application is being loaded. For that, I've created a function that is called by the application's preinitialize event. This is how it looks:

        private function loadImages():void {
            var im:CachedImage = new CachedImage();
            var sources:ArrayCollection = new ArrayCollection();
            for each (var cs in divisionData.division.collections.collection.collectionSelection) {
                sources.addItem(cs.toString());
            }
            for each (var se in divisionData.division.collections.collection.searchEngine) {
                sources.addItem(se.toString());
            }
            for each (var source:String in sources) {
                im.source = source;
                im.load(source);
            }
        }

    The sources are properly retrieved. However, even though I call the load method, I never get the "complete" event, as if the image is not being loaded. How is that? Any help would be appreciated. Thanks in advance.

    Regards, BS_C3


  • Why do I get a Illegal Access Error when running my Android tests?

    - by Janusz
    I get the following stack trace when running my Android tests on the emulator:

        java.lang.NoClassDefFoundError: client.HttpHelper
            at client.Helper.<init>(Helper.java:14)
            at test.Tests.setUp(Tests.java:15)
            at android.test.AndroidTestRunner.runTest(AndroidTestRunner.java:164)
            at android.test.AndroidTestRunner.runTest(AndroidTestRunner.java:151)
            at android.test.InstrumentationTestRunner.onStart(InstrumentationTestRunner.java:425)
            at android.app.Instrumentation$InstrumentationThread.run(Instrumentation.java:1520)
        Caused by: java.lang.IllegalAccessError: cross-loader access from pre-verified class
            at dalvik.system.DexFile.defineClass(Native Method)
            at dalvik.system.DexFile.loadClass(DexFile.java:193)
            at dalvik.system.PathClassLoader.findClass(PathClassLoader.java:203)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:573)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:532)
            ... 11 more

    I run my tests from a separate project, and it seems there are some problems with loading the classes from the other project. I have run the tests before, but now they are failing; the project under test runs without problems. Line 14 of the Helper class is:

        this.httpHelper = new HttpHelper(userProfile);

    It creates an HttpHelper instance that is responsible for executing HTTP queries. I think this helper class is somehow no longer available, but I have no clue why.


  • http_access.log on WebSphere 6.1.0.29

    - by DavidG
    I am running WebSphere 6.1.0.29 and I need to track the requests being made to an enterprise application. Previously I did this by routing the requests through a proxy server, but I need to repeat the exercise and I figure there must be a simpler way. Does anyone know how to enable HTTP access logging? I have been through the console and thought I had enabled http_access.log and http_error.log via:

        Application servers > server1 > HTTP error and NCSA access logging

    (where 'server1' is the application server). I've enabled the service at startup and ticked the boxes to enable access logging and error logging. However, nothing has happened. I have restarted the server, restarted the enterprise apps, and even did a "find . -name" for the log files, but they don't seem to be anywhere on the system. I saw on a JavaRanch thread that someone suggested writing a custom filter for requests in the application, but this seems like wild overkill; besides, I am collecting the logs to test a pre-built binary, so I don't want to mess with the code. Anyone have any ideas/suggestions? Help! :-)


  • Visual Studio + Database Edition + CDC = Deploy Fail

    - by Ben
    Hi All,

    I've got a database using change data capture (CDC) that is created from a Visual Studio database project (GDR2). My problem is that I have a stored procedure that analyzes the CDC information and then returns data. How is that a problem, you ask? Well, the order of operations is as follows:

    1. Pre-deployment script
    2. Tables
    3. Indexes, keys, etc.
    4. Procedures
    5. Post-deployment script

    Inside the post-deployment script is where I enable CDC, and herein lies the problem: the procedure that acts on the CDC tables bombs because they don't exist yet! I've tried to put the call to sys.sp_cdc_enable_table in the script that creates the table, but it doesn't like that:

        Error 102 TSD03070: This statement is not recognized in this context. C:...\Schema Objects\Schemas\dbo\Tables\Foo.table.sql 20 1 Foo

    Is there a better/built-in way to enable CDC such that its references are available when the stored procedures are created? Is there a way to run a script after tables are created but before other objects are created? How about a way to create the procedure, dependencies be damned? Or maybe I'm just doing things that shouldn't be done?!?!

    Now, I have a workaround:

    1. Comment out the sproc body
    2. Deploy (CDC is created)
    3. Uncomment the sproc
    4. Deploy

    Everything is great until the next time I update a CDC-tracked table; then I need to comment out the 'offending' procedure again. Thanks for reading my question and thanks for your help!


  • How do I programmatically access the face cache in Windows Live Photo Gallery?

    - by acorderob
    I'm not talking about the "people tags" embedded in the XMP packets of JPEGs; I'm talking about the face database used to recognize new faces. I want to add to my program the option to recognize faces using the already-trained database of WLPG. I managed to use the API (a type library DLL) to detect faces, but to recognize them it needs an Exemplar Cache object that is not available in the same API. I could create my own object, but I want to use the existing one to avoid duplicate training for the user.

    I know the database is in C:\Users\\AppData\Local\Microsoft\Windows Live Photo Gallery and that it is in SQL Server Compact format. I tried to open the database with Visual Studio 2010, but it says that it is an older version (pre-3.5) and needs to be upgraded. I don't want to change the database, just read it, and I don't know how WLPG reads it, since apparently I don't have the correct OLE DB provider version. I would also prefer to read it without accessing the database directly, but I don't see any DLL that exports that functionality. BTW, I'm using Delphi 2010. Any ideas?


  • Visual C++ doesn't find the operator<< overload

    - by PierreBdR
    I have a vector class that I want to be able to read from and write to a QTextStream object. The forward declaration of my vector class is:

        namespace util {
            template <size_t dim, typename T> class Vector;
        }

    I define the operators as:

        namespace util {
            template <size_t dim, typename T>
            QTextStream& operator<<(QTextStream& out, const util::Vector<dim,T>& vec) { ... }

            template <size_t dim, typename T>
            QTextStream& operator>>(QTextStream& in, util::Vector<dim,T>& vec) { ... }
        }

    However, if I try to use these operators, Visual C++ returns this error:

        error C2678: binary '<<' : no operator found which takes a left-hand operand of type 'QTextStream' (or there is no acceptable conversion)

    A few things I tried:

    - Originally, the methods were defined as friends of the template, and it works fine this way with g++.
    - The methods have been moved outside the namespace util.
    - I changed the definition of the templates to fit what I found on various Visual C++ websites. The original friend declaration is:

        friend QTextStream& operator>>(QTextStream& ss, Vector& in) { ... }

    The "Visual C++ adapted" version is:

        friend QTextStream& operator>> <dim,T>(QTextStream& ss, Vector<dim,T>& in);

    with the function pre-declared before the class and implemented after. I checked that the file is correctly included using:

        #pragma message ("Including vector header")

    and everything seems fine. Does anyone have any idea what might be wrong?


  • Intermittent Issue Writing to Google Appengine Datastore

    - by user242153
    Hi,

    I have a functioning app and recently have had intermittent problems writing to the datastore. I did not make any relevant code changes; however, in the last few days my attempts to write to the datastore sometimes work and sometimes don't. I am trying to save an object that is in a many-to-one relationship with an existing persisted parent. The logic works like this:

    1. The parent is pulled from the datastore.
    2. The child is created / instantiated using its constructor.
    3. Parent.addSingleChild(child); // the "addSingleChild" method just adds the object argument to the collection of children
    4. child.setParent(Parent); // sets the Parent object to the parent field

    I am using transactions as explained in the documentation, ending with "finally { if (tx.isActive()) { tx.rollback(); } }".

    When the servlet is called, the parent is fetched from the datastore and the child object is created and added to the many-to-one mapping of the pre-existing parent. The child should be persisted automatically, since the parent is already persistent and the child is added to the collection of children that map to the parent. And it worked this way in the past. However, to be sure, I did add a pm.makePersistent(child). That doesn't seem to help; I still have the intermittent problem. Any suggestions would be appreciated, and if you need to see the actual code I can post it. Thanks


  • trying to append a list, but something breaks

    - by romunov
    I'm trying to create an empty list which will have as many elements as there are num.of.walkers. I then try to append, to each created element, a new sub-list (the length of each new sub-list corresponds to a value in a). When I fiddle around in R everything goes smoothly:

        list.of.dist[[1]] <- vector("list", a[1])
        list.of.dist[[2]] <- vector("list", a[2])
        list.of.dist[[3]] <- vector("list", a[3])
        list.of.dist[[4]] <- vector("list", a[4])

    I then try to write a function. Here is my feeble attempt, which results in an error. Can someone chip in on what I am doing wrong?

        countNumberOfWalks <- function(walk.df) {
            list.of.walkers <- sort(unique(walk.df$label))
            num.of.walkers <- length(unique(walk.df$label))

            # Pre-allocate objects for further manipulation
            list.of.dist <- vector("list", num.of.walkers)
            a <- c()

            # Count the number of walks per walker.
            for (i in list.of.walkers) {
                a[i] <- nrow(walk.df[walk.df$label == i,])
            }
            a <- as.vector(a)

            # Add a sublist (length = number of walks) for each walker.
            for (i in i:num.of.walkers) {
                list.of.dist[[i]] <- vector("list", a[i])
            }
            return(list.of.dist)
        }

        > num.of.walks.per.walker <- countNumberOfWalks(walk.df)
        Error in vector("list", a[i]) : vector size cannot be NA
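
    For comparison, the same preallocation pattern sketched in Python (the field name "label" and the stand-in data are hypothetical), mainly to show the target shape: one slot per walker, each holding a sub-list sized by that walker's walk count.

        from collections import Counter

        walks = [{"label": "w1"}, {"label": "w1"}, {"label": "w2"}]  # stand-in data

        counts = Counter(w["label"] for w in walks)        # walks per walker
        list_of_dist = {label: [None] * n                  # one pre-sized sub-list each
                        for label, n in sorted(counts.items())}

        print(list_of_dist)   # {'w1': [None, None], 'w2': [None]}

    Keying everything by the walker's label, rather than mixing labels with a separate loop counter, sidesteps NA-index mistakes entirely.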


  • Apache redirection problem!!!!

    - by vikas
    Hi guys,

    I am setting up a pre-built website written in PHP. The site was originally hosted on a Linux server; now I am trying to set it up on a Windows machine with WAMP server. In this website almost every page request passes through a particular file called redirect (which is basically a PHP file without an extension). The problem is that when I inspected the configuration (httpd.conf, apache.conf, .htaccess, vhost.conf, etc.) of the Apache server on the Linux machine, I nowhere found the rewrite rules for doing so; neither mod_rewrite nor mod_alias rules for this redirection were there. But it still redirects requests properly.

    I also noticed that the Zend Framework library is in the exact same directory as the redirect file, and this library is included in the include_path in php.ini. However, the website is not built on the Zend MVC and I have seen NO proof of Zend being used there. So I am really confused how this redirection works, and I am unable to set it up on the Windows machine without rewrite rules from mod_rewrite or mod_alias. Do you know any alternative to the said modules for redirection? I know the site is really weird, but I have to set it up. :)

    Thanks in advance for your help.


  • Firefox add-on development: Register global dynamic custom keyboard shortcuts

    - by dezwart
    I have been tasked with developing a Firefox add-on that can register global keyboard shortcuts (ones that work throughout all areas of Firefox) that open the sidebar and execute an XML-RPC request based on previously recorded input. The idea is that there will be many potential XML-RPC requests that the user will want to execute via a keyboard shortcut. Currently, the add-on can handle pre-defined static keyboard shortcuts via the Firefox overlay; what I would like to achieve is to let the user register their own dynamic custom keyboard shortcuts.

    There is an add-on, Keyconfig, that has some of this functionality, but I'm not keen on asking users to install a second add-on just to define their own shortcuts. It also seems that the dynamic registration method in Keyconfig requires the user to close all Firefox windows before the dynamic shortcut becomes available. What I would like to know is:

    - Is an XPCOM component the best way to register dynamic keyboard shortcuts from within a Firefox add-on?
    - Is there a way to register a keyboard shortcut so that it is immediately available to all Firefox windows, without having to close them beforehand?


  • Initial capacity of collection types, e.g. Dictionary, List

    - by Neil N
    Certain collection types in .NET have an optional "initial capacity" constructor parameter, e.g.:

        Dictionary<string, string> something = new Dictionary<string, string>(20);
        List<string> anything = new List<string>(50);

    I can't seem to find what the default initial capacity is for these objects on MSDN. If I know I will only be storing 12 or so items in a dictionary, doesn't it make sense to set the initial capacity to something like 20? My reasoning is: assuming the capacity grows the way it does for a StringBuilder, doubling each time the capacity is hit, and each re-allocation being costly, why not pre-set the size to something you know will hold your data, with some extra room just in case? Conversely, if the initial capacity were 100 and I know I will only need a dozen or so, the rest of that allocated RAM is allocated for nothing.

    Please spare me the "premature optimization" spiel for the O(n^n)th time. I know it won't make my apps any faster or save any meaningful amount of memory; this is mostly out of curiosity.
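
    The same trade-off is easy to observe in Python, whose lists also grow geometrically on append; a small sketch that watches the allocation jumps:

        import sys

        lst = []
        last = sys.getsizeof(lst)
        for i in range(32):
            lst.append(i)
            size = sys.getsizeof(lst)
            if size != last:                     # a re-allocation just happened
                print("len=%2d grew to %d bytes" % (len(lst), size))
                last = size

        # pre-sizing skips those intermediate allocations entirely
        pre = [None] * 20

    Each printed line marks a buffer copy that a well-chosen initial capacity would have avoided; the flip side, exactly as suspected above, is that an oversized capacity just pins unused memory.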


  • Process xml-like log file queue

    - by Zsolt Botykai
    Hi all, first of all: I'm not a programmer, never was, although I have learned a lot during my professional career as a support consultant. Now my task is to process, and create some statistics about, a constantly written and rapidly growing XML-like log file. It's not valid XML, because it does not have a proper root element; the log looks like this:

        <log itemdate="somedate">
          <field id="0" />
          ...
        </log>
        <log itemdate="somedate+1">
          <field id="0" />
          ...
        </log>
        <log itemdate="somedate+n">
          <field id="0" />
          ...
        </log>

    For example, I have to count all the items with field id=0, but most of the solutions I found (e.g. using XPath) report an error about the garbage after the first closing </log>. Most probably I can use Python (2.6, although I can compile 3.x as well), some really old Perl version (5.6.x), or a recently compiled xmlstarlet, which really looks promising: I was able to create the statistics for a certain period after copying the file and prepending and appending an opening and closing root element. But this is a huge file and copying takes time as well. Isn't there a better solution? Thanks in advance!
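
    One way to avoid the copy, sketched for Python 3 (which the poster can build; IterStream is a hypothetical helper name): stream the file through the parser while injecting a synthetic root element on the fly, so nothing is ever duplicated on disk.

        import io
        import itertools
        import xml.etree.ElementTree as ET

        class IterStream(io.RawIOBase):
            """Adapts an iterator of byte chunks into a readable file object."""
            def __init__(self, chunks):
                self.chunks = chunks
                self.leftover = b""
            def readable(self):
                return True
            def readinto(self, buf):
                if not self.leftover:
                    try:
                        self.leftover = next(self.chunks)
                    except StopIteration:
                        return 0                  # end of stream
                n = min(len(buf), len(self.leftover))
                buf[:n] = self.leftover[:n]
                self.leftover = self.leftover[n:]
                return n

        def count_field_zero(path):
            with open(path, "rb") as f:
                # wrap the <log> fragments in a fake <root> ... </root> as they stream past
                chunks = itertools.chain([b"<root>"], f, [b"</root>"])
                count = 0
                for _, elem in ET.iterparse(IterStream(chunks)):
                    if elem.tag == "field" and elem.get("id") == "0":
                        count += 1
                    if elem.tag == "log":
                        elem.clear()              # keep memory flat on a huge file
                return count

    Because iterparse is streaming and finished <log> elements are cleared, memory use stays constant no matter how large the file grows.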


  • API-based solutions for sending payments to people without bank accounts

    - by Tauren
    I'm looking for inexpensive ways to send payments to hundreds or thousands of individual contractors, even if they do not have bank accounts. Currently I only need to support payments in the USA, but it may eventually be international. Here's the scenario:

    I offer a service that allows an organization or manager-type person to coordinate contractors for very short-term jobs, typically only an hour or two in length. A contractor may get only one job over an entire month, several jobs spread out over a month, multiple jobs on a single day, or any other combination. Thus a single contractor could earn as little as one job's payment up to payment for dozens, and payment for a month could range from $10 up to the $1000s. Right now, the system provides payroll reports to the manager, and it is the manager's responsibility to produce checks, stuff envelopes, and send mail via the US postal service. I'd like to remove this burden from the manager and have the system take care of all the payments automatically.

    I'm not sure where to start or what the best options would be. I'm starting to look into the following solutions, but don't know the specifics yet and would like some advice before pursuing them; I'd also like to hear about other ideas or suggestions:

    - PayPal (Send Money, Adaptive Payments, x.com, other?)
    - Amazon (Flexible Payments System?)
    - Funding some sort of pre-paid debit card
    - A web service with an API that mails checks for you
    - Direct deposit via a bank API (for users with bank accounts)

    The problem is that many of these contractors may not be able to obtain bank accounts or credit cards within the USA. I don't mind a hybrid of solutions, but are there any that would work well with this issue? I want the solution to be easy for the contractors, meaning that they can get the money easily (via a check in the mail, debit card ATM withdrawal, etc.).


  • Issue reading in a cell from Excel with Apache POI

    - by Nick
    I am trying to use Apache POI to read in old (pre-2007, XLS) Excel files. My program goes to the end of the rows and iterates back up until it finds something that's not null or empty, then iterates back up a few more times and grabs those cells. The program works just fine reading in XLSX files and XLS files made in Office 2010, but on the old files I get the following error message:

        Exception in thread "main" java.lang.NumberFormatException: empty String
            at sun.misc.FloatingDecimal.readJavaFormatString(Unknown Source)
            at java.lang.Double.parseDouble(Unknown Source)

    at the line:

        num = Double.parseDouble(str);

    from the code:

        str = cell.toString();
        if (str != "" || str != null) {
            System.out.println("Cell is a string");
            num = Double.parseDouble(str);
        } else {
            System.out.println("Cell is numeric.");
            num = cell.getNumericCellValue();
        }

    where the cell is the last cell in the document that's not empty or null. When I try to print the first cell that's not empty or null, it prints nothing, so I think I'm not accessing it correctly.
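
    Two things worth noting about that guard, independent of POI: in Java, != on strings compares references rather than contents, and joining the two checks with || makes the condition always true, so the parse runs even on an empty string. A sketch of the intended check, written here in Python for brevity (function name hypothetical):

        def cell_number(cell_text):
            """Parse a cell's text as a number, or return None for blank cells."""
            s = "" if cell_text is None else cell_text.strip()
            if s:                    # non-empty AND non-null, not OR
                return float(s)      # analogue of Double.parseDouble(str)
            return None              # blank cell: fall back to the numeric path

        assert cell_number("3.14") == 3.14
        assert cell_number("") is None
        assert cell_number(None) is None

    The Java equivalent would test `str != null && !str.isEmpty()` before calling Double.parseDouble.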


  • Random forests for short texts

    - by Jasie
    Hi all,

    I've been reading about random forests (1, 2) because I think it'd be really cool to be able to classify a set of 1,000 sentences into pre-defined categories. I'm wondering if someone can explain the algorithm to me better; I find the papers a bit dense. Here's the gist from (1):

    Overview: We assume that the user knows about the construction of single classification trees. Random Forests grows many classification trees. To classify a new object from an input vector, put the input vector down each of the trees in the forest. Each tree gives a classification, and we say the tree "votes" for that class. The forest chooses the classification having the most votes (over all the trees in the forest). Each tree is grown as follows:

    - If the number of cases in the training set is N, sample N cases at random, but with replacement, from the original data. This sample will be the training set for growing the tree.
    - If there are M input variables, a number m « M is specified such that at each node, m variables are selected at random out of the M, and the best split on these m is used to split the node. The value of m is held constant while the forest is grown.
    - Each tree is grown to the largest extent possible. There is no pruning.

    So, does this look right? I'd have N = 1,000 training cases (sentences) and M = 100 variables (say there are only 100 unique words across all sentences), so the input vector is a bit vector of length 100 corresponding to each word. For each tree, I randomly sample N = 1,000 cases (with replacement) to build it from, and pick some small number of input variables m « M, say 10. Do I build tree nodes randomly, using all m input variables? How many classification trees do I build? Thanks for the help!
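
    For scale, here is roughly what that setup looks like with scikit-learn's implementation (a sketch; the sentences, labels, and parameter choices are illustrative):

        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_extraction.text import CountVectorizer

        sentences = ["the cat sat on the mat", "stocks fell sharply today"]  # toy data
        labels = ["animals", "finance"]

        # binary=True gives the 0/1 word-presence vectors described above (M columns)
        vec = CountVectorizer(binary=True)
        X = vec.fit_transform(sentences)

        clf = RandomForestClassifier(
            n_estimators=100,       # number of trees; a common starting point
            max_features="sqrt",    # m = sqrt(M) variables tried at each node
            bootstrap=True,         # sample N cases with replacement per tree
        )
        clf.fit(X, labels)
        print(clf.predict(vec.transform(["the dog sat down"])))

    Note that in Breiman's description the m random variables are re-drawn at every node, not once per tree, and the best split among those m is still chosen greedily; only the candidate set is random.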


  • Decompress a GZipped response from the server (Socket)

    - by Lith
    Umm, OK. After sending some data to the server, noting this particular part:

        "Accept-Encoding: gzip,deflate\r\n"

    I get the following response:

        HTTP/1.1 200 OK
        Server: nginx
        Date: Fri, 09 Apr 2010 23:25:27 GMT
        Content-Type: text/html; charset=UTF-8
        Transfer-Encoding: chunked
        Connection: keep-alive
        X-Powered-By: PHP/5.2.8
        Expires: Mon, 26 Jul 1997 05:00:00 GMT
        Last-Modified: Fri, 09 Apr 2010 23:25:27 GMT
        Cache-Control: no-store, no-cache, must-revalidate
        Cache-Control: post-check=0, pre-check=0
        Pragma: no-cache
        Content-Encoding: gzip
        Vary: Accept-Encoding

        7aa
        ??U-?Rh?%?2?w??PM]??7?qZ?K?)???2?&??m???"q??/p9w?????x?[`tA!G???G?5z??????a>k????????Q ???N?? ('??f?,(??Y:5B???-?)?3x^0e:j?`,???**???F>G)?2????@???b??????A?k???Ar?n?

    But how do I decompress it? Note that I am using the Socket class to do all the work. I know how to decompress gzip data; the problem is that I cannot separate the packet from the gzipped data. Pseudocode for what I do:

    1. Socket sends the request;
    2. Socket reads the response from the server and stores it in a byte array;
    3. Create a MemoryStream from the byte array;
    4. Create a GZipStream from the MemoryStream;
    5. And here the problem occurs. I get the following error:

        System.IO.InvalidDataException: The magic number in GZip header is not correct. Make sure you are passing in a GZip stream.

    I hope the explanation is clear enough.
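
    The gzip data only starts after the header block and after each chunk-size line (the "7aa" above is a hex chunk length, not payload). A sketch of the separation in Python; the same three steps apply to a C# byte array before handing it to GZipStream:

        import gzip

        def decompress_response(raw: bytes) -> bytes:
            # 1. headers end at the first blank line
            head, _, body = raw.partition(b"\r\n\r\n")
            # 2. undo Transfer-Encoding: chunked -> "<hex length>\r\n<data>\r\n" repeated
            out = bytearray()
            pos = 0
            while True:
                eol = body.index(b"\r\n", pos)
                size = int(body[pos:eol], 16)
                if size == 0:                   # "0\r\n\r\n" terminates the chunks
                    break
                out += body[eol + 2 : eol + 2 + size]
                pos = eol + 2 + size + 2        # skip the data and its trailing CRLF
            # 3. only now does the buffer start with the gzip magic number
            return gzip.decompress(bytes(out))

    Feeding the raw response, headers and chunk-size lines included, straight into the decompressor is exactly what produces the "magic number" error.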


  • Use of putty in command line

    - by kij
    Hi,

    I'm trying to use PuTTY from the command line in a Hudson job. The command is the following:

        putty -ssh -2 -P 22 USERNAME@SERVER_ADDR -pw PASS -m command.txt

    where 'command.txt' is a shell script to execute on the server through SSH. If I launch this command from the Windows command prompt, it works: the shell script is executed on the server machine. If I launch a build of the Hudson job configured with this batch command, it doesn't work; the build keeps running... and running... and running... without doing anything, and I have to stop it manually. So my question is: is it possible to launch an external program (i.e. PuTTY) from a Hudson job?

    PS: I tried the SSH plugin, but it's not really a good plugin (pre/post-build only, fail status of the launched commands not caught by Hudson, etc.).

    Thanks in advance for your help. Best regards. kij


  • How to add an extra plist property using CMake?

    - by Jesse Beder
    I'm trying to add the item

        <key>UIStatusBarHidden</key><true/>

    to the plist that's auto-generated by CMake. For certain keys there are pre-defined ways to add an item; for example:

        set(MACOSX_BUNDLE_ICON_FILE ${ICON})

    But I can't find a way to add an arbitrary property. I tried using the MACOSX_BUNDLE_INFO_PLIST target property as follows: I'd like the resulting plist to be identical to the old one, except with the new property, so I just copied the auto-generated plist and set that as my template. But the plist uses some Xcode variables, which also look like ${foo}, and CMake grumbles about this:

        Syntax error in cmake code when parsing string
            <string>com.bedaire.${PRODUCT_NAME:identifier}</string>
        syntax error, unexpected cal_SYMBOL, expecting } (47)
        Policy CMP0010 is not set: Bad variable reference syntax is an error.
        Run "cmake --help-policy CMP0010" for policy details.
        Use the cmake_policy command to set the policy and suppress this warning.
        This warning is for project developers. Use -Wno-dev to suppress it.

    In any case, I'm not even sure this is the right thing to do. I can't find a good example or any good documentation about this. Ideally, I'd just let CMake generate everything as before and just add a single extra line. What can I do?


  • How to structure the tables of a very simple blog in MySQL?

    - by Programmer
    I want to add a very simple blog feature to one of my existing LAMP sites. It would be tied to a user's existing profile, and they would simply input a title and a body for each post in their blog; the date would be set automatically upon submission. They would be allowed to edit and delete any blog post and title at any time. The blog would be displayed from most recent to oldest, perhaps 20 posts to a page, with proper pagination above that. Other users would be able to leave comments on each post, which the blog owner could delete but not pre-moderate. That's basically it; like I said, very simple.

    How should I structure the MySQL tables for this? I'm assuming that since there will be blog posts and comments, I need a separate table for each; is that correct? But then what columns do I need in each table, what data types should I use, and how should I link the two tables together (e.g. any foreign keys)? I could not find any tutorials for something like this, and what I'm looking to do is really offer my users the simplest version of a blog possible. No tags, no moderation, no images, no fancy formatting, etc.; just a simple diary-type, pure-text blog with commenting by other users.
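
    A minimal two-table layout matching that description, sketched with Python's built-in sqlite3 so it runs as-is (in MySQL the ids would be INT AUTO_INCREMENT primary keys, and the tables InnoDB so the foreign key is enforced; names are illustrative):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE posts (
                id         INTEGER PRIMARY KEY,      -- AUTO_INCREMENT in MySQL
                user_id    INTEGER NOT NULL,         -- links to the existing users table
                title      TEXT NOT NULL,
                body       TEXT NOT NULL,
                created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
            );
            CREATE TABLE comments (
                id         INTEGER PRIMARY KEY,
                post_id    INTEGER NOT NULL REFERENCES posts(id),  -- one post, many comments
                user_id    INTEGER NOT NULL,         -- the commenting user
                body       TEXT NOT NULL,
                created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
            );
        """)

        # newest first, 20 to a page: page N uses OFFSET (N-1)*20
        page = conn.execute(
            "SELECT id, title, created_at FROM posts "
            "WHERE user_id = ? ORDER BY created_at DESC LIMIT 20 OFFSET 0", (1,))

    Edits and deletes are then plain UPDATE/DELETE statements keyed on id, with a check that the requesting user owns the post (or, for comment deletion, owns the blog).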


  • Why is Grails Searchable Plugin causing errors on Hibernate AutoFlush?

    - by Mark Rogers
    In the Grails 1.2.5 project that I am trying to troubleshoot, we use the Grails Searchable plugin 0.5.5.1. The problem is that whenever we attempt to index large sets of domain classes, Grails keeps throwing:

        ERROR hibernate.AssertionFailure - an assertion failure occured (this may indicate a bug in Hibernate, but is more likely due to unsafe use of the session)
        org.hibernate.AssertionFailure: collection [domain-class] was not processed by flush()

    But the domain classes involved have been mapped and used by Hibernate without issues outside of the calls to the Searchable plugin. The plugin is used as follows:

    1. Create a Compass session with compass.openSession()
    2. Begin a Compass transaction: compassSession.beginTransaction()
    3. Call compassSession.create(result.get(0)) on an important unindexed domain class
    4. Call compassTransaction.commit() to commit the transaction
    5. Go to 2 and process the next domain class

    Between the third and fourth domain classes, an autoflush is triggered that throws the error. Can anyone give me any hints about how to solve this problem? Has anyone encountered it before? I know there was a systemic issue with this back in pre-0.5 versions of the Searchable plugin; is it possible those issues weren't totally fixed?

