Search Results

Search found 3938 results on 158 pages for 'goal'.


  • Is it possible to search locally in jqGrid with treeGrid installed

    - by Nehu
    I am using jqGrid with treeGrid and have added a filterToolbar. I would like to search locally instead of making a server call. The treeGrid docs say that "When we initialize the grid and the data is read, the datatype is automatically set to local." So, is it possible to implement local search with treeGrid? I tried the configuration below, but it still results in server calls.

        var grid = $("#grid").jqGrid({
            treeGrid: true,
            treeGridModel: 'adjacency',
            ExpandColumn: 'businessAreaName',
            ExpandColClick: true,
            url: 'agileProgramme/records.do',
            datatype: 'json',
            mtype: 'GET',
            colNames: ['Id', 'Business Area', 'Investment', 'Org', 'Goal'],
            colModel: [
                /*00*/ {name: 'agileProgrammeId', index: 'agileProgrammeId', width: 0, editable: false, hidden: true},
                /*01*/ {name: 'businessAreaName', index: 'businessAreaName', width: 160, editable: false},
                /*02*/ {name: 'programmeName', index: 'programmeName', width: 150, editable: false, classes: 'link'},
                /*03*/ {name: 'org', index: 'org', width: 50, editable: false, classes: 'orgHierarchy', sortable: false},
                /*04*/ {name: 'goal', index: 'goal', width: 70, editable: false}
            ],
            treeReader: {
                level_field: "level",
                parent_id_field: "parent",
                leaf_field: "leaf",
                expanded_field: "expanded"
            },
            autowidth: true,
            height: 240,
            pager: '#pager',
            sortname: 'id',
            sortorder: "asc",
            toolbar: [true, "top"],
            caption: "TableGridDemo",
            emptyrecords: "Empty records",
            jsonReader: {
                root: "rows",
                page: "page",
                total: "total",
                records: "records",
                repeatitems: false,
                cell: "cell",
                id: "agileProgrammeId"
            }
        });

    And to add the search toolbar:

        $('#grid').jqGrid('filterToolbar', {stringResult: true, searchOnEnter: true});

    I would appreciate any help, or even a pointer on whether this is possible at all.
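    A direction worth trying (a sketch only, not verified against treeGrid internals; whether tree data is filtered client-side depends on the jqGrid version in use): jqGrid's loadonce option switches the grid's datatype to 'local' after the first server load, and the filter toolbar then searches the already-loaded rows instead of issuing requests.

        var grid = $("#grid").jqGrid({
            treeGrid: true,
            treeGridModel: 'adjacency',
            url: 'agileProgramme/records.do',
            datatype: 'json',
            loadonce: true,   // after the first load the datatype becomes 'local'
            // ... remaining options as in the configuration above ...
        });

        // with a local datatype the toolbar filters the client-side data
        $('#grid').jqGrid('filterToolbar', {stringResult: true, searchOnEnter: true, defaultSearch: 'cn'});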


  • UPDATE query that fixes orphaned records

    - by Jed
    I have an Access database with two tables related by PK/FK. Unfortunately, the tables have allowed duplicate/redundant records, which has made the database a bit screwy. I am trying to figure out a SQL statement that will fix the problem. To better explain the problem and goal, I have created example tables to use as reference.

    There are two tables, a Student table and a TestScore table, where StudentID is the PK/FK. The Student table contains duplicate records for students John, Sally, Tommy, and Suzy. In other words, the Johns with StudentIDs 1 and 5 are the same person, the Sallys with 2 and 6 are the same person, and so on. The TestScore table relates test scores with a student.

    Ignoring how/why the Student table allowed duplicates, the goal I'm trying to accomplish is to update the TestScore table so that it replaces the StudentIDs that have been disabled with the corresponding enabled StudentID. So, all StudentIDs = 1 (John) will be updated to 5, all StudentIDs = 2 (Sally) will be updated to 6, and so on. The resulting TestScore table I'm shooting for no longer has any reference to the disabled StudentIDs 1-4.

    Can you think of a query (compatible with MS Access's JET engine) that can accomplish this goal? Or maybe you can offer some tips/perspectives that will point me in the right direction. Thanks.
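    For what it's worth, Jet does accept UPDATE statements with joins, so one common pattern is a self-join on Student that maps each disabled row to its enabled duplicate. This is only a sketch: the StudentName and Disabled columns are assumed stand-ins for however the example tables actually identify duplicates and mark disabled rows.

        UPDATE (TestScore
            INNER JOIN Student AS OldStudent
                ON TestScore.StudentID = OldStudent.StudentID)
            INNER JOIN Student AS NewStudent
                ON OldStudent.StudentName = NewStudent.StudentName
        SET TestScore.StudentID = NewStudent.StudentID
        WHERE OldStudent.Disabled = True
          AND NewStudent.Disabled = False;

    Running it inside a transaction (or on a copy of the database) first is a sensible precaution, since a bad join condition would rewrite every matching score row.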


  • Scheduling of tasks to a single resource using Prolog

    - by Reed Debaets
    I searched through here as best I could and, though I found some relevant questions, I don't think they covered the question at hand. Assume a single resource and a known list of requests to schedule a task. Each request includes a start_after, start_by, expected_duration, and action. The goal is to schedule the tasks for execution as soon as possible while keeping each task scheduled between start_after and start_by. I coded up a simple Prolog example that I thought should work, but I've unfortunately been getting errors at run time: "=/2: Arguments are not sufficiently instantiated". Any help or advice would be greatly appreciated.

        startAfter(1,0).   startAfter(2,0).   startAfter(3,0).
        startBy(1,100).    startBy(2,500).    startBy(3,300).
        duration(1,199).   duration(2,199).   duration(3,199).
        action(1,'noop1'). action(2,'noop2'). action(3,'noop3').

        can_run(R,T) :- startAfter(R,TA), startBy(R,TB), T>=TA, T=<TB.

        conflicts(T,R1,T1) :- duration(R1,D1), T=<D1+T1, T>T1.

        schedule(R1,T1,R2,T2,R3,T3) :-
            can_run(R1,T1), \+conflicts(T1,R2,T2), \+conflicts(T1,R3,T3),
            can_run(R2,T2), \+conflicts(T2,R1,T1), \+conflicts(T2,R3,T3),
            can_run(R3,T3), \+conflicts(T3,R1,T1), \+conflicts(T3,R2,T2).

        % when traced I *should* see T1=0, T2=400, T3=200

    Edit: the conflicts goal wasn't quite right; it needed the extra T>T1 clause. Edit: Apparently my schedule goal works if I supply valid Request,Time pairs, but I'm stuck trying to force Prolog to find valid values for T1..T3 when given R1..R3.
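    The instantiation error comes from the arithmetic comparisons in can_run/2 being reached while the time variables are still unbound. A minimal sketch of one way around it, assuming an integer time grid is acceptable, is to enumerate candidate start times with between/3 (built into SWI-Prolog and others) before testing them:

        % candidate_time/2 generates bound start times, so the comparisons and the
        % negated conflicts/3 goals only ever see ground terms
        candidate_time(R, T) :-
            startAfter(R, TA),
            startBy(R, TB),
            between(TA, TB, T).          % T = TA, TA+1, ..., TB

        schedule(R1,T1, R2,T2, R3,T3) :-
            candidate_time(R1, T1),
            candidate_time(R2, T2),
            candidate_time(R3, T3),
            \+ conflicts(T1, R2, T2), \+ conflicts(T1, R3, T3),
            \+ conflicts(T2, R1, T1), \+ conflicts(T2, R3, T3),
            \+ conflicts(T3, R1, T1), \+ conflicts(T3, R2, T2).

    Brute-force enumeration like this gets expensive quickly; for anything beyond a toy problem a finite-domain constraint library such as CLP(FD) is the usual tool.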


  • SSH with Perl using file handles, not Net::SSH

    - by jorge
    Before I ask the question: I can not use the CPAN module Net::SSH. I want to, but can not; no amount of begging will change this fact. I need to be able to open an SSH connection, keep it open, and read from its stdout and write to its stdin. My approach thus far has been to open it in a pipe, but I have not been able to advance past this; it dies straight away. That's what I have in mind. I understand this causes a fork to occur, and I've written code accordingly (or so I think). Below is a skeleton of what I want; I just need the system to work.

        #!/usr/bin/perl
        use warnings;
        $| = 1;
        $pid = open (SSH,"| ssh user\@host");
        if(defined($pid)){
            if(!$pid){          # child
                while(<>){
                    print;
                }
            }else{              # parent
                select SSH; $| = 1; select STDIN;
                while(<>){
                    print SSH $_;
                    while(<SSH>){
                        print;
                    }
                }
                close(SSH);
            }
        }

    I know, from what it looks like, that I'm trying to recreate system('ssh user@host'); that is not my end goal, but knowing how to do it would bring me much closer. Basically, I need a file handle to an open ssh connection where I can read the output and write input to it (not necessarily straight from my program's STDIN; anything I want: variables, yada yada). This includes password input. I know about key pairs; part of the end goal involves making key pairs, but the connection needs to happen regardless of their existence, and if they do not exist it's part of my plan to make them exist.
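    A pipe opened with "| ssh ..." is write-only, which is part of why reading back from it fails. One rough sketch that stays in core Perl (no CPAN install needed) is IPC::Open2, which hands back separate read and write handles. Note that ssh prints password prompts to the tty rather than stdout, so this sketch assumes key-based authentication; interactive passwords need something pty-aware such as IO::Pty or Expect.

        #!/usr/bin/perl
        use strict;
        use warnings;
        use IPC::Open2;                      # ships with Perl, not a CPAN install

        # one handle to write to ssh's stdin, one to read its stdout
        my $pid = open2(my $ssh_out, my $ssh_in, 'ssh', 'user@host');

        print {$ssh_in} "uptime\n";          # send a command over the connection
        my $reply = <$ssh_out>;              # read one line of the remote output
        print "remote said: $reply";

        print {$ssh_in} "exit\n";
        close $ssh_in;
        close $ssh_out;
        waitpid $pid, 0;

    The usual open2 caveat applies: reads block if the remote side has nothing to say, so real code needs line-by-line protocols, select/IO::Select, or timeouts.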


  • With Maven2, how would I zip a directory in my resources?

    - by Benny
    I'm trying to upgrade my project from Maven 1 to Maven 2, but I'm having a problem with one of the goals. I have the following Maven 1 goal in my maven.xml file:

        <goal name="jar:init">
            <ant:delete file="${basedir}/src/installpack.zip"/>
            <ant:zip destfile="${basedir}/src/installpack.zip"
                     basedir="${basedir}/src/installpack" />
            <copy todir="${destination.location}">
                <fileset dir="${source.location}">
                    <exclude name="installpack/**"/>
                </fileset>
            </copy>
        </goal>

    I'm unable to find a way to do this in Maven 2, however. Right now, the installpack directory is in the resources directory of my standard Maven 2 directory structure, which works well since it just gets copied over. I need it to be zipped, though. I found this page on creating Ant plugins: http://maven.apache.org/guides/plugin/guide-ant-plugin-development.html. It looks like I could create my own Ant plugin to do what I need. I was just wondering if there was a way to do it using only Maven 2. Any suggestions would be much appreciated. Thanks, B.J.
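    One low-ceremony option, sketched below (the execution id and the chosen phase are guesses that may need adjusting), is to keep the existing Ant tasks and run them through the maven-antrun-plugin bound to an early lifecycle phase:

        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-antrun-plugin</artifactId>
            <executions>
                <execution>
                    <id>zip-installpack</id>
                    <phase>generate-resources</phase>
                    <goals>
                        <goal>run</goal>
                    </goals>
                    <configuration>
                        <tasks>
                            <delete file="${basedir}/src/installpack.zip"/>
                            <zip destfile="${basedir}/src/installpack.zip"
                                 basedir="${basedir}/src/installpack"/>
                        </tasks>
                    </configuration>
                </execution>
            </executions>
        </plugin>

    The maven-assembly-plugin with a zip-format descriptor is the more Maven-native route if the zip is meant to be a build artifact rather than something dropped back into src.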


  • How do I execute a program using Maven?

    - by Will
    I would like to have a Maven goal trigger the execution of a Java class. I'm trying to migrate over a Makefile with the lines:

        neotest:
                mvn exec:java -Dexec.mainClass="org.dhappy.test.NeoTraverse"

    And I would like mvn neotest to produce what make neotest does currently. Neither the exec plugin documentation nor the Maven Ant tasks pages had any sort of straightforward example. Currently, I'm at:

        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>exec-maven-plugin</artifactId>
            <version>1.1</version>
            <executions>
                <execution>
                    <goals><goal>java</goal></goals>
                </execution>
            </executions>
            <configuration>
                <mainClass>org.dhappy.test.NeoTraverse</mainClass>
            </configuration>
        </plugin>

    I don't know how to trigger the plugin from the command line, though.
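    For what it's worth: with the configuration block at the plugin level as above, a plain mvn exec:java on the command line should already pick up the mainClass, because plugin-level configuration applies to direct goal invocations. Maven has no way to learn a custom word like "mvn neotest" without writing a plugin, so a sketch of the usual compromise is to bind the execution to an existing phase and run that phase instead:

        <execution>
            <id>neotest</id>
            <!-- any phase after the classes are compiled works; adjust to taste -->
            <phase>test</phase>
            <goals>
                <goal>java</goal>
            </goals>
        </execution>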


  • How to get maven gwt 2.0 build working

    - by Pieter Breed
    EDIT: Added some of the output of the mvn -X -e command at the end.

    My company is developing a GWT application. We've been using Maven 2 and GWT 1.7 successfully for quite a while. We recently decided to upgrade to GWT 2.0. We've already updated the Eclipse project and we are able to successfully run the application in dev mode. We are struggling to get the application built using Maven, though. I'm hoping somebody can tell me what I'm doing wrong here, since I'm running out of time on this. The exact bit of the output that worries me is the 'GWT compilation skipped' message:

        [INFO] Copying 119 resources
        [INFO] [compiler:compile {execution: default-compile}]
        [INFO] Compiling 704 source files to K:\iCura\assessor\target\classes
        [INFO] [gwt:compile {execution: default}]
        [INFO] using GWT jars for specified version 2.0.0
        [INFO] establishing classpath list (scope = compile)
        [INFO] com.curasoftware.assessor.Assessor is up to date. GWT compilation skipped
        [INFO] [jspc:compile {execution: jspc}]
        [INFO] Built File: \index.jsp

    I'm pasting the gwt-maven-plugin section below. If you need anything else, please ask.

        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>gwt-maven-plugin</artifactId>
            <version>1.2</version>
            <configuration>
                <localWorkers>1</localWorkers>
                <warSourceDirectory>${basedir}/war</warSourceDirectory>
                <logLevel>ALL</logLevel>
                <module>${cura.assessor.module}</module>
                <!-- use style OBF for prod -->
                <style>OBFUSCATED</style>
                <extraJvmArgs>-Xmx2048m -Xss1024k</extraJvmArgs>
                <gwtVersion>${version.gwt}</gwtVersion>
                <disableCastChecking>true</disableCastChecking>
                <soyc>false</soyc>
            </configuration>
            <executions>
                <execution>
                    <goals>
                        <!-- plugin goals -->
                        <goal>clean</goal>
                        <goal>compile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>

    I executed mvn clean install -X -e and this is some of the output that I get:

        [DEBUG] Configuring mojo 'org.codehaus.mojo:gwt-maven-plugin:1.2:compile' -->
        [DEBUG]   (f) disableCastChecking = true
        [DEBUG]   (f) disableClassMetadata = false
        [DEBUG]   (f) draftCompile = false
        [DEBUG]   (f) enableAssertions = false
        [DEBUG]   (f) extra = K:\iCura\assessor\target\extra
        [DEBUG]   (f) extraJvmArgs = -Xmx2048m -Xss1024k
        [DEBUG]   (f) force = false
        [DEBUG]   (f) gen = K:\iCura\assessor\target\.generated
        [DEBUG]   (f) generateDirectory = K:\iCura\assessor\target\generated-sources\gwt
        [DEBUG]   (f) gwtVersion = 2.0.0
        [DEBUG]   (f) inplace = false
        [DEBUG]   (f) localRepository = Repository[local|file://K:/iCura/lib]
        [DEBUG]   (f) localWorkers = 1
        [DEBUG]   (f) logLevel = ALL
        [DEBUG]   (f) module = com.curasoftware.assessor.Assessor
        [DEBUG]   (f) project = MavenProject: com.curasoftware.assessor:assessor:3.5.0.0 @ K:\iCura\assessor\pom.xml
        [DEBUG]   (f) remoteRepositories = [Repository[gwt-maven|http://gwt-maven.googlecode.com/svn/trunk/mavenrepo/], Repository[main-maven|http://www.ibiblio.org/maven2/], Repository[central|http://repo1.maven.org/maven2]]
        [DEBUG]   (f) skip = false
        [DEBUG]   (f) sourceDirectory = K:\iCura\assessor\src
        [DEBUG]   (f) soyc = false
        [DEBUG]   (f) style = OBFUSCATED
        [DEBUG]   (f) treeLogger = false
        [DEBUG]   (f) validateOnly = false
        [DEBUG]   (f) warSourceDirectory = K:\iCura\assessor\war
        [DEBUG]   (f) webappDirectory = K:\iCura\assessor\target\assessor
        [DEBUG] -- end configuration --

    and then this:

        [DEBUG] SOYC has been disabled by user
        [DEBUG] GWT module com.curasoftware.assessor.Assessor found in K:\iCura\assessor\src
        [INFO] com.curasoftware.assessor.Assessor is up to date. GWT compilation skipped
        [DEBUG] com.curasoftware.assessor:assessor:war:3.5.0.0 (selected for null)
        [DEBUG] com.curasoftware.dto:dto-gen:jar:3.5.0.0:compile (selected for compile)
        ...

    It is finding the correct sourceDirectory. That folder has a 'com' folder, within which is ultimately the source of the application, organized in folders as per the package structure.
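    One knob that looks directly relevant here (it is visible in the -X output above as "(f) force = false"): the plugin exposes a force parameter that makes gwt:compile run even when its up-to-date check decides nothing changed. A sketch, assuming gwt-maven-plugin 1.2 honours it the same way the documented versions do:

        <configuration>
            ...
            <!-- skip the staleness check and always run the GWT compiler -->
            <force>true</force>
        </configuration>

    If a forced build then produces a working war, the next thing to chase is why the staleness check believes the generated output is newer than the module sources; left-over compiled output under the war directory from earlier runs is a usual suspect.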


  • OpenVPN vs. IPSec - Pros and Cons, what to use?

    - by jens
    Interestingly, I have not found any good search results when searching for "OpenVPN vs IPSec". I need to set up a private LAN over an untrusted network, and as far as I know both approaches are valid, but I do not know which one is better. I would be very thankful if you could list the pros and cons of both approaches, and maybe your suggestions and experiences on what to use.

    Update (regarding the comment/question): In my concrete case the goal is to have any number of servers (with static IPs) connected transparently with each other. A small portion of dynamic clients, like road warriors (with dynamic IPs), should also be able to connect. The main goal, however, is a "transparent secure network" running on top of an untrusted network. I am quite a newbie, so I do not know how to correctly interpret "1:1 point-to-point connections". The solution should support broadcasts and all that stuff, so it is a fully functional network... Thank you very much!! Jens


  • Maven webapp with maven-eclipse-plugin doesn't generate <dependent-module>

    - by codevourer
    I use the eclipse:eclipse goal to generate an Eclipse project environment, and the deployment works fine. The goal creates the var classpath entries for all needed dependencies. With m2eclipse there was the Maven container, which defines an export folder (WEB-INF/lib in my case). But I don't want to rely on m2eclipse, so I don't use it anymore. The classpath entries generated by the eclipse:eclipse goal don't have such an export folder. While booting the servlet container with WTP, it publishes all resources and classes except the libraries to the context. What is missing to publish the needed libs, or isn't that possible without the m2eclipse integration?

    Environment: Eclipse 3.5 JEE Galileo, Apache Maven 2.2.1 (r801777; 2009-08-06 21:16:01+0200), Java version 1.6.0_14, m2eclipse.

    The maven-eclipse-plugin configuration:

        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-eclipse-plugin</artifactId>
            <version>2.8</version>
            <configuration>
                <projectNameTemplate>someproject-[artifactId]</projectNameTemplate>
                <useProjectReferences>false</useProjectReferences>
                <downloadSources>false</downloadSources>
                <downloadJavadocs>false</downloadJavadocs>
                <wtpmanifest>true</wtpmanifest>
                <wtpversion>2.0</wtpversion>
                <wtpapplicationxml>true</wtpapplicationxml>
                <wtpContextName>someproject-[artifactId]</wtpContextName>
                <additionalProjectFacets>
                    <jst.web>2.3</jst.web>
                </additionalProjectFacets>
            </configuration>
        </plugin>

    The generated files: after executing the eclipse:eclipse goal, the dependent-module is not listed in the generated .settings/org.eclipse.wst.common.component, so on server boot the dependencies are missing. This is what I get:

        <?xml version="1.0" encoding="UTF-8"?>
        <project-modules id="moduleCoreId" project-version="1.5.0">
            <wb-module deploy-name="someproject-core">
                <wb-resource deploy-path="/" source-path="src/main/java"/>
                <wb-resource deploy-path="/" source-path="src/main/webapp"/>
                <wb-resource deploy-path="/" source-path="src/main/resources"/>
            </wb-module>
        </project-modules>

    Update for upcoming readers: the problem here was the deviant packaging type. If you use the maven-eclipse-plugin, please check that the packaging is <packaging>war</packaging> (or ear). The problems described above came from having two build lifecycles in one Maven pom.


  • zero-config CGI enabled web server

    - by halp
    To serve the static content of a directory over HTTP, one can simply navigate to that directory and type:

        python -m SimpleHTTPServer 11111

    which will start an HTTP server on port 11111. This hack is nice because it requires zero config: no stand-alone web server, no config files at all. Is it possible to extend this example, or is there an alternate way to achieve this goal, but with CGI support as well? The final goal is to have a quick and lazy way of serving a web site from a certain directory. The site has static content (HTML pages, images), but also a CGI script. The CGI script must work properly when accessed via a browser. Of course I could set up a virtual host in Apache, allow CGI inside it, etc. But that's not a zero-config approach.
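    For the record, the standard library ships a CGI-capable sibling of SimpleHTTPServer that keeps the zero-config spirit; the one assumption is that the CGI script lives under ./cgi-bin (or ./htbin) relative to the served directory:

        # Python 2: like SimpleHTTPServer, but executes scripts found under ./cgi-bin
        python -m CGIHTTPServer 11111

        # Python 3 equivalent
        python3 -m http.server --cgi 11111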


  • Understanding an Application based on the OS interaction with a Hypervisor

    - by Dewy
    I will ask a few specific questions below, but let me set the stage first. My goal is to monitor applications in a very odd place: between the OS and a hypervisor. If you have comments about this probably unachievable goal, please do educate me; one good piece of advice or link can save me days of work. Now to my current attempt. I installed VirtualBox (being open source) on WinXP and got a guest OS running the latest Ubuntu inside it. Where should I go next? Can I set the logs to show all memory/CPU/disk instructions of the guest OS? Thanks, Dewy


  • The maven assembly plugin is not using the finalName for installing with attach=true?

    - by Roland Wiesemann
    I have configured the following assembly:

        <build>
            <plugins>
                <plugin>
                    <artifactId>maven-assembly-plugin</artifactId>
                    <version>2.2-beta-5</version>
                    <executions>
                        <execution>
                            <id>${project.name}-test-assembly</id>
                            <phase>package</phase>
                            <goals>
                                <goal>single</goal>
                            </goals>
                            <configuration>
                                <appendAssemblyId>false</appendAssemblyId>
                                <finalName>${project.name}-test</finalName>
                                <filters>
                                    <filter>src/assemble/test/distribution.properties</filter>
                                </filters>
                                <descriptors>
                                    <descriptor>src/assemble/distribution.xml</descriptor>
                                </descriptors>
                                <attach>true</attach>
                            </configuration>
                        </execution>
                        <execution>
                            <id>${project.name}-prod-assembly</id>
                            <phase>package</phase>
                            <goals>
                                <goal>single</goal>
                            </goals>
                            <configuration>
                                <appendAssemblyId>false</appendAssemblyId>
                                <finalName>${project.name}-prod</finalName>
                                <filters>
                                    <filter>src/assemble/prod/distribution.properties</filter>
                                </filters>
                                <descriptors>
                                    <descriptor>src/assemble/distribution.xml</descriptor>
                                </descriptors>
                                <attach>true</attach>
                            </configuration>
                        </execution>
                    </executions>
                </plugin>
            </plugins>
        </build>

    This produced two zip files: distribution-prod.zip and distribution-test.zip. My expectation for the property attach=true was that the two zip files would be installed under the names given in finalName. But the result is that only one file is installed (attached) to the artifact. The Maven log shows:

        distrib-0.1-SNAPSHOT.zip
        distrib-0.1-SNAPSHOT.zip

    The plugin is using the artifact id instead of the finalName property! Is this a bug? The last installation overwrites the first one. What can I do to install these two files with different names? Thanks for your investigation. Roland
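    A workaround that is often suggested for this situation (a sketch only, untested against assembly-plugin 2.2-beta-5): artifacts attached to a build are distinguished in the repository by classifier rather than by finalName, so two attachments without distinct classifiers will collide. One option is to set attach to false on both executions and attach the zips explicitly with distinct classifiers via the build-helper-maven-plugin:

        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>build-helper-maven-plugin</artifactId>
            <executions>
                <execution>
                    <id>attach-distributions</id>
                    <phase>package</phase>
                    <goals>
                        <goal>attach-artifact</goal>
                    </goals>
                    <configuration>
                        <artifacts>
                            <artifact>
                                <file>${project.build.directory}/${project.name}-test.zip</file>
                                <type>zip</type>
                                <classifier>test</classifier>
                            </artifact>
                            <artifact>
                                <file>${project.build.directory}/${project.name}-prod.zip</file>
                                <type>zip</type>
                                <classifier>prod</classifier>
                            </artifact>
                        </artifacts>
                    </configuration>
                </execution>
            </executions>
        </plugin>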


  • maven scm plugin deleting output folder in every execution

    - by Udo Fholl
    Hi all, I need to download from two different SVN locations into the same output directory, so I configured two different executions. But every checkout deletes the output directory, so it also deletes the already-downloaded projects. Here is a sample of my pom.xml:

        <profiles>
            <profile>
                <id>checkout</id>
                <activation>
                    <property>
                        <name>checkout</name>
                        <value>true</value>
                    </property>
                </activation>
                <build>
                    <plugins>
                        <plugin>
                            <groupId>org.apache.maven.plugins</groupId>
                            <artifactId>maven-scm-plugin</artifactId>
                            <version>1.3</version>
                            <configuration>
                                <username>${svn.username}</username>
                                <password>${svn.pass}</password>
                                <checkoutDirectory>${path}</checkoutDirectory>
                                <skipCheckoutIfExists />
                            </configuration>
                            <executions>
                                <execution>
                                    <id>checkout_a</id>
                                    <configuration>
                                        <connectionUrl>scm:svn:https://host_n/folder</connectionUrl>
                                        <checkoutDirectory>${path}</checkoutDirectory>
                                    </configuration>
                                    <phase>process-resources</phase>
                                    <goals>
                                        <goal>checkout</goal>
                                    </goals>
                                </execution>
                                <execution>
                                    <id>checkout_b</id>
                                    <configuration>
                                        <connectionUrl>scm:svn:https://host_l/anotherfolder</connectionUrl>
                                        <checkoutDirectory>${path}</checkoutDirectory>
                                    </configuration>
                                    <phase>process-resources</phase>
                                    <goals>
                                        <goal>checkout</goal>
                                    </goals>
                                </execution>
                            </executions>
                        </plugin>
                    </plugins>
                </build>
            </profile>
        </profiles>

    Is there any way to prevent the executions from deleting the folder ${path}? Thank you. PS: I can't format the pom.xml fragment correctly, sorry!
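    One workaround that keeps both checkouts intact (a sketch; the subdirectory names are made up): the checkout goal clears its target directory before it runs, so pointing each execution at its own subdirectory of ${path} avoids the second checkout wiping the first.

        <execution>
            <id>checkout_a</id>
            <configuration>
                <connectionUrl>scm:svn:https://host_n/folder</connectionUrl>
                <checkoutDirectory>${path}/folder_a</checkoutDirectory>
            </configuration>
            <phase>process-resources</phase>
            <goals>
                <goal>checkout</goal>
            </goals>
        </execution>
        <!-- checkout_b likewise gets ${path}/folder_b -->

    If both trees really must end up merged in ${path}, a copy step after the checkouts (maven-antrun-plugin or maven-resources-plugin) can flatten the two subdirectories back together.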


  • Is it possible to embed Cockburn style textual UML Use Case content in the code base to improve code

    - by fooledbyprimes
    Experimenting with Cockburn use cases in code: I was writing some complicated UI code and decided to employ Cockburn use cases with fish, kite, and sea levels (discussed by Martin Fowler in his book 'UML Distilled'). I wrapped Cockburn use cases in static C# objects so that I could test logical conditions against static constants which represented steps in a UI workflow. The idea was that you could read the code and know what it was doing, because the wrapped objects and their public constants gave you ENGLISH use cases via namespaces. Also, I was going to use reflection to pump out error messages that included the described use cases, so that the stack trace could include some UI use-case steps IN ENGLISH. It turned out to be a fun way to achieve a mini, pseudo, lightweight domain language without having to write a DSL compiler. So my question is whether or not this is a good way to do this? Has anyone out there ever done something similar?

    C# example snippets follow. Assume we have some aspx page which has three user controls (with lots of clickable stuff). The user must click on something in one particular user control (possibly making some kind of selection), and then the UI must visually cue the user that the selection was successful. Now, while that item is selected, the user must browse through a gridview to find an item within one of the other user controls and then select something. This sounds like an easy thing to manage, but the code can get ugly. In my case, the user controls all sent event messages which were captured by the main page. This way, the page acted like a central processor of UI events and could keep track of what happens when the user is clicking around. So, in the main aspx page, we capture the first user control's event:

        using MyCompany.MyApp.Web.UseCases;

        protected void MyFirstUserControl_SomeUIWorkflowRequestCommingIn(object sender, EventArgs e)
        {
            // some code here to respond and make "state" changes or whatever
            //
            // blah blah blah
            // finally we have this (how did we know to call the fish level method?
            // because we knew when we wrote the code to send the event in the user control)
            UpdateUserInterfaceOnFishLevelUseCaseGoalSuccess(FishLevel.SomeNamedUIWorkflow.SelectedItemForPurchase);
        }

        protected void UpdateUserInterfaceOnFishLevelGoalSuccess(FishLevel.SomeNamedUIWorkflow goal)
        {
            switch (goal)
            {
                case FishLevel.SomeNamedUIWorkflow.NewMasterItemSelected:
                    // call some UI related methods here, including methods for the other user controls if necessary...
                    break;
                case FishLevel.SomeNamedUIWorkFlow.DrillDownOnDetails:
                    // call some UI related methods here, including methods for the other user controls if necessary...
                    break;
                case FishLevel.SomeNamedUIWorkFlow.CancelMultiSelect:
                    // call some UI related methods here, including methods for the other user controls if necessary...
                    break;
                // more cases...
            }
        }

        // also we have
        protected void UpdateUserInterfaceOnSeaLevelGoalSuccess(SeaLevel.SomeNamedUIWorkflow goal)
        {
            switch (goal)
            {
                case SeaLevel.CheckOutWorkflow.ChangedCreditCard:
                    // do stuff
                    break;
                // more cases...
            }
        }

    So, in the MyCompany.MyApp.Web.UseCases namespace we might have code like this:

        class SeaLevel...
        class FishLevel...
        class KiteLevel...

    The workflow use cases embedded in the classes could be inner classes, static methods, enumerations, or whatever gives you the cleanest namespace. I can't remember what I did originally, but you get the picture.


  • NginX : Route user request to backend

    - by xperator
    The goal is to have the NginX web server act as a very basic and simple load balancer/fail-over. But instead of fetching static files from a backend and serving them to the user, I just want to route/redirect the user's request to one of the backend servers.

        upstream backend {
            server server1.example.com:80;
            server server2.example.com:80;
            server server3.example.com:80;
        }

        location / {
            proxy_pass http://backend;
        }

    Instead of:

        User request (example.com/test.file) -> NginX LB -> Backend -> NginX LB -> User

    I want to have:

        User request (example.com/test.file) -> NginX LB -> Backend -> User

    Is this even possible with NginX? If not, how can I achieve this goal?

    UPDATE 1: Is there a way to use the rewrite directive with a backend upstream?

    UPDATE 2: It's not really necessary to use NginX. I just want to have a direct reply from the backend to the user.
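    If HTTP redirects are acceptable to the clients, one sketch of this (the hostnames and percentages are invented for illustration) is to skip proxying entirely and answer with a 302 pointing at a backend chosen by split_clients. Note there is no health checking or failover in this approach, and the backend hostnames must be reachable by the clients directly:

        # picks a backend per request from a hash of the client address and URI
        split_clients "${remote_addr}${request_uri}" $backend_host {
            34%   server1.example.com;
            33%   server2.example.com;
            *     server3.example.com;
        }

        server {
            listen 80;
            server_name example.com;

            location / {
                # the client is sent straight to the chosen backend, which replies directly
                return 302 http://$backend_host$request_uri;
            }
        }

    True direct server return without a redirect (the backend answering on the balancer's behalf) is a layer-4 technique, e.g. LVS/DR, rather than something nginx's HTTP proxying does.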


  • unpack dependency and repack classes using maven?

    - by u123
    I am trying to unpack a Maven artifact A and repack it into a new jar file in Maven project B. Unpacking class files from artifact A into

        <my.classes.folder>${project.build.directory}/staging</my.classes.folder>

    works fine using this:

        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-dependency-plugin</artifactId>
            <executions>
                <execution>
                    <id>unpack</id>
                    <phase>generate-resources</phase>
                    <goals>
                        <goal>unpack</goal>
                    </goals>
                    <configuration>
                        <artifactItems>
                            <artifactItem>
                                <groupId>com.test</groupId>
                                <artifactId>mvn-sample</artifactId>
                                <version>1.0.0-SNAPSHOT</version>
                                <type>jar</type>
                                <overWrite>true</overWrite>
                                <outputDirectory>${my.classes.folder}</outputDirectory>
                                <includes>**/*.class,**/*.xml</includes>
                            </artifactItem>
                        </artifactItems>
                    </configuration>
                </execution>
            </executions>
        </plugin>

    In the same pom I now want to generate an additional jar containing the classes just unpacked:

        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-jar-plugin</artifactId>
            <version>2.4</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>jar</goal>
                    </goals>
                    <configuration>
                        <classesdirectory>${my.classes.folder}</classesdirectory>
                        <classifier>sample</classifier>
                    </configuration>
                </execution>
            </executions>
        </plugin>

    A new jar is created, but it does not contain the classes from ${my.classes.folder}; it's simply a copy of the default project jar. Any ideas? I have tried to follow this guide: http://jkrishnaraotech.blogspot.dk/2011/06/unpack-remove-some-classes-and-repack.html but it's not working.
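    One thing worth checking (a sketch of the likely fix): plugin parameter names are case-sensitive, and the jar goal's parameter is classesDirectory with a capital D. A lowercase <classesdirectory> element is simply ignored, leaving the goal pointing at the default classes folder, which would produce exactly the "copy of the default jar" behaviour described.

        <configuration>
            <!-- capital D: this is the parameter the maven-jar-plugin jar goal reads -->
            <classesDirectory>${my.classes.folder}</classesDirectory>
            <classifier>sample</classifier>
        </configuration>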


  • Tricking Linux apps about current time with environment variables

    - by geek
    Sometimes it is possible to trick a Linux app by calling it like this:

        HOME=/tmp/foo myapp

    This would make myapp think /tmp/foo is the home directory; it won't try to get the user id and look up its home directory via getpwent(). This is useful when myapp must be forced to dump some of its config files into a non-standard location other than ~. A similar trick can be done like this:

        LANG=foo LC_ALL=bar myapp

    This is useful when myapp needs to be called once with a different locale, without making the change persistent via the export bash built-in or even modifying stuff in /etc/profile. Is it possible to pull the same trick with time and date? The goal is to make an app use a time other than the system one. The final goal: to make timestamps that appear in logs/commit messages not be tied to the system time.
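    Environment variables alone are not enough here, since libc reads the clock through system calls rather than from the environment, but the libfaketime project achieves the same one-off effect by pairing an env var with an LD_PRELOAD shim. A sketch, assuming the faketime package is installed (the library path varies by distro):

        # run a single command under a fake clock
        faketime '2008-12-24 08:15:42' date

        # or use the underlying mechanism directly: preload the shim and set FAKETIME
        LD_PRELOAD=/usr/lib/faketime/libfaketime.so.1 FAKETIME="-15d" date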


  • NuGet, ASP.Net MVC and WebMatrix - DB Coders Cafe - March 1st, 2011 With Sam Abraham

    - by Sam Abraham
    I am scheduled to share on NuGet (http://nuget.codeplex.com/) at the Deerfield Beach Coder's Café on March 1st, 2011. My goal for this talk is to present demos and content covering how to leverage this neat new utility to easily "package" .Net-based binaries or tools and share them with others, who, in turn, can just as easily reference and readily use that same package in their Visual Studio 2010 .Net projects. Scott Hanselman has recently blogged in great detail on creating NuGet packages. For hosting a local NuGet package repository, Jon Galloway has a nice article update with a complete PowerShell script to simplify downloading the default feed packages, which can be accessed here. Information on my upcoming talk can be found at: http://www.fladotnet.com/Reg.aspx?EventID=514

    The following is a brief abstract of the talk: NuGet (formerly known as NuPack) is a free, open source, developer-focused package management system for the .NET platform, intent on simplifying the process of incorporating third-party libraries into a .NET application during development. NuGet is a member of the ASP.NET Gallery in the Outercurve Foundation. In this session we will:

    - Discuss the concept, vision and goal behind NuGet
    - See NuGet in action within an ASP.Net MVC project
    - Look at the NuGet integration in Microsoft WebMatrix
    - Create a NuGet package for our demo library
    - Explore the NuGet Project site
    - Configure a NuGet package feed for a local network
    - Solicit attendees' input and feedback on the tool

    Look forward to meeting you all there. --Sam


  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 2

    - by Tarun Arora
    Welcome back. In part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing the application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies. In this blog post I'll get into the details of web performance and load tests, as well as why it's important to follow a goal-based pattern while performance testing your application.

    Tools => Options => Test Tools

    Have you visited the treasures of the Visual Studio menu bar Tools => Options => Test Tools lately? The options to enable or disable prompts on creating, editing, deleting or running manual/automated tests can be controlled from here. The default test project language and the default test types created on new test project creation can be selected/unselected from here. Ever wondered how you can change the default limit of 25 test results? This, again, can be changed from here. If you record a lot of Web Tests and wish for the web test recorder to start with "that" URL populated, well, this too can be specified from here. If you haven't so far, I would urge you to spend 2 minutes in the test tools options.

    Test Menu => Ready Steady Test Action!

    The test tools are under the Test menu in Visual Studio. Apart from being able to create a new Test and Test List, you can also load an existing vsmdi file. You can also manage your test controllers from here. A solution can have one or more test settings files, but there can only be one active test settings file at any time; again, this selection can be done from here. You can open the various test windows from the Windows option under the Test menu. If you open the Test View window you will see that you have the option to group the tests by work items, project, test type, etc. You can set these properties by right-clicking a test in the test list and choosing Properties from the context menu.

    So, what is a vsmdi file? vsmdi stands for Visual Studio Test Metadata File. Placed under the Solution Items, this file keeps track of the list of unit tests in your solution. If you open the vsmdi file as an XML file you will see a series of TestLink entries nested within the TestList tags, along with the Run Configuration tag. When you run tests in Visual Studio, the IDE looks at the vsmdi file to see which tests need to be run. You also have the option of using the vsmdi file in your team builds to specify which tests need to run as part of the build. Refer here for a walkthrough from a fellow blogger on how to use the vsmdi file in team builds.

    Web Performance Test - The Truth!

    In Visual Studio 2010 "Web Tests" have been renamed to "Web Performance Tests". Apart from the renaming, there have been several improvements to this test type in Visual Studio 2010. I am very active on the MSDN Visual Studio and Load Testing forum, and a frequent question from many users is "Do Web Tests support pages that run JavaScript?" I will start with a little bit of background before answering this question. Web Performance Tests operate at the HTTP layer, but why? To enable you to generate high loads with a relatively low amount of hardware, web performance tests are driven at the protocol layer rather than by instantiating a browser. The most common source of confusion is that users do not realize Web Performance Tests work at the HTTP layer, and the tool adds to that misconception. After all, you record in IE, and when running a web test you can select which browser to use, and the result viewer shows the results in a browser window. So that means the tests run through the browser, right? NO! The web test engine works at the HTTP layer and does not instantiate a browser. What does that mean? In the diagram below, you can see there are no browsers running when the engine is sending and receiving requests.

    Does that mean I can't test pages that use JavaScript? The best example of JavaScript generating HTTP traffic is AJAX calls, and the most common examples of browser plugins are Silverlight and Flash. The web test recorder will record HTTP traffic from AJAX calls and from most (but not all) browser plugins. This means you will still be able to performance test pages that use JavaScript or a plugin and play back the results, but the playback engine will not show the JavaScript or plugin results in the 'browser control'. If you want to test the page behaviour resulting from the JavaScript or plugin, consider using Coded UI Tests. A page can look like it failed when in fact it succeeded: looking closely at the response, and subsequent requests, it is clear the operation succeeded. As stated above, the reason the browser control shows this message is that JavaScript has been disabled in this control. So, to reiterate, the web performance test engine:

    - Sends and receives data at the HTTP layer.
    - Does NOT run a browser.
    - Does NOT run JavaScript.
    - Does NOT host ActiveX controls or plugins.

    There is a great series of blog posts from Ed Glas; I would highly recommend his blog to anyone performing load/performance testing through Visual Studio.

    Demo - Web Performance Test

    [Demo] - Visual Studio Ultimate 2010: Test Settings and Configuration
    [Demo] - Visual Studio Ultimate 2010: Web Performance Test

    In this short video I try to answer the following questions: Why is performance testing important? How does Visual Studio help you performance test your applications? How do I record a web performance test? How do I make a web performance test data driven, transaction driven, loop driven, convert it to code, and add validations? What are best practices for recording web performance tests?

    I have a web performance test, what next?

    Creating the web performance test was the first step towards load testing your application. Now that we have the base test, we can test the page behaviour when N users access the page. Have you ever had the head of business call you and mention that the marketing team has done a fantastic job and is expecting increased traffic on the web site: can the website survive the weekend with that additional load? This is the perfect opportunity to capacity test your application to see how your website holds up under various levels of load; you can work the results backwards to see how much hardware you may need to scale up your application to survive the weekend. Apart from that, it is always a good idea to have some benchmarks around how the application performs under light load for a short duration, under heavy load for a long duration, and to soak test the application (run a constant load for a week or two) to record the effects of constant load over really long durations. This is a great way of identifying how your application handles the default IIS application pool recycle, which by default is configured to happen once every 29 hours. These benchmarks will act as the perfect yardstick to measure performance gains when you start making improvements.

    BUT there are some best practices! => Goal Based Load Testing Approach

    Since the subject is vast and there are a lot of things to measure and analyse, it is very easy to get distracted from the real goal. You can optimize your application once you know where the pain points are. There is no point performing a load test of 5000 users if your intranet application will only ever have 100 simultaneous users; it is important to keep focussed on the real goals of the project. So the idea is to have a user story around your load testing scenarios and test realistically. It is an iterative process: refine your objectives, identify the key scenarios, determine the expected workload, choose the key metrics you want to report, record the web performance tests, simulate load and analyse the results.

    Is your application already deployed in production? This is great! You can analyse the IIS logs to understand the user behaviour. But what are IIS logs? The IIS logs allow you to record events for each application and web site on the web server. You can create separate logs for each of your applications and web sites. Logging information in IIS goes beyond the scope of the event logging or performance monitoring features provided by Windows. The IIS logs can include information such as who has visited your site, what the visitor viewed, and when the information was last viewed. You can also use the IIS logs to identify any attempts to gain unauthorized access to your web server.

    How to configure IIS logs? Ninjas who already have IIS logging configured (by the way, it's on by default) and need a way to analyse the logs can use the Windows IIS utility Log Parser. Log Parser is a very powerful tool that provides a generic SQL-like language on top of many types of data, such as IIS logs, Event Viewer entries, XML files, CSV files, the file system and others, and it allows you to export the result of the queries to many output formats such as CSV, XML, SQL Server and charts; it works well with IIS 5, 6, 7 and 7.5. Frequently used Log Parser queries are worth keeping to hand (a sample query is sketched at the end of this post).

    Demo - Load Test

    [Demo] - Visual Studio Ultimate 2010: Load Testing

    In this short video I try to answer the following questions: What are the types of performance testing? How do you perform goal-driven load testing, analyse the test run result and generate a report?

    Recap: a quick recap of what we have covered so far. Thank you for taking the time out to read this blog post; in part III of this blog series I'll be getting into the details of test result analysis, test result drill-through, test report generation, test run comparison, and the ASP.NET Profiler. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/feedback/suggestions, etc.: please leave a comment. See you in Part III. Share this post: CodeProject
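    Picking up the "frequently used Log Parser queries" pointer above, here is one illustrative query (a sketch only: the log path assumes the default IIS 7 W3C logging location, and the fields depend on which W3C columns are enabled) that lists the ten most requested URLs:

        LogParser.exe -i:W3C "SELECT TOP 10 cs-uri-stem, COUNT(*) AS Hits
            FROM C:\inetpub\logs\LogFiles\W3SVC1\*.log
            GROUP BY cs-uri-stem
            ORDER BY Hits DESC"

    Swapping COUNT(*) for AVG(time-taken) gives a quick view of the slowest pages, which is often the first list worth feeding back into the goal-based load test scenarios.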


  • OracleWebLogic YouTube Channel

    - by Jeffrey West
      The WebLogic Product Management Team has been working on content for an Oracle WebLogic YouTube channel to host demos and overview of WebLogic features.  The goal is to provide short educational overviews and demos of new, useful, or 'hidden gem' WLS features that may be underutilized.    We currently have 26 videos including: Coherence Server Lifecycle Management with WebLogic Server (James Bayer) WebLogic Server JRockit Mission Control Experimental Plugin (James Bayer) WebLogic Server Virtual Edition Overview and Deployment Oracle Virtual Assembly Builder (Mark Prichard) Migrating Applications from OC4J 10g to WebLogic Server with Smart Upgrade (Mark Prichard) WebLogic Server Java EE 6 Web Profile Demo (Steve Button) WebLogic Server with Maven and Eclipse (Steve Button) Advanced JMS Features: Store and Forward, Unit of Order and Unit of Work (Jeff West) WebLogic Scripting Tool (WLST) Recording, editing and Playback (Jeff West) Special thanks to Steve, Mark and James for creating quality content to help educate our community and promote WebLogic Server!  The Product Management Team will be making ongoing updates to the content.  We really do want people to give us feedback on what they want to see with regard to WebLogic.  Whether its how you achieve a certain architectural goal with WLS or a demonstration and sample code for a feature - All requests related to WLS are welcome! You can find the channel here: http://www.YouTube.com/OracleWebLogic.  Please comment on the Channel or our WebLogic Server blog to let us know what you think.  Thanks!


  • How to implement Facebook leaderboard game for mobile?

    - by TrueGrime
    I am in the final stages of developing an indie C# mobile game that I will deploy to iPhones and Androids using Mono. I wish to add push notifications and a Facebook leaderboard feature to the game, such that the user will be presented with a list of all his/her friends that are playing the game, and their weekly scores, on the main screen. I have ZERO experience with web development/networking/databases. I recently started researching the field and formed a basic understanding. Facebook has SDKs in PHP, JavaScript, obj-c and Java, though reading the documentation examples still feels cryptic to me since it involves web/server tech.

    After researching some more, I understood that my options for the server side are basically PHP or ASP.Net. It seems that ASP.Net is more favorable in my case since I am already proficient with C# (but there is no ASP.Net SDK from Facebook... I am not sure if this implies that I can't interact with Facebook using ASP.Net). On the downside, some have mentioned higher costs for ASP.Net, though I haven't looked further into that aspect yet. I also understood that JavaScript is client-side technology. I started going through tutorials on ASP.Net; I was thinking that ASP.Net is a purely server-side management language, but it started feeling more like WPF as those tutorials got into very lengthy discussions about creating website interfaces and styles. I am not interested in that; I just want to have a web server which my app can somehow communicate with to get friends/scores.

    Am I learning the right technologies for my goal? Should I be learning something else? I am posting this in hopes that someone who knows this field well can see through my problem and help guide me; otherwise I could spend months studying something that might not be the right solution for my goal. Thanks.


  • SQLAuthority News – Resolution for New Year 2011

    - by pinaldave
    Today is the first day of the year so I want to write something very light. Last Year: 2010 Last Year was a blast; really traveled a lot. My family and I went on vacation. There I enjoyed being father, rolling on the floor and playing with my daughter. Here is the list of the countries I visited throughout 2010: Singapore (twice) Malaysia (twice) Sri Lanka (thrice) Nepal (once) United States of America (twice) United Arab Emirates (UAE) (once) My daughter who just completed 1 year on September 1, 2010 has so far visited three countries: Singapore, Malaysia and Sri Lanka, where I have done lots of community activities. The list containing all my activities can be found at Pinal Dave’s Community Events. I have written nearly 380 blog posts last year. It would be difficult for me to pick a few. However, I keep a running list of all of my articles over here: All Articles on SQLAuthority.com. I have so far received more than 10,000 email questions during the year and consequently I have done my best to answer most of them. I strongly believe if one would Search SQLAuthority.com blog, they would have found the answer quickly. The best part of 2010 for me was working on SQL Server Health Check and SQL Server Performance Tuning. This Year: 2011 This year, I came up with two simple goals: 1. Personal Goal: Reduce Weight 2. Professional Goal: Stay busy for the entire year with SQL Server Performance Tuning Projects. (Currently January 2011 is booked with performance tuning projects and 40 other days are already booked throughout the year). Future The future is something one cannot exactly guess and one cannot see. I just want to wish all of you the very best for this coming New Year. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology


  • How should I evaluate the Database Solution for Large Data Application

    - by GµårÐïåñ
    Background: I have been tasked to write an application that will be a combination of document and inventory management in VB.net, which will be used to store document images in TIFF, PDF, XPS, TXT, DOC, PPT and so on as binary data that can be retrieved for viewing, printing, and possibly OCR to make it searchable, along with metadata such as sender, recipient, type of document, date, source, etc. So the table would probably be something like: DOC_NAME, DOC_DATE, NOTES, ... DOC_BINARY (where the actual document will be put inside).

    Help please: I need help with understanding how to evaluate my database options. My concern is finding a database solution that will not become unstable due to size restrictions, record limitations and performance. Some of the options are MS SQL, SQL Express, SQLite, MySQL, and Access. Now I can pretty much eliminate Access right off the bat, as it is just too limiting and not scalable. I can further eliminate SQL Express because of the 2 GB limit, and again scalability. So I believe that leaves me with MS SQL, SQLite and MySQL (note, I am open to alternatives), and this is where I need help in understanding how to evaluate those databases.

    The goal is that the data is all in one place (a single file), which will make backup and portability easier. For small-volume usage pretty much any solution will hold for a while, but my goal is to think ahead and make sure it's able to withstand heavy, large-volume usage as well. Another consideration is interoperability with .NET and the stability of such code to avoid errors and memory leaks. How should I evaluate my database options for this scenario?


  • How to handle growing QA reporting requirements?

    - by Phillip Jackson
    Some Background: Our company is growing very quickly - in 3 years we've tripled in size and there are no signs of stopping any time soon. Our marketing department has expanded and our IT requirements have as well. When I first arrived everything was managed in Dreamweaver and Excel spreadsheets and we've worked hard to implement bug tracking, version control, continuous integration, and multi-stage deployment. It's been a long hard road, and now we need to get more organized. The Problem at Hand: Management would like to track, per-developer, who is generating the most issues at the QA stage (post unit testing, regression, and post-production issues specifically). This becomes a fine balance because many issues can't be reported granularly (e.g. per-url or per-"page") but yet that's how Management would like reporting to be broken down. Further, severity has to be taken into account. We have drafted standards for each of these areas specific to our environment. Developers don't want to be nicked for 100+ instances of an issue if it was a problem with an include or inheritance... I had a suggestion to "score" bugs based on severity... but nobody likes that. We can't enter issues for every individual module affected by a global issue. [UPDATED] The Actual Questions: How do medium sized businesses and code shops handle bug tracking, reporting, and providing useful metrics to management? What kinds of KPIs are better metrics for employee performance? What is the most common way to provide per-developer reporting as far as time-to-close, reopens, etc.? Do large enterprises ignore the efforts of the individuals and rather focus on the team? Some other questions: Is this too granular of reporting? Is this considered 'blame culture'? If you were the developer working in this environment, what would you define as a measureable goal for this year to track your progress, with the reward of achieving the goal a bonus?


  • Which Open Source Licenses can address concerns for an Open Source Game Engine?

    - by Chris
    I am on a team that is looking to open source an engine we are building. It's intended as an engine for online RPG-style games, and we're writing it to work on both desktops and Android platforms. I've been over to the OSI (http://opensource.org/licenses/category) to check out the most common licenses. However, this will be my first time going into an open source project, and I wanted to know if the community had some insight into which licenses might be best suited.

    Key licensing concerns:

    - Removing or limiting our liability (most already seem to cover this, but stating for completeness).
    - We want other developers to be able to take part or all of our project and use it in their own projects, with proper accreditation to our project.
    - Licensing should not hinder someone's ability to quickly use the engine. They should be able to download a release and start using it without needing to wait on licensing issues.
    - Game content (gfx, sound, etc.) that is not part of the engine should be allowed to be licensed separately. If someone is using our engine, they can retain full copyright of their content, including engine-generated data.
    - Our primary goal is exposure, which is why we're going open source to start with; both for the project and for the individuals developing it.

    Are there any licenses that can require accreditation visible to players? While I'd put our primary goal as exposure, for licensing the accreditation is less of a concern. From what I've read through (and have been able to understand), it doesn't seem like any of the licenses cover anything that is produced by the licensed software. Are there any that state this specifically, or does simply not mentioning it leave it open for other licensing? Are there any other concerns we should consider? Has anyone had any issues using any of these licenses?

