Search Results

Search found 1018 results on 41 pages for 'workspace switcher'.

Page 11/41 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • Trying to run Selenium tests using Hudson - "Error: no display specified"

    - by tputkonen
    I'm trying to get Selenium tests to work when they are executed by Hudson, but I have not been successful so far. Hudson is running on Ubuntu, and Selenium is unable to open the display. The command I use for launching the build is: mvn clean selenium:xvfb install

    Error log:

        [INFO] [selenium:xvfb {execution: default-cli}]
        [INFO] Starting Xvfb...
        [INFO] Using display: :20
        [INFO] Using Xauthority file: /tmp/Xvfb4467650583214148352.Xauthority
        Deleting: /tmp/Xvfb4467650583214148352.Xauthority
        xauth: creating new authority file /tmp/Xvfb4467650583214148352.Xauthority
        Created dir: /var/lib/hudson/jobs/Selenium/workspace/selenium/target/selenium
        Launching Xvfb
        Waiting for Xvfb...
        [INFO] Redirecting output to: /var/lib/hudson/jobs/Selenium/workspace/selenium/target/selenium/xvfb.log
        Xvfb started
        ...
        [INFO] [selenium:start-server {execution: start}]
        Launching Selenium Server
        Waiting for Selenium Server...
        [INFO] Including display properties from: /var/lib/hudson/jobs/Selenium/workspace/selenium/target/selenium/display.properties
        [INFO] Redirecting output to: /var/lib/hudson/jobs/Selenium/workspace/selenium/target/selenium/server.log
        [INFO] User extensions: /var/lib/hudson/jobs/Selenium/workspace/selenium/target/selenium/user-extensions.js
        Selenium Server started
        [INFO] [selenium:selenese {execution: run-selenium}]
        [INFO] Results will go to: /var/lib/hudson/jobs/Selenium/workspace/selenium/target/results-firefox-suite.html
        ...
        <~30 seconds pause>
        ...
        Error: no display specified
        ...

    pom.xml:

        <groupId>org.codehaus.mojo</groupId>
        <artifactId>selenium-maven-plugin</artifactId>
        <version>1.0.1</version>
        <executions>
          <execution>
            <id>start</id>
            <phase>pre-integration-test</phase>
            <goals>
              <goal>start-server</goal>
            </goals>
            <configuration>
              <logOutput>true</logOutput>
              <background>true</background>
              <port>5123</port>
            </configuration>
          </execution>
          <execution>
            <id>run-selenium</id>
            <phase>integration-test</phase>
            <goals>
              <goal>selenese</goal>
            </goals>
          </execution>
          <execution>
            <id>stop</id>
            <phase>post-integration-test</phase>
            <goals>
              <goal>stop-server</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <browser>*firefox</browser>
          <suite>src/test/selenium/suite.html</suite>
          <startURL>http://localhost:${env.port}</startURL>
        </configuration>

    I've also tried to get it working by adding an execution for xvfb, but that failed as well.

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object.

    You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris.

    This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds:

    - We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware.
    - Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.
    - To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other.

    From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach:

    - To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available.
    - The analysis will take time, and remember that we're constantly trying to make builds faster, not slower.
    - By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above. The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach.

    Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations:

    - The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime.
    - If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object.
    - In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object?
    - It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel.
    - When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out.

    Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following:

    - Present the same set of global symbols, with the same ELF versioning, as the real object.
    - Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment.
    - Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose.
    - For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object.
    - If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows:

    - A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored.
    - The extra information needed (function or data, size, and bss details) would be added to the mapfile.
    - When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        __iob;

    A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan:

    - A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code.
    - Another perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface.

    By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
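
    To make the prototype idea above concrete, here is a minimal, hypothetical sketch of the first script (written in Python here, whereas the actual prototype was a perl script, and ignoring everything in a real mapfile other than the stylized "# DATA(...)" comments and plain "symbol;" entries). It emits C glue with empty functions and correctly sized data definitions, which would then be compiled and linked into the stub:

        import re
        import sys

        DATA_RE = re.compile(r'^#\s*DATA\(([^)]+)\)\s+(\S+)\s+(0x[0-9a-fA-F]+)')
        SYM_RE = re.compile(r'^(\w+);$')

        def emit_glue(mapfile_path, arch, out):
            sizes = {}     # data symbols and their real sizes, taken from the stylized comments
            symbols = []   # exported symbols named in the mapfile
            with open(mapfile_path) as f:
                for raw in f:
                    line = raw.strip()
                    m = DATA_RE.match(line)
                    if m and arch in [a.strip() for a in m.group(1).split(',')]:
                        sizes[m.group(2)] = int(m.group(3), 16)
                        continue
                    m = SYM_RE.match(line)
                    if m:
                        symbols.append(m.group(1))
            for sym in symbols:
                if sym in sizes:
                    out.write('char %s[%d];\n' % (sym, sizes[sym]))  # data stub with the real object's size
                else:
                    out.write('void %s(void) {}\n' % sym)            # empty function stub

        if __name__ == '__main__':
            # e.g. python genstub.py mapfile-vers i386 > stub_glue.c
            emit_glue(sys.argv[1], sys.argv[2], sys.stdout)
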
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues:

    - The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern.
    - The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code.
    - A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.
    - A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stub objects in mind as I moved forward.

    The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

        6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

        We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
            -ztext -zdefs -Bdirect ...
        real  0.019708910
        user  0.010101680
        sys   0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

    Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens.

    And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high level view of how Solaris is built:

    - An initially empty directory, known as the proto and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.
    - A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.
    - Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.
    - Subsequent passes run lint, and do packaging.

    Given this structure, the additions to use stub objects are:

    - A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    - A new target, called stub, is added to the library Makefiles. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them.
    - A new target, called stubinstall, is added to the Makefiles; it delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the install rule.
    - The setup rule runs stubinstall over the entire lib subtree as part of its initialization.
    - All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them.

    After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do.

    I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration.

    This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down:

    - Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.
    - It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical.
    - Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

    Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

    And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

        6993877 ld should produce stub objects
        PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with

        7009826 OSnet should use stub objects
        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Creating a Build Definition using the TFS 2010 API

    - by Jakob Ehn
    In this post I will show how to create a new build definition in TFS 2010 using the TFS API. When creating a build definition manually, using Team Explorer, the necessary steps are laid out in the New Build Definition Wizard. So, let's see what the code looks like, following the same order.

    To start off, we need to connect to TFS and get a reference to the IBuildServer object:

        TfsTeamProjectCollection server = new TfsTeamProjectCollection(new Uri("http://<tfs>:<port>/tfs"));
        server.EnsureAuthenticated();
        IBuildServer buildServer = (IBuildServer) server.GetService(typeof (IBuildServer));

    General
    First we create an IBuildDefinition object for the team project and set a name and description for it:

        var buildDefinition = buildServer.CreateBuildDefinition(teamProject);
        buildDefinition.Name = "TestBuild";
        buildDefinition.Description = "description here...";

    Trigger
    Next up, we set the trigger type. For this one, we set it to Individual, which corresponds to the "Continuous Integration - Build each check-in" trigger option:

        buildDefinition.ContinuousIntegrationType = ContinuousIntegrationType.Individual;

    Workspace
    For the workspace mappings, we create two mappings here, where one is a cloak. Note the use of the $(SourceDir) variable, which is expanded by Team Build into the sources directory when running the build.

        buildDefinition.Workspace.AddMapping("$/Path/project.sln", "$(SourceDir)", WorkspaceMappingType.Map);
        buildDefinition.Workspace.AddMapping("$/OtherPath/", "", WorkspaceMappingType.Cloak);

    Build Defaults
    In the build defaults, we set the build controller and the drop location. To get a build controller, we can (for example) use the GetBuildController method to get an existing build controller by name:

        buildDefinition.BuildController = buildServer.GetBuildController(buildController);
        buildDefinition.DefaultDropLocation = @"\\SERVER\Drop\TestBuild";

    Process
    So far, this was easy. Now we get to the tricky part. TFS 2010 Build is based on Windows Workflow 4.0. The build process is defined in a separate .XAML file called a Build Process Template. By default, every new team project contains two build process templates called DefaultTemplate and UpgradeTemplate. In this sample, we want to create a build definition using the default template. We use the QueryProcessTemplates method to get a reference to the default template for the current team project:

        //Get default template
        var defaultTemplate = buildServer.QueryProcessTemplates(teamProject).Where(p => p.TemplateType == ProcessTemplateType.Default).First();
        buildDefinition.Process = defaultTemplate;

    There are several build process parameters that can be set for the default build process template. Only one of these is required, the ProjectsToBuild parameter, which contains the solution(s) and configuration(s) that should be built. To set this info, we use the ProcessParameters property of the IBuildDefinition interface. The format of this property is actually just a serialized dictionary (IDictionary<string, object>) that maps a key (parameter name) to a value which can be any kind of object. This is rather messy, but fortunately, there is a helper class called WorkflowHelpers in the Microsoft.TeamFoundation.Build.Workflow namespace, that simplifies working with this persistence format a bit.
    The following code shows how to set the BuildSettings information for a build definition:

        //Set process parameters
        var process = WorkflowHelpers.DeserializeProcessParameters(buildDefinition.ProcessParameters);

        //Set BuildSettings properties
        BuildSettings settings = new BuildSettings();
        settings.ProjectsToBuild = new StringList("$/pathToProject/project.sln");
        settings.PlatformConfigurations = new PlatformConfigurationList();
        settings.PlatformConfigurations.Add(new PlatformConfiguration("Any CPU", "Debug"));
        process.Add("BuildSettings", settings);
        buildDefinition.ProcessParameters = WorkflowHelpers.SerializeProcessParameters(process);

    The other build process parameters of a build definition can be set using the same approach.

    Retention Policy
    This one is easy, we just clear the default settings and set our own:

        buildDefinition.RetentionPolicyList.Clear();
        buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Succeeded, 10, DeleteOptions.All);
        buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Failed, 10, DeleteOptions.All);
        buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Stopped, 1, DeleteOptions.All);
        buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.PartiallySucceeded, 10, DeleteOptions.All);

    Save It!
    And we're done, let's save the build definition:

        buildDefinition.Save();

    That's it!

    Read the article

  • android sdk main.out.xml parsing error?

    - by mobibob
    I just started a new Android project, "WeekendStudy", to continue learning Android development, and I got stumped on the default 'hello weekendstudy' compile/run. I think that I missed a step in configuration and setup, but I am at a loss to find out where. I have an AVD configured, set and launched. When I press 'run', the SDK builds a file main.out.xml and then fails like this:

        [2010-03-06 09:46:47 - WeekendStudy]Error in an XML file: aborting build.
        [2010-03-06 09:46:48 - WeekendStudy]res/layout/main.xml:0: error: Resource entry main is already defined.
        [2010-03-06 09:46:48 - WeekendStudy]res/layout/main.out.xml:0: Originally defined here.
        [2010-03-06 09:46:48 - WeekendStudy]/Users/mobibob/Projects/workspace-weekend/WeekendStudy/res/layout/main.out.xml:1: error: Error parsing XML: no element found
        [2010-03-06 09:48:16 - WeekendStudy]Error in an XML file: aborting build.
        [2010-03-06 09:48:16 - WeekendStudy]res/layout/main.xml:0: error: Resource entry main is already defined.
        [2010-03-06 09:48:16 - WeekendStudy]res/layout/main.out.xml:0: Originally defined here.
        [2010-03-06 09:48:16 - WeekendStudy]/Users/mobibob/Projects/workspace-weekend/WeekendStudy/res/layout/main.out.xml:1: error: Error parsing XML: no element found
        [2010-03-06 09:55:29 - WeekendStudy]res/layout/main.xml:0: error: Resource entry main is already defined.
        [2010-03-06 09:55:29 - WeekendStudy]res/layout/main.out.xml:0: Originally defined here.
        [2010-03-06 09:55:29 - WeekendStudy]/Users/mobibob/Projects/workspace-weekend/WeekendStudy/res/layout/main.out.xml:1: error: Error parsing XML: no element found
        [2010-03-06 09:55:49 - WeekendStudy]Error in an XML file: aborting build.
        [2010-03-06 09:55:49 - WeekendStudy]res/layout/main.xml:0: error: Resource entry main is already defined.
        [2010-03-06 09:55:49 - WeekendStudy]res/layout/main.out.xml:0: Originally defined here.
        [2010-03-06 09:55:49 - WeekendStudy]/Users/mobibob/Projects/workspace-weekend/WeekendStudy/res/layout/main.out.xml:1: error: Error parsing XML: no element found

    Read the article

  • php symfony and apache2 - isn't serving .phtml

    - by pmoniq
    Hi, I have a problem with apache2 settings (Ubuntu system). I would like to run a symfony project on my localhost, but instead of serving .phtml files the browser is trying to download them.

    This is my hosts file entry:

        127.0.0.3 test

    This is the apache2/sites-available/default file:

        <VirtualHost 127.0.0.3:80>
        ServerName test
        DocumentRoot "/home/m/Pr/workspace/php/test/web"
        DirectoryIndex frontend_dev.php
        <Directory "/home/m/Pr/workspace/php/test/web">
        AllowOverride All
        Allow from All
        Alias /sf /home/m/Pr/workspace/php/test/lib/vendor/symfony/data/web/sf
        <Directory "/home/m/Pr/workspace/php/test/lib/vendor/symfony/data/web/sf">
        AllowOverride All
        Allow from All
        </Directory>

    and this is the .htaccess in /test:

        RewriteEngine On
        RewriteRule ^(.*)$ /web/$1
        Options +FollowSymLinks +ExecCGI
        AddHandler application/x-httpd-php5 .php .phtml

    and this is the .htaccess in /test/web:

        Options +FollowSymLinks +ExecCGI
        RewriteEngine On
        # uncomment the following line, if you are having trouble
        # getting no_script_name to work
        RewriteBase /
        # we skip all files with .something
        #RewriteCond %{REQUEST_URI} ..+$
        #RewriteCond %{REQUEST_URI} !.html$
        #RewriteRule .* - [L]
        # we check if the .html version is here (caching)
        RewriteRule ^$ index.html [QSA]
        RewriteRule ^([^.]+)$ $1.html [QSA]
        RewriteCond %{REQUEST_FILENAME} !-f
        # no, so we redirect to our front web controller
        RewriteRule ^(.*)$ index.php [QSA,L]

    Another problem is that I think Apache doesn't read the .htaccess files. What am I doing wrong? Maybe I forgot about something? Please help me, because I have no idea.

    Read the article

  • Can't get MySQL source query to work using Python mysqldb module

    - by Chris
    I have the following lines of code:

        sql = "source C:\\My Dropbox\\workspace\\projects\\hosted_inv\\create_site_db.sql"
        cursor.execute (sql)

    When I execute my program, I get the following error:

        Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'source C:\My Dropbox\workspace\projects\hosted_inv\create_site_db.sql' at line 1

    Now I can copy and paste the following into mysql as a query:

        source C:\\My Dropbox\\workspace\\projects\\hosted_inv\\create_site_db.sql

    And it works perfectly. When I check the query log for the query executed by my script, it shows that my query was the following:

        source C:\\My Dropbox\\workspace\\projects\\hosted_inv\\create_site_db.sql

    However, when I manually paste it in and execute, the entire create_site_db.sql gets expanded in the query log and it shows all the sql queries in that file. Am I missing something here on how mysqldb does queries? Am I running into a limitation? My goal is to run a sql script to create the schema structure, but I don't want to have to call mysql in a shell process to source the sql file. Any thoughts? Thanks!
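
    A note on the sketch below: "source" is a built-in command of the mysql command-line client, not a SQL statement the server understands, which is why cursor.execute() rejects it. A minimal workaround, assuming the schema script is plain SQL with statements separated by semicolons (and no stored-procedure DELIMITER blocks), is to read the file in Python and execute the statements one by one; the credentials shown are placeholders:

        # Hypothetical sketch: replay a .sql schema script through MySQLdb
        # instead of relying on the mysql client's "source" command.
        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="user", passwd="secret")  # placeholder credentials
        cursor = conn.cursor()

        with open(r"C:\My Dropbox\workspace\projects\hosted_inv\create_site_db.sql") as f:
            script = f.read()

        # Naive split on ';' -- fine for simple DDL, breaks on semicolons
        # inside string literals or procedure bodies.
        for statement in script.split(";"):
            statement = statement.strip()
            if statement:
                cursor.execute(statement)

        conn.commit()
        cursor.close()
        conn.close()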

    Read the article

  • Eclipse JUnit Plugin Test very slow to re-execute Test Suite on Windows

    - by soundasleepful
    I'm having an odd, and stressful, problem with running a large JUnit plugin test suite in Eclipse. When I try to re-run a JUnit plugin suite that has just been executed, Eclipse hangs for quite some time before it eventually wakes up and launches. It can take up to 5 minutes sometimes, and increases with the size of the suite. Visually, it appears to be a GC cleanup, except that I have plenty of GC space available (400 MB freely allocated). The size of the workspace that it has to delete is well under 1 GB, and there are not too many files - definitely less than 20,000. While I was waiting for a new run to start, I decided to manually kill explorer.exe to see if it had any effect. Surprisingly, Eclipse instantly fell out of its freeze and ran as normal. This makes me think that Windows is somehow interfering with the deletion of these workspace files. They're not being put into the Recycle Bin though. The workspace is on C:, which I think puts it out of the range of any workspace/domain stuff. Any ideas?

    Read the article

  • Windows 7 DWM weirdness

    - by Max
    I'm looking to write a FOSS "Alt+Tab" replacement (window switcher) for Windows, since there are a few features I feel it's (still) lacking; but I'm noticing two quirks I can't seem to fix:

    #1. (Somewhat Unrelated) In the default Windows 7 window switcher, one computer allows left clicking on a thumbnail to focus the window; however on another, similarly specced computer, I have to use a right click. The only difference between these two fresh installs is the theme. Any ideas?

    #2. (Directly Related) In both the default Windows 7 window switcher and the DWM API output, minimized windows often have no thumbnail, and instead show only the taskbar. This has been a long running problem with the Windows API, and in the past I've seen the popular recommendation being "restore (un-minimize) the window, take a screenshot, then re-minimize" - but this is sloppy and causes flickering, etc. Has anyone done this successfully using the newer DWM API? If sharing code, I'd prefer C# syntax, but VB.NET will do as well. Thanks!

    Read the article

  • django: cannot import settings, cannot login to admin, cannot change admin password

    - by xpanta
    Hi, it seems that I am completely lost here. Yesterday I noticed that I cannot login to the admin panel (I don't use it much, so it's been some weeks since my last login). I thought that I might have changed the admin password and now can't remember it (though I doubt it). I tried django-admin.py changepassword (using django 1.2.1) but it said that 'changepassword' is an unknown command (I have all the necessary imports in my settings.py; the admin interface used to work ok). Then I gave a django-admin.py validate. Then the hell began.

    django-admin.py validate gave me this error:

        Error: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined.

    I then gave a set DJANGO_SETTINGS_MODULE=myproject.settings and then again a django-admin.py validate. This is what I get now:

        Error: Could not import settings 'myproject.settings' (Is it on sys.path? Does it have syntax errors?): No module named myproject.settings

    and now I am lost. I tried the django console and sys.path.append('c:\workspace') or sys.path.append('c:\workspace\myproject') but still get the same errors. I use Windows 7 and my project dir is c:\workspace. I don't use a PYTHONPATH variable (although I tried setting it temporarily to C:\workspace but I still get the same error). I don't use Apache, just the django development server. What am I doing wrong? My web page works fine. I think that the fact that I can't login as admin is related to the previous import error, no?

    PS: I also tried this: http://coderseye.com/2007/howto-reset-the-admin-password-in-django.html but still I couldn't change the admin password for some reason. Although I could create another admin user (with which I couldn't login).
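
    One possible angle, sketched below under the assumption that manage.py lives next to the myproject package: running "python manage.py shell" from the project directory loads myproject.settings for you (so DJANGO_SETTINGS_MODULE does not need to be set by hand), and from that shell the admin password can be reset directly with the standard auth API. The account name and password here are placeholders:

        # Hypothetical sketch, run inside "python manage.py shell":
        from django.contrib.auth.models import User

        user = User.objects.get(username="admin")  # assumes the admin account is called "admin"
        user.set_password("new-password")          # placeholder password
        user.save()

    For django-admin.py itself, the parent directory of the myproject package has to be on sys.path (or PYTHONPATH), otherwise 'myproject.settings' cannot be imported.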

    Read the article

  • How do I stop the m2eclipse plugin interfering with command line mvn builds?

    - by locka
    I use the m2eclipse plugin in Eclipse so that I can import a Maven project. The plugin reads the pom.xml and sorts out the dependencies in the projects in an Eclipse-friendly way, so I'm not looking at a sea of broken references and errors. I use Eclipse for code development, however I usually build the projects from the command line, e.g. "mvn clean install". Unfortunately when I do this, m2eclipse detects disk activity and attempts to rebuild the workspace. This interferes with the command line build and sometimes results in a race condition. For example, the command line might be in its clean phase but fail because it tries to delete a file or directory which is locked during the workspace rebuild. Aside from that, workspace rebuilding is incredibly slow, and between failed builds and wasted CPU my build process is 2-3x longer than it should be. It isn't an option to not use Eclipse (e.g. to use Netbeans), or to disable m2eclipse. It is a useful plugin except for this behaviour. So my question is, how do I stop m2eclipse from rebuilding the workspace all the time? Can I invoke a manual refresh and otherwise disable this behaviour?

    Read the article

  • How do I create a new project in TFS from an existing project (breaking history)?

    - by Lindsay
    My team is taking over a project from a previous team. We use a different TFS server than the original team, and we are also not interested in keeping the history of the project, because we are accepting the latest version of the code as the beginning of our history with the project. Branching is not an option since we want to start our history from the current version of the code. We just want a fresh project with the existing code. I have not been able to create the new project from the old code successfully. I keep getting an error: "Source control cannot add the solution: Solution would span multiple workspaces".

    My process for attempting the new project creation:

    1. Create a workspace for the previous team's version of the code.
    2. Get the latest version of that code into the local mapped workspace directory.
    3. Open the solution.
    4. Unbind all projects and the solution.
    5. Close the solution.
    6. Create a workspace for the new version of the code on our TFS server.
    7. Copy the unbound code on my local box to the new local workspace mapped folder.
    8. Open the solution from the new directory.
    9. "Add to source control" from the new solution.

    Then I get the error. I have tried removing the TFS security files from the code directories in the unbound version, and tried changing source control instead of adding to source control (but it just binds back to the original instead of letting me bind to the new). Is there any other way to do this besides recreating the solution/projects and adding back all the files and references? It doesn't seem like it should be this difficult... Any advice much appreciated!

    Read the article

  • Portable Eclipse

    - by Jeach
    I'm trying to port my entire 'workspace' to a USB key (including the Eclipse executable) so that I can carry my work anywhere with me and work off the key directly. My directory hierarchy is similar to this:

        /workspace/eclipse - Where my current eclipse binary is stored
        /workspace/codebase - Where I keep the root of all my eclipse projects
        /workspace/resources - Where I keep all project files (images, docs, libs, etc.)

    It all works perfectly fine on one system. But when I change over to another system, the USB key gets mounted on another drive. For example, on my laptop I get 'E:\', on my PC I get 'K:\' and at work I get 'F:\', etc. This means that because Eclipse (for 'some' reason) seems to only use full path names (including drive letters) in every single one of its configuration files (such as .classpath), nothing ever works when I want to work on another system. I put a 'libs' directory in the base of every project and populate it with its dependent JAR files. Why doesn't it use relative names instead, so that I could specify something like "../../libs/log4j.jar"? Anyone know how to fix this problem? Does anyone know of a workaround for this? For some reason, I really doubt I'm the first developer to do this! Thanks for your help and any suggestions.

    Read the article

  • rc.local on ubuntu on ec2 will not work

    - by Tampa
    Below are the contents of my rc.local file. When I run sudo /etc/rc.local it works fine. When I boot up an instance, I expect monit to be installed, but it is not. I am at a total loss. I usually use rc.local, but this is rather confusing.

        #!/bin/sh -e
        #
        # rc.local
        #
        # This script is executed at the end of each multiuser runlevel.
        # Make sure that the script will "exit 0" on success or any other
        # value on error.
        #
        # In order to enable or disable this script just change the execution
        # bits.
        #
        # By default this script does nothing.
        apt-get -y install monit
        /etc/init.d/monit stop
        cd /home/ubuntu/workspace/rtbopsConfig/
        git fetch
        git checkout origin/master rtb_ec2_boot/ec2_boot.py
        git checkout origin/master config/
        cp /home/ubuntu/workspace/rtbopsConfig/config/monit/redis/monitrc /etc/monit/
        /usr/bin/python /home/ubuntu/workspace/rtbopsConfig/rtb_ec2_boot/ec2_boot.py >> /home/ubuntu/workspace/ec2_boot.txt 2>&1
        /etc/init.d/monit start
        chkconfig monit on
        exit 0

    Read the article

  • How can we use Microsoft Groove with peers existing in both secure and unsecured network segments?

    - by MikeHerrera
    We have been instructed to implement a Microsoft Groove workspace. This would normally not be a concern, but the workspace will be utilized by machines which exist in our internal/restricted network as well as by peers from an outside/unknown network. Is there a best practice for such an implementation, or would this potentially expose the restricted network too broadly?

    Read the article

  • Forbid language switch for certain application(s)

    - by Vasiliy Borovyak
    I have a problem with accidental input language switches. I have tried many different settings in order to avoid it - changing the hotkey, installing some software (Key Switcher, Keyboard Ninja, Punto Switcher)... but nothing helped. I am used to a certain hotkey (Ctrl+Shift); any other hotkey makes me suffer even more. The software I found has no feature to avoid accidental switches. What I want is a piece of software which can stick the "English (US)" input language to my Visual Studio, so that pressing Ctrl+Shift inside VS does not lead to a language switch. Any ideas?

    Read the article

  • Clamdscan scans file in 0 seconds

    - by SupaCoco
    I have to run clamav on large files. I was wondering which command was the fastest between clamscan and clamdscan. But it seems that clamdscan is not working properly: it scans a file larger than 1 GB in 0 seconds. Could you guys help me find why the heck clamdscan isn't working? Between clamscan and clamdscan, which one is less resource-consuming? I run ClamAV 0.97.8/18037 on Ubuntu 12.04.3 LTS. Please find below the execution results of both commands:

        clamscan myfile.zip

        ----------- SCAN SUMMARY -----------
        Known viruses: 2864504
        Engine version: 0.97.8
        Scanned directories: 0
        Scanned files: 1
        Infected files: 0
        Data scanned: 0.00 MB
        Data read: 1024.16 MB (ratio 0.00:1)
        Time: 9.145 sec (0 m 9 s)

        clamdscan myfile.zip

        /home/ubuntu/workspace/benchmark/myfile.zip: OK

        ----------- SCAN SUMMARY -----------
        Infected files: 0
        Time: 0.000 sec (0 m 0 s)

    And here is the clamav log file:

        Wed Oct 30 10:26:32 2013 -> Received POLLIN|POLLHUP on fd 4
        Wed Oct 30 10:26:32 2013 -> Got new connection, FD 9
        Wed Oct 30 10:26:32 2013 -> Received POLLIN|POLLHUP on fd 5
        Wed Oct 30 10:26:32 2013 -> fds_poll_recv: timeout after 5 seconds
        Wed Oct 30 10:26:32 2013 -> Received POLLIN|POLLHUP on fd 9
        Wed Oct 30 10:26:32 2013 -> got command CONTSCAN /home/ubuntu/workspace/benchmark/myfile.zip (51, 7), argument: /home/ubuntu/workspace/benchmark/myfile.zip
        Wed Oct 30 10:26:32 2013 -> mode -> MODE_WAITREPLY
        Wed Oct 30 10:26:32 2013 -> Breaking command loop, mode is no longer MODE_COMMAND
        Wed Oct 30 10:26:32 2013 -> Consumed entire command
        Wed Oct 30 10:26:32 2013 -> Number of file descriptors polled: 1 fds
        Wed Oct 30 10:26:32 2013 -> fds_poll_recv: timeout after 3600 seconds
        Wed Oct 30 10:26:32 2013 -> THRMGR: queue (single) crossed low threshold -> signaling
        Wed Oct 30 10:26:32 2013 -> THRMGR: queue (bulk) crossed low threshold -> signaling
        Wed Oct 30 10:26:32 2013 -> /home/ubuntu/workspace/benchmark/myfile.zip: OK
        Wed Oct 30 10:26:32 2013 -> Finished scanthread
        Wed Oct 30 10:26:32 2013 -> Scanthread: connection shut down (FD 9)
        Wed Oct 30 10:26:32 2013 -> THRMGR: queue (single) crossed low threshold -> signaling
        Wed Oct 30 10:26:32 2013 -> THRMGR: queue (bulk) crossed low threshold -> signaling

    Read the article

  • VBA for Access 2003 - DDL help with creating access file: setting the Autonumber data type

    - by Justin
    So I have the below VB that creates an Access file in the default workspace, creates a table, and creates some fields in that table... I just need to know the syntax for setting the first data type/field to AutoNumber... GUID, Counter, etc. will not work as in Access SQL.

        ' error handling usually goes here
        dim ws as workspace
        dim dbExample as database
        dim tblMain as TableDef
        dim fldMain as Field
        dim idxMain as Index

        set ws = workspace(0)
        set dbExample = ws.CreateDatabase('string file path')
        set tblMain = dbExample.CreateTableDef("tblMain")
        set fldMain = tblMain.CreateField("ID", 'right here I do not know what to substitute for dbInteger to get the autonumber type to work )
        tblMain.Fields.Append fldMain

    etc. to create other fields and indexes. So in this line:

        set fldMain = tblMain.CreateField("ID", dbInteger)

    I need to replace the dbInteger with something that VB recognizes as the autonumber property. I have tried GUID, Counter, Autonumber, AutoIncrement... unfortunately none of these work. Anyone know the syntax I am missing here? Thanks, Justin

    Read the article

  • How do I bring Set Focus of MDI Child Window using UIAutomation

    - by Scott Ferguson
    We have an old legacy application we need to automate. It uses MDI windows. We're using UIAutomation and I can successfully get the appropriate AutomationElement for each MDI child window. What I cannot do is bring that element into focus. Here is some example code that I tried, that fails:

        var desktop = AutomationElement.RootElement;
        var dolphin = desktop.FindFirst(TreeScope.Children, new PropertyCondition(AutomationElement.NameProperty, "Dolphin for Windows", PropertyConditionFlags.IgnoreCase));
        dolphin.SetFocus();
        var workspace = dolphin.FindFirst(TreeScope.Children, new PropertyCondition(AutomationElement.NameProperty, "Workspace", PropertyConditionFlags.None));
        var childWindow = workspace.FindFirst(TreeScope.Children, new PropertyCondition(AutomationElement.NameProperty, "Sharp "));
        childWindow.SetFocus();

    The last line in this code fails with a System.InvalidOperationException. Experimenting, I tried finding a control on the childWindow and calling SetFocus on it. It DID correctly set the focus on the right control, but it did not bring the MDI window to the foreground. Any ideas?

    Read the article

  • Using IvyDE with different workspaces on different branches

    - by James Woods
    I am having problems using IvyDE when I have different workspaces for different branches. I have "Resolve dependencies in workspace" switched on, but every time I change to a different workspace I have to remember to manually clean the caches out. This is because IvyDE always uses the default cache for resolving dependencies within a workspace, so when switching between workspaces the cache can be polluted by different versions. It would seem that it is impossible to work with two different workspaces at the same time. I cannot find a way to configure the location that IvyDE uses to cache the project dependencies. It does not appear to use the caches defined in the ivysettings.xml.

    Read the article

  • File Encoding handling in Eclipse 3.5

    - by Cédric Girard
    Hi, I use Eclipse 3.5 on Windows, with the PDT and Subclipse plugins, with both legacy projects using ISO-8859-1 encoding (latin-1) and newer ones which use UTF-8. I configured my workspace to use UTF-8, and I configured the old projects to use latin-1. But every time I open an old project, it uses UTF-8. With a workspace using latin-1 by default, I have the same problem with UTF-8 projects edited as ISO-8859-1. My encoding choice is written in the file .settings/org.eclipse.core.resources.prefs but seems never to be read. The only solution for now is to have a latin-1 workspace and a UTF-8 one. Any better idea? Regards, Cédric

    Read the article

  • Jenkins and Visual Studio Online integration authentication

    - by user3120361
    Right now I am trying to set up Continuous Integration/Delivery for a basic WCF service, which will be hosted on a Microsoft Azure VM. The project is version controlled through Visual Studio Online. So I installed Jenkins (also on the Azure VM), the TFS plugin, etc. and started the first test build. As Server URL I used "[VSO Adress]/DefaultCollection" and for login purposes my Microsoft account (I can access VSO with that). The problem is, when I run the build I get the following error:

        Started by user Developer
        Building in workspace C:\Program Files (x86)\Jenkins\jobs\test\workspace
        [workspace] $ "C:\Program Files (x86)\TEE-CLC-11.0.0.1306\TEE-CLC-11.0.0\tf.cmd" workspaces -format:brief -server:[VSO Adress]/DefaultCollection ****"
        An error occurred: Access denied connecting to TFS server [VSO Adress] (authenticating as har****@*******o.com)
        FATAL: Executable returned an unexpected result code [100]
        ERROR: null
        Finished: FAILURE

    So my question is whether it is generally possible to connect Jenkins and VSO that way, and if so, which login credentials are needed.

    Read the article

  • Eclipse organization: workspaces, working sets, projects, folders, multiple source folders, ....!!!

    - by Ricket
    There is quite a hierarchy of organization in Eclipse. You can have multiple workspaces, each of which can have projects; these projects can be assigned to working sets; and then each project can have source folders... How do you use all this organization? Do you even use it all? Working sets are so hidden that I hardly know what they are; are they commonly used, or are they hidden because they are so uncommonly used? What is even the methodology behind all this? I'd like a good explanation of the recommended way to use all these different organizational layers, because at the moment I basically just have a bunch of random projects in a single workspace (the default %USER%/workspace folder) and it's getting to be quite an alphabetical mess. So in essence: How do you keep your Eclipse workspace(s) organized?

    Read the article

  • how can I open eclipse and see the project view I last looked at?

    - by Stato Machino
    I'm completely mystified. I do not understand why Eclipse does not show me my list of projects! And all the answers I've seen to similar questions do not mention any 'go here and do that' type of answers. I tried 'Import/General/Existing Projects into Workspace/browse-to...' but that told me the project was already in the workspace. But I KNEW THAT! The project menu does not have an 'open project' option!!!! There is no 'open workspace' option. How do people open Eclipse and 'resume working'??? What is the single, dependable, repeatable option for opening the 'project tree' to show my project? [or tell me my expectations that Java people are smarter are way off base!] Thanks, Kimball

    Read the article

  • Demonstration VM BIC2g 2013-10 Partner Edition for Download

    - by Mike.Hallett(at)Oracle-BI&EPM
    UPDATED! The "BIC2g" demo VM (now version 2013-10) is downloadable from our BIC2g Beehive Online Workspace portal for OPN member partners. Compared to the prior version, BIC2g 2013-04, the new BIC2g 2013-10 has:

    - OBIEE upgraded from 11.1.1.7.0 to 11.1.1.7.1, with BI Mobile Application Designer (BIMAD) added.
    - TimesTen upgraded from 11.2.2.3.0 to 11.2.2.5.0.
    - ODI Client 11.1.1.7.0 installed, including a Standalone agent and empty repositories, and the BI Applications 11g ODI Repositories included (BIAPPS_11g).
    - Informatica and DAC removed from the image.

    There are additional demos for BI-Apps and for Endeca. The compact deployment of EPM is installed and configured, including: Hyperion Foundation, Essbase, Essbase Studio, Essbase Administration Services, Provider Services, Calculation Manager, Planning, Reporting and Analysis, Financial Reporting, Web Analysis, and Workspace.

    The access details, for OPN member partners only, to get added to the BIC2G Beehive Online Workspace portal are shown on this page @ BI Solutions Engineering Partner Portal. This Oracle Business Intelligence Linux VM virtual appliance ("BIC2g") was developed to support Oracle OBI, BI-Apps and EPM Hyperion sales and Oracle partners in product demonstrations, training activities and POC activities. If you do not need BI-Apps, then there is a slightly smaller VM, OBI Sample-App, which you can get from OTN: see @ Oracle BI and EPM Demonstration SampleApp V309 on OTN.

    Read the article

  • F# and ArcObjects

    - by Marko Apfel
    After having a first look at F#, it's time to ask: How could I use F# with ArcObjects? So my first step was to do something with a feature in an F# Interactive session. And these are my first code lines:

        open ESRI.ArcGIS.esriSystem;;
        open ESRI.ArcGIS.DataSourcesGDB;;
        open ESRI.ArcGIS.Geodatabase;;

        let aoInitialize = new AoInitializeClass();;
        let status = aoInitialize.Initialize(esriLicenseProductCode.esriLicenseProductCodeArcEditor);;

        let workspacefactory = new SdeWorkspaceFactoryClass();;
        // Spatial Database Connection, property "Service": sde:sqlserver:okul
        let connection = "user=sfg;password=gfs;server=OKUL;database=Praxair;version=SDE.DEFAULT";;
        let workspace = workspacefactory.OpenFromString(connection, 0);;

        let featureWorkspace = (box workspace) :?> IFeatureWorkspace;;
        let featureClass = featureWorkspace.OpenFeatureClass("Praxair.SFG.BP_L_ROHR");;
        let feature = featureClass.GetFeature(100);;
        printfn "%A" feature.OID;;

    Read the article

< Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >