Search Results

Search found 884 results on 36 pages for 'workspace'.

Page 7/36 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • How to delete Document Workspace sub folders with Sharepoint web services?

    - by pclem12
    I'm trying to delete a sub folder in a DWS. This is the code I've got:

      SharepointDocs.DwsSoapClient dws = new SharepointDocs.DwsSoapClient();
      dws.DeleteFolderCompleted += dws_DeleteFolderCompleted;
      dws.DeleteFolderAsync(DWSname + '/' + folderName);

    In the completion callback I get no error codes, only the message "<Results/>". On the MSDN site it says this empty message signals a success. However, the folder is still there. Any ideas what the problem could be?

    Read the article

  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things. As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto is more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects are all examples of things that might profitably go into the stub proto. Without a stub proto, these items were handled in a variety of ad hoc ways:

    - If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this: each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean (in the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them, which causes many difficulties and benefits no one); the interdependencies are not obvious to the make utility and can lead to races; and they are not obvious to the human reader, who may therefore not realize that they exist, and break them.
    - Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems: it requires a long list of exceptions to silence our normal unused-proto-item error checking; in the past, we have accidentally shipped files that we did not intend to deliver to the end user; and mixing cruft with valuable items makes it hard to discern which is which.

    The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and all existing special-case makefile rules eliminated. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight; it will be a long-term, case-by-case project, but the overall trend is clear.

    The Stub Proto, -z assert-deflib, And The End Of Accidental System Object Linking

    We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: how to ensure that we're linking to the OS bits we're building instead of to those from the running system.
    The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that make up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS. None of these consolidations is self contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations.

    The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist. The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas. In general, this works well and dependencies are found in the right places. However, there have always been failures:

    - Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server.
    - Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server.

    These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error. The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so the second form of error was still possible.
    While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure, the one caused by makefile errors:

    - Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib).
    - Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch.
    - Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults.

    Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately. All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated

      7021198 ld option to warn when link accesses a library via default path
      PSARC/2011/068 ld -z assert-deflib option

    into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor:

      -z assert-deflib=[libname]
          Enables warning messages for libraries specified with the -l command line option
          that are found by examining the default search paths provided by the link-editor.
          If a libname value is provided, the default library warning feature is enabled,
          and the specified library is added to a list of libraries for which no warnings
          will be issued. Multiple -z assert-deflib options can be specified in order to
          specify multiple libraries for which warnings should not be issued. The libname
          value should be the name of the library file, as found by the link-editor,
          without any path components.

    For example, the following enables default library warnings, and excludes the standard C library:

      ld ... -z assert-deflib=libc.so ...

    -z assert-deflib is a specialized option, primarily of interest in build environments where multiple objects with the same name exist and tight control over the library used is required. It is not intended for general use. Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain form of -z assert-deflib, and make symlinks for the non-OSnet dependencies. The exception to this is dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better. Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with

      7021896 Prevent OSnet from accidentally linking to build system

    which integrated into snv_162 (March 2011).
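
    To make the mechanism concrete, here is a minimal sketch of the two pieces working together. The paths, the $STUBPROTO variable, and the libX11 dependency are illustrative assumptions, not the actual ON build rules:

      # Illustrative sketch: $STUBPROTO stands for the stub proto root, and
      # libX11 for a non-OSnet dependency; neither name is the real build's.
      mkdir -p $STUBPROTO/usr/lib
      ln -s /usr/lib/libX11.so $STUBPROTO/usr/lib/libX11.so

      # Link with default-library warnings enabled, exempting only libc.so.
      # libX11 now resolves via the -L path (the symlink), so no warning fires;
      # any library that silently falls back to /lib or /usr/lib is flagged.
      ld -o prog prog.o -L$STUBPROTO/usr/lib -lX11 -lc -z assert-deflib=libc.so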
    Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, all of which were fixed in the same putback. The errors we found in our Makefiles underscore how difficult such errors can be to prevent without an automatic system in place to catch them.

    Conclusions

    The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.

    Read the article

  • Jenkins and Ant - ant.bat not recognized but env vars are set well

    - by Blue
    I'm trying to set up Jenkins to work with Ant, but I get the following error:

      Started by user anonymous
      Building in workspace C:\.jenkins\workspace\CI Demo
      Checking out a fresh workspace because there's no workspace at C:\.jenkins\workspace\CI Demo
      Cleaning local Directory .
      Checking out https:///svn/CI_Demo/trunk at revision '2013-10-27T19:34:31.549 +0000'
      At revision 6
      [CI Demo] $ cmd.exe /C '"ant.bat jar && exit %%ERRORLEVEL%%"'
      'ant.bat' is not recognized as an internal or external command, operable program or batch file.
      Build step 'Invoke Ant' marked build as failure
      Finished: FAILURE

    However, JAVA_HOME and ANT_HOME are set, and I added the following to "Path": %ANT_HOME%\bin;%JAVA_HOME%\bin. And as you can see, the command is recognized when executed in CMD:

      C:\Users\Administrator>java -version
      java version "1.7.0_45"
      Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
      Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)

      C:\Users\Administrator>ant -version
      Apache Ant(TM) version 1.9.2 compiled on July 8 2013

      C:\Users\Administrator>ant.bat
      Buildfile: build.xml does not exist!
      Build failed

    I would appreciate your help. Thank you, N
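
    A common cause worth checking here (an assumption, since it depends on how Jenkins was installed): when Jenkins runs as a Windows service, it only sees system-level environment variables, not per-user ones, and a service must be restarted before it picks up changes. A quick check from an elevated prompt (the Ant path below is hypothetical):

      REM Confirm what a fresh console resolves:
      echo %ANT_HOME%
      where ant.bat
      REM Make the change machine-wide, then restart the Jenkins service:
      setx /M PATH "%PATH%;C:\apache-ant-1.9.2\bin"

    A cleaner alternative is to register the Ant installation directly in Jenkins under Manage Jenkins, so jobs do not depend on the service's PATH at all.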

    Read the article

  • SSH error: Permission denied, please try again

    - by Kamal
    I am new to Ubuntu, so please forgive me if the question is too simple. I have an Ubuntu server set up on an Amazon EC2 instance, and I need to connect my desktop (which is also an Ubuntu machine) to the server using SSH. I have installed OpenSSH on the Ubuntu server. I need all systems on my network to connect to the Ubuntu server using SSH (no need to connect through .pem or .pub keys), so I opened SSH port 22 for my static IP in the security groups (AWS). My sshd_config file is:

      # Package generated configuration file
      # See the sshd_config(5) manpage for details
      # What ports, IPs and protocols we listen for
      Port 22
      # Use these options to restrict which interfaces/protocols sshd will bind to
      #ListenAddress ::
      #ListenAddress 0.0.0.0
      Protocol 2
      # HostKeys for protocol version 2
      HostKey /etc/ssh/ssh_host_rsa_key
      HostKey /etc/ssh/ssh_host_dsa_key
      HostKey /etc/ssh/ssh_host_ecdsa_key
      #Privilege Separation is turned on for security
      UsePrivilegeSeparation yes
      # Lifetime and size of ephemeral version 1 server key
      KeyRegenerationInterval 3600
      ServerKeyBits 768
      # Logging
      SyslogFacility AUTH
      LogLevel INFO
      # Authentication:
      LoginGraceTime 120
      PermitRootLogin yes
      StrictModes yes
      RSAAuthentication yes
      PubkeyAuthentication yes
      #AuthorizedKeysFile %h/.ssh/authorized_keys
      # Don't read the user's ~/.rhosts and ~/.shosts files
      IgnoreRhosts yes
      # For this to work you will also need host keys in /etc/ssh_known_hosts
      RhostsRSAAuthentication no
      # similar for protocol version 2
      HostbasedAuthentication no
      # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
      #IgnoreUserKnownHosts yes
      # To enable empty passwords, change to yes (NOT RECOMMENDED)
      PermitEmptyPasswords no
      # Change to yes to enable challenge-response passwords (beware issues with
      # some PAM modules and threads)
      ChallengeResponseAuthentication no
      # Change to no to disable tunnelled clear text passwords
      #PasswordAuthentication yes
      # Kerberos options
      #KerberosAuthentication no
      #KerberosGetAFSToken no
      #KerberosOrLocalPasswd yes
      #KerberosTicketCleanup yes
      # GSSAPI options
      #GSSAPIAuthentication no
      #GSSAPICleanupCredentials yes
      X11Forwarding yes
      X11DisplayOffset 10
      PrintMotd no
      PrintLastLog yes
      TCPKeepAlive yes
      #UseLogin no
      #MaxStartups 10:30:60
      #Banner /etc/issue.net
      # Allow client to pass locale environment variables
      AcceptEnv LANG LC_*
      Subsystem sftp /usr/lib/openssh/sftp-server
      # Set this to 'yes' to enable PAM authentication, account processing,
      # and session processing. If this is enabled, PAM authentication will
      # be allowed through the ChallengeResponseAuthentication and
      # PasswordAuthentication. Depending on your PAM configuration,
      # PAM authentication via ChallengeResponseAuthentication may bypass
      # the setting of "PermitRootLogin without-password".
      # If you just want the PAM account and session checks to run without
      # PAM authentication, then enable this but set PasswordAuthentication
      # and ChallengeResponseAuthentication to 'no'.
      UsePAM yes

    Through Webmin (Command Shell), I created a new user named 'senthil' and added this new user to the 'sudo' group:

      sudo adduser -y senthil
      sudo adduser senthil sudo

    I tried to log in as this new user 'senthil' in Webmin, and was able to log in successfully. When I tried to connect to the Ubuntu server from my terminal through SSH,

      ssh senthil@SERVER_IP

    it asked me to enter the password. After the password entry, it displayed: "Permission denied, please try again." On some research I realized that I need to monitor my server's auth log for this.
    I got the following errors in my auth log (/var/log/auth.log):

      Jul 2 09:38:07 ip-192-xx-xx-xxx sshd[3037]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=MY_CLIENT_IP user=senthil
      Jul 2 09:38:09 ip-192-xx-xx-xxx sshd[3037]: Failed password for senthil from MY_CLIENT_IP port 39116 ssh2

    When I tried to debug using ssh -v senthil@SERVER_IP:

      OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
      debug1: Reading configuration data /etc/ssh/ssh_config
      debug1: /etc/ssh/ssh_config line 19: Applying options for *
      debug1: Connecting to SERVER_IP [SERVER_IP] port 22.
      debug1: Connection established.
      debug1: identity file {MY-WORKSPACE}/.ssh/id_rsa type 1
      debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048
      debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048
      debug1: identity file {MY-WORKSPACE}/.ssh/id_rsa-cert type -1
      debug1: identity file {MY-WORKSPACE}/.ssh/id_dsa type -1
      debug1: identity file {MY-WORKSPACE}/.ssh/id_dsa-cert type -1
      debug1: identity file {MY-WORKSPACE}/.ssh/id_ecdsa type -1
      debug1: identity file {MY-WORKSPACE}/.ssh/id_ecdsa-cert type -1
      debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p1 Debian-7ubuntu1
      debug1: match: OpenSSH_5.8p1 Debian-7ubuntu1 pat OpenSSH*
      debug1: Enabling compatibility mode for protocol 2.0
      debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1
      debug1: SSH2_MSG_KEXINIT sent
      debug1: SSH2_MSG_KEXINIT received
      debug1: kex: server->client aes128-ctr hmac-md5 none
      debug1: kex: client->server aes128-ctr hmac-md5 none
      debug1: sending SSH2_MSG_KEX_ECDH_INIT
      debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
      debug1: Server host key: ECDSA {SERVER_HOST_KEY}
      debug1: Host 'SERVER_IP' is known and matches the ECDSA host key.
      debug1: Found key in {MY-WORKSPACE}/.ssh/known_hosts:1
      debug1: ssh_ecdsa_verify: signature correct
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug1: SSH2_MSG_NEWKEYS received
      debug1: Roaming not allowed by server
      debug1: SSH2_MSG_SERVICE_REQUEST sent
      debug1: SSH2_MSG_SERVICE_ACCEPT received
      debug1: Authentications that can continue: password
      debug1: Next authentication method: password
      senthil@SERVER_IP's password:
      debug1: Authentications that can continue: password
      Permission denied, please try again.
      senthil@SERVER_IP's password:

    For the password, I entered the same value I normally use for the 'ubuntu' user. Can anyone please guide me to where the issue is and suggest a solution?
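
    A commonly suggested first check for this symptom, offered as an assumption rather than a confirmed diagnosis: the pam_unix failure usually means the account's password is wrong or was never set, and note that Ubuntu's adduser has no -y option, so the user-creation step above may not have prompted for a password at all. On the server:

      # Set (or reset) the user's password explicitly:
      sudo passwd senthil
      # Password logins must be permitted ('yes', or commented out, is fine):
      grep -i 'PasswordAuthentication' /etc/ssh/sshd_config
      # Restart sshd after any config change:
      sudo service ssh restart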

    Read the article

  • jqueryUI: Drag element from dialog and drop onto main page?

    - by Seth Petry-Johnson
    I am trying to create a drag and drop system consisting of a workspace and a "palette". The workspace currently consists of re-orderable list items, and I want the palette to be a floating window from which I can drag items and add them to a specific position on the workspace. I am currently using the jqueryUI "sortable" plugin for the workspace and the jqueryUI "dialog" plugin for the palette. However, I cannot drag something out of the dialog and on to the main page. When I try, the item being dragged disappears as it crosses the boundary of the dialog (which makes sense). What can I change so that items will remain visible as I drag them out of the palette and allow me to drop them onto the main workspace? Alternatively, are there any jquery plugins that offer this sort of drag-n-drop palette as a primary feature?

    Read the article

  • How to create a GUI inside a function in Matlab?

    - by Paperflyer
    Is there a possibility to write a GUI from inside a function? The problem is that the callbacks of all GUI functions are evaluated in the global workspace, but functions have their own workspace and cannot access variables in the global workspace. Is there a way to make the GUI functions use the workspace of the function? For example:

      function myvar = myfunc()
          myvar = true;
          h_fig = figure;
          % create a useless button
          uicontrol( h_fig, 'style', 'pushbutton', ...
                     'string', 'clickme', ...
                     'callback', 'myvar = false' );
          % wait for the button to be pressed
          while myvar
              pause( 0.2 );
          end
          close( h_fig );
          disp( 'this will never be displayed' );
      end

    This event loop will run indefinitely, since the callback will not modify myvar in the function. Instead, it will create a new myvar in the global workspace.

    Read the article

  • In Windows 7 is there a way to login from any user account and see the same workspace and be able to use the running programs of another user?

    - by WickedMongoose
    Our group has a number of test stands with PCs that are currently accessed through a single group login. It has been sent from on high that this is not the way to do things for security reasons, and we all agree. However, multiple team members from around the world log into these test stands and need to be able to access programs that were started under what would be different user profiles if we no longer had a single common login. Is there a way to have a common workspace such that when different users log in, they can see and interact with all running applications as if they were using a common login? The applications we run link to and monopolize hardware resources connected to the PC, and it is time consuming to restart them and reload settings every time a new user logs in. Even if a program did not monopolize the hardware, many of these programs are resource intensive and require a large portion of each machine's RAM, so running an application again when it is already running under another user account would quickly consume all system resources. Simple example: I open a Chrome browser while logged into our PC. I then log out, and another team member remotes in; he should be able to see my open browser and interact with it as if he were the one who opened it. Any alternative process flows or solutions from someone who has gone through a similar transition would be appreciated. This is not a request for how to give all users the ability to run a program; it is a request for how to allow all users to interact with running applications that were started by other users, as if the new user had started, and has control of, the application.

    Read the article

  • How to undo a changeset using tf.exe rollback

    - by Tarun Arora
    Technorati Tags: Team Foundation Server 2010, Team Foundation Utilities, TFS2010

    Oh no! Did you just check in a changeset to TFS and realize that you need to roll back the changeset because the changes were supposed to go in a different branch? Or did you just accidentally merge a wrong changeset into your release branch? There are several ways to undo the damage.

    Manual: Yes, we all just hate this word, but for the record you could manually roll back the changes. Get Specific Version on the branch and choose the changeset prior to the one you checked in. After that, check out all the files in the changeset and check them in. During the check in you will receive a conflict. At this point choose 'Keep local changes' in the conflict resolution window and check in the files.

    Automated: Yes, we just love it! TFS comes with a very powerful command line utility, 'tf.exe', that gives you the ability to roll back the effects of one or more changesets to one or more version-controlled items. This command does not remove the changesets from an item's version history. Instead, it creates in your workspace a set of pending changes that negate the effects of the changesets that you specify.

    Syntax:

      tf rollback /toversion:VersionSpec ItemSpec [/recursive] [/lock:none|checkin|checkout] [/version:versionspec] [/keepmergehistory] [/login:username,[password]] [/noprompt]
      tf rollback /changeset:ChangesetFrom~ChangesetTo [ItemSpec] [/recursive] [/lock:none|checkin|checkout] [/version:VersionSpec] [/keepmergehistory] [/noprompt] [/login:username,[password]]

    I'll explain this with an example. Your workspace is at the location C:\myWorkspace and you want to roll back changeset #145621:

      C:\Workspace\MyBranch>tf.exe rollback /changeset:145621 /recursive

    How do I roll back (undo) a series of changesets? You can roll back a range of changesets by using the following:

      C:\Workspace\MyBranch>tf.exe rollback /changeset:145601~145621 /recursive

    This will check out the files in version control, and you should be able to see them in the pending changes. Go on, check them in to undo the specific changesets that you just rolled back.

    Do you want to completely get rid of the changeset from all future merges between the two branches? /keepmergehistory: this option has an effect only if one or more of the changesets that you are rolling back include a branch or merge change. Specify this option if you want future merges between the same source and the same target to exclude the changes that you are rolling back.

    Errors: If you get the message "Unable to determine the workspace. You may be able to correct this by running 'tf workspaces /collection:TeamProjectCollectionUrl'", you are in the wrong directory. Make sure that you run the 'tf rollback' command from the directory of your workspace.

    Status exit codes:

      0    The operation rolled back all items successfully.
      1    The operation rolled back at least one item successfully but could not roll back one or more items.
      100  The operation could not roll back any items.

    To use the command you must have the Read, Check Out, and Check In permissions set to Allow. So, have you been in a rollback/undo situation before?
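
    P.S. As a further illustration of the /toversion form (the item path and changeset number below are made up for the example):

      REM Rolls everything under the item back to its changeset-145600 state,
      REM as pending changes in the current workspace.
      C:\Workspace\MyBranch>tf.exe rollback /toversion:C145600 $/MyProject/src /recursive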

    Read the article

  • Exporting makefile from Eclipse CDT

    - by Alex Farber
    I have a C++ project in the Ubuntu OS, Eclipse CDT. My final goal is to build the project binaries for FreeBSD. The first test: I create a simple C++ CDT project with a main.cpp file containing cout << "OK" << endl; and build it. Then I open a Terminal window in the Release directory:

      alex@alex-linux:~/workspace/HelloWorld/Release$ ls
      HelloWorld main.d main.o makefile objects.mk sources.mk subdir.mk
      alex@alex-linux:~/workspace/HelloWorld/Release$ rm HelloWorld main.d main.o
      alex@alex-linux:~/workspace/HelloWorld/Release$ ls
      makefile objects.mk sources.mk subdir.mk
      alex@alex-linux:~/workspace/HelloWorld/Release$ make
      Building file: ../main.cpp
      Invoking: GCC C++ Compiler
      g++ -O3 -Wall -c -fmessage-length=0 -MMD -MP -MF"main.d" -MT"main.d" -o"main.o" "../main.cpp"
      Finished building: ../main.cpp
      Building target: HelloWorld
      Invoking: GCC C++ Linker
      g++ -o"HelloWorld" ./main.o
      Finished building target: HelloWorld
      alex@alex-linux:~/workspace/HelloWorld/Release$ ./HelloWorld
      OK

    So far, so good. Now I copy the whole project tree to FreeBSD and try to build it:

      $ cd /home/alex/project
      $ ls
      main.cpp release
      $ cd release
      $ ls
      makefile objects.mk sources.mk subdir.mk
      $ make
      "makefile", line 5: Need an operator
      "makefile", line 10: Need an operator
      "makefile", line 11: Need an operator
      "makefile", line 12: Need an operator

    The CDT-generated makefile doesn't work. This is the beginning of the makefile:

      $ Automatically-generated file. Do not edit!
      -include ../makefile.init
      RM := rm -rf
      $ All of the sources participating in the build are defined here
      -include sources.mk
      -include subdir.mk
      -include objects.mk
      ...

    Line 5 is -include ../makefile.init. Really, there is no such file, but somehow it works on the Ubuntu computer. What is the trick? How can I build this? BTW, a manually written makefile works:

      all:
          g++ -O0 -g3 -Wall -c -fmessage-length=0 -MMD -MP -MF"main.d" -MT"main.d" -o"main.o" "../main.cpp"
          g++ -o"HelloWorld" ./main.o

    Note: $ in the makefile is actually #; I replaced it because # creates formatting problems inside of a Stack Overflow pre block.
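
    A likely explanation, offered as an educated guess rather than from the original thread: CDT generates GNU make syntax ('-include' is a GNU make directive, and it silently ignores a missing makefile.init, which is why Ubuntu builds fine), while FreeBSD's default make is BSD make, which reports "Need an operator" on exactly those lines. Installing and invoking GNU make would be the first thing to try:

      # GNU make is a separate package on FreeBSD, invoked as 'gmake'
      # (package name assumed; older releases used 'pkg_add -r gmake'):
      $ pkg install gmake
      $ cd /home/alex/project/release
      $ gmake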

    Read the article

  • Android ndk-build command does nothing

    - by James
    I have a similar question to that posted here: Android NDK: why ndk-build doesn't generate .so file and a new libs folder in Eclipse? ...though I am running Windows 7, not Mac OS. Essentially, the ndk-build command runs, gives no error, but doesn't create an .so file (also, since I'm on Windows, should this create a .dll and not an .so?). I tried running the command from the root, jni, and src folders etc., but got the same result; cmd just returns to the prompt after a few seconds. I ran it again from the jni folder with the NDK_LOG=1 parameter to see what was happening. Here is a portion of the log after running ndk-build in the jni folder (after it successfully identified the platform, etc.):

      Android NDK: Looking for jni/Android.mk in /workspace/NdkFooActivity/jni
      Android NDK: Looking for jni/Android.mk in /workspace/NdkFooActivity
      Android NDK: Found it !
      Android NDK: Found project path: /workspace/NdkFooActivity
      Android NDK: Ouput path: /workspace/NdkFooActivity/obj
      Android NDK: Parsing /cygdrive/c/android-ndk-r8/build/core/default-application.mk
      Android NDK: Found APP_PLATFORM=android-15 in /workspace/NdkFooActivity/project.properties
      Android NDK: Application local targets unknown platform 'android-15'
      Android NDK: Switching to android-14
      Android NDK: Using build script /workspace/NdkFooActivity/jni/Android.mk
      Android NDK: Application 'local' is not debuggable
      Android NDK: Selecting release optimization mode (app is not debuggable)
      Android NDK: Adding import directory: /cygdrive/c/android-ndk-r8/sources
      Android NDK: Building application 'local' for ABI 'armeabi'
      Android NDK: Using target toolchain 'arm-linux-androideabi-4.4.3' for 'armeabi' ABI
      Android NDK: Looking for imported module with tag 'cxx-stl/system'
      Android NDK: Probing /cygdrive/c/android-ndk-r8/sources/cxx-stl/system/Android.mk
      Android NDK: Found in /cygdrive/c/android-ndk-r8/sources/cxx-stl/system
      Android NDK: Cygwin dependency file conversion script:

    ...after which point it just runs the script mentioned in the last line, then terminates. Any ideas? Thanks!
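
    A generic diagnostic step, not a confirmed fix for this particular project: force a rebuild with the underlying commands echoed, which shows whether any sources are being compiled at all. (Incidentally, the NDK targets Android itself, so it produces .so libraries even when the build host is Windows.)

      # Run from the project root; both flags are passed through to GNU make:
      # -B forces a rebuild, V=1 echoes each toolchain command.
      ndk-build -B V=1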

    Read the article

  • Workflow Foundation (WF) -- Why does setting a DependencyProperty to a COM object using SetValue() throw an ArgumentException?

    - by stakx
    Assume that I have a .NET Workflow Foundation (WF) SequenceActivity class with the following property:

      public IWorkspace Workspace { get; set; }
      // ^^^^^^^^^^ important: this is a COM interface type!

      public static DependencyProperty WorkspaceProperty =
          DependencyProperty.Register(
              "Workspace",
              typeof(IWorkspace),
              typeof(FoobarActivity)); // <-- this activity class

    This activity executes some code that sets both of the above like this:

      this.Workspace = ...; // exact code not relevant; property set to a COM object
      SetValue(WorkspaceProperty, this.Workspace);

    The last line (which makes the call to SetValue) results in an ArgumentException for the second parameter (having the value of this.Workspace):

      Type […].IWorkspace of dependency property Workspace does not match the value's type System.__ComObject.

    (translated from German; the English exception text might differ slightly)

    As soon as I register the dependency property with typeof(object) instead of typeof(IWorkspace) as the second parameter, the code executes just fine. However, that would make it possible to assign just about any value to the dependency property, and I do not want that. It seems to me that WF dependency properties don't work for COM interop objects. Does anyone have a solution to this?

    Read the article

  • Eclipse CDT setup for remote build

    - by Posco Grubb
    Is there a better way to set up Eclipse CDT for local editing and remote building? I am working on a C++ project that uses GNU make in Linux. The code is under CVS on a Linux server. When I'm in the lab, I use Eclipse CDT on a Linux-x64 PC. The project is built on a Linux-x86 PC. All the computers in the lab (including the CVS server) have NFS mounts. When I'm at home, I use Eclipse CDT on a Windows 7 PC. The Windows PC connects to the Linux CVS server via an SSH tunnel. To edit source, I rsync the C++ project under the Linux Eclipse workspace back to my Windows Eclipse workspace. (I can also do a remote CVS checkout on the Windows PC.) To build from home, I use a custom build command that SSHes to the Linux-x86 PC, rsyncs the C++ project from my Windows Eclipse workspace to my Linux Eclipse workspace, and then runs make on the Linux-x86 PC, specifying the correct path for the Makefile. In order to go back and forth between lab and home without committing my changes to CVS every time, I use rsync. When I transition from lab to home, I rsync sources to my Windows Eclipse workspace. When I build from home, the sources get rsynced back to the Linux Eclipse workspace. Is there a better, less wonky way to do this? (I'm NOT interested in remote debugging.)
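
    For concreteness, the kind of invocation this workflow implies might look like the following; the host name and workspace paths are placeholders, and the right excludes depend on the project:

      # Push local edits to the build machine's Eclipse workspace over ssh.
      # Trailing slashes sync directory contents rather than nesting a copy;
      # -a preserves timestamps so make rebuilds only what actually changed.
      rsync -avz -e ssh ~/workspace/myproj/ build-host:/home/user/workspace/myproj/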

    Read the article

  • BPM ADF Task forms. Checking whether the current user is in a BPM Swimlane

    - by Christopher Karl Chan
    Focus: So this blog entry will focus on BPM swim-lane roles and users from an ADF context. We have an ADF Task Details Form and we are in the process of making it richer and more dynamic in functionality. A common requirement could be to dynamically show different areas based on the user logged into the workspace. Perhaps we even want to know what swim-lane role the user belongs to. It is a little bit harder to achieve than one thinks, unless you know the trick.

    The Challenge: The tricky part here is that the ADF Task Details Form is in fact part of a separate J2EE application to the main workspace. So if you try to use Java or Expression Language to get the logged in user, you will only find anonymous, and none of the BPM roles you will be expecting. So what to do?

    The Magic: First add the BC4J Security library to your view project. Then restart JDeveloper. Now find the web.xml file in the view project of your ADF Task Details Application and look for the JpsFilter section. Then add in the following section:

      <init-param>
        <param-name>application.name</param-name>
        <param-value>OracleBPMProcessRolesApp</param-value>
      </init-param>

    This will link your application to that of the BPM workspace. Then, in the dynamic part of your ADF form, you can check whether the user logged into the BPM Workspace belongs to a BPM swim-lane in any BPM process. The best way to do this is by using Expression Language in the JSF page itself. Here I am simply changing the rendered flag to either true or false and thereby hiding or showing a section. Perhaps you are re-using the same form for a task in an approver swim-lane and an ordinary user swim-lane, and we only want the approver to see this field. So call the built-in function to check if the user is a member of the BPM swim-lane role. The name of the role must be of the syntax BPMProject.RoleName:

      <af:outputText value="This will only be rendered when the user is part of the BPM Swimlane Role"
                     rendered="#{securityContext.userInRole['BPMProjectName.Rolename']}"/>

    Now you must redeploy your ADF Task Form project. Now (in the image above) the text will ONLY get rendered in the Task Details Form if the user logged into the workspace is a member of the swimlane Unsecure of the BPM project SimpleTask.

    Read the article

  • git repository sync between computers, when moving around?

    - by Johan
    Hi! Let's say that I have a desktop PC and a laptop, and sometimes I work on the desktop and sometimes on the laptop. What is the easiest way to move a git repository back and forth? I want the git repositories to be identical, so that I can continue where I left off on the other computer. I would like to make sure that I have the same branches and tags on both of the computers. Thanks, Johan

    Note: I know how to do this with Subversion, but I'm curious how this would work with git. If it is easier, I can use a third PC as a classical server that the two PCs can sync against. Note: Both computers are running Linux.

    Update: So let's try XANI's idea with a bare git repo on a server, and the push command syntax from KingCrunch. In this example there are two clients and one server. So let's create the server part first:

      ssh user@server
      mkdir -p ~/git_test/workspace
      cd ~/git_test/workspace
      git --bare init

    Then from one of the other computers I try to get a copy of the repo with clone:

      git clone user@server:~/git_test/workspace/
      Initialized empty Git repository in /home/user/git_test/repo1/workspace/.git/
      warning: You appear to have cloned an empty repository.

    Then go into that repo and add a file:

      cd workspace/
      echo "test1" > testfile1.txt
      git add testfile1.txt
      git commit testfile1.txt -m "Added file testfile1.txt"
      git push origin master

    Now the server is updated with testfile1.txt. Anyway, let's see if we can get this file from the other computer:

      mkdir -p ~/git_test/repo2
      cd ~/git_test/repo2
      git clone user@server:~/git_test/workspace/
      cd workspace/
      git pull

    And now we can see the testfile. At this point we can edit it with some more content and update the server again:

      echo "test2" >> testfile1.txt
      git add testfile1.txt
      git commit -m "Test2"
      git push origin master

    Then we go back to the first client and do a git pull to see the updated file. And now I can move back and forth between the two computers, and add a third if I like.

    Read the article

  • Configuring Jenkins for running with BitBucket

    - by Claus
    I'm trying to set up Jenkins on my Mac mini in order to pull my iOS project source code from BitBucket and build it automatically. I've already gone through the major well-known problems: generating the ssh keys, uploading them to BitBucket, and performing an ssh connection from the console to add the host to the known hosts list (you can find all my adventures here and here). Now, there are 3 users in my system: A, B and Shared. When I installed Jenkins it automatically placed itself in Shared, but I generated the ssh keys with the user A. So, just to be clear, in the A home directory there is an .ssh directory with public and private keys. When I try to run my Jenkins job I get this error message:

      Started by user anonymous
      Building in workspace /Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace
      Checkout:workspace / /Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace - hudson.remoting.LocalChannel@625cb0bb
      Using strategy: Default
      Cloning the remote Git repository
      Cloning repository git@bitbucket.org:myuser/myproject.git
      git --version
      git version 1.8.0
      ERROR: Error cloning remote repo 'origin' : Could not clone git@bitbucket.org:myuser/myproject.git
      hudson.plugins.git.GitException: Could not clone git@bitbucket.org:myuser/myproject.git
        at hudson.plugins.git.GitAPI.clone(GitAPI.java:271)
        at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1036)
        at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:978)
        at hudson.FilePath.act(FilePath.java:851)
        at hudson.FilePath.act(FilePath.java:824)
        at hudson.plugins.git.GitSCM.determineRevisionToBuild(GitSCM.java:978)
        at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1134)
        at hudson.model.AbstractProject.checkout(AbstractProject.java:1325)
        at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:676)
        at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88)
        at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:581)
        at hudson.model.Run.execute(Run.java:1516)
        at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
        at hudson.model.ResourceController.execute(ResourceController.java:88)
        at hudson.model.Executor.run(Executor.java:236)
      Caused by: hudson.plugins.git.GitException: Command "/usr/local/git/bin/git clone --progress -o origin git@bitbucket.org:myuser/myproject.git /Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace" returned status code 128:
      stdout: Cloning into '/Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace'...
      stderr: Host key verification failed.
      fatal: Could not read from remote repository.
      Please make sure you have the correct access rights and the repository exists.
        at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:885)
        at hudson.plugins.git.GitAPI.access$000(GitAPI.java:40)
        at hudson.plugins.git.GitAPI$1.invoke(GitAPI.java:267)
        at hudson.plugins.git.GitAPI$1.invoke(GitAPI.java:246)
        at hudson.FilePath.act(FilePath.java:851)
        at hudson.FilePath.act(FilePath.java:824)
        at hudson.plugins.git.GitAPI.clone(GitAPI.java:246)
        ... 14 more
      Trying next repository
      ERROR: Could not clone repository
      FATAL: Could not clone
      hudson.plugins.git.GitException: Could not clone
        at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1048)
        at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:978)
        at hudson.FilePath.act(FilePath.java:851)
        at hudson.FilePath.act(FilePath.java:824)
        at hudson.plugins.git.GitSCM.determineRevisionToBuild(GitSCM.java:978)
        at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1134)
        at hudson.model.AbstractProject.checkout(AbstractProject.java:1325)
        at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:676)
        at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88)
        at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:581)
        at hudson.model.Run.execute(Run.java:1516)
        at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
        at hudson.model.ResourceController.execute(ResourceController.java:88)
        at hudson.model.Executor.run(Executor.java:236)

    As you can see, it fails when Hudson tries to run the git command. The odd thing is that if I try to run

      /usr/local/git/bin/git clone --progress -o origin git@bitbucket.org:myuser/myproject.git /Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace

    in my console, it works fine (after fixing a small problem relative to the folder write permission with chmod). I found a post reporting a similar error which names a number of possible options, but I'm not sure how to perform these operations correctly on my console. It looks like Jenkins is trying to run a command with a user which doesn't have permission to retrieve the appropriate keys from my .ssh directory. Not really sure. Maybe this output can help:

      MacMini:~ myuser$ ps axu | grep "/jenkins"
      myuser 11660 0.0 4.6 2918124 97096 ?? S 6:59pm 1:05.63 /usr/bin/java -jar /Users/myuser/Library/Caches/org.jenkins-ci.jenkins/jenkins.war
      jenkins 9896 0.0 9.0 2939824 188552 ?? Ss 4:06pm 17:55.91 /usr/bin/java -jar /Applications/Jenkins/jenkins.war
      myuser 11930 0.0 0.0 2432768 588 s000 S+ 10:28am 0:00.00 grep /jenkins
      MacMini:~ myuser$ ps axu | grep tomcat
      myuser 11932 0.0 0.0 2432768 588 s000 S+ 10:28am 0:00.00 grep tomcat
      MacMini:~ myuser$

    I really hope to fix this problem, because I would like to write a very detailed tutorial with all the information I found disseminated around the web.
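
    A hedged suggestion, assuming the daemon really runs as the 'jenkins' user shown in the ps output: "Host key verification failed" means the clone runs under an account whose own ~/.ssh contains neither the BitBucket host key nor the private key generated for user A. A one-off ssh test as that user would confirm it (and accept the host key into that user's known_hosts); the key pair itself would also need to exist, or be copied, under the jenkins user's home:

      # -H sets HOME to the jenkins user's, so its own known_hosts is used:
      sudo -H -u jenkins ssh -T git@bitbucket.org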

    Read the article

  • PHP, ANT and virtualhosts

    - by dbasch
    Hi all, I use the following standard folder structure with my projects:

      workspace/
        myproject/
          conf/
            development.properties
            production.properties
          src/
          build.xml
          build.properties
      build/
        myproject/

    Unfortunately, working with scripted languages nullifies the concept of separating the "workspace" from the "build". In my development environment, I use a virtual host for each project. The virtual host for a project is configured during the "deploytodevelopment" ANT task. Which method would you recommend for integrating PHP into my build process?

    1. Change the virtual hosts setup to point to the workspace/myproject/src folder. Edit the PHP in the workspace/myproject/src folder. or
    2. Check out another working copy of the myproject/src folder to the build/myproject folder. Change the virtual hosts setup to point to the build/myproject folder. Edit the PHP in the build/myproject folder.

    Read the article

  • Object Not found - Apache Rewrite issue

    - by Chris J. Lee
    I'm pretty new to setting up Apache locally with XAMPP. I'm trying to develop locally with XAMPP 1.7.4 (Ubuntu 11.04) for a Drupal site. I've actually git pulled an exact copy of this Drupal site from another testing server hosted at MediaTemple.

    Issue: I visit my local development environment virtual host (http://bbk.loc) and the front page renders correctly with no errors from Drupal or Apache. The issue is that subsequent pages return an "Object not found" error from Apache. What is more bizarre is that when I add various query strings, the pages are found (like http://bbk.loc?p=user).

    VHost file:

      NameVirtualHost bbk.loc:*
      <Directory "/home/chris/workspace/bbk/html">
        Options Indexes Includes execCGI
        AllowOverride None
        Order Allow,Deny
        Allow From All
      </Directory>
      <VirtualHost bbk.loc>
        DocumentRoot /home/chris/workspace/bbk/html
        ServerName bbk.loc
        ErrorLog logs/bbk.error
      </VirtualHost>

    bbk.error log file:

      [Mon Jun 27 10:08:58 2011] [error] [client 127.0.0.1] File does not exist: /home/chris/workspace/bbk/html/node, referer: http://bbk.loc/
      [Mon Jun 27 10:21:48 2011] [error] [client 127.0.0.1] File does not exist: /home/chris/workspace/bbk/html/sites/all/themes/bbk/logo.png, referer: http://bbk.$
      [Mon Jun 27 10:21:51 2011] [error] [client 127.0.0.1] File does not exist: /home/chris/workspace/bbk/html/node, referer: http://bbk.loc/

    Actions I've taken:

    - Moved rewrite module loading to load before the cache module (http://drupal.org/node/43545)
    - Verified mod_rewrite works with an .htaccess file

    Any ideas why mod_rewrite might not be working?
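
    One detail in the posted vhost stands out, offered as a hypothesis rather than a confirmed diagnosis: with AllowOverride None, Apache never reads Drupal's .htaccess (which holds the clean-URL rewrite rules), which would produce exactly this picture, clean URLs such as /node failing while query-string URLs work. A sketch of the change to try:

      # In the <Directory> block, let .htaccess supply Drupal's rewrite rules:
      #     AllowOverride All        (instead of AllowOverride None)
      # In XAMPP's httpd.conf, make sure mod_rewrite is loaded:
      #     LoadModule rewrite_module modules/mod_rewrite.so
      # Then restart (XAMPP's control script; path assumed):
      sudo /opt/lampp/lampp restart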

    Read the article

  • Gedit opens on all workspaces?

    - by debug
    I'm using Fedora 10 with XFCE. Every time I open Gedit, it opens an instance on every workspace. The circle icon on my window border is selected (meaning: enabled on all workspaces). When I disable it so that it appears on one workspace, then close and restart, Gedit opens normally (on the current workspace only). The question is: how do I keep this configuration? If I restart my system and start Gedit, it appears on all workspaces again.

    Read the article

  • Why apache throws 403 on index file after install?

    - by den-javamaniac
    Hi. I've just installed Apache and PHP from sources using the following commands:

      ./configure --prefix="/mnt/workspace/servers/web/apache-2.2.17" \
        --enable-info --enable-rewrite --enable-usertrack --enable-mime-magic

    for Apache, and

      ./configure --with-apxs2=/mnt/workspace/servers/web/apache-2.2.17/bin/apxs \
        --prefix=/mnt/workspace/servers/web/apache-2.2.17/php \
        --with-config-file-path=/mnt/workspace/servers/web/apache-2.2.17/php \
        --with-mysql=mysqlnd

    for PHP. After adjusting the configuration (httpd.conf) and starting Apache, it gives a 403 response to a request for http://localhost:8060/index.html (presuming that port 8060 is used). These are the directory settings in httpd.conf:

      <Directory "/mnt/workspace/servers/web/apache-2.2.17/htdocs">
        ...
        Order allow,deny
        Allow from all
        ...
      </Directory>

      <IfModule dir_module>
        DirectoryIndex index.html index.php
      </IfModule>

    It should be noted that I've got Apache on a mounted partition (the default auto mount configured while installing Ubuntu).

    Log files. Access log:

      ::1 - - [12/Feb/2011:17:48:30 +0200] "GET / HTTP/1.1" 403 202
      ::1 - - [12/Feb/2011:17:48:31 +0200] "GET /favicon.ico HTTP/1.1" 403 213
      ::1 - - [12/Feb/2011:17:48:48 +0200] "GET /index.html HTTP/1.1" 403 212
      ::1 - - [12/Feb/2011:17:48:48 +0200] "GET /favicon.ico HTTP/1.1" 403 213
      ::1 - - [12/Feb/2011:17:49:03 +0200] "GET /index.html HTTP/1.1" 403 212
      ::1 - - [12/Feb/2011:17:49:03 +0200] "GET /favicon.ico HTTP/1.1" 403 213

    Error log:

      [Sat Feb 12 18:59:13 2011] [notice] Apache/2.2.17 (Unix) PHP/5.3.5 configured -- resuming normal operations
      [Sat Feb 12 18:59:22 2011] [error] [client ::1] (13)Permission denied: access to / denied
      [Sat Feb 12 18:59:22 2011] [error] [client ::1] (13)Permission denied: access to /favicon.ico denied
      [Sat Feb 12 18:59:36 2011] [error] [client ::1] (13)Permission denied: access to /index.html denied
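
    Worth noting: "(13)Permission denied" is EACCES from the filesystem rather than a problem with the Order/Allow rules. The user Apache runs as needs execute (traverse) permission on every directory from / down to htdocs, which is easy to lose on a separately mounted partition. A hedged way to check and fix (the path comes from the configure line above; adjust to taste):

      # Show the permissions of each path component as the httpd user would see them:
      namei -m /mnt/workspace/servers/web/apache-2.2.17/htdocs/index.html
      # Grant traverse rights on any component that lacks them, e.g.:
      sudo chmod o+x /mnt/workspace /mnt/workspace/servers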

    Read the article

  • Gnome Terminal intercepts ctrl-F1

    - by frank
    Gnome Terminal does not pass the keypress ctrl-F1 on to applications. It's an official bug: https://bugs.launchpad.net/ubuntu/+source/gnome-terminal/+bug/932940 The bug is marked Feb. 2012 but has lived on since 2009. The bug report is not even complete, since shift-ctrl-F1 is also affected. However, I noticed that those two keys are the default keybindings for switch-to-workspace-1 and move-to-workspace-1, so I disabled them. Zero, zippo, zilch: Gnome Terminal would still swallow the keys. Next, I assigned totally different keys to those two workspace functions. The new keybindings did work, but Gnome Terminal would still swallow ctrl-F1 and shift-ctrl-F1. Where are the default workspace keybindings stored? [Not in an xml file.]

    Read the article

  • "ant" is not recognized as command in Windows

    - by user1294663
    This is my first time developing Android applications. I'm developing an Android app with Eclipse on Windows 7, and I would like to run the app from the Windows 7 command line interface. I have my Android device connected to the PC. The workspace directory that I use to store the Android project is:

      C:\Users\Guest\Desktop\Software Applications Development\Java\Android Moblie Applications Projects\Eclipse Indigo for Java EE x64-bit\project workspace

    I opened the command line interface and changed the working directory to the Android workspace directory:

      cd C:\Users\Guest\Desktop\Software Applications Development\Java\Android Moblie Applications Projects\Eclipse Indigo for Java EE x64-bit\project workspace

    I included the Android SDK platform-tools directory in the PATH environment variable:

      c:\Users\admin\Android-sdks\platform-tools

    Then I entered this into the Windows 7 command line interface:

      ant debug

    I got this error message in cmd:

      'ant' is not recognized as an internal or external command, operable program or batch file.

    What is the solution to this problem?
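
    A hedged observation (the Ant install path below is hypothetical): platform-tools only provides adb and friends; running 'ant debug' additionally requires Ant's own bin directory on PATH, and a console opened after the variable was changed. A quick check:

      REM Confirm what the current console actually resolves:
      where ant.bat
      echo %ANT_HOME%
      REM If missing, add Ant's bin directory (path assumed) for this session:
      set PATH=%PATH%;C:\apache-ant-1.9.2\bin
      ant debug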

    Read the article

< Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >