Search Results

Search found 6952 results on 279 pages for 'sharepoint deployment'.

Page 111/279

  • One big executable or many small DLLs?

    - by Patrick
    Over the years my application has grown from 1 MB to 25 MB, and I expect it to grow further to 40 or 50 MB. I don't use DLLs; I put everything in this one big executable. Having one big executable has certain advantages:
      - Installing my application at the customer is really just copy and run.
      - Upgrades can easily be zipped and sent to the customer.
      - There is no risk of conflicting DLLs (where the customer has version X of the EXE but version Y of the DLL).
    The big disadvantage of the big EXE is that linking times seem to grow exponentially. An additional problem is that a part of the code (let's say about 40%) is shared with another application. Again, the advantages are:
      - There is no risk of having a mix of incorrect DLL versions.
      - Every developer can make changes to the common code, which speeds up development.
    But again, this has a serious impact on compilation times (everyone compiles the common code again on his PC) and on linking times. The question http://stackoverflow.com/questions/2387908/grouping-dlls-for-use-in-executable mentions the possibility of mixing DLLs into one executable, but it looks like this still requires you to link all functions manually in your application (using LoadLibrary, GetProcAddress, ...). What is your opinion on executable sizes, the use of DLLs, and the best 'balance' between easy deployment and easy/fast development?

    Read the article

  • How to share the credentials between Webform and Winform?

    - by Daniel
    Hi! Question: I want to make a login form on the ClickOnce deployment webpage and only allow authenticated users to download the application. I also want the downloaded application to use the same credentials entered on the webpage, without prompting the users to enter them again.

    Details: I have an application (a Windows client) which needs customized settings for different users. The application is deployed through ClickOnce. Currently, the users are given the ClickOnce webpage URL and download the application from there. After downloading and running the application, it prompts users with a login form. If their credentials are authenticated, the application loads the customized settings from the server's database according to the credentials given. The problem is that any unauthenticated user can download the application if they just know the ClickOnce deployment webpage's URL. Unauthenticated users won't be able to run the application anyway, because the application asks for credentials when started, but I want to prevent unauthenticated users from downloading the application at all. Am I asking the wrong question maybe? Your help is much appreciated!
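
    One way to handle the download side (a minimal sketch, not from the question; the "/ClickOnce/" folder name and the module class are assumptions) is to host the deployment files inside an ASP.NET application that uses the same forms authentication as the login page, and reject anonymous requests to that folder:

        using System;
        using System.Web;

        // Hypothetical HttpModule: blocks anonymous requests for ClickOnce deployment files.
        public class ClickOnceAuthModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.AuthorizeRequest += OnAuthorizeRequest;
            }

            private static void OnAuthorizeRequest(object sender, EventArgs e)
            {
                HttpContext ctx = ((HttpApplication)sender).Context;
                bool isDeploymentFile =
                    ctx.Request.Path.IndexOf("/ClickOnce/", StringComparison.OrdinalIgnoreCase) >= 0;
                if (isDeploymentFile && !ctx.Request.IsAuthenticated)
                {
                    ctx.Response.StatusCode = 401;   // forms authentication turns this into a login redirect
                    ctx.ApplicationInstance.CompleteRequest();
                }
            }

            public void Dispose() { }
        }

    On IIS 6 the ClickOnce extensions (.application, .deploy, .manifest) would also need to be mapped to ASP.NET for the module to see those requests; passing the entered credentials on to the client application is a separate problem that this sketch does not address.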

    Read the article

  • How do you prevent Git from printing 'remote:' on each line of the output of a post-receive hook?

    - by Matt Hodan
    I recently configured an EC2 instance with a Git deployment workflow that resembles Heroku, but I can't seem to figure out how Heroku prevents the Git post-receive hook from outputting 'remote:' on each line. Consider the following two examples (one from my EC2 project and one from a Heroku project).

    My EC2 project:

        git push prod master
        Counting objects: 9, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (5/5), done.
        Writing objects: 100% (5/5), 456 bytes, done.
        Total 5 (delta 3), reused 0 (delta 0)
        remote:
        remote: Receiving push
        remote: Deploying updated files (by resetting HEAD)
        remote: HEAD is now at bf17da8 test commit
        remote: Running bundler to install gem dependencies
        remote: Fetching source index for http://rubygems.org/
        remote: Installing rake (0.8.7)
        remote: Installing abstract (1.0.0)
        ...
        remote: Installing railties (3.0.0)
        remote: Installing rails (3.0.0)
        remote: Your bundle is complete! It was installed into ./.bundle/gems
        remote: Launching (by restarting Passenger)... done
        remote:
        To ssh://[email protected]/~/apps/app_name
           e8bd06f..bf17da8  master -> master

    Heroku:

        $> git push heroku master
        Counting objects: 179, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (89/89), done.
        Writing objects: 100% (105/105), 42.70 KiB, done.
        Total 105 (delta 53), reused 0 (delta 0)

        -----> Heroku receiving push
        -----> Rails app detected
        -----> Gemfile detected, running Bundler version 1.0.3
               Unresolved dependencies detected; Installing...
               Using --without development:test
               Fetching source index for http://rubygems.org/
               Installing rake (0.8.7)
               Installing abstract (1.0.0)
               ...
               Installing railties (3.0.0)
               Installing rails (3.0.0)
               Your bundle is complete! It was installed into ./.bundle/gems
               Compiled slug size is 4.8MB
        -----> Launching... done
               http://your_app_name.heroku.com deployed to Heroku

        To [email protected]:your_app_name.git
           3bf6e8d..642f01a  master -> master

    Read the article

  • How to deploy a single webapp with multiple web-modules that may be removed or added individually

    - by Daniel Bleisteiner
    We currently run two separate webapps (WARs) deployed in one single EAR containing additional JARs and settings. To improve our deployment I want to split one of these webapps into different modules that may be built and packaged individually. But I currently have no clue how to package these modules so that I'm able to add or remove them as desired - at best during runtime.

    The webapp is getting more and more complex and I'd like to separate some of the functionality into modules. These modules should be packaged as single archives. As long as they contain only classes and resources loaded through code, I know how to do this (simple JARs). But what about JSPs? Normally a WAR file contains JSPs or HTML files; in my case these are JSF pages utilizing JBoss Seam and RichFaces. These modules will add classes, resources, JSF pages and other includes to the running web application. Is it somehow possible to deploy them as individual archives that serve the same running webapp? We are using Maven for our build and packaging, and we deploy into JBoss v4.

    Read the article

  • Deploying a WAR to tomcat only using a context descriptor

    - by DanglingElse
    I need to deploy a web application in WAR format to a remote Tomcat 6 server. The thing is that I don't want to do it the easy way, meaning not just copying the WAR file into /webapps. So the second choice is to create a unique "Context Descriptor" pointing to the WAR file. (Hope I got that right till here.) So I have a few questions:
      - Is the WAR file allowed to be anywhere in the file system? Meaning, can I copy the WAR file anywhere in the remote file system, except /webapps or any other folder of the Tomcat 6 installation?
      - Is there an easy way to test whether the deployment was successful or not, without using a browser, since I'm reaching the remote server only via SSH and a terminal? (I'm thinking ping?)
      - Is it normal that startup.sh/shutdown.sh don't exist? I'm not the admin of the server and don't know how Tomcat 6 is installed, but I'm sure that in my local Tomcat installations these files are in /bin and ready to use. You can still start/restart/stop Tomcat, but not with these standard scripts.
    Thanks a lot.

    Read the article

  • Incremental deploy from a shell script

    - by WishCow
    I have a project where I'm forced to use FTP as a means of deploying the files to the live server. I'm developing on Linux, so I hacked together a bash script that makes a backup of the FTP server's contents, deletes all the files on the FTP server, and uploads all the fresh files from the Mercurial repository (taking care of user-uploaded files and folders, making post-deploy changes, etc.). It's working well, but the project is starting to get big enough to make the deployment process too long. I'd like to modify the script to look up which files have changed and only deploy the modified files. (The backup is fine as it is at the moment.)

    I'm using Mercurial as the VCS, so my idea is to somehow request the changed files between two revisions from it, iterate over the changed files, upload each modified file, and delete each removed file. I can use hg log -vr rev1:rev2 and, from the output, carve out the changed files with grep/sed/etc. Two problems:
      - I have heard the horror stories that parsing the output of ls leads to insanity, so my guess is that the same applies here: if I try to parse the output of hg log, the variables will undergo word-splitting and all kinds of transformations.
      - hg log doesn't tell me whether a file was modified, added or deleted. Differentiating between modified and deleted files is the least I would need.
    So, what would be the correct way to do this? I'm using yafc as an FTP client, in case it's needed, but I'm willing to switch.

    Read the article

  • Is there a way to specify a per-host deploy_to path with Capistrano?

    - by Chad Johnson
    I have searched and searched and asked a question already, and have not received a clear answer. I have the following deploy script (snippet):

        set :application, "testapplication"
        set :repository, "ssh://domain.com//srv/hg/#{application}"
        set :scm, :mercurial
        set :deploy_to, "/srv/www/#{application}"

        role :web, "domain1.com", "domain2.com"
        role :app, "domain1.com", "domain2.com"
        role :db, "domain1.com", :primary => true, :norelease => true
        role :db, "domain2.com", :norelease => true

    As you see, I have set deploy_to to a specific path, and I have also specified multiple web servers. However, each web server should have a different deployment path. I want to be able to run "cap deploy" and deploy to all hosts in one shot. I am NOT trying to deploy to staging and then to production; this is all production. My question is: how exactly do I specify a path per server? I have read the "Roles" documentation for Capistrano, and it is unclear how to do this. Can someone please post a deploy file example? Does anyone know? Am I crazy? Am I thinking of this wrong or something? No answers anywhere online. Nowhere. Nothing. Please, someone help.

    Read the article

  • how to seamlessly integrate subversion and git?

    - by mattv
    I'm looking for tips on how to seamlessly integrate Subversion and Git for deploying web sites by a small team of web developers. We each have our own development versions of our sites on our local machines. We also have dev, staging, and live servers. As our team has grown, we haven't updated our revision control and deployment strategies accordingly.

    We had all been checking into the trunk of a shared Subversion repository. Both the dev and staging servers ran from a checkout of the trunk, so updating them involved running "svn update", while the live server ran as an export from trunk, which required an "svn export" to get the latest code. In either case, we would often update just certain files by updating or exporting just those files or directories. That worked okay when there were just one or two developers. However, a big downside was that we couldn't point to an individual tag that represented what was currently on live at any given time.

    In keeping with corporate policy, we'd like to continue to use Subversion to store what we're now calling our "production branch," which will be what goes onto staging and live. However, we would like to use Git on our local and development sites. We especially like the idea of easier merges and being able to "cherry-pick" updates that need to go live. We had initially planned on using git-svn, but it doesn't seem to work well in a shared environment such as our dev or staging servers. Anyone else doing something like this? What's the best way to make it work? Or are we making it more difficult than it should be?

    Read the article

  • How do I create relative links for use in a sharepoint site template

    - by Rob
    We are creating a site template that, among other things, has a document library with MANY sub-folders and a link list that contains shortcut links into the depths of the DocLib. While making the site template we are checking the box to 'Include Content.' We are using SharePoint 2010, no MOSS.

    Our problem: once we make a site from the template, the shortcut links don't work. While the first part of the link URL is rewritten, a portion of the original site name is still buried in the URL.

    My question: is there a way to create relative links to content inside the site, so that the site name isn't included? Or is there a variable I can use to represent the current site? Or do I have to programmatically 'fix up' the links after the site is created? Or is there some other better option?
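
    If a programmatic fix-up does turn out to be the way to go, a minimal sketch is below (run once against the newly created site, e.g. from a feature receiver); the "Links" list name and the old template path are assumptions for illustration:

        using System;
        using Microsoft.SharePoint;

        // Hypothetical fix-up: re-root every shortcut that still points at the template's
        // source site so that it points into the newly created site instead.
        public static void FixUpLinkList(SPWeb web, string oldSiteUrl)
        {
            SPList links = web.Lists["Links"];
            foreach (SPListItem item in links.Items)
            {
                string rawValue = item["URL"] as string;
                if (string.IsNullOrEmpty(rawValue)) continue;

                SPFieldUrlValue link = new SPFieldUrlValue(rawValue);
                if (link.Url.StartsWith(oldSiteUrl, StringComparison.OrdinalIgnoreCase))
                {
                    link.Url = web.ServerRelativeUrl.TrimEnd('/') + link.Url.Substring(oldSiteUrl.Length);
                    item["URL"] = link;
                    item.Update();
                }
            }
        }

    Whether the fix-up is needed at all depends on how the template rewrites the stored URLs, so it is worth inspecting the link URLs on a freshly created site first.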

    Read the article

  • Feature element repeatedly added with every feature activation/deactivation

    - by ccomet
    This is a very minor behavior when compared with the entire scope, but it is one that I'd like to put a stop to. I have created a very, very simple SharePoint feature. It has two elements in its manifest: an aspx web part page, and an elements XML file. I'll paraphrase my elements XML, which just adds a module, below.

        <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
          <Module Name="Pass" Url="" Path="">
            <File Url="pasq.aspx" NavBarHome="True" Type="Ghostable">
              <AllUsersWebPart WebPartZoneID="Left" WebPartOrder="0">
                <![CDATA[ WARGH THIS PART DOESN'T MATTER FOR THIS QUESTION! ]]>
              </AllUsersWebPart>
            </File>
          </Module>
        </Elements>

    Now, the first time I deploy and activate this feature, it works properly. But if I have to deactivate and then reactivate the feature in order to fix some properties in the web part, then I find myself with a second web part on the page. Naturally, each similar cycle will just add more and more. I understand that a new web part must be created in order to apply the changes I just made, but why is the old web part still on the page? Can I make this older web part go away automatically as part of the feature activation/deactivation process without needing to employ a Receiver class?
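
    As far as I know, web parts provisioned through a Module are not cleaned up when the feature is deactivated, so in practice this usually does end up needing a (small) feature receiver; a minimal sketch of the deactivation side is below, with the web part title as an assumption:

        using System;
        using System.Web.UI.WebControls.WebParts;
        using Microsoft.SharePoint;
        using Microsoft.SharePoint.WebPartPages;

        // Hypothetical receiver: on deactivation, remove the web part(s) the module provisioned,
        // so re-activating the feature does not stack a second copy on the page.
        public class PassFeatureReceiver : SPFeatureReceiver
        {
            public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
            {
                SPWeb web = properties.Feature.Parent as SPWeb;
                if (web == null) return;

                SPLimitedWebPartManager mgr =
                    web.GetLimitedWebPartManager("pasq.aspx", PersonalizationScope.Shared);

                // Walk backwards because DeleteWebPart shrinks the collection.
                for (int i = mgr.WebParts.Count - 1; i >= 0; i--)
                {
                    if (mgr.WebParts[i].Title == "My Web Part")   // assumed web part title
                    {
                        mgr.DeleteWebPart(mgr.WebParts[i]);
                    }
                }
            }

            public override void FeatureActivated(SPFeatureReceiverProperties properties) { }
            public override void FeatureInstalled(SPFeatureReceiverProperties properties) { }
            public override void FeatureUninstalling(SPFeatureReceiverProperties properties) { }
        }

    Deleting pasq.aspx (or its web parts) by hand before re-activating would have the same effect; the receiver just automates that clean-up.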

    Read the article

  • Problems requesting the LDAP: The server is unwilling to process the request.

    - by Flo
    We have written an authentication provider for a SharePoint web application which can query multiple LDAP directories. One of the LDAP servers has to be queried via SSL, so we imported the CA certificate which was used to sign the LDAP server's certificate into the certificate store of the SharePoint server. The following code snippet shows how we authenticate a user. The passed credentials (account, password) belong to the user we want to authenticate.

        var entry = new DirectoryEntry("LDAP://<ldap-server-address>",
                                       "cn=account,ou=sub,o=xyz,c=de",
                                       "password",
                                       AuthenticationTypes.SecureSocketsLayer);
        var searcher = new DirectorySearcher(entry);
        var found = searcher.FindOne();

    When the code is processed, the call to searcher.FindOne() throws the following exception:

        System.Runtime.InteropServices.COMException (0x80072035): The server is unwilling to process the request

    What circumstances can lead to this error?

    UPDATE: I found some information about the error message. There the problem seemed to be the certificate store, as that user had only stored the certificate in the user's store and not in the computer's store. Unfortunately we've already stored it there. So could this still be a certificate issue?

    UPDATE/SOLUTION: The problem is solved. It seems the root CA certificate was imported correctly, and the error message the LDAP server responded with was caused by an expired user account our customer gave us for testing.

    Read the article

  • Using the sharepoint stsadm import command fails when list exists

    - by 78lro
    When using the stsadm -o import command in SharePoint, I am getting an error relating to a list already existing, and the import then seems to fail. Should it not handle this scenario of a list already existing on the destination server? I then used the UI to delete the existing list and ran the import again. It then seemed to fail at the same point, saying the list exists. In the UI the list appears, but when I click it, it reports that the list does not exist - almost as if it's stuck in a half-created state, unable to be deleted or created by the import. Any suggestions greatly appreciated.

    Read the article

  • changing user permissions dynamically

    - by ephieste
    I am designing a system on SharePoint. There is an approval list for the items; its members can approve, reject and edit the items. Whoever approves an item has to fill in the item's "Assigned To" field while approving it. The user who is added to the "Assigned To" field should then be able to edit the content of the item after it is approved. So, how can I give edit permission to users after they are added to the "Assigned To" field of a specific item?

    The situation is:
      - Approval list: A, B, C (edit and view permission)
      - Users: X, Y, Z, ... (no permission; view after approval)
      - Items: item1, item2, item3, ... (items are invisible to the users)

    A approves item1 and adds X to the "Assigned To" field, meaning this item is now under X's responsibility. But X hasn't got edit permission, and we can't give X edit permission for every item; he should only be able to edit an item after he is written into its "Assigned To" field. How can I create this workflow in SharePoint? Urgent help would be much appreciated.
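
    One common way to do this (a sketch under assumptions, not a finished workflow) is an item event receiver that breaks the item's permission inheritance and grants edit rights to whoever lands in the field; the "AssignedTo" internal field name below is an assumption:

        using Microsoft.SharePoint;

        // Hypothetical ItemUpdated receiver: once "Assigned To" is filled in,
        // give that user (and only that user) contribute rights on the item.
        public class AssignPermissionsReceiver : SPItemEventReceiver
        {
            public override void ItemUpdated(SPItemEventProperties properties)
            {
                SPListItem item = properties.ListItem;
                SPWeb web = item.ParentList.ParentWeb;

                string rawValue = item["AssignedTo"] as string;   // assumed internal field name
                if (string.IsNullOrEmpty(rawValue)) return;

                SPFieldUserValue assigned = new SPFieldUserValue(web, rawValue);
                if (assigned.User == null) return;

                if (!item.HasUniqueRoleAssignments)
                {
                    // Stop inheriting from the list so this item can carry its own permissions.
                    item.BreakRoleInheritance(true);
                }

                SPRoleAssignment assignment = new SPRoleAssignment(assigned.User);
                assignment.RoleDefinitionBindings.Add(
                    web.RoleDefinitions.GetByType(SPRoleType.Contributor));
                item.RoleAssignments.Add(assignment);
            }
        }

    The same few lines (BreakRoleInheritance plus a role assignment for just that user) could equally be called from a custom workflow or from the approval code itself.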

    Read the article

  • Problem with sharepoint search.

    - by Lalit
    Hi, I have created a Contacts list and filled in the proper data where required on my personal SharePoint site. But when I search for a specific name, or any keyword that is present in the contacts list, it shows the message:

        No results matching your search were found.
        - Check your spelling. Are the words in your query spelled correctly?
        - Try using synonyms. Maybe what you're looking for uses slightly different words.
        - Make your search more general. Try more general terms in place of specific ones.
        - Try your search in a different scope. Different scopes can have different results.

    Whereas I am giving proper inputs and following these instructions. What could the problem be? Do I need to change any setting to make my data searchable?

    Read the article

  • FBA site owner encounters access denied in SharePoint 2007

    - by intangible02
    I created a SharePoint 2007 publishing site, first using Windows authentication, and then extended it to another site using FBA. I created an FBA user and set it as site collection admin as well as owner of the top-level site. I also made the application pool that the FBA site runs in run under a user account which is in the administrators group. But I encounter an access denied error when browsing certain links with this site owner account. Are there other settings I need to configure? I found that in the web.config, impersonation is set to true. How does this affect access rights?

    Read the article

  • WebClient.UploadFile() throws exception while uploading files to sharepoint

    - by Royson
    In my application I am uploading files to SharePoint 2007. I am using:

        using (WebClient webClient = new WebClient())
        {
            webClient.Credentials = new NetworkCredential(userName, password);
            webClient.Headers.Add("Content-Type", "application/x-vermeer-urlencoded");
            webClient.Headers.Add("X-Vermeer-Content-Type", "application/x-vermeer-urlencoded");
            String result = Encoding.UTF8.GetString(
                webClient.UploadData(webUrl + "/_vti_bin/_vti_aut/author.dll", "POST", data.ToArray()));
        }

    The code is running successfully, but for some files it throws this exception:

        The underlying connection was closed: The connection was closed unexpectedly.
           at System.Net.WebClient.UploadDataInternal(Uri address, String method, Byte[] data, WebRequest& request)
           at System.Net.WebClient.UploadData(Uri address, String method, Byte[] data)
           at System.Net.WebClient.UploadData(String address, String method, Byte[] data)

    Any ideas what I have done wrong? I am using VS 2008 and .NET 2.0.
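
    "The underlying connection was closed" from WebClient is often a connection-reuse/keep-alive problem rather than anything SharePoint-specific; a common mitigation (a sketch, not a guaranteed fix) is to disable keep-alive on the underlying HttpWebRequest by subclassing WebClient:

        using System;
        using System.Net;

        // WebClient variant that turns off HTTP keep-alive on every request it creates,
        // so each upload uses a fresh connection instead of a possibly stale pooled one.
        public class NoKeepAliveWebClient : WebClient
        {
            protected override WebRequest GetWebRequest(Uri address)
            {
                WebRequest request = base.GetWebRequest(address);
                HttpWebRequest httpRequest = request as HttpWebRequest;
                if (httpRequest != null)
                {
                    httpRequest.KeepAlive = false;
                }
                return request;
            }
        }

    Swapping NoKeepAliveWebClient in for WebClient in the snippet above leaves the rest of the upload logic unchanged; if only large files fail, request timeouts and the web application's upload size limit are also worth checking.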

    Read the article

  • Sharepoint SSRS: Unexpected Error Encountered

    - by Nicholas
    We are running an intranet website hosted on SharePoint 2007, serving reports to users via SSRS. Recently, our users have been experiencing an error when trying to access certain reports, with the message "An unexpected error has occurred". After trying many things, to cut a long story short, we managed to temporarily solve the problem by manually running UPDATE STATISTICS on the tables in the database. However, this error periodically crops up again and again, and running UPDATE STATISTICS immediately solves it. I suggested running a daily job on the server to automatically run UPDATE STATISTICS every few hours or so, but this was rejected by my team lead. Is there any other workaround for this error? And why does UPDATE STATISTICS solve the problem? I still don't quite understand.

    Read the article

  • Where is a SharePoint Community for Webparts?

    - by elhombre
    Hi all, I am searching for a SharePoint Server 2007 web part which can do the following:
      - change password
      - lost password recovery (e-mail)
      - change password reminder (e-mail)
    I have been searching the Internet, but somehow there aren't as many web parts as there are, for example, WordPress add-ons. When I am lucky I can find individuals who made a web part which matches one of the specs :( The only things which come near my specs are:
      - http://www.envisionit.com/Products/Pages/ExtranetModuleforSharePoint.aspx
      - http://userchangepassword.codeplex.com
    I am wondering where I can find communities which have lots of web parts to download or buy. Does anybody know of such communities?

    Read the article

  • How to access SharePoint web part properties?

    - by shannon.stewart
    I have created a feature for SharePoint 2007 that has a web part. I have added a custom property to the web part like so:

        [Personalizable(PersonalizationScope.Shared)]
        [WebBrowsable(true)]
        [Category("My Custom Properties")]
        [WebDisplayName("ServiceURL")]
        [WebDescription("The URL for the Wcf service")]
        public string ServiceURL { get; set; }

    Along with this web part, I've added a custom page that the web part will have a link to. I would like to reference the web part property from the custom page, but I don't know where these properties are stored. I've tried to access it using the code below, but neither property collection has any properties stored:

        SPFeaturePropertyCollection spProperties = SPContext.Current.Site.Features[this.FeatureGuid].Properties;

    or

        SPFeaturePropertyCollection spProperties = SPContext.Current.Site.Features[this.FeatureGuid].Definition.Properties;

    My question is: how can I get a reference to the web part property from other pages?
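
    The value of a Personalizable property is stored with the web part instance on the page it was added to (in the content database), not with the feature, so one way to read it from another page (a sketch; "default.aspx" and the MyWebPart class name are assumptions) is to open that page's web part manager and cast to your own web part type:

        using System.Web.UI.WebControls.WebParts;
        using Microsoft.SharePoint;
        using Microsoft.SharePoint.WebPartPages;

        // Reads ServiceURL from the web part instance living on its host page.
        public static string GetServiceUrl(SPWeb web)
        {
            SPLimitedWebPartManager mgr =
                web.GetLimitedWebPartManager("default.aspx", PersonalizationScope.Shared);

            foreach (System.Web.UI.WebControls.WebParts.WebPart part in mgr.WebParts)
            {
                MyWebPart myPart = part as MyWebPart;   // the feature's web part class (assumed name)
                if (myPart != null)
                {
                    return myPart.ServiceURL;
                }
            }
            return null;
        }

    If several pages need the same setting, storing it somewhere page-independent (for example the web's property bag, SPWeb.Properties, or a small configuration list) is usually simpler than reading it back off a particular page.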

    Read the article

  • How can I tell whether a webpart that has been deployed to a site is a native webpart that ships with SharePoint?

    - by program247365
    I have a SharePoint 2007 MOSS instance, and I'm on a fact-finding mission. There have been multiple developers, developing multiple web parts and deploying them (using the VS2005/2008 SharePoint extensions). I thought maybe I could look at the fields in the "Web Part Gallery" list in my site and go by "Modified by", but it looks like a developer's name is on some of the out-of-the-box web parts somehow, and ones I know are custom developed say "System Account" - so looking at that field in this list is a no-go. I thought then maybe I could look at the "Group" to which each web part was assigned, but it looks like they were arbitrarily assigned to many different groups inconsistently - so using that piece of information is a no-go as well.

    Here is the code I have for just looping through and getting the names of all the web parts (below). Is there any property I can access on the list items of web parts that would tell me whether it's a custom-developed web part? Any way to distinguish the custom web parts from the out-of-the-box ones? Is there another way to do this?

        #region Misc Site Collection Methods
        public static List<string> GetAllWebParts(string connectedSPInstanceUrl)
        {
            List<string> lstWebParts = new List<string>();
            try
            {
                using (SPSite site = new SPSite(connectedSPInstanceUrl))
                {
                    using (SPWeb web = site.OpenWeb())
                    {
                        SPList list = web.Lists["Web Part Gallery"];
                        foreach (SPListItem item in list.Items)
                        {
                            lstWebParts.Add(item.Name);
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                lstWebParts.Add("Error");
                lstWebParts.Add("Message: " + ex.Message);
                lstWebParts.Add("Inner Exception: " + ex.InnerException.ToString());
                lstWebParts.Add("Stack Trace: " + ex.StackTrace);
            }
            return lstWebParts;
        }
        #endregion
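
    As far as I know the gallery item doesn't expose an "out-of-the-box" flag directly; one rough heuristic (only a heuristic, and the assembly names are assumptions about what counts as "native") is to open each item's .webpart/.dwp definition and look at the assembly it references, since the built-in parts point at Microsoft assemblies:

        using System;
        using System.Text;
        using Microsoft.SharePoint;

        // Heuristic check: treat anything whose definition doesn't reference a
        // Microsoft.SharePoint / Microsoft.Office assembly as custom-developed.
        private static bool LooksLikeCustomWebPart(SPListItem item)
        {
            string xml = Encoding.UTF8.GetString(item.File.OpenBinary());
            return xml.IndexOf("Microsoft.SharePoint", StringComparison.OrdinalIgnoreCase) < 0
                && xml.IndexOf("Microsoft.Office", StringComparison.OrdinalIgnoreCase) < 0;
        }

    Cross-checking the assembly names found this way against what is actually deployed in the GAC or the web application's bin folder would make the picture more reliable.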

    Read the article

  • Problem with ranking of search results in SharePoint 2007 if using the CONTAINS predicate

    - by mythicdawn
    While writing a front-end for the SharePoint Search web service for work, I did some quick testing with the MOSS Search Tool to make sure things were working right under the hood. What I found was that queries composed only of CONTAINS predicates (FREETEXT ones were fine) would have a rank of 1000 for any results that were returned. According to the documentation (http://msdn.microsoft.com/en-us/library/ms544086.aspx):

        "If the query returns a document because a non–full-text predicate evaluates to TRUE for that document, the rank value is calculated as 1000."

    Given that the behaviour I am seeing seems to contradict the documentation, is it the case that all queries that use only the CONTAINS predicate will produce ranking like this?

    Read the article

  • File not found - MOSS 2007

    - by isimple
    I received the following error on some SharePoint pages:

        File Not Found.
           at System.Reflection.Assembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection)
           at System.Reflection.Assembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection)
           at System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection)
           at System.Reflection.Assembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection)
           at System.Reflection.Assembly.Load(String assemblyString)
           at System.Web.Configuration.CompilationSection.LoadAssembly(String assemblyName, Boolean throwOnFail)
           at System.Web.UI.TemplateParser.LoadAssembly(String assemblyName, Boolean throwOnFail)
           at System.Web.UI.MainTagNameToTypeMapper.ProcessTagNamespaceRegistrationCore(TagNamespaceRegisterEntry nsRegisterEntry)
           at System.Web.UI.MainTagNameToTypeMapper.ProcessTagNamespaceRegistration(TagNamespaceRegisterEntry nsRegisterEntry)
           at System.Web.UI.BaseTemplateParser.ProcessDirective(String directiveName, IDictionary directive)
           at System.Web.UI.TemplateControlParser.ProcessDirective(String directiveName, IDictionary directive)
           at System.Web.UI.PageParser.ProcessDirective(String directiveName, IDictionary directive)
           at System.Web.UI.TemplateParser.ParseStringInternal(String text, Encoding fileEncoding)

    I used the Assembly Binding Log Viewer tool (fuslogvw.exe) to locate the assembly which is not being bound, and it seems to be Microsoft.Sharepoint.Intl.Resources.dll. This assembly is not being located by the application, resulting in the 'File not Found' error. Can anyone guide me on how to solve this?

    Read the article

  • Export Sharepoint 2007 Custom List as RSS File

    - by matt
    Here's our scenario:
      1. We've created a SharePoint 2007 calendar on our intranet site.
      2. We want to run a daily job to export a subset of the events to an RSS file.
      3. Another job will move the RSS file to our public web site.
    We have some funny restrictions where we can't simply publish the RSS feed to the public; we have to go this export route. I'm not clear on how to accomplish step 2. Ideally, we wouldn't have to write a lot of custom code to accomplish this. Thanks.
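
    One fairly low-code option for step 2 (a sketch; the site URL, list GUID and output path are placeholders) is to lean on the RSS feed every SharePoint 2007 list already exposes through /_layouts/listfeed.aspx, and have the daily job simply download it to a file:

        using System.Net;

        // Hypothetical daily job: pull the calendar list's built-in RSS feed and save it to disk
        // so the second job can copy the file to the public web site.
        class ExportCalendarRss
        {
            static void Main()
            {
                string feedUrl = "http://intranet/sites/team/_layouts/listfeed.aspx?List={LIST-GUID}";
                using (WebClient client = new WebClient())
                {
                    client.UseDefaultCredentials = true;   // run under an account that can read the calendar
                    client.DownloadFile(feedUrl, @"C:\exports\calendar.xml");
                }
            }
        }

    Since only a subset of events should go out, the job would still need to either post-process this feed or query the list itself (for example with SPQuery) and write the RSS manually.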

    Read the article

  • persist values/variables from page to page

    - by Todd
    Hello, I'm wondering if there's another solution to my problem that's considered more the SharePoint way. Firstly, my site is an Internet site, not an intranet. The problem is, all I'm trying to do is save values/variables from page to page in SharePoint. I know the issues with session variables, but this seems to be the only way I can see to accomplish this. I know there are web parts that can store this value, but am I wrong in thinking this won't be persisted from page to page?

    Basically, I'll be extending the Content Query Web Part to dynamically filter its results based on a variable/value. The user chooses their 'area' from a dropdown, and the CQWPs in the site will change and query results based on this value. (It will be a provincial structure, as it is a Canadian site; so if someone chooses the province 'Ontario', this value is saved in a global variable, and the extended CQWPs throughout the site will get this value and query lists flagged as Ontario.)

    Are session variables the only solution? Thanks, everyone!
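
    One alternative that avoids server-side session state entirely (a minimal sketch; the "SelectedProvince" cookie name is an assumption) is to keep the selection in a cookie, which the dropdown writes and the extended CQWP reads before building its query:

        using System;
        using System.Web;

        // Stores and retrieves the visitor's chosen province without using Session.
        public static class ProvincePreference
        {
            public static void Save(string province)
            {
                HttpCookie cookie = new HttpCookie("SelectedProvince", province);
                cookie.Expires = DateTime.Now.AddDays(30);
                HttpContext.Current.Response.Cookies.Add(cookie);
            }

            public static string Load()
            {
                HttpCookie cookie = HttpContext.Current.Request.Cookies["SelectedProvince"];
                return cookie == null ? null : cookie.Value;
            }
        }

    The value then survives from page to page (and across visits) without session state having to be enabled for the web application, which can matter on an anonymous Internet-facing site.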

    Read the article

  • Regarding focusing on the web part on a site on sharepoint after clicking

    - by Brigadier Jigar
    Hi friends, I have deployed 3 web parts on a site on a SharePoint server. My 3rd web part has some functionality which needs a button click. Now when I click that button, the whole page gets refreshed and I have to scroll down to see the change that happened due to clicking that functionality in the 3rd web part. Is there any way by which, when I click on any point of the web part, the page is reloaded but the control point or the focus remains at the same position where I clicked? Help me out. Regards, Jigar
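
    Since it is the full-page postback that loses your place, one simple option (a sketch; MyButtonWebPart stands in for the real web part class) is to ask ASP.NET to restore the scroll position after the postback:

        using System;
        using System.Web.UI.WebControls.WebParts;

        // Restores the browser's scroll position after the button's postback,
        // so the user stays at the web part they clicked.
        public class MyButtonWebPart : WebPart
        {
            protected override void OnLoad(EventArgs e)
            {
                base.OnLoad(e);
                this.Page.MaintainScrollPositionOnPostBack = true;
            }
        }

    The same behaviour can be switched on site-wide with the pages element in web.config (maintainScrollPositionOnPostBack="true"), and where ASP.NET AJAX is available, wrapping the button in an UpdatePanel avoids the full refresh altogether.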

    Read the article
