Search Results

Search found 2541 results on 102 pages for 'aspnet merge'.

Page 79/102

  • Add several variables to a dataframe, based on a vector

    - by Andreas
    I am sure this is easy - but I can't figure it out right now. Basically, I have a long vector of variable names:

        names <- c("first", "second", "third")

    I have some data, and I now need to add these variables to it. I could do:

        data$first <- NA

    But since I have a long list, I would like an automated solution. This doesn't work:

        for (i in 1:length(names)) (paste("data$", names[i], sep="") <- NA)

    The reason I want this is that I need to vertically merge two dataframes, where one doesn't have all the variables it should have. Thanks in advance.

  • How to index and search .doc files

    - by Jared
    I have an application that needs to have .doc files uploaded to it. These documents should then be indexed, and the whole collection of documents should be searchable. This will run on a Windows Server, without Word installed, using IIS and SQL Server, but I'd rather not be tied to SQL Server's full-text indexing. I was thinking of using Lucene.Net for the indexing part and was wondering what the best way to get the text out of the .doc files would be. I could probably extract the text by reading in the whole stream and then using a regex to pull out any regular characters, but that seems hefty and prone to error. I saw an article on using IFilters that sounds promising, but I thought I'd put this out there since it's not something I'm familiar with. P.S. If it matters, these .doc files will have mail-merge fields in them, and there's no other current alternative to the .doc format.
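
    The extract-then-index split is the usual pattern: pull plain text out of each .doc, then hand that text to the indexer as a document field. On .NET the extraction step would typically be an IFilter wrapper or NPOI; purely as an illustration of the flow, here is a minimal sketch in Java using Apache POI's old-format Word extractor (the class names come from POI, everything else is an arbitrary assumption):

        import java.io.FileInputStream;
        import java.io.IOException;
        import org.apache.poi.hwpf.extractor.WordExtractor;

        // Sketch: turn a binary .doc into a plain string. The returned text is
        // what you would feed to Lucene/Lucene.Net as the searchable field.
        public class DocTextExtractor {
            public static String extract(String path) throws IOException {
                try (FileInputStream in = new FileInputStream(path)) {
                    WordExtractor extractor = new WordExtractor(in);
                    return extractor.getText();   // paragraph text of the document
                }
            }
        }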

  • Deleting branches in git causes gitk to go wild

    - by a2h
    I decided to delete a few branches from a (personal project) repository of mine that were merged into master, after confirming on #git that leftover branches aren't really necessary. However, gitk's visualisation of my repository's history has as a result been completely screwed up: branches appear out of nowhere and eventually run back into other commits further ahead, even though a merge did not occur at any of those points, and I only had around 5 extra branches. Is this normal? Is there any fix for this?

  • How to override an rcov method by loading a custom file

    - by kdoya
    I use rcov 0.9.8 on Ruby 1.9.1 with RVM for a Rails application. Rcov has problems on Ruby 1.9. I found a solution for the encoding problems here:

        --- lib/rcov/code_coverage_analyzer.rb~ 2010-03-21 16:15:47.000000000 +0100
        +++ lib/rcov/code_coverage_analyzer.rb  2010-03-21 16:11:49.000000000 +0100
        @@ -250,6 +250,10 @@
           end

           def update_script_lines__
        +    if '1.9'.respond_to?(:force_encoding)
        +      SCRIPT_LINES__.each{|k,v| v.each{|src| src.try(:force_encoding, 'utf-8')}}
        +    end
        +
             @script_lines__ = @script_lines__.merge(SCRIPT_LINES__)
           end

    But I want to override the method by loading a custom file. Rcov does not have a require option. Any ideas?

  • Best data structure for a frequently queried list of objects

    - by panzerschreck
    Hello, I have a list of objects, say List<Entity>. The Entity class has an equals method based on a few attributes (a business rule) to differentiate one Entity object from another. The task that we usually carry out on this list is to remove all the duplicates, something like this:

        List<Entity> noDuplicates = new ArrayList<Entity>();
        for (Entity entity : lstEntities) {
            int indexOf = noDuplicates.indexOf(entity);
            if (indexOf >= 0) {
                noDuplicates.get(indexOf).merge(entity);
            } else {
                noDuplicates.add(entity);
            }
        }

    Now, the problem I have been observing is that this part of the code slows down considerably as soon as the list has more than 10000 objects. I understand the ArrayList is doing an O(n) search. Is there a faster alternative? Using a HashMap is not an option, because the entity's uniqueness is built upon 4 of its attributes together, and it would be tedious to put the key itself into the map. Will a sorted set help with faster querying? Thanks
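
    A common answer here is that a hash-based map does work even when the identity spans several attributes: override hashCode() from the same 4 attributes that equals() already uses, and the entity can act as its own key, turning the linear indexOf scan into a hash lookup. A minimal sketch, where the Entity fields, constructor and merge() body are hypothetical stand-ins for the real class:

        import java.util.*;

        /** Hypothetical stand-in for the poster's Entity: equals() and hashCode()
         *  are both derived from the same four "business key" attributes. */
        class Entity {
            final String a, b; final int c, d;   // the four identity attributes (assumed)
            Entity(String a, String b, int c, int d) { this.a = a; this.b = b; this.c = c; this.d = d; }
            void merge(Entity other) { /* combine non-key data; details omitted */ }

            @Override public boolean equals(Object o) {
                if (!(o instanceof Entity)) return false;
                Entity e = (Entity) o;
                return c == e.c && d == e.d && Objects.equals(a, e.a) && Objects.equals(b, e.b);
            }
            @Override public int hashCode() { return Objects.hash(a, b, c, d); }
        }

        class Dedupe {
            /** O(n) de-duplication: a hash lookup replaces ArrayList.indexOf's linear scan. */
            static List<Entity> dedupe(List<Entity> lstEntities) {
                Map<Entity, Entity> seen = new LinkedHashMap<>();   // keeps first-seen order
                for (Entity entity : lstEntities) {
                    Entity existing = seen.get(entity);
                    if (existing != null) existing.merge(entity);   // same merge-on-duplicate behaviour
                    else seen.put(entity, entity);                  // the entity is its own key
                }
                return new ArrayList<>(seen.values());
            }
        }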

  • Rails 4 json return on API

    - by El - Key
    I'm creating an API for my application. I have currently overridden the as_json method in my model in order to be able to get attached files as well as the logo from Paperclip:

        def as_json(options = {})
          super.merge(logo_small: self.logo.url(:small),
                      logo_large: self.logo.url(:large),
                      taxe: self.taxe,
                      attachments: self.attachments)
        end

    Then within my controller, I'm doing:

        def index
          @products = current_user.products
          respond_with @products
        end

        def show
          respond_with @product
        end

    The problem is that on the index, I don't want to get all the attachments. I only need them on the show method. So I tried this:

        def index
          @products = current_user.products
          respond_with @products, except: [:attachments]
        end

    But unfortunately it's not working. What can I do to not send :attachments? Thanks

  • java servlet: generate zip file from BLOBs

    - by Zack
    I'm trying to zip a large number of pdf files (stored as BLOBs in the DB) and then return the zip as an attachment to the user. What's the best way to do this without running into memory issues? Another note: I actually need to merge some PDFs prior to adding them to the ZipOutputStream. Therefore, a couple PDFs will need to be stored in memory at a time. I assume it would be best to then store them as temporary files on the server before zipping them all?
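
    One way to keep memory flat for the plain (unmerged) documents is to stream each BLOB straight from the result set into a ZipOutputStream wrapped around the servlet response, so only a small copy buffer is live at a time; the pre-merge step mentioned above (combining PDFs with a library such as PDFBox or iText into a temp file or byte array) would be streamed into the zip at the same point. A minimal sketch, where the query, column names and DataSource wiring are assumptions, not the poster's schema:

        import java.io.IOException;
        import java.io.InputStream;
        import java.sql.*;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipOutputStream;
        import javax.servlet.http.*;
        import javax.sql.DataSource;

        public class ZipDownloadServlet extends HttpServlet {
            private DataSource dataSource;   // assumed to be wired up (JNDI, pool, ...)

            @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                resp.setContentType("application/zip");
                resp.setHeader("Content-Disposition", "attachment; filename=\"documents.zip\"");
                try (Connection con = dataSource.getConnection();
                     PreparedStatement ps = con.prepareStatement(
                             "SELECT file_name, pdf_data FROM documents");   // hypothetical query
                     ResultSet rs = ps.executeQuery();
                     ZipOutputStream zip = new ZipOutputStream(resp.getOutputStream())) {
                    byte[] buf = new byte[8192];
                    while (rs.next()) {
                        zip.putNextEntry(new ZipEntry(rs.getString("file_name")));
                        try (InputStream in = rs.getBlob("pdf_data").getBinaryStream()) {
                            for (int n; (n = in.read(buf)) != -1; ) {
                                zip.write(buf, 0, n);   // copy in 8 KB chunks, never the whole PDF
                            }
                        }
                        zip.closeEntry();
                    }
                } catch (SQLException e) {
                    throw new IOException(e);
                }
            }
        }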

  • Is it possible / advisable to send HTML emails containing JavaScript?

    - by Adriano Varoli Piazza
    This is mostly a rhetorical question; as far as I've checked the answer is 'don't even bother', but I wanted to be really sure. We have an email app where you can send email to lists of subscribers. This is not spam: it's used, for example, by a university to send communications to its students, or by a museum to send emails to subscribers. Recently, I was asked by a prospective client if it was possible to send HTML messages containing JavaScript without being marked as spam. Not knowing, I did a short trip of the webs, and what I've got is (percentages out of my posterior) 'half the clients won't display it properly', 'half the clients will flag you as spam' and 'half the clients will have blocked JavaScript altogether' (there's clearly some overlap). So the best solution seems to be adding a link to a proper page if it's really necessary. Have you got a different experience? Do you know of any email-merge solution that provides this feature? Do you know if specific clients accept it or refuse to display HTML with JavaScript?

  • Need ILMerge hint

    - by lakhlaniprashant.blogspot.com
    Hi all, I'm trying to merge the VintaSoft Barcode SDK with my data-access DLL, and it's not working after ILMerge. Any ideas are welcome. Here is the error:

        [IndexOutOfRangeException: Index was outside the bounds of the array.]
           2.+.©(Byte[] param0) in :0
           2.+..cctor() in :0
        [TypeInitializationException: The type initializer for '2.+' threw an exception.]
           2.+.¥S() in :0
           Vintasoft.Barcode.WriterSettings..cctor() in :0
        [TypeInitializationException: The type initializer for 'Vintasoft.Barcode.WriterSettings' threw an exception.]
           Vintasoft.Barcode.WriterSettings..ctor() in :0
           Vintasoft.Barcode.BarcodeWriter..ctor() in :0
           _Default.buttonGenerateBarcode_Click(Object sender, EventArgs e) in E:\ILMergeSample\WebBarcodeWriterDemo\QRBarcode.aspx.vb:27
           System.EventHandler.Invoke(Object sender, EventArgs e) +0
           System.Web.UI.WebControls.Button.OnClick(EventArgs e) +111
           System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +110
           System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) +10
           System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +13
           System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) +36
           System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1565

    Thanks in advance

  • Integrate ClickOnce update in a setup

    - by Erick
    My team recently decided to deploy with ClickOnce a piece of software we have been developing for some time. Previously it was deployed with a merge module in a setup project. We had in mind to deploy the ClickOnce setup/updater as part of the web server setup. The problem is not exactly integrating it, but making sure to limit the manual work needed to integrate it. Normally, I guess it would be necessary to "add - file" and select the folder containing setup.exe + publish.htm + the application files + the .application file, but I fear that each time we publish a new version to the hard drive we would have to update the setup project as well. Does anyone have some insight to help with that? (Especially how to avoid having to add the program_[version] folder inside the application files that is created each time a new publish is done.)

  • PHP search and replace

    - by Dave
    I am trying to create a database field merge into a document (RTF) using PHP, i.e. if I have a document that starts:

        Dear Sir,
        Customer Name: [customer_name], Date of order: [order_date]

    then after retrieving the appropriate database record I can use a simple search and replace to insert the database field into the right place. So far so good. I would however like to have a little more control over the data before it is replaced. For example, I may wish to Title Case it, or convert a delimited string into a list with carriage returns. I would therefore like to be able to add extra formatting commands to the field to be replaced, e.g.:

        Dear Sir,
        Customer Name: [customer_name, TC], Date of order: [order_date, Y/M/D]

    There may be more than one formatting command per field. Is there a way that I can now search for these strings? The format of the strings is not set in stone, so if I have to change the format then I can. Any suggestions appreciated.
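
    The placeholder grammar described above ("[field]" or "[field, CMD, CMD...]") can be captured with a single regular expression that has one group for the field name and one for the trailing command list. In PHP the natural tool for this is preg_replace_callback; purely to illustrate the pattern-and-dispatch idea, here is a sketch in Java (the record values and the TC command handling are hypothetical examples):

        import java.util.*;
        import java.util.regex.*;

        public class MergeFields {
            // group(1) = field name, group(2) = ", CMD, CMD..." (possibly empty)
            private static final Pattern FIELD =
                    Pattern.compile("\\[\\s*(\\w+)((?:\\s*,\\s*[^,\\]]+)*)\\s*\\]");

            static String merge(String template, Map<String, String> record) {
                Matcher m = FIELD.matcher(template);
                StringBuffer out = new StringBuffer();
                while (m.find()) {
                    String value = record.getOrDefault(m.group(1), "");
                    for (String cmd : m.group(2).split("\\s*,\\s*")) {   // apply commands in order
                        if (cmd.equalsIgnoreCase("TC")) value = titleCase(value);
                        // other commands (date formats, list expansion, ...) would go here
                    }
                    m.appendReplacement(out, Matcher.quoteReplacement(value));
                }
                m.appendTail(out);
                return out.toString();
            }

            private static String titleCase(String s) {
                StringBuilder b = new StringBuilder();
                for (String w : s.toLowerCase().split("\\s+")) {
                    if (!w.isEmpty()) b.append(Character.toUpperCase(w.charAt(0))).append(w.substring(1)).append(' ');
                }
                return b.toString().trim();
            }
        }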

  • Error(2,7): PLS-00428: an INTO clause is expected in this SELECT statement

    - by omgzor
    I'm trying to create this trigger and getting the following compiler errors:

        create or replace TRIGGER RESTAR_PLAZAS
        AFTER INSERT ON PLAN_VUELO
        BEGIN
          SELECT F.NRO_VUELO,
                 M.CAPACIDAD,
                 M.CAPACIDAD - COALESCE((SELECT count(*)
                                           FROM PLAN_VUELO P
                                          WHERE P.NRO_VUELO = F.NRO_VUELO), 0) as PLAZAS_DISPONIBLES
            FROM VUELO F
           INNER JOIN MODELO M ON M.ID = F.CODIGO_AVION;
        END RESTAR_PLAZAS;

        Error(2,7): PL/SQL: SQL Statement ignored
        Error(8,5): PL/SQL: ORA-00933: SQL command not properly ended
        Error(8,27): PLS-00103: Encountered the symbol "end-of-file" when expecting one of the following: begin case declare end exception exit for goto if loop mod null pragma raise return select update while with <an identifier> <a double-quoted delimited-identifier> <a bind variable> << close current delete fetch lock insert open rollback savepoint set sql execute commit forall merge pipe
        Error(2,1): PLS-00428: an INTO clause is expected in this SELECT statement

    What's wrong with this trigger?

  • Sending url params and POST body to a MVC 2 Controller

    - by Luiggi
    Hi, I'm having some issues trying to make an HTTP PUT (or POST) using WebClient against an MVC 2 controller. The exception is:

        The parameters dictionary contains a null entry for parameter 'total' of non-nullable type 'System.Int32' for method 'System.Web.Mvc.ActionResult Company(System.Guid, Int32, Int32, System.String)'

    The controller action is:

        [HttpPut]
        public ActionResult Company(Guid uid, int id, int total, string url)

    The route is:

        routes.MapRoute(
            "CompanySet",
            "job/active/{uid}/company/{id}",
            new { controller = "Job", action = "Company" }
        );

    As you may see, what I want is to send the 'uid' and 'id' parameters via the URL, but the 'total' and 'url' parameters as part of the PUT or POST body. I've also tried to merge the latter parameters into a class (i.e. CompanySetMessage); doing that no longer raises an exception, but I don't receive the values on the server side. Any ideas? Thank you!

  • How do I track a branch of another repository on the same machine?

    - by Daniel Stutzbach
    I have two private repositories on one machine. Let's call them repo-A and repo-B, which are the directories ~/repo-A and ~/repo-B, respectively. repo-A has two relevant branches: master and live. I'd like to set up repo-B to track repo-A's live branch, so that git pull will pull any updates from repo-A's live branch into repo-B's master branch. Right now, I have the following in repo-B's .git/config:

        [remote "origin"]
            url = /home/stutzbach/repo-A/
            fetch = +refs/heads/live:refs/remotes/origin/live
        [branch "master"]
            remote = origin
            merge = refs/heads/master

    However, when I run git pull, it seems to pull from repo-A's master branch. Obviously, I don't have it set up right. What's the right way?

  • SVN Feature Branch Method

    - by Seth
    I am getting an SVN server set up and will be using the feature branch method. I plan on having 1+ branches making up a release tag. How do I merge (?) multiple branches into the release tag, while still maintaining diffs and such? I've given an example of our workflow below.

      1. Multiple devs pull to local
      2. Create feature branch
      3. Commit to branch
      4. Use branch to build QA
      5. (Here is where my question starts) I need to have all the branches for the next build put into a build tag to be used to build Production

  • Diff multiple files in perforce across a revision range

    - by Thanatos
    I'd like to diff a bunch of files across several revisions - like, I'd like to see a.c, b.c, and c.c from changelist X to changelist Y. p4 diff2 a.c@X a.c@Y (where X and Y are changelist numbers) seems to work, but only sometimes. Specifically, if a.c is non-existent at X, I don't get a diff. I'd like to be able to get the diff anyway (even though it'll be the whole file with only adds). To get the bigger picture: I have several files, across several commits, and I'd like to merge the diffs of these files in these commits, to basically say "this is a diff of what changed in this set of files during this set of changelists".

  • Branch structure for a web site

    - by steve_d
    I was recently reading the TFS Branching Guide and it suggests a branch for every release. For a web site, there is only one "version" released at a time. In that case is it appropriate to have a single "Production" branch? Then, during the process of preparing for a release, you merge changes from the Main branch into Production. (As opposed to the suggestion to branch each release.) If you need to do a hotfix, do it in the Production branch, then reverse integrate into Main. Doing it this way allows you to keep configuration files for Production intact in the Production branch.

  • Current status on Insight and gdb7.x?

    - by Johan
    Hi, does anybody know the current status of Insight, and whether they are working on integrating gdb 7.x so there may be a new updated version? There doesn't seem to be much activity on the home page and the mailing lists. And does someone know if somebody over in the gdb camp is trying to merge the two projects together? There is at least an interesting todo on the Insight page ( http://sourceware.org/insight/faq.php#q-4.2 ): "2. Get Insight integrated and accepted into the GDB mainline." Thanks

  • ClearCase UCM Mainline Configuration Management Pattern Question

    - by cogmios
    A configuration management pattern question (using Rational ClearCase UCM). When I use the mainline approach, I create new releases by:

      - create release 1 from the mainline
      - at a certain moment, baseline release 1 and deliver release 1 to the mainline
      - create release 2 from the mainline
      - at a certain moment, baseline release 2 and deliver release 2 to the mainline
      - create release 3 from the mainline
      - etc...

    This works very nicely because the pathname is /main/release 3/latest instead of /main/release 1/release 2/release 3/latest, etc. However... when release 1 contains new elements that have to be propagated to later releases, I cannot use the mainline, since the mainline is already on e.g. release 4. The only thing I can do is deliver/merge from release 1 directly to release 2. The bad thing is that the pathname then becomes /main/release 1/release 2/latest for those files (and possibly later releases). That is, I think, not in line with the mainline approach. What am I doing wrong?

  • What causes this error: please run "exec sp_register_custom_scripting 'CUSTOM_SCRIPT', your_script"?

    - by larryr
    Configuration: SQL 2005 (Server A) replicates to SQL 2008 (Server B), which replicates to SQL 2008 (Server C). I recently added a column (on Server A) to a replicated table via script, and the DDL change replicated to Server B without a problem. When the DDL change replicated to Server C, I received the error below.

        DDL replication failed to refresh custom procedures, please run "exec sp_register_custom_scripting 'CUSTOM_SCRIPT', your_script, 'EDI from xx', 'table_name_here'" and try again (Source: MSSQLServer, Error number: 21814)

    These subscriptions (on Server B to Server C) were created via the script below.

        exec sp_addsubscription
            @publication = N'EDI to XLOCX',
            @subscriber = N'RXLOCXS-SQLA',
            @destination_db = N'EDI',
            @subscription_type = N'Push',
            @sync_type = N'replication support only',
            @article = N'all',
            @update_mode = N'read only',
            @subscriber_type = 0

        exec sp_addpushsubscription_agent
            @publication = N'EDI to XLOCX (Merge)',
            @subscriber = N'RXLOCXS-SQLA',
            @subscriber_db = N'EDI',
            @job_login = N'ROUSES.COM\RXLOCXSQLREPL',
            @job_password = N'XPASSWORDX',
            @subscriber_security_mode = 1,
            @frequency_type = 4,
            @frequency_interval = 1,
            @frequency_relative_interval = 1,
            @frequency_recurrence_factor = 1,
            @frequency_subday = 8,
            @frequency_subday_interval = 1,
            @active_start_time_of_day = 3300,
            @active_end_time_of_day = 235959,
            @active_start_date = 20070923,
            @active_end_date = 99991231,
            @enabled_for_syncmgr = N'False',
            @dts_package_location = N'Distributor'
        GO

    So the million dollar question is: why do I get the error 'exec sp_register_custom_scripting 'CUSTOM_SCRIPT', your_script' when I add a column to a table in the EDI to XLOCX publication? AHIA, LarryR...

  • How to configure Hibernate not to update @Version on each access to an entity

    - by radai
    I have a simple query that returns an entity, and when I look at the Hibernate SQL output I see that when I execute this query Hibernate updates the @Version field (on each consecutive read the @Version field is updated). I don't modify anything in the entity I fetch, and I don't pass it as an argument to either persist or merge. This effectively means every read I make turns into a read + write. I've tried setting the lock mode to both NONE (JPA 2) and READ (JPA 1), to no avail. Is there any way to achieve this? If so, is there any way to set this as the default behavior in persistence.xml? I'm using JPA 2 over Hibernate 3.6.
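
    For reference, a plain read with no lock and no modification should not bump @Version, so if the UPDATE keeps appearing the usual suspects are a lock mode that forces an increment or something dirtying the entity between load and flush (for example a mapped value that never round-trips identically). A minimal sketch of reads with an explicit LockModeType.NONE, where MyEntity and its name field are hypothetical:

        import javax.persistence.*;

        @Entity
        class MyEntity {
            @Id private Long id;
            @Version private Integer version;
            private String name;
        }

        class ReadOnlyLookup {
            // find() with an explicit "no lock"
            MyEntity load(EntityManager em, long id) {
                return em.find(MyEntity.class, id, LockModeType.NONE);
            }

            // the same thing for a JPQL query
            MyEntity byName(EntityManager em, String name) {
                return em.createQuery(
                            "select e from MyEntity e where e.name = :name", MyEntity.class)
                         .setParameter("name", name)
                         .setLockMode(LockModeType.NONE)
                         .getSingleResult();
            }
        }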

  • Custom control packaging

    - by CSharpened
    Quick question: You are building a setup for your application. The application contains a custom control developed by you, which will be shared across multiple applications. How should you package the custom control?

      1. Package the control in a Merge Module (.msm) and add the .msm file to a Windows Installer project.
      2. Package the control into a cabinet project (.cab) and add the .cab file to a Windows Installer project.
      3. Create a separate directory for the control and then package it in a Windows Installer project along with the rest of the project files.
      4. Package the control as a Web setup project and create a link to that project from the Windows Installer project.

    Any ideas?

  • Visual C++ 2008: Finding the cause of slow link times

    - by ckarras
    I have a legacy C++ project that takes an annoyingly long time to build (several minutes, even for small incremental changes), and I found most of the time was spent linking. The project is already using precompiled headers and incremental compilation. I have enabled the "/time" command line parameter in the hope I would get more details about what is slowing the linker, and got the following output:

        1>Linking...
        1>   MD Merge: Total time = 59.938s
        1>   Generate Transitions: Total time = 0.500s
        1>   MD Finalize: Total time = 7.328s
        1>Pass 1: Interval #1, time = 71.718s
        1>Pass 2: Interval #2, time = 8.969s
        1>Final: Total time = 80.687s
        1>Final: Total time = 80.953s

    Is there a way to get more details about each of these steps? For example, I would like to find out if they are spending most time linking to a specific .lib or .obj file. Also, is there any documentation that explains what each of these steps does?

  • Custom Trigger Scripts for Bot (Xcode 5 CI)

    - by Mishal Shah
    I am working on setting up CI for my iOS application and I am facing some issues.

      1. Where is a good place to find documentation on Bots? I have seen the Xcode help but can't find any good examples, and I also watched the CI video from the 2013 conference.
      2. How do I create a custom trigger script, so that every time a developer commits their code it will automatically trigger the bot?
      3. How do I merge the code to master only if the tests successfully pass on the bot?

    Here is where I found info about trigger scripts: https://help.apple.com/xcode/mac/1.0/#apdE6540C63-ADB5-4B07-89B7-6223EC40B59C "Example values are shown with each setting. Schedule: Choose to run manually, periodically, on new commits, or on trigger scripts." Thank you!

  • Cloud sync between iPad/iPhone app

    - by Macatomy
    I have a Core Data app that will end up being an iPhone/iPad universal application. I would like to implement cloud syncing so that an iPhone and an iPad both running the app can share data. I'm planning to use the recently released Dropbox API. Does anyone have any thoughts on the best way to go about doing this? The Dropbox API allows apps to store files on the cloud. What I was thinking was to originally store the database (SQLite) for the app on the cloud and then download that database, but I then realized that using that method would make it painfully difficult to merge changes (rather than replacing the whole database). Any thoughts are appreciated. Thanks.
