Search Results

Search found 18409 results on 737 pages for 'large projects'.

  • How can I specify a single .config file for multiple EXE projects in .NET

    - by Russ
    I have a project that I am breaking up into multiple .exe projects. I still plan on publishing them, using ClickOnce, to the same location at the same time, and I would like them to use the same config file. I have added the app.config to each project using the "Add as Link" option in Visual Studio, which is great for debugging, but in production, when I compile each exe project, the app.config is not copied into the "master project"'s bin folder. Example: master.exe ships with master.exe.config; master.exe may launch order.exe or returns.exe based on user settings; master, order, and returns will all reside in the same folder and should share a single config file.
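
    A minimal sketch of one workaround (not from the asker; the helper name is invented): each satellite exe opens the shared master.exe.config explicitly through ConfigurationManager.OpenMappedExeConfiguration instead of relying on its own <name>.exe.config.

        // order.exe and returns.exe read settings from master.exe.config
        // sitting in the same folder; "SharedConfig" is an invented name.
        using System;
        using System.Configuration;   // reference System.Configuration.dll
        using System.IO;

        static class SharedConfig
        {
            public static Configuration Open()
            {
                ExeConfigurationFileMap map = new ExeConfigurationFileMap();
                map.ExeConfigFilename = Path.Combine(
                    AppDomain.CurrentDomain.BaseDirectory, "master.exe.config");
                return ConfigurationManager.OpenMappedExeConfiguration(
                    map, ConfigurationUserLevel.None);
            }
        }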

  • Enable export to XML via HTTP on a large number of models with child relations

    - by Vasil
    I have a large number of models (120+) and I would like to let users of my application export all of the data from them in XML format. I looked at django-piston, but I would like to do this with a minimum of code. Basically I'd like to have something like this: GET /export/applabel/ModelName/ would stream all instances of ModelName in applabel, together with its tree of related objects. I'd like to do this without writing code for each model. What would be the best way to do this?
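
    A rough sketch of the no-code-per-model part, using Django's built-in XML serializer (the view and URL wiring are assumptions; on modern Django, 1.7+, the model is looked up via django.apps). One caveat: the stock serializer emits related objects as primary keys, not as a nested tree, so the "tree of related objects" would still need custom handling.

        # hypothetical URL pattern: /export/<applabel>/<model_name>/
        from django.apps import apps
        from django.core import serializers
        from django.http import HttpResponse

        def export(request, applabel, model_name):
            model = apps.get_model(applabel, model_name)  # LookupError if unknown
            xml = serializers.serialize("xml", model.objects.all())
            return HttpResponse(xml, content_type="application/xml")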

  • How to transfer large files from desktop to server (.NET)

    - by rahulchandran
    I am writing a .NET 2.0 based desktop client that will send large files (well, largish: under 2 GB) to a server. I need to develop the server as well, and it can be on any technology. The transfer should be secure, so an underlying SSL stream is needed. What are my options, and are there any obvious caveats I should be aware of? To my mind the simplest solution is to open a TCP/IP connection over SSL to the server, send n packets each of size M bytes, have the server append the chunks to the file, and finally send an EOF packet as well. Is this horrible? Will the performance suck on the server with all these disk writes? What other clever options are there? I am limited to .NET 2.0 on the client; if I did move to a WCF client, would it buy me something magical and cool for this scenario? Thanks
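
    As a sanity check on the "TCP over SSL" idea, here is a minimal client-side sketch using .NET 2.0's SslStream (host, port, and chunk size are placeholders; length framing and the EOF marker are left out). Sequential appends on the server are cheap for the OS to buffer, so the disk writes are typically not the bottleneck compared to the encryption itself.

        // streams a file to the server in 64 KB chunks over an SSL channel
        using System.IO;
        using System.Net.Security;
        using System.Net.Sockets;

        static void SendFile(string host, int port, string path)
        {
            using (TcpClient client = new TcpClient(host, port))
            using (SslStream ssl = new SslStream(client.GetStream()))
            {
                ssl.AuthenticateAsClient(host);   // validates the server certificate
                byte[] buffer = new byte[64 * 1024];
                using (FileStream fs = File.OpenRead(path))
                {
                    int read;
                    while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
                        ssl.Write(buffer, 0, read);   // server appends each chunk
                }
            }
        }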

  • Paging a UIScrollView with a large PDF

    - by Fousa
    I'm trying to create a simple UIScrollView with paging, and I want to be able to scroll through a large PDF document, but this is giving me some problems. I tried the following options: converting all the PDF pages to UIImages at startup, which works but makes startup very slow; and manually drawing the PDF page in drawRect, but yet again this was slow. I'd also prefer not to load everything at startup but to do it during use. Has anyone done this recently? I can't seem to find a nice example project. Thanks! Jelle

  • Setting ownership/permissions for symfony2 and other web projects

    - by Handonam
    I've been very confused as to how to set permissions and users/groups for my sites; it is one of my weakest suits. My current problem is that if I view a particular page, it won't have permission to write to the cache or logs, at which point I'll set the ownership to Apache. Then, in other cases, if I try to run internal scripts, I can't write to those cache/log files because I set them for Apache. Currently, my Symfony2 files are all owned by me as part of the staff group (Handonam:Staff). I've seen various people creating groups such as www-data or apache, and adding users such as themselves (e.g. Handonam) or www to those groups. So my question is: for Symfony2 and other web projects, what's generally the best user/group setup so that both Apache and I can interact with these files, while maintaining decent security?
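
    One common arrangement, as a sketch only (the Apache group name varies by platform: www-data on Debian/Ubuntu, apache on Red Hat, _www on macOS; in Symfony2 the writable directories are app/cache and app/logs):

        # you stay the owner; Apache's group gets write access to cache/logs only
        sudo chown -R handonam:www-data app/cache app/logs
        sudo chmod -R 775 app/cache app/logs    # owner+group rwx, others r-x
        # the rest of the project stays yours, merely readable by the server
        sudo chmod -R o+rX .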

  • Newbie programmer looking for a fun, small project (pref. C++/Python)

    - by Francisco P.
    Hello everyone, I have some experience in Scheme and C++ (read: a semester of each) and I know the very basics of Python (I used it for physics simulations with the Visual Python module). Can you recommend some fun and small (i.e. not too time-consuming) projects in either Python or C++? I have no real preferences, just that it's fun :P Thanks for your time! PS: I've tried Project Euler and Python Challenge. Euler is good, but more about math than coding, and Python Challenge just didn't work for me.

  • Namespacing large JavaScript like jQuery

    - by frenchie
    I have a very large JavaScript file: it's over 9,000 lines. The code looks like this: var GlobalVar1 = ""; var GlobalVar2 = null; function A() {...} function B(SomeParameter) {...} I'm using the Google Closure Compiler, and the global variables and functions get renamed a, b, c, ..., so there's a good chance of a collision later with some outside code. What I want to do is have my code organized like the jQuery library, where everything is accessible through $. Is there a way to namespace my code so that everything is behind a # character, for example? I'd like to call my code like this: #.GlobalVar, #.functionA(SomeParameter). How can I do this? Thanks.
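
    A # namespace specifically isn't possible, since # is not a legal JavaScript identifier, but the usual module-pattern sketch below gives the jQuery-style effect under any valid name (NS here is arbitrary):

        // one global symbol; everything else hides in the closure, so the
        // compiler can rename internals freely without external collisions
        var NS = (function () {
            var globalVar1 = "";                     // private state
            function functionA(someParameter) { /* ... */ }
            return {
                GlobalVar: globalVar1,
                functionA: functionA
            };
        }());
        // callers: NS.functionA(x); NS.GlobalVar;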

  • Redirecting a large number of URLs with htaccess or php header

    - by Peter
    I have undergone a major website overhaul and now have 5,000+ incoming links from search engines, external sites, bookmark services, etc. that lead to dead pages or 404 errors. A lot of the pages have corresponding "permalinks" or a known replacement hierarchy/URL structure. I've started to list the main redirects with .htaccess, or physical files containing simply a header-location redirect, which is clearly not sustainable! What would be the best method to map all of the old link addresses to their corresponding new addresses: .htaccess, PHP headers, MySQL, a sitemap file? Or is it better to leave the links broken and wait for search engines etc. to re-index my site? Are there any implications of having a large number of redirecting files for this temporary period until links are reset?
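
    One scalable option, sketched below, is Apache's RewriteMap: the 5,000+ pairs live in a single plain-text lookup file and one rule issues the 301s. Note that RewriteMap must be declared in the server or virtual-host config, not in .htaccess, and the file path here is a placeholder.

        # /etc/apache2/legacy-map.txt holds one "old-path new-path" pair per line,
        # e.g.:  old-page.html  /new/permalink/
        RewriteEngine On
        RewriteMap legacy txt:/etc/apache2/legacy-map.txt
        RewriteCond ${legacy:$1} !=""
        RewriteRule ^/?(.+)$ ${legacy:$1} [R=301,L]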

  • Calculating very large exponents in Python

    - by miraclesoul
    Dear all, I am currently simulating my cryptographic scheme to test it. I have developed the code, but I am stuck at one point. I am trying to compute g**x, where g and x are both 256-bit numbers. Python hangs at this point. I have read a lot of forums and threads, but only come to the conclusion that Python hangs because it's hard for it to process such large numbers. Any idea how it can be done? Any two-line piece of code, any library, anything. (Also, please note that I am a new Python user and this is the first time I have programmed in it, so no complex methods... hope you understand.)
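
    If, as in most cryptographic schemes, the result is only needed modulo some number, the built-in three-argument pow() is the standard two-line answer: it reduces at every step, so the astronomically large intermediate g**x (which no computer could store in full) is never built. The values below are stand-ins.

        g = 2**256 - 189        # example 256-bit base
        x = 2**256 - 357        # example 256-bit exponent
        p = 2**255 - 19         # example modulus; use whatever your scheme specifies
        result = pow(g, x, p)   # fast square-and-multiply, reducing mod p throughout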

  • Popular open source projects using Zend Framework

    - by Alexander
    Hello, I am trying to find an open source project based on Zend Framework: something well written and as popular as Wordpress or Drupal, to see the actual benefits of ZF as well as possibly use it as an example. The only 'showcase' I managed to find is http://framework.zend.com/wiki/pages/viewpage.action?pageId=14134, but this list looks confusing for the 'official' PHP framework. The same goes for the ZF statistics page (http://framework.zend.com/about/numbers): 10 million downloads against 400 actual projects, which is less than the 500 examples in the user guide... Also, Yahoo chose Symfony for their bookmarks service, not ZF... Am I missing something? Thank you!

  • Using ASP.NET 4.0 for new dev projects

    - by JBeckton
    I am currently in the early stages of developing a couple of web applications. I have not written any code yet, as I am still just gathering requirements and scoping things out. I want to target ASP.NET 4.0 Web Forms as the platform for these apps, but I want to make sure there are no glaring issues with this new version before I commit. I understand that if I were porting an existing app from 2.0 or 3.5 to 4.0 there might be issues, but I am starting from scratch on these projects and plan to write these apps to support the new features of 4.0. Should I wait for the first service pack to come out? It just seems like more work to start with 3.5 now, only to go back through and tweak things for 4.0 in just a few months, or even before I finish the app. Our servers are Win 2K3 with IIS 6 and MS SQL 2000. Should I expect any problems with VS 2010 and MS SQL 2000 in regard to LINQ to SQL and EF?

  • Please recommend CS project books

    - by kunjaan
    Programming Collective Intelligence is an awesome way to get your feet wet in machine learning. I am looking for similar books which have small but interesting programming projects. Do you have any recommendations? Edit: it need not be related to machine learning; it could be any programming project-based book. Thanks. Edit 2: Collective Intelligence in Action is one more book that looks at some interesting CS stuff. Do you have any similar recommendations?

  • Region or ItemsSource for large data set in ListBox

    - by Ryan
    I'm having trouble figuring out the best solution in the following situation. I'm using Prism 4.1, MEF, and .NET 4.0. I have a Project object that could have a large number (~1000) of Line objects. I'm deciding whether it is better to expose an ObservableCollection<LineViewModel> from my ProjectViewModel and manually create the Line view models there, OR to set the ListBox up as its own region and activate views that way. I'd still want my LineViewModel to have Prism's shared services (IEventAggregator, etc.) injected, but I don't know how to do that when I manually create the LineViewModel. Any suggestions or thoughts?
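
    A sketch of the first option (type names other than Prism's IEventAggregator are the asker's or invented): let MEF inject the shared services into ProjectViewModel once, then pass them along to each manually created LineViewModel.

        using System.Collections.ObjectModel;
        using System.ComponentModel.Composition;    // MEF
        using Microsoft.Practices.Prism.Events;     // IEventAggregator (Prism 4.x)

        public class ProjectViewModel
        {
            private readonly IEventAggregator _events;
            public ObservableCollection<LineViewModel> Lines { get; private set; }

            [ImportingConstructor]
            public ProjectViewModel(IEventAggregator events)
            {
                _events = events;
                Lines = new ObservableCollection<LineViewModel>();
            }

            public void Load(Project project)
            {
                foreach (Line line in project.Lines)              // ~1000 items
                    Lines.Add(new LineViewModel(line, _events));  // services passed through
            }
        }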

  • How to efficiently deal with a large amount of HTML5 canvas pixel data over websockets

    - by user730569
    Using imageData = context.getImageData(0, 0, width, height); JSON.stringify(imageData.data); I grab the pixel data, convert it to a string, and then send it over the wire via WebSockets. However, this string can be pretty large, depending on the size of the canvas object. I tried using the compression technique found here: JavaScript implementation of Gzip, but socket.io throws the error "Websocket message contains invalid character(s)". Is there an effective way to compress this data so that it can be sent over WebSockets?
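
    Before reaching for compression, it may be worth dropping JSON entirely: imageData.data is a Uint8ClampedArray whose underlying ArrayBuffer can be sent as a single binary WebSocket frame, removing the string inflation altogether. A sketch (the endpoint URL is invented; note the raw browser WebSocket API supports binary frames, while older socket.io versions did not):

        var imageData = context.getImageData(0, 0, width, height);
        var socket = new WebSocket("wss://example.com/pixels");   // placeholder URL
        socket.binaryType = "arraybuffer";
        socket.onopen = function () {
            socket.send(imageData.data.buffer);   // one binary frame, no stringify
        };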

  • Build Event Macros for Other Projects in the Solution

    - by Adam Driscoll
    Is it possible to reference other projects' properties via a macro within a build event? For example: "Tool1" outputs to directory ..\..\bin\Release, and "Component1" uses "Tool1" in its post-build event. To get to "Tool1", "Component1"'s project must do something like $(SolutionDir)bin\Release. This requires that Tool1 always output to ..\..\bin\Release; if this is changed, it breaks the other project. I know there is no indication of this within the macro list, but is there a way to reference another project? Maybe something like $(OtherProject.TargetDir)... I know WiX has a similar syntax, [$(var.OtherProject.TargetDir)], but I think that may be a different mechanism.

  • Large number of tables and Hibernate memory consumption

    - by Vedran
    I'm working on a large ERP project whose database model has about 2100 tables. With "only" 500 tables mapped with Hibernate, the application deployed on the web server takes about 3 GB of working memory. Is there any way to reduce Hibernate's metamodel memory footprint when using that many tables in one persistence unit? Or should I just give up on ORMs and go with plain old JDBC (or even jOOQ)? Right now I'm using Hibernate 4.1.8, Spring 3.1.3, JBoss AS 7.1, and working with an MSSQL database. Edit: JavaMelody memory histogram output, with 2000 generated test tables that are a bit smaller in scope than the original db model (hence 'only' 1.3 GB of spent memory).

  • Errors with large data sources

    - by The Sheek Geek
    I'm doing some benchmarking on large data sources and binding/exporting data for reporting. I started by filling a DataSet with 100,000 rows and then attempting to open a Crystal Report with the retrieved data. I noticed that the DataSet filled just fine (it took about 779 milliseconds); however, when attempting to export the data to the report, or even bind to a GridView, the application would fail with an OutOfMemoryException. Has anyone experienced this before, or does anyone have an idea of how to get around it? It is very possible that clients will run reports for years' worth of data, and 100,000 rows are not inconceivable. The application and the benchmark code are written in C# using Oracle and SQL Server databases. I still have some data sources to test, but would like to know how to get around this just in case I don't find a better solution.

  • Sample/good Rails projects to learn from

    - by learningrails
    I am just starting with Rails. I've read through the AWDR book and am currently working on a side project in Rails. I want to get an idea of what a good Rails project should look like in order to learn what the best practices are. Can you point me to some good Rails projects on GitHub that not only work well but are well written? Or am I better off reading Rails Best Practices? If so, any good ones?

  • How to retrieve large data from an Oracle database using VBScript

    - by allenzzzxd
    Hi guys, I'm working with VBScript to do some testing. I want to retrieve a large amount of data from an Oracle database, so I wrote code like this: sql = "Select * from CORE_DB where MC = '" & mstr & "' " Set myrs = db_execute_query(curConnection, sql) Then I count the rows in myrs; there are 248. Then I run a For loop to retrieve some fields of each row: For k = 0 To db_get_rows_count(myrs) But I found that the content of row k, when k > 133, was always equal to that of row k = 133, which causes an error. As I see it, there may be a size limit on myrs? Could anyone enlighten me about this? Thanks a lot in advance.
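
    It is hard to say without seeing the custom db_* helpers, but a plain ADO recordset with a client-side static cursor is one way to rule them out, since it gives an accurate RecordCount and stable access to every row; a sketch (the connection string is a placeholder):

        Set conn = CreateObject("ADODB.Connection")
        conn.Open "DSN=mydb"    ' placeholder connection string
        Set rs = CreateObject("ADODB.Recordset")
        rs.CursorLocation = 3   ' adUseClient
        ' cursor type 3 = adOpenStatic, lock type 1 = adLockReadOnly
        rs.Open "SELECT * FROM CORE_DB WHERE MC = '" & mstr & "'", conn, 3, 1
        Do While Not rs.EOF
            ' ... read rs.Fields("SOME_COLUMN").Value here ...
            rs.MoveNext
        Loop
        rs.Close
        conn.Close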

  • Performance considerations of a large hard-coded array in the .cs file

    - by terence
    I'm writing some code where performance is important. In one part of it, I have to compare a large set of pre-computed data against dynamic values. Currently, I'm storing that pre-computed data in a giant array in the .cs file: Data[] data = { /* my data set */ }; The data set is about 90 KB, or roughly 13k elements. I was wondering if there's any downside to doing this, as opposed to loading it in from an external file? I'm not entirely sure how C# works internally, so I just want to be aware of any performance issues I might encounter with this method.
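
    For comparison, a sketch of the external-data route that still ships everything inside the assembly: embed the data set as a resource and parse it once at startup. The resource name and line format are invented; the main cost of the giant C# initializer is assembly size and one-time startup work, since the compiled initializer constructs the 13k elements one by one.

        using System.IO;
        using System.Reflection;

        static string[] LoadDataSet()
        {
            // "MyApp.dataset.txt" is a hypothetical embedded resource
            // (Build Action = Embedded Resource), one record per line
            Assembly asm = Assembly.GetExecutingAssembly();
            using (Stream s = asm.GetManifestResourceStream("MyApp.dataset.txt"))
            using (StreamReader r = new StreamReader(s))
            {
                return r.ReadToEnd().Split('\n');
            }
        }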

  • iOS sample projects to learn from

    - by DerMike
    I am just starting iOS development. I have read some tutorials, watched stuff on iTunes U, and written some sample code myself. Now I want to take the next step: I want to learn about best practices for iOS development in Xcode. Are there any well written and well organized iOS projects that one could take a look at? (As I see it, though, iOS is not exactly the place for open source enthusiasts.) Thanks, Mike.

  • Using pow() for large numbers

    - by g4ur4v
    I am trying to solve a problem, part of which requires me to calculate (2^n) % 1000000007, where n <= 10^9. But the following code gives me the output "0" even for input like n = 99. Is there any way other than a loop which multiplies the output by 2 and takes the modulo every time? (That is not what I am looking for, as it will be very slow for large numbers.)

        #include <stdio.h>
        #include <math.h>
        #include <iostream>
        using namespace std;

        int main()
        {
            unsigned long long gaps, total;
            while (1)
            {
                cin >> gaps;
                total = (unsigned long long)powf(2, gaps) % 1000000007;
                cout << total << endl;
            }
        }
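
    The 0 happens because powf returns a single-precision float: for n = 99 the value 2^99 is far outside the range of unsigned long long, so the cast overflows (formally undefined behavior), and for results that are not exact powers of two the 24-bit mantissa would round anyway. The exact integer fix is square-and-multiply modular exponentiation, O(log n) steps; a sketch:

        #include <cstdint>

        // computes (base^exp) % mod exactly; safe because mod < 2^30,
        // so every intermediate product fits in 64 bits
        uint64_t pow_mod(uint64_t base, uint64_t exp, uint64_t mod)
        {
            uint64_t result = 1;
            base %= mod;
            while (exp > 0)
            {
                if (exp & 1)
                    result = result * base % mod;
                base = base * base % mod;
                exp >>= 1;
            }
            return result;
        }
        // usage: total = pow_mod(2, gaps, 1000000007ULL);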

  • Unable to return a large result set: ORA-22814

    - by rvenugopal
    Hello all, I am encountering an issue when I try to load a large result set using a range query in Oracle 10g. When I try a smaller range (1 to 100) it works, but when I try a larger range (1 to 1000) I get the error "ORA-22814: attribute or element value is larger than specified in type". I have a basic UDT (PostComments_Type), and I have tried using both a VARRAY and a TABLE type of PostComments_Type, but that hasn't made a difference. Your help is appreciated. Thanks, Venu

        PROCEDURE RangeLoad (
            floorId   IN NUMBER,
            ceilingId IN NUMBER,
            o_PostComments_LARGE_COLL_TYPE OUT PostComments_LARGE_COLL_TYPE -- tried as both VARRAY and TABLE type
        ) IS
        BEGIN
            SELECT PostComments_TYPE (
                PostComments_ID,
                ...
            )
            BULK COLLECT INTO o_PostComments_LARGE_COLL_TYPE -- VARRAY/TABLE type, so a bulk operation
            FROM PostComments
            WHERE PostComments_ID BETWEEN floorId AND ceilingId;
        END RangeLoad;

  • MySQL master-slave replication on a large database table (how to sync initial data)

    - by Brian Lovett
    We have a production server and a dev server. We have found that backups are nearly impossible on the production server because of the query volume we experience. So we're looking at setting up replication with our dev server as the slave. This is ideal because we can afford to lock the tables on that server, and additionally it will be nice to have up-to-date data for the developers. Now, the issues: the production server can't really be taken down or locked at this point, at least not easily. We have a high query volume and fairly large (30+ GB) InnoDB tables. Both servers are running all InnoDB and are both on MySQL 5.1. What can we do to sync the data initially to get replication started? I've tried a few options, but so far none have worked.
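
    For all-InnoDB tables on MySQL 5.1, the usual way to seed a slave without taking production down is mysqldump with a consistent snapshot (sketch; paths and credentials are placeholders). A brief global lock is still taken just to record the binlog coordinates, but the bulk of the dump runs lock-free under MVCC.

        mysqldump --single-transaction --master-data=2 --all-databases \
            -u root -p > /backup/initial_sync.sql
        # --single-transaction: consistent InnoDB read view, no long table locks
        # --master-data=2: writes the binlog file/position into the dump as a
        #   comment; use those coordinates in CHANGE MASTER TO, then START SLAVE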

  • Transforming large XML files

    - by Chad
    I was using this extension method to transform very large XML files with an XSLT. Unfortunately, I get an OutOfMemoryException on the source.ToString() line. I realize there must be a better way; I'm just not sure what that would be.

        public static XElement Transform(this XElement source, string xslPath, XsltArgumentList arguments)
        {
            var doc = new XmlDocument();
            doc.LoadXml(source.ToString());
            var xsl = new XslCompiledTransform();
            xsl.Load(xslPath);
            using (var swDocument = new StringWriter(System.Globalization.CultureInfo.InvariantCulture))
            {
                using (var xtw = new XmlTextWriter(swDocument))
                {
                    xsl.Transform(doc.CreateNavigator(), arguments, xtw);
                    xtw.Flush();
                    return XElement.Parse(swDocument.ToString());
                }
            }
        }

    Thoughts? Solutions? Etc.
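
    One sketch that removes the string round-trips (the XmlDocument copy, the StringWriter, and the final Parse): feed the XElement's own reader straight into XslCompiledTransform and write into an XDocument. The XSLT processor still materializes a tree internally, so truly huge inputs may need a different design, but this drops several full-size copies of the document.

        // needs System.Xml, System.Xml.Linq, System.Xml.Xsl
        public static XElement Transform(this XElement source, string xslPath,
                                         XsltArgumentList arguments)
        {
            var xsl = new XslCompiledTransform();
            xsl.Load(xslPath);
            var output = new XDocument();
            using (XmlReader reader = source.CreateReader())
            using (XmlWriter writer = output.CreateWriter())
            {
                xsl.Transform(reader, arguments, writer);   // no intermediate strings
            }
            return output.Root;
        }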
