Search Results

Search found 10726 results on 430 pages for 'big rich'.


  • question about counting sort

    - by davit-datuashvili
    Hi, I have written the following code, which prints elements in sorted order. The one big problem is that it uses two additional arrays. Here n is the maximum possible value in the array; for example, with n = 5 the array might be {3, 4, 4, 2, 1, 3, 5} (all elements are less than or equal to n). Here is my code:

        public class Occurance {
            public static final int n = 5;

            public static void main(String[] args) {
                int[] a = new int[] { 3, 4, 4, 2, 1, 3, 5 };
                // create arrays of size a.length * n
                int[] b = new int[a.length * n];
                int[] c = new int[b.length];
                for (int i = 0; i < b.length; i++) {
                    b[i] = 0;
                    c[i] = 0;
                }
                for (int i = 0; i < a.length; i++) {
                    if (b[a[i]] == 1) {
                        c[a[i]] = 1;   // second occurrence goes into c
                    } else {
                        b[a[i]] = 1;
                    }
                }
                for (int i = 0; i < b.length; i++) {
                    if (b[i] == 1) {
                        System.out.println(i);
                    }
                    if (c[i] == 1) {
                        System.out.println(i);
                    }
                }
            }
        }
        // prints: 1 2 3 3 4 4 5

    I have two questions: 1. What is the complexity (running time) of this algorithm? 2. How do I put the elements into another array in sorted order? Thanks.
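
    For reference: the two scans over b and c make this O(a.length * n) in time and space, and the two flag arrays can record at most two occurrences of each value. The textbook counting sort keeps one count array of size n + 1 and fills the sorted array directly; a hedged sketch:

        // A minimal counting-sort sketch, assuming all values lie in 1..n:
        static int[] countingSort(int[] a, int n) {
            int[] count = new int[n + 1];          // count[v] = occurrences of v
            for (int v : a) {
                count[v]++;
            }
            int[] sorted = new int[a.length];
            int idx = 0;
            for (int v = 1; v <= n; v++) {
                for (int k = 0; k < count[v]; k++) {
                    sorted[idx++] = v;             // emit v exactly count[v] times
                }
            }
            return sorted;                         // O(a.length + n) time
        }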

    Read the article

  • How to make an accessor for a Dictionary so that the returned Dictionary cannot be changed (C# / 2.0)

    - by matti
    I thought of the solution below because the collection is very small. But what if it were big?

        private Dictionary<string, OfTable> _folderData = new Dictionary<string, OfTable>();

        public Dictionary<string, OfTable> FolderData
        {
            get { return new Dictionary<string, OfTable>(_folderData); }
        }

    With List you can do:

        public class MyClass
        {
            private List<int> _items = new List<int>();

            public IList<int> Items
            {
                get { return _items.AsReadOnly(); }
            }
        }

    That would be nice! Thanks in advance, Cheers & BR - Matti

    Update: I now realize the objects in the collection live on the heap, so my solution does not prevent the caller from modifying them, because both Dictionaries contain references to the same objects. Does this apply to the List example above?

        class OfTable
        {
            private string _wTableName;
            private int _table;
            private List<int> _classes;
            private string _label;

            public OfTable()
            {
                _classes = new List<int>();
            }

            public int Table
            {
                get { return _table; }
                set { _table = value; }
            }

            public List<int> Classes
            {
                get { return _classes; }
                set { _classes = value; }
            }

            public string Label
            {
                get { return _label; }
                set { _label = value; }
            }
        }

    So how do I make this immutable?
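
    .NET 2.0 has no ReadOnlyDictionary, so a shallow copy can only protect the map, never the elements; the usual cure is to make the element type itself immutable, so shared references are harmless. A hedged sketch of OfTable reworked that way (the constructor signature is an assumption):

        class OfTable
        {
            private readonly int _table;
            private readonly List<int> _classes;
            private readonly string _label;

            public OfTable(int table, IEnumerable<int> classes, string label)
            {
                _table = table;
                _classes = new List<int>(classes);   // defensive copy on the way in
                _label = label;
            }

            public int Table { get { return _table; } }
            public string Label { get { return _label; } }

            // read-only view on the way out; mutation attempts throw
            public IList<int> Classes { get { return _classes.AsReadOnly(); } }
        }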

    Read the article

  • Incremental deploy from a shell script

    - by WishCow
    I have a project where I'm forced to use FTP to deploy files to the live server. I develop on Linux, so I hacked together a bash script that backs up the FTP server's contents, deletes all the files on the FTP server, and uploads the fresh files from the Mercurial repository (also taking care of user-uploaded files and folders, post-deploy changes, etc.). It works well, but the project is getting big enough that the deployment process takes too long. I'd like to modify the script to look up which files have changed and deploy only those (the backup is fine as it is). Since I'm using Mercurial as the VCS, my idea is to ask it for the files that changed between two revisions, iterate over them, upload each modified file, and delete each removed file. I can use hg log -vr rev1:rev2 and carve the changed files out of the output with grep/sed/etc. Two problems: I have heard the horror stories about parsing the output of ls, and my guess is the same applies here; if I try to parse the output of hg log, the variables will undergo word-splitting and all kinds of transformations. Also, hg log doesn't tell me whether a file was modified, added, or deleted, and differentiating between modified and deleted files is the minimum I need. So, what would be the correct way to do this? I'm using yafc as the FTP client, in case that matters, but I'm willing to switch.
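
    For both problems, hg status (rather than hg log) reports exactly the per-file state between two revisions. A hedged sketch (untested); `read -r code file` keeps spaces inside names, but `hg status -0` with null separators is the bulletproof variant for hostile filenames:

        # hg status prints one line per change, prefixed with
        # M (modified), A (added) or R (removed).
        hg status --rev "$REV1" --rev "$REV2" | while read -r code file; do
            case "$code" in
                M|A) echo "upload $file" ;;   # put the fresh copy on the FTP server
                R)   echo "delete $file" ;;   # remove it from the FTP server
            esac
        done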

    Read the article

  • Performing an SVD on tweets: memory problem

    - by plotti
    I have generated a huge CSV file as output from my POS tagging and stemming. It looks like this:

                   word1  word2  word3  ...  word14400
        person1        1      2      0    1
        person2        0      0      1    0
        ...
        person650

    It contains the word counts for each person, giving me a characteristic vector per person. I want to run an SVD on this beast, but it seems the matrix is too big to be held in memory for the operation. My question: should I reduce the column count by removing words whose column sum is, for example, 1, meaning they have been used only once? Do I bias the data too much with this approach? I tried the RapidMiner approach of loading the CSV into a database and then reading it in sequentially in batches for processing, as RapidMiner proposes. But MySQL can't store that many columns in a table, and if I transpose the data and then re-transpose it on import, it takes ages. So in general I am asking for advice on how to perform an SVD on such a corpus.
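
    On the memory side, one hedged option (assuming Python/SciPy is available) is to keep the matrix sparse, since most counts are zero, and compute a truncated SVD; the file name and layout below are assumptions:

        import csv
        from scipy.sparse import lil_matrix
        from scipy.sparse.linalg import svds

        X = lil_matrix((650, 14400))
        with open('counts.csv') as f:               # hypothetical file name/layout
            for i, row in enumerate(csv.reader(f)):
                for j, val in enumerate(row[1:]):   # row[0] is the person label
                    if val != '0':
                        X[i, j] = float(val)        # store only nonzero counts

        U, s, Vt = svds(X.tocsr(), k=100)           # top 100 singular triplets,
                                                    # computed without densifying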

    Read the article

  • Find the number of congruent triangles?

    - by avd
    Say I have a square from (0,0) to (z,z), and within it a triangle whose vertices all have integer coordinates. Find the number of triangles within this square that are congruent to this triangle and have integer coordinates. My algorithm is as follows: 1) Find the minimum bounding rectangle (MBR) of the given triangle. 2) Find the number of congruent triangles y within that MBR, obtained by reflecting and rotating the given triangle; y can be 2, 4, or 8. 3) Find how many such MBRs can be drawn within the given big square, say x (this is similar to counting the squares on a chessboard). 4) x*y is the required answer. Am I counting some triangles more than once, or missing some, with this algorithm? It's a problem on an online judge, and it marks my answer wrong. I have thought about it a lot, but I am not able to figure it out.

    Read the article

  • Export large amount of data from Oracle 10G to SQL Server 2005

    - by uniball
    Dear all, I need to export 100 million rows (average row length ~100 bytes) from an Oracle 10g database table into SQL Server 2005 (over a WAN/VLAN with 6 Mbit/s capacity) on a regular basis. So far, these are the options I have tried, with a quick summary of each. Has anyone tried this before? Are there other, better options? Which option would be best in terms of performance and reliability? The times were estimated by testing on smaller amounts of data and extrapolating. 1) Using the data import wizard on the SQL Server, or SSIS packages, to import the data: it would take around 150 hours to complete the task. 2) Using an Oracle batch job to spool the data into a comma-delimited flat file, then using an SSIS package to FTP the file to the SQL Server and load it directly from the flat file. The issue here is the size of the flat file, which is expected to run into gigabytes. 3) Although this option is drastically different, I am even considering using a linked server to query the Oracle data directly at run time, to avoid bringing the data over at all. Performance is a big problem, and I have limited control over the Oracle database in terms of creating table indexes. Regards, Uniball
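
    For what it's worth, if option 2 wins, the load step on the SQL Server side does not have to go through the SSIS row pipeline, and compressing the flat file before the FTP hop would help on a 6 Mbit/s link. A hedged sketch of a plain bulk load (path, table, and options are assumptions):

        -- BULK INSERT batches the commit and can be minimally logged,
        -- which matters at 100M rows:
        BULK INSERT dbo.ImportedRows
        FROM 'D:\staging\oracle_export.csv'
        WITH (
            FIELDTERMINATOR = ',',
            ROWTERMINATOR   = '\n',
            TABLOCK,
            BATCHSIZE       = 100000
        );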

    Read the article

  • SIMPLE PHP MVC Framework [closed]

    - by Allen
    I need a simple and basic MVC example to get me started; I don't want to use any of the available packaged frameworks. I need a simple example of a PHP MVC framework that would allow, at most, the creation of a basic multi-page site. I'm asking for a simple example because I learn best from simple, real-world examples. Big popular frameworks (such as CodeIgniter) are too much for me to even try to understand, and the other "simple" examples I have found are not well explained or seem a little sketchy in general. I should add that most examples of simple MVC frameworks I've seen use mod_rewrite (for URL routing) or some other Apache-only method, while I run PHP on IIS. I need to understand a basic MVC framework so that I can develop my own, one that lets me easily extend functionality with classes. I'm at the point where I understand basic design patterns and MVC pretty well in theory, but when it comes to actually building a real-world, simple, well-designed MVC framework in PHP, I'm stuck. Edit: I just want to note that I'm looking for a simple example that an experienced programmer could whip up in under an hour. I mean simple as in bare-bones simple. I don't want a huge framework; I'm trying to roll my own, and I need a decent SIMPLE example to get me going.
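
    As a starting point, the whole front-controller idea fits in a dozen lines; a bare-bones sketch (all file and class names hypothetical) that routes via the query string and therefore works on IIS without mod_rewrite:

        <?php
        // index.php: call pages as index.php?route=home/index
        $route      = isset($_GET['route']) ? $_GET['route'] : 'home/index';
        $parts      = explode('/', $route);
        $controller = $parts[0];
        $action     = isset($parts[1]) ? $parts[1] : 'index';

        require 'controllers/' . basename($controller) . '.php'; // e.g. controllers/home.php
        $class = ucfirst($controller) . 'Controller';             // e.g. HomeController
        $page  = new $class();
        $page->$action();   // the action loads a model and includes a view template

    Controllers are then plain classes whose actions pull data from a model and include a view template; extending the site means adding one controller file per section.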

    Read the article

  • Is there a table of OpenGL extensions, versions, and hardware support somewhere?

    - by Thomas
    I'm looking for a resource that can help me decide what OpenGL version my game needs at minimum, and which features to support through extensions. Ideally, a table of the following format:

                         1.0   1.1   1.2   1.2.1   1.3   ...
        multitexture      -    ARB   ARB   core    core
        texture_float     -    EXT   EXT   ARB     ARB
        ...

    (I'm not sure about the values I put in, but you get the idea.) The extension specs themselves, at opengl.org, list the minimum OpenGL version they need, so that part is easy. However, many extensions were accepted into the core standard in subsequent OpenGL versions, and it is very hard to find out when that happened. The only way I could find is to compare the full OpenGL standards document for each version. On a related note, I would also very much like to know which extensions and features are supported by which hardware, to help me decide which features I can safely use in my game and which ones I need to make optional. For example, a big honkin' table like this:

                       MAX_TEXTURE_IMAGE_UNITS   MAX_VERTEX_TEXTURE_IMAGE_UNITS   ...
        GeForce 6xxx             8                              4
        GeForce 7xxx            16                              8
        ATi x300                 8                              4
        ...

    (Again, I'm making the values up.) The table could list hardware limits from glGet but also support for particular extensions, and the limitations of such extension support (e.g. which floating-point texture formats are supported in hardware). Any pointers to these or similar resources would be hugely appreciated!
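
    Until such a table turns up, the usual fallback is to probe at runtime and degrade gracefully; a hedged C sketch, assuming a current GL context (GL_MAX_TEXTURE_IMAGE_UNITS needs GL 2.0 headers or glext.h):

        #include <string.h>
        #include <GL/gl.h>

        /* check the extension string for a named extension */
        int has_texture_float(void) {
            const char *ext = (const char *)glGetString(GL_EXTENSIONS);
            return ext != NULL && strstr(ext, "GL_ARB_texture_float") != NULL;
        }

        /* read a hardware limit instead of tabulating it per GPU */
        GLint max_texture_units(void) {
            GLint n = 0;
            glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &n);
            return n;
        }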

    Read the article

  • Wasteful Ajax Page Loading

    - by Matt Dawdy
    I've started a new job, and the portion of the project I'm working on has a very odd structure. Every page is a .NET .aspx page, and it loads just fine, but nothing is really done at load time. Everything is loaded from a jQuery document-ready handler. What is even more... interesting... is that the ready handler makes Ajax calls that drop entire .aspx pages into divs on the page, but first it strips out several parts of the returned page. This is the "magic" script the previous programmer ran on all the HTML returned from his Ajax calls:

        function CleanupResponseText(responseText, uniqueName) {
            responseText = responseText.replace("theForm.submit();",
                "SubmitSubForm(theForm, $(theForm).parent());");
            responseText = responseText.replace(new RegExp("theForm", "g"), uniqueName);
            responseText = responseText.replace(new RegExp("doPostBack", "g"),
                "doPostBack" + uniqueName);
            return responseText;
        }

    He then intercepts any kind of form postback and runs his own form submission function:

        function SubmitSubForm(form, container) {
            //ShowLoading(container);
            $(form).ajaxSubmit({
                url: $(form).attr("action"),
                success: function (responseText) {
                    $(container).html(CleanupResponseText(responseText, form.id));
                    $("form", container).css("margin-top", "0").css("padding-top", "0");
                    //HideLoading(container);
                }
            });
        }

    Am I way off base in thinking that this is less than optimal? I mean, how does a browser handle the html, head, and other tags that have nothing to do with what you are really trying to drop into that div? Also, he's returning things like asp:GridView controls and the associated ViewState, which can be quite large if the dataset is big. Has anyone seen this before?
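
    For what it's worth, jQuery can already extract a fragment from a fetched page, which answers the html/head question even if it leaves the WebForms postback renaming alone; a hedged sketch (IDs hypothetical):

        // .load() accepts a selector after the URL and injects only that
        // fragment of the response, so no manual string surgery is needed:
        $('#container').load('SubPage.aspx #mainContent', function () {
            // only the #mainContent element of the returned page lands here
        });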

    Read the article

  • Building a static (but complicated) lookup table using templates

    - by MarkD
    I am currently in the process of optimizing a numerical analysis code. Within the code, there is a 200x150-element lookup table (currently a static std::vector<std::vector<double> >) that is constructed at the beginning of every run. The construction of the lookup table is actually quite complex: the values are computed using an iterative secant method on a complicated set of equations. Currently, the construction of the lookup table is 20% of the run time of a simulation (run times are on the order of 25 seconds; table construction takes 5 seconds). While 5 seconds might not seem like a lot, when running our MC simulations, where we run 50k+ simulations, it suddenly becomes a big chunk of time. Along with some other ideas, one thing that has been floated is: can we construct this lookup table using templates at compile time? The table itself never changes. Hard-coding a large array isn't a maintainable solution (the equations that generate the table are constantly being tweaked), but it seems that if the table could be generated at compile time, it would give us the best of both worlds (easily maintainable, no overhead at runtime). So, I propose the following (much simplified) scenario. Let's say you want to generate a static array (use whatever container suits you best: a 2D C array, a vector of vectors, etc.) at compile time. You have a function defined, double f(int row, int col), whose return value is the entry in the table, where row is the lookup table row and col is the lookup table column. Is it possible to generate this static array at compile time using templates, and how?
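
    With templates of that era this is painful (double cannot be a non-type template parameter), but if the toolchain can move to C++17, constexpr does exactly this, with the caveat that the real secant solver would itself have to be constexpr-compatible (no non-constexpr cmath calls). A sketch with a stand-in for f:

        #include <array>

        // hypothetical stand-in for the real iterative solver
        constexpr double f(int row, int col) {
            return row * 0.5 + col * 0.25;
        }

        template <int Rows, int Cols>
        constexpr std::array<std::array<double, Cols>, Rows> make_table() {
            std::array<std::array<double, Cols>, Rows> t{};
            for (int r = 0; r < Rows; ++r)
                for (int c = 0; c < Cols; ++c)
                    t[r][c] = f(r, c);      // evaluated by the compiler
            return t;
        }

        // built entirely at compile time; lands in the binary as static data
        constexpr auto kTable = make_table<200, 150>();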

    Read the article

  • XSLT 1.0: restrict entries in a nodeset

    - by Mike
    Hi, being relatively new to XSLT, I have what I hope is a simple question. I have some flat XML files, which can be pretty big (e.g. 7 MB), that I need to make more hierarchical. For example, the flat XML might look like this:

        <D0011>
        ....
        ....

    and it should end up looking like this:

        <D0011>
        ....
        ....

    I have a working XSLT for this; it essentially gets a nodeset of all the b elements and then uses the following-sibling axis to get a nodeset of the nodes following the current b node (i.e. following-sibling::*[position() = $nodePos]). Recursion is then used to add the siblings into the result tree until another b element is found (I have parameterised it, of course, to make it more generic). I also have a solution that just records the position of the next b node and selects the nodes after that one, one after the other (using recursion), via a *[position() = $nodePos] selection. The problem is that the time taken by the transformation increases unacceptably with the size of the XML file. Looking into it with XMLSpy, it seems that it is the following-sibling axis and the position() tests that take the time in the two respective methods. What I really need is a way of restricting the number of nodes in the above selections, so fewer comparisons are performed: every time the position is tested, every node in the nodeset is tested to see whether its position is the right one. Is there a way to do that? Any other suggestions? Thanks, Mike
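
    One hedged suggestion, assuming the grouping really is "every node up to the next b": XSLT 1.0 keys can replace the positional scans entirely, which is the usual cure for this quadratic behavior. A sketch with assumed element names:

        <!-- key every non-b node by its nearest preceding b sibling, then build
             each group with one key() lookup instead of walking siblings -->
        <xsl:key name="after-b" match="*[not(self::b)]"
                 use="generate-id(preceding-sibling::b[1])"/>

        <xsl:template match="/*">
          <xsl:copy>
            <xsl:for-each select="b">
              <xsl:copy>
                <xsl:copy-of select="key('after-b', generate-id())"/>
              </xsl:copy>
            </xsl:for-each>
          </xsl:copy>
        </xsl:template>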

    Read the article

  • Hover-to-click on jQuery UI datePicker 'next month' and 'prev month' not working

    - by user316727
    Hi there, I have a calendar which is meant to look much like the calendar in Outlook. There is a big field representing the hours in a day, and there is a date navigator, which is the jQuery UI Datepicker. I want users to be able to navigate to a new day by clicking a date in the datepicker, but also to be able to drag appointments over the datepicker and drop them on a specific date. I have that working now. I also want users, while they are dragging an appointment, to be able to move to the next or previous month simply by hovering over the datepicker. So I've added mouseenter and mouseleave events: one starts a setInterval function which triggers a click every 1.5 seconds; the other cancels the interval. This is where all sorts of things go wrong. As soon as one click has been triggered, the mouseleave function no longer works; in other words, the datepicker keeps flipping over to another month every 1.5 seconds. It seems that the datepicker interferes, or that the click event causes other things to go wrong. What can I do?
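
    One plausible diagnosis is that the datepicker regenerates its DOM every time the month flips, silently discarding handlers bound directly to the arrow elements; delegating from a stable ancestor avoids that. A sketch, assuming jQuery 1.7+ and a wrapper element around the datepicker:

        var hoverTimer = null;

        function startFlipping(selector) {
            stopFlipping();
            hoverTimer = setInterval(function () {
                // re-query each tick: the arrow node is recreated on every render
                $(selector).trigger('click');
            }, 1500);
        }

        function stopFlipping() {
            if (hoverTimer) { clearInterval(hoverTimer); hoverTimer = null; }
        }

        $('#datepicker-wrap')   // hypothetical stable wrapper element
            .on('mouseenter', '.ui-datepicker-next', function () { startFlipping('.ui-datepicker-next'); })
            .on('mouseenter', '.ui-datepicker-prev', function () { startFlipping('.ui-datepicker-prev'); })
            .on('mouseleave', '.ui-datepicker-next, .ui-datepicker-prev', stopFlipping);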

    Read the article

  • Initialize listitem with blanks?

    - by VBartilucci
    Say I have a list whose items each contain three strings. I add a new list item and try to assign the values of those strings from an outside source. If one of the source items is unassigned, the value in the list item remains null (unassigned). As a result, I get an error when I try to assign that value to a field on my page. I could check IsNullOrEmpty for each field on the page, but that seems inefficient. I'd rather initialize the values to "" (empty string) in the code-behind and send valid data. I can do it manually:

        ClaimPwk emptyNode = new ClaimPwk();
        emptyNode.cdeAttachmentControl = "";
        emptyNode.cdeRptTransmission = "";
        emptyNode.cdeRptType = "";
        headerKeys.Add(emptyNode);

    But I have some BIG list items, and writing that out for those will get tedious. So is there a command, or just plain an easier way, to initialize a list item's strings to empty strings instead of null? Or has anyone got a better idea?
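
    One hedged option is to move the defaults into the type itself, assuming the members are plain writable string fields, so the tedium is paid once per class instead of once per instantiation:

        // field initializers run on every construction, so each new instance
        // starts with "" instead of null
        public class ClaimPwk
        {
            public string cdeAttachmentControl = "";
            public string cdeRptTransmission = "";
            public string cdeRptType = "";
        }

        // elsewhere:
        headerKeys.Add(new ClaimPwk());   // all three strings are already ""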

    Read the article

  • git commit best practices

    - by Ivan Z. Siu
    I am using git to manage a C++ project. When I'm working on the project, I find it hard to organize the changes into commits when a change touches many places. For example, I may change a class interface in a .h file, which affects the corresponding .cpp file and also other files that use it. I am not sure whether it is reasonable to put all of that into one big commit. Intuitively, I think commits should be modular, each corresponding to one functional update or change, so that collaborators can pick things accordingly. But sometimes it seems inevitable that a functional change involves lots of files. Searching did not yield any good suggestions or tips, so I wonder if anyone could share some best practices for making commits. Thanks! PS. I've been using git for a while and I know how to interactively add/rebase/split/amend/... What I'm asking about is the PHILOSOPHY part.

    Read the article

  • What are CAD apps written in, and how are they organized?

    - by ldigas
    What are the CAD applications of today (Rhino, AutoCAD) written in, and how are they organized internally? I gave AutoCAD and Rhino as examples, although I would love to hear of others as well. I'm particularly interested in what their backends are written in (multiple languages?) and how they are organized, and in how they handle their frontend (GUI) in real time. Do they use native Windows APIs or libraries of their own? I imagine that, as good as they may be, the open-source solutions on today's market won't cut it. I may be wrong... As most of you who have used them know, they handle, among other things, relatively complex rotation operations in real time (shading doesn't interest me). I've been doing some experiments with several packages recently, and for some larger models I found a considerable difference in speed in, for example, programmed rotation (big full-ship models) among some of them (which I won't name). So I'm wondering about their internals... Also, if someone knows of a book on the subject, I'd be interested to hear of it.

    Read the article

  • Deploy to web container, bundle web container or embed web container...

    - by Jason
    I am developing an application that needs to be as simple as possible for the end user to install. While the end users will likely be experienced Linux users (or sales engineers), they don't really know anything about Tomcat, Jetty, etc., nor do I think they should. So I see three ways to deploy our application. I should also state that this is the first app I have had to deploy that has a web interface, so I haven't really faced this question before. First, deploy the application into an existing web container. Since we only deploy to SUSE or Red Hat, this seems easy enough to do. However, we're not big on the idea of multiple apps running in one web container: it makes it harder to take down just one app. The second option is to bundle Tomcat or Jetty and have the startup/shutdown scripts launch our bundled web container. The third is to embed; this will probably provide the same user experience as the second option. I'm curious what others do when faced with this problem to make it as foolproof as possible for the end user. I've almost ruled out deploying into an existing web container, as we often like to set per-application resource limits and CPU affinity, which I believe would affect all apps deployed into a shared container/app server and not just one specific application. Thank you.
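
    For the embed option, the code is small enough to judge directly; a hedged Java sketch with Jetty 7-era package names (the port and war path are assumptions):

        import org.eclipse.jetty.server.Server;
        import org.eclipse.jetty.webapp.WebAppContext;

        public class Main {
            public static void main(String[] args) throws Exception {
                Server server = new Server(8080);      // one app, one container
                WebAppContext app = new WebAppContext();
                app.setWar("app.war");
                app.setContextPath("/");
                server.setHandler(app);
                server.start();                        // start/stop just this app,
                server.join();                         // so limits apply to it alone
            }
        }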

    Read the article

  • Need opinions on LaTeX and ever upgrading

    - by yCalleecharan
    Hi, I've been using LaTeX since 2005 with the TeX Live distribution, and I've upgraded as each new TeX Live distribution came out. In recent years I've noticed an increase in new packages and updated packages, and in one instance a new package bearing a different name replaced an old one by the same package author. A LaTeX document which relies heavily on packages and was produced a few years back may start to produce warnings and error messages when compiled with a present-day LaTeX. The primary reason I switched to LaTeX is its reliability and robustness for creating big documents easily, not to mention the adorable typographic quality. With LaTeX one doesn't have to worry about how to open a docx in an old program supporting only doc, for instance. Now, with so much continual change in the packages of a LaTeX distribution, I tend to wonder when this madness will end. Not that enhanced and new features in packages are bad, but not all updated packages are backward compatible. Eventually one would like to be able to compile, in 10 years' time, a LaTeX file one is working on at present, and not get compilation warnings or error messages due to the unpredictable behavior of updated packages or a package that has been dropped from the distribution. If I understand correctly, CTAN does keep a database of all packages from different versions. I would like to know how you LaTeX users handle this issue. Thanks a lot...

    Read the article

  • Performance improvement of client server system

    - by Tanuj
    I have a legacy client-server system where the server maintains a record of some data in a SQLite database. The data is related to monitoring access patterns of files stored on the server. The client application is basically a remote viewer of the data. When the client is launched, it connects to the server and fetches the data to display in a grid view. The data is updated in real time on the server, and the view in the client automatically refreshes. There are two problems with the current implementation: 1) When the database gets too big, it takes a long time to load the client. What are the best ways to deal with this? One option is to maintain a cache on the client side; how would I best implement such a cache? 2) How can the server maintain a diff, so that it only sends the changes during each refresh cycle? There can be multiple clients, and each client needs to display the latest data available on the server. The server is a Windows service daemon. Both the client and the server are implemented in C#.
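
    For problem 2, one hedged pattern (hypothetical schema) is a monotonically increasing version column that the server bumps on every write; each client then polls with the highest version it has seen, which also doubles as the client-side cache's validity marker:

        using System.Data.SQLite;

        long SendChangesSince(long lastVersion, SQLiteConnection conn)
        {
            using (var cmd = new SQLiteCommand(
                "SELECT id, path, hits, version FROM access_stats WHERE version > @v", conn))
            {
                cmd.Parameters.AddWithValue("@v", lastVersion);
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // serialize this row and push it to the client;
                        // only changed rows ever cross the wire
                        lastVersion = Math.Max(lastVersion, (long)reader["version"]);
                    }
                }
            }
            return lastVersion;   // the client stores this for its next poll
        }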

    Read the article

  • format an xml string in Ruby

    - by user1476512
    Given an XML string like this:

        <some><nested><xml>value</xml></nested></some>

    what's the best option (using Ruby) to format it readably, like this:

        <some>
          <nested>
            <xml>value</xml>
          </nested>
        </some>

    I've found an answer here: "What's the best way to format an XML string in Ruby?", which is really helpful. But it formats the XML like this:

        <some>
          <nested>
            <xml>
              value
            </xml>
          </nested>
        </some>

    As my XML string is somewhat long, it is not readable in this format. Thanks in advance!
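
    One hedged option from the standard library: REXML's Pretty formatter has a compact mode that keeps text on the same line as its element:

        require 'rexml/document'

        doc = REXML::Document.new('<some><nested><xml>value</xml></nested></some>')
        formatter = REXML::Formatters::Pretty.new(2)  # indent by 2 spaces
        formatter.compact = true                      # <xml>value</xml> stays on one line
        formatter.write(doc, $stdout)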

    Read the article

  • How I May Have Taken A Wrong Path in Programming

    - by Ygam
    I am in a major slump right now. I am a BSIT graduate, but I only started actual programming less than a year ago. I have observed that I have the following attitudes in programming:

      - I tend to be a purist, scorning inelegant approaches to solving problems with code
      - I tend to look at everything on a large scale, planning everything before I start coding, either in simple flowcharts or complex UML charts
      - I have a really strong impulse to refactor my code, even if I miss deadlines or prolong development time
      - I am obsessed with good directory structures and file, class, method, and variable naming conventions
      - I always want to study something new, even, as I said, at the cost of missing deadlines
      - I tend to see software development as something to engineer and architect, that is, seeing how things relate to each other and how blocks of code can interact (I am a huge fan of loose coupling), i.e. the OOP way of thinking
      - I combine OOP and procedural coding whenever I see fit
      - I want my code to execute fast (thus the elegant approaches and the refactoring)

    This bothers me because I see my colleagues doing much better the other way around (aside from the fact that they have been programming since our first year in college). By the other way around I mean: they fire up their editors and get the job done much faster, because they don't have to worry about how clean their code is or how elegant their algorithms are; they don't bother with OOP however big their projects are; they mostly use web APIs, piece them together, and voila, working code! Clients are happy, they get paid fast, at the expense of really unmaintainable or hard-to-read code that lacks structure and conventions, or slow execution of certain actions (the common counter-argument being that internet connections are much faster these days and hardware is more powerful). The excuse I often hear is that clients don't care how you write the code, but they do care how soon you deliver it; if it works, then all is good. Now, may my "purist" approach to programming have been the wrong way to start? Should I just dump these purist concepts and code away, because I have seen it: clients don't really care how beautifully coded it is?

    Read the article

  • What's best choice career-wise, to know a little about a lot or a lot about a little?

    - by nimo
    I work as a developer at a rather small company, and we provide a web application used by a big base of customers. Because we are so small, everyone has to be able to do many different tasks, ranging from advanced support, to developing the product (programming: C/C++, C#, PHP, SQL, JavaScript, HTML, CSS), to handling network configuration and network-related issues, and even sometimes going to sales meetings with potential customers. My concern is that I don't really specialize in any specific area: I know and learn a little about a lot. I graduated from school two years ago, this is my first real employment, and when I look at other positions out there they always require so-and-so many years of experience in a specific area (for example, 5 years of C#). Getting that kind of specialized experience will be really hard at my current job. My question for you: what is, in your opinion, the best choice career-wise, to know a little about a lot or a lot about a little? What path did you take? What are the pros and cons that come with that choice?

    Read the article

  • A question about the basics of how LINQ to SQL works

    - by Alex
    I just started learning LINQ to SQL, and so far I'm impressed with the ease of use and good performance. I used to think that when doing LINQ queries like

        from Customer in DB.Customers
        where Customer.Age > 30
        select Customer

    LINQ would get all customers from the database ("SELECT * FROM Customers"), move them into an array, and then search that array using .NET methods. That would be very inefficient: what if there are hundreds of thousands of customers in the database? Making such big SELECT queries would kill the web application. Now, after experiencing how fast LINQ to SQL actually is, I suspect that when doing the query I just wrote, LINQ somehow converts it to a SQL query string,

        SELECT * FROM Customers WHERE Age > 30

    and runs the query only when necessary. So my question is: am I right? And when is the query actually run? I'm asking not only because I want to understand how it works in order to build well-optimized applications, but because I came across the following problem. I have two tables, one of them Books, the other holding information on how many books were sold on certain days. My goal is to select books that had at least 50 sales/day in the past 10 days. It's done with this simple query:

        from Book in DB.Books
        where (from Sale in DB.Sales
               where Sale.SalesAmount >= 50
                     && Sale.DateOfSale >= DateTime.Now.AddDays(-10)
               select Sale.BookID).Contains(Book.ID)
        select Book

    The point is, I have to use the checking part in several queries, so I decided to create an array with the IDs of all popular books:

        var popularBooksIDs = from Sale in DB.Sales
                              where Sale.SalesAmount >= 50
                                    && Sale.DateOfSale >= DateTime.Now.AddDays(-10)
                              select Sale.BookID;

    BUT when I try to do the query now:

        from Book in DB.Books
        where popularBooksIDs.Contains(Book.ID)
        select Book

    it doesn't work! That's why I think we can't use these kinds of shortcuts in LINQ to SQL queries, like we can't in real SQL. We have to create straightforward queries, am I right?
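
    The suspicion about lazy translation is right: the query is turned into SQL and executed only when it is enumerated. For the reuse problem, a hedged sketch: materialize the IDs once with ToList(), since LINQ to SQL translates List<int>.Contains into a SQL IN (...) clause in a later query:

        // run the subquery once; popularBooksIDs becomes a plain in-memory list
        DateTime cutoff = DateTime.Now.AddDays(-10);
        List<int> popularBooksIDs = (from Sale in DB.Sales
                                     where Sale.SalesAmount >= 50
                                           && Sale.DateOfSale >= cutoff
                                     select Sale.BookID).Distinct().ToList();

        // reusable everywhere; Contains becomes WHERE Books.ID IN (...)
        var popularBooks = from Book in DB.Books
                           where popularBooksIDs.Contains(Book.ID)
                           select Book;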

    Read the article

  • What is the best way to include Javascript?

    - by Paul Tarjan
    Many of the big players recommend slightly different techniques, differing mostly in where the new <script> element is placed. Google Analytics:

        (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www')
                + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
        })();

    Facebook:

        (function() {
            var e = document.createElement('script');
            e.async = true;
            e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js';
            document.getElementById('fb-root').appendChild(e);
        }());

    Disqus:

        (function() {
            var dsq = document.createElement('script');
            dsq.type = 'text/javascript';
            dsq.async = true;
            dsq.src = 'http://' + disqus_shortname + '.disqus.com/embed.js';
            (document.getElementsByTagName('head')[0]
                || document.getElementsByTagName('body')[0]).appendChild(dsq);
        })();

    (Post others and I'll add them.) Is there any rhyme or reason for these choices, or does it not matter at all?

    Read the article

  • Design an Application That Stores and Processes Files

    - by phasetwenty
    I'm tasked with writing an application that acts as a central storage point for files (usually document formats) provided by other applications. It also needs to take commands like "file 395 needs a copy in X format", at which point some work is offloaded to a third-party application. I'm having trouble coming up with a strategy for this. I'd like to keep the design as simple as possible, so I'd like to avoid big extra frameworks or techniques like threads for as long as that makes sense. The clients are expected to be web applications (for example, one is a Django application that receives files from our customers; the others are not yet implemented). The platform will likely be Python on Linux, unless I have a strong argument for something else. In the beginning I thought I could fit the information I wanted to communicate into the filenames and let my application parse the filename to figure out what to do, but this is proving too inflexible for the amount of information I'm realizing I need to make available. Another idea is to pair FTP with a database used as a communication medium (the client uploads a file and updates the database with a command as a row in a table), but I don't like this idea because adding commands (a known, expected change) looks like it will require adding code as well as changing database schemas, and it will also muddy up the interface my clients have to use. I looked into Pyro to let applications communicate more directly, but I don't like the idea of running an extra name server for this one purpose, and I don't see a good way to do file transfer within that framework. What I'm looking for is techniques and/or technologies applicable to my problem. At the simplest level, I need the ability to accept files along with messages about them.
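
    At the simplest level that could work, HTTP already is "a file plus a message"; a hedged stdlib-only Python sketch (the header name and command fields are hypothetical) that a Django client could call with a plain POST:

        import json
        from wsgiref.simple_server import make_server

        def app(environ, start_response):
            # the command travels as a JSON header, the file as the request body,
            # so new commands mean new JSON fields, not new schemas
            command = json.loads(environ['HTTP_X_COMMAND'])   # e.g. {"action": "store", "file_id": 395}
            length = int(environ.get('CONTENT_LENGTH') or 0)
            payload = environ['wsgi.input'].read(length)      # the document bytes
            # ... dispatch on command["action"], write payload to disk ...
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return [b'stored']

        make_server('', 8000, app).serve_forever()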

    Read the article

  • Use of unassigned local variable 'xxx'

    - by Tomislav
    I'm writing a database importer from our competitors' databases to ours :) I have a code generator which creates methods for the import, like this:

        public void Test_Import_Customer_1()
        {
            // variables
            string conn;
            string sqlSelect;
            string sqlInsert;
            int extID;
            string name;
            string name2;
            DateTime date_inserted;

            sqlSelect = "select id, name, date_inserted from table_competitors_1";
            OleDbDataReader reader = GetOleDbReader(sqlSelect, conn);
            while (reader.Read())
            {
                name = Left((string)reader["name"], 50); // limitation of my field
                date_inserted = (DateTime)reader["date_inserted"];
                sqlInsert = string.Format(
                    "insert into table(name, name2, date_inserted) values ('{0}', '{1}', '{2}')",
                    name, name2, date_inserted); // here is the problem: name2 gives
                                                 // "Use of unassigned local variable"
                ExecuteSQL(sqlInsert);
            }
        }

    As different companies' databases have different fields, I cannot set a value for every variable, and there are too many tables to go through, one variable at a time, for example:

        sqlSelect_Company_1 = "select name, date_inserted from table_1";
        sqlSelect_Company_2 = "select name, name2 from table_2";

    Is there a way to override the typing of each variable, one by one, with default values?
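
    The error just means the compiler found a code path that reads name2 before any assignment; initializing each local at its declaration, which the code generator can emit mechanically, makes the defaults explicit. A sketch:

        // every code path now has a definite value, and missing source
        // columns fall back to defaults instead of tripping the compiler
        string name2 = "";
        DateTime date_inserted = DateTime.MinValue;

        object raw = reader["name2"];   // assuming the column exists but may be NULL
        if (raw != null && raw != DBNull.Value)
        {
            name2 = (string)raw;
        }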

    Read the article
