Search Results

Search found 31207 results on 1249 pages for 'atg best practice in industries'.


  • What is the best way to store static data in C# that will never change

    - by Luke101
    I have a class in an ASP.NET C# application that stores data which never changes. I really don't want to put this data in the database - I would like it to stay in the application. Here is how I currently store it:

        public class PostVoteTypeFunctions
        {
            private List<PostVoteType> postVotes = new List<PostVoteType>();

            public PostVoteTypeFunctions()
            {
                PostVoteType upvote = new PostVoteType();
                upvote.ID = 0;
                upvote.Name = "UpVote";
                upvote.PointValue = PostVotePointValue.UpVote;
                postVotes.Add(upvote);

                PostVoteType downvote = new PostVoteType();
                downvote.ID = 1;
                downvote.Name = "DownVote";
                downvote.PointValue = PostVotePointValue.DownVote;
                postVotes.Add(downvote);

                PostVoteType selectanswer = new PostVoteType();
                selectanswer.ID = 2;
                selectanswer.Name = "SelectAnswer";
                selectanswer.PointValue = PostVotePointValue.SelectAnswer;
                postVotes.Add(selectanswer);

                PostVoteType favorite = new PostVoteType();
                favorite.ID = 3;
                favorite.Name = "Favorite";
                favorite.PointValue = PostVotePointValue.Favorite;
                postVotes.Add(favorite);

                PostVoteType offensive = new PostVoteType();
                offensive.ID = 4;
                offensive.Name = "Offensive";
                offensive.PointValue = PostVotePointValue.Offensive;
                postVotes.Add(offensive);

                PostVoteType spam = new PostVoteType();
                spam.ID = 5; // unique ID (UpVote already uses 0)
                spam.Name = "Spam";
                spam.PointValue = PostVotePointValue.Spam;
                postVotes.Add(spam);
            }
        }

    This code runs every time the constructor is called. I also have some functions that query the data above. But is this the best way to store static information in ASP.NET, and if not, what would you recommend?
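
    One common alternative, sketched here using the PostVoteType and PostVotePointValue names from the question: build the list once in a static read-only field so every request shares the same immutable instance instead of rebuilding it on each constructor call.

        using System.Collections.Generic;
        using System.Collections.ObjectModel;
        using System.Linq;

        // Sketch only: PostVoteType and PostVotePointValue are the types from the question.
        public static class PostVoteTypes
        {
            // Built once per app domain and never mutated afterwards, so it is safe to share across requests.
            public static readonly ReadOnlyCollection<PostVoteType> All = new List<PostVoteType>
            {
                new PostVoteType { ID = 0, Name = "UpVote",       PointValue = PostVotePointValue.UpVote },
                new PostVoteType { ID = 1, Name = "DownVote",     PointValue = PostVotePointValue.DownVote },
                new PostVoteType { ID = 2, Name = "SelectAnswer", PointValue = PostVotePointValue.SelectAnswer },
                new PostVoteType { ID = 3, Name = "Favorite",     PointValue = PostVotePointValue.Favorite },
                new PostVoteType { ID = 4, Name = "Offensive",    PointValue = PostVotePointValue.Offensive },
                new PostVoteType { ID = 5, Name = "Spam",         PointValue = PostVotePointValue.Spam }
            }.AsReadOnly();

            public static PostVoteType ByName(string name)
            {
                return All.First(v => v.Name == name);
            }
        }

    Because the collection is read-only and initialized once, there is no per-request construction cost and no thread-safety concern when many ASP.NET requests read it at the same time.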

    Read the article

  • Determining Best Table Structure for MySQL Performance

    - by Joe Majewski
    I'm working on a browser-based RPG for one of my websites, and right now I'm trying to determine the best way to organize my SQL tables for performance and maintenance. Here's my question: Does the number of columns in an SQL table affect the speed in which it can be queried? I am not a newbie when it comes to PHP or MySQL. I used to develop things with the common goal of getting them to work, but I've recently advanced to the stage where a functional program is not good enough unless it's fast and reliable. Anyways, right now I have a members table that has around 15 columns. It contains information such as the player's username, password, email, logins, page views, etcetera. It doesn't contain any information on the player's progress in the game, however. If I added columns for things such as army size, gold, turns, and whatnot, then it could easily rise to around 40 or 50 total columns. Oh, and my database structure IS normalized. Will a table with 50 columns that gets constantly queried be a bad idea? Should I split it into two tables; one for the user's general information and one for the user's game statistics? I know I could check the query time myself, but I haven't actually created the tables yet and I think I'd be better off with some professional advice on this important decision for my game. Thank you for your time! :)

    Read the article

  • Best approach to cache Counts from SQL tables ?

    - by pixel3cs
    I would like to develop a forum from scratch, with special needs and customization, and I want to prepare it for intensive usage. I'm wondering how to cache things like user post counts and user reply counts. Having only three tables - tblForum, tblForumTopics, tblForumReplies - what is the best approach to caching the user topic and reply counts? Consider a simple scenario: a user clicks a link, opens the Replies.aspx?id=x&page=y page, and starts reading replies. On that HTTP request, the server runs an SQL command which fetches all replies for the page, inner joining with tblForumReplies to find the number of replies for each user that replied:

        select tblForumReplies.*, tblFR.TotalReplies
        from tblForumReplies
        inner join (
            select IdRepliedBy, count(*) as TotalReplies
            from tblForumReplies
            group by IdRepliedBy
        ) as tblFR on tblFR.IdRepliedBy = tblForumReplies.IdRepliedBy

    Unfortunately this approach is very CPU intensive, and I would like to hear your ideas for caching things like table counts. If I count replies for each user on insert/delete and store the total in a separate field, how do I keep it synchronized with manual data changes? Suppose I manually delete replies directly in SQL.
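
    One low-effort approach, assuming a few minutes of staleness is acceptable: run the GROUP BY once, cache the per-user counts in the ASP.NET cache, and drop the cache entry whenever your own code inserts or deletes a reply; manual edits done directly in SQL then show up when the entry expires. A rough sketch, where the getReplyCountsFromDb delegate stands in for whatever data-access code you already have:

        using System;
        using System.Collections.Generic;
        using System.Web;
        using System.Web.Caching;

        public static class ReplyCountCache
        {
            private const string Key = "UserReplyCounts";

            public static IDictionary<int, int> GetCounts(Func<IDictionary<int, int>> getReplyCountsFromDb)
            {
                var cached = HttpRuntime.Cache[Key] as IDictionary<int, int>;
                if (cached != null)
                    return cached;

                // SELECT IdRepliedBy, COUNT(*) ... GROUP BY IdRepliedBy, run once per cache window.
                IDictionary<int, int> counts = getReplyCountsFromDb();
                HttpRuntime.Cache.Insert(Key, counts, null,
                    DateTime.UtcNow.AddMinutes(5), Cache.NoSlidingExpiration);
                return counts;
            }

            // Call this from your own insert/delete code paths so changes show up immediately.
            public static void Invalidate()
            {
                HttpRuntime.Cache.Remove(Key);
            }
        }

    The 5-minute window is an assumption; shorten it, or call Invalidate() from a trigger-driven notification, if staleness matters more than query load.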

    Read the article

  • C# and Excel best practices

    - by rlp
    I am doing a lot of MS Excel interop in C# (Visual Studio 2012) using Microsoft.Office.Interop.Excel. It requires a lot of tiresome manual code to insert Excel formulas, format text and numbers, and build charts. I would very much appreciate input on how to do this better. I have been looking at Visual Studio Tools for Office, but I am uncertain about what it offers. I understand it is required for building Excel add-ins, but does it help with Excel automation? I have been trying hard to find information on working with Excel from Visual Studio 2012 using C#. I did find some good but short tutorials; however, I would really like a book on the subject to learn the area in more depth, covering both functionality and best practices. Searching Amazon with my limited knowledge only turns up books on VSTO with older versions of Visual Studio. I would prefer not to use VBA. My applications use Excel mainly for visualizing data compiled from different sources; I also do data processing where Excel is not required. Furthermore, I can write C# but not VB.
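
    VSTO is mainly aimed at add-ins and document-level customizations; for plain automation you can stay on the interop assemblies and push the repetitive parts into small helpers. A minimal sketch of that route, assuming Excel and its primary interop assemblies are installed (error handling and release of intermediate objects such as the worksheet are kept short here):

        using System.Runtime.InteropServices;
        using Excel = Microsoft.Office.Interop.Excel;

        static class ReportWriter
        {
            public static void WriteSimpleReport(string path)
            {
                var app = new Excel.Application();
                app.Visible = false;
                app.DisplayAlerts = false;
                Excel.Workbook wb = null;
                try
                {
                    wb = app.Workbooks.Add();
                    Excel.Worksheet ws = (Excel.Worksheet)wb.Worksheets[1];

                    ws.Cells[1, 1] = "Value A";                  // header row
                    ws.Cells[1, 2] = "Value B";
                    ws.Cells[1, 3] = "Sum";
                    ws.Range["A1:C1"].Font.Bold = true;

                    ws.Cells[2, 1] = 1.5;
                    ws.Cells[2, 2] = 2.5;
                    ws.Range["C2"].Formula = "=SUM(A2:B2)";      // let Excel evaluate the formula
                    ws.Range["A2:C2"].NumberFormat = "0.00";

                    wb.SaveAs(path);
                }
                finally
                {
                    if (wb != null) wb.Close(false);
                    app.Quit();
                    if (wb != null) Marshal.ReleaseComObject(wb);
                    Marshal.ReleaseComObject(app);               // avoid orphaned EXCEL.EXE processes
                }
            }
        }

    Wrapping formatting and formula patterns in helpers like this is usually what keeps the interop code bearable; libraries that write .xlsx files directly (without Excel installed) are another option to evaluate if the workbooks only need to be produced, not driven interactively.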

    Read the article

  • Best way and problems when using ajax tabs with an MVC PHP project

    - by Jonathan
    Hi, I'm building an IMDB.com-like website using PHP/jQuery and an MVC approach (no OOP). I have an index.php base controller to 'rule them all' :), a controllers folder with all the controllers, a models folder and a views folder. Some pages of the website have tabbed navigation: when the visitor clicks one of those tabs to get more information, jQuery fetches that data with $.post or $.get and shows it in the tab container, obviously without refreshing the page. The problem is that those pages loaded by Ajax are also generated using controllers, models, and views, and things are getting a bit complicated for someone like me ( = 'no experience'). To get the data dynamically I sometimes need to include a model twice, nest includes inside includes, send the same information multiple times, and reconnect to the database, and I'm sure there is a better and prettier way to do this. I'm looking for the best approach and common methods for this. I have no experience working on a project this big. This is a personal project, so I have full control and every answer is welcome. Thanks!

    Read the article

  • Best method to search hierarchical data

    - by WDuffy
    I'm looking at building a facility which allows querying data with hierarchical filtering. I have a few ideas about how to go about it, but I was wondering if there are any recommendations or suggestions that might be more efficient. As an example, imagine that a user is searching for a job. The job areas would be as follows:

        1: Scotland
        2: --- West Central
        3: ------ Glasgow
        4: ------ Etc
        5: --- North East
        6: ------ Ayrshire
        7: ------ Etc

    A user can search a specific area (i.e. Glasgow) or a larger one (i.e. Scotland). The two approaches I am considering are: 1) keep a note of each record's children in the database (i.e. category 1 would have 2, 3, 4 in its children field) and query against that record with SELECT * FROM Jobs WHERE Category IN Areas.childrenField; 2) use a recursive function to find all results that are related to the selected area. The problems I see with these are: 1) holding this data in the DB means having to keep track of every change to the structure; 2) recursion is slow and inefficient. Any ideas, suggestions or recommendations on the best approach? I'm using C# ASP.NET with an MSSQL 2005 DB.
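
    One middle ground, sketched below with illustrative names: keep a plain ParentId column (adjacency list) in the Areas table, cache the Id/ParentId pairs in memory since area lists rarely change, and expand the selected area into its descendant IDs in C# before building the IN clause. That avoids both the maintenance of a stored children field and per-query recursion in the database.

        using System.Collections.Generic;
        using System.Linq;

        static class AreaTree
        {
            // areas: (Id, ParentId) pairs loaded once from the Areas table; ParentId is null for roots.
            public static List<int> SelfAndDescendants(int rootId, IEnumerable<KeyValuePair<int, int?>> areas)
            {
                ILookup<int?, int> childrenByParent = areas.ToLookup(a => a.Value, a => a.Key);

                var result = new List<int>();
                var stack = new Stack<int>();
                stack.Push(rootId);

                while (stack.Count > 0)              // iterative walk, no recursion needed
                {
                    int current = stack.Pop();
                    result.Add(current);
                    foreach (int child in childrenByParent[current])
                        stack.Push(child);
                }
                return result;                       // use for: WHERE Category IN (@p0, @p1, ...)
            }
        }

    With only a few hundred areas this expansion is effectively free, and the database query stays a simple indexed IN lookup.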

    Read the article

  • What is the best (Windows) program launcher?

    - by AR
    One of the biggest general productivity boosters I've used is a good program launcher. I was a long-time user of SlickRun, and I've tried a few others. My current favorite is Executor - by far the best I've used. Other options: Executor: My current favorite Vista Start Menu: Pretty good, actually, but Executor is similar (binds to Win+Z) and much more flexible. Quicksilver: For Macs only, but it seems to be the gold standard against which most other launchers are measured. Google Desktop: Press Ctrl+Ctrl and it's a quick launcher! AutoHotKey: Much,much more than just a launcher - more than I need, really. SlickRun: simple and unobtrusive Launchy: Seems to be the launcher of choice for many StackOverflow users :) Colibri: "Type Ahead - Information at the tip of your wings". Quite a cool concept. Many, many others. Scott Hanselman outlines some more here. I realize that everyone will have their own preferences, but the question is: is there anything that really stands out in terms of speed, features, and especially productivity increase?

    Read the article

  • The best way to return related data in a SQL statement

    - by Darvis Lombardo
    I have a question on the best method to get back to a piece of data that is in a related table on the other side of a many-to-many relationship table. My first method uses joins to get back to the data, but because there are multiple matching rows in the relationship table, I had to use a TOP 1 to get a single row result. My second method uses a subquery to get the data but this just doesn't feel right. So, my question is, which is the preferred method, or is there a better method? The script needed to create the test tables, insert data, and run the two queries is below. Thanks for your advice! Darvis -------------------------------------------------------------------------------------------- -- Create Tables -------------------------------------------------------------------------------------------- DECLARE @TableA TABLE ( [A_ID] [int] IDENTITY(1,1) NOT NULL, [Description] [varchar](50) NULL) DECLARE @TableB TABLE ( [B_ID] [int] IDENTITY(1,1) NOT NULL, [A_ID] [int] NOT NULL, [Description] [varchar](50) NOT NULL) DECLARE @TableC TABLE ( [C_ID] [int] IDENTITY(1,1) NOT NULL, [Description] [varchar](50) NOT NULL) DECLARE @TableB_C TABLE ( [B_ID] [int] NOT NULL, [C_ID] [int] NOT NULL) -------------------------------------------------------------------------------------------- -- Insert Test Data -------------------------------------------------------------------------------------------- INSERT INTO @TableA VALUES('A-One') INSERT INTO @TableA VALUES('A-Two') INSERT INTO @TableA VALUES('A-Three') INSERT INTO @TableB (A_ID, Description) VALUES(1,'B-One') INSERT INTO @TableB (A_ID, Description) VALUES(1,'B-Two') INSERT INTO @TableB (A_ID, Description) VALUES(1,'B-Three') INSERT INTO @TableB (A_ID, Description) VALUES(2,'B-Four') INSERT INTO @TableB (A_ID, Description) VALUES(2,'B-Five') INSERT INTO @TableB (A_ID, Description) VALUES(3,'B-Six') INSERT INTO @TableC VALUES('C-One') INSERT INTO @TableC VALUES('C-Two') INSERT INTO @TableC VALUES('C-Three') INSERT INTO @TableB_C (B_ID, C_ID) VALUES(1, 1) INSERT INTO @TableB_C (B_ID, C_ID) VALUES(2, 1) INSERT INTO @TableB_C (B_ID, C_ID) VALUES(3, 1) -------------------------------------------------------------------------------------------- -- Get result - method 1 -------------------------------------------------------------------------------------------- SELECT TOP 1 C.*, A.Description FROM @TableC C JOIN @TableB_C BC ON BC.C_ID = C.C_ID JOIN @TableB B ON B.B_ID = BC.B_ID JOIN @TableA A ON B.A_ID = A.A_ID WHERE C.C_ID = 1 -------------------------------------------------------------------------------------------- -- Get result - method 2 -------------------------------------------------------------------------------------------- SELECT C.*, (SELECT A.Description FROM @TableA A WHERE EXISTS (SELECT * FROM @TableB_C BC JOIN @TableB B ON B.B_ID = BC.B_ID WHERE BC.C_ID = C.C_ID AND B.A_ID = A.A_ID)) FROM @TableC C WHERE C.C_ID = 1

    Read the article

  • Best way to store this data?

    - by Malfist
    I have just been assigned to renovate an old website, and I get to move it from an archaic system to Drupal. The only problem is that it's a real-estate system and a lot of data is stored. Currently all the information lives in a single table: an id represents the house, and everything else is key/value pairs. There are up to 243 possible keys per estate and 23840 estates in the system. As you can imagine, the system is slow and difficult to query. I don't think a table with 243 columns would be a very good idea either, and probably worse than the current situation. I've done some investigating and here's what I've found out:

    - Missing data does not indicate a 0 value; data is merged from two unique sources/formats, some guessing is involved, and I have no control over the source of the data.
    - There are 4 keys that are common to all estates; all of their values look like things that are commonly searched for and could be indexed.
    - There are 10 keys in the [90-100)% range; 8 of these are information like who's selling it and its address. The other two seem to belong with the range below.
    - There are 80 keys in the [80-90)% range. This range mostly lists room types and how many the house has (e.g. bedrooms_possible, bathrooms, family_room_3rd, etc.), plus some minor information like school districts and one or two more pieces of address data.
    - The 179 keys in the [0-80)% range include all sorts of miscellaneous information about the estate.

    My best idea was a hybrid approach: create a table that stores the important, common information and keep a smaller key/value table for the rest. How would you store this information?

    Read the article

  • Splitting Nucleotide Sequences in JS with Regexp

    - by TEmerson
    I'm trying to split up a nucleotide sequence into amino acid strings using a regular expression. I have to start a new string at each occurrence of the string "ATG", but I don't want to actually stop the first match at the "ATG". Valid input is any ordering of a string of As, Cs, Gs, and Ts. For example, given the input string: ATGAACATAGGACATGAGGAGTCA I should get two strings: ATGAACATAGGACATGAGGAGTCA (the whole thing) and ATGAGGAGTCA (the first match of "ATG" onward). A string that contains "ATG" n times should result in n results. I thought the expression /(?:[ACGT]*)(ATG)[ACGT]*/g would work, but it doesn't. If this can't be done with a regexp it's easy enough to just write out the code for, but I always prefer an elegant solution if one is available.
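
    The usual trick for overlapping matches like this is to put the capture inside a zero-width lookahead, so the match itself consumes no characters and the engine can restart at every later "ATG". A sketch of the idea, shown here in C#; the same pattern works in JavaScript with a matchAll or exec loop:

        using System;
        using System.Text.RegularExpressions;

        class AtgSplitter
        {
            static void Main()
            {
                string seq = "ATGAACATAGGACATGAGGAGTCA";

                // (?=...) is a zero-width lookahead: group 1 grabs "ATG" plus everything after it,
                // but no characters are consumed, so every later "ATG" still yields its own match.
                foreach (Match m in Regex.Matches(seq, "(?=(ATG[ACGT]*))"))
                    Console.WriteLine(m.Groups[1].Value);

                // Prints:
                // ATGAACATAGGACATGAGGAGTCA
                // ATGAGGAGTCA
            }
        }

    A string containing "ATG" n times produces exactly n captures, which matches the requirement in the question.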

    Read the article

  • Best way to store data in database when you don't know the type

    - by stiank81
    I have a table in my database that represents datafields in a custom form. The DataField gives some representation of what kind of control it should be represented with, and what value type it should take. Simplified you can say that I have 2 entities in this table - Textbox taking any string and Textbox only taking numbers. Now I have the different values stored in a separate table, referencing the datafield definition. What is the best way to store the data value here, when the type differs? One possible solution is to have the FieldValue table hold one field per possible value type. Now this would certainly be redundant, but at least I would get the value stored in its correct form - simplifying queries later. FieldValue ---------- Id DataFieldId IntValue DoubleValue BoolValue DataValue .. Another possibility is just storing everything as String, and casting this in the queries. I am using .Net with NHibernate, and I see that at least here there is a Projections.Cast that can be used to cast e.g. string to int in the query. Either way in these two solutions I need to know which type to use when doing the query, but I will know that from the DataField, so that won't be a problem. Anyway; I don't think any of these solutions sounds good. Are they? Or is there a better way?
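
    If you go with the one-column-per-type option, a sketch of what the entity can look like with NHibernate-friendly virtual properties (names are illustrative and the mapping itself is not shown): only the column matching the field's declared type is populated, the rest stay NULL, and a discriminator tells you which one to read.

        using System;

        public enum DataFieldType { Text = 0, Integer = 1, Decimal = 2, Boolean = 3, Date = 4 }

        public class FieldValue
        {
            public virtual int Id { get; set; }
            public virtual int DataFieldId { get; set; }
            public virtual DataFieldType FieldType { get; set; }

            public virtual string    StringValue  { get; set; }
            public virtual int?      IntValue     { get; set; }
            public virtual decimal?  DecimalValue { get; set; }
            public virtual bool?     BoolValue    { get; set; }
            public virtual DateTime? DateValue    { get; set; }

            // Convenience accessor: returns whichever typed column this field actually uses.
            public virtual object Value
            {
                get
                {
                    switch (FieldType)
                    {
                        case DataFieldType.Integer: return IntValue;
                        case DataFieldType.Decimal: return DecimalValue;
                        case DataFieldType.Boolean: return BoolValue;
                        case DataFieldType.Date:    return DateValue;
                        default:                    return StringValue;
                    }
                }
            }
        }

    The redundancy is bounded to a handful of nullable columns, and because values keep their native database types you can index them and run range queries without casting strings in every query.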

    Read the article

  • C++: Shortest and best way to "reinitialize"/clean a class instance

    - by Oper
    I will keep it short and just show you a code example:

        class myClass {
        public:
            myClass();
            int a;
            int b;
            int c;
        };

        // In myClass.cpp or wherever
        myClass::myClass() {
            a = 0;
            b = 0;
            c = 0;
        }

    Okay. Say I now have an instance of myClass and have set some random garbage in a, b and c. What is the best way to reset them all to the state right after the constructor was called, i.e. 0, 0 and 0? I came up with this:

        myClass emptyInstance;
        myUsedInstance = emptyInstance; // Ewww.. code smell?

    Or:

        myUsedInstance.a = 0;
        myUsedInstance.b = 0;
        myUsedInstance.c = 0;

    I think you know what I want; is there any better way to achieve this?

    Read the article

  • PostgreSQL to Data-Warehouse: Best approach for near-real-time ETL / extraction of data

    - by belvoir
    Background: I have a PostgreSQL (v8.3) database that is heavily optimized for OLTP. I need to extract data from it on a semi real-time basis (some-one is bound to ask what semi real-time means and the answer is as frequently as I reasonably can but I will be pragmatic, as a benchmark lets say we are hoping for every 15min) and feed it into a data-warehouse. How much data? At peak times we are talking approx 80-100k rows per min hitting the OLTP side, off-peak this will drop significantly to 15-20k. The most frequently updated rows are ~64 bytes each but there are various tables etc so the data is quite diverse and can range up to 4000 bytes per row. The OLTP is active 24x5.5. Best Solution? From what I can piece together the most practical solution is as follows: Create a TRIGGER to write all DML activity to a rotating CSV log file Perform whatever transformations are required Use the native DW data pump tool to efficiently pump the transformed CSV into the DW Why this approach? TRIGGERS allow selective tables to be targeted rather than being system wide + output is configurable (i.e. into a CSV) and are relatively easy to write and deploy. SLONY uses similar approach and overhead is acceptable CSV easy and fast to transform Easy to pump CSV into the DW Alternatives considered .... Using native logging (http://www.postgresql.org/docs/8.3/static/runtime-config-logging.html). Problem with this is it looked very verbose relative to what I needed and was a little trickier to parse and transform. However it could be faster as I presume there is less overhead compared to a TRIGGER. Certainly it would make the admin easier as it is system wide but again, I don't need some of the tables (some are used for persistent storage of JMS messages which I do not want to log) Querying the data directly via an ETL tool such as Talend and pumping it into the DW ... problem is the OLTP schema would need tweaked to support this and that has many negative side-effects Using a tweaked/hacked SLONY - SLONY does a good job of logging and migrating changes to a slave so the conceptual framework is there but the proposed solution just seems easier and cleaner Using the WAL Has anyone done this before? Want to share your thoughts?

    Read the article

  • Best Practices / Patterns for Enterprise Protection/Remediation of SSNs (Social Security Numbers)

    - by Erik Neu
    I am interested in hearing about enterprise solutions for SSN handling. (I looked pretty hard for any pre-existing post on SO, including reviewing the terrific SO automated "Related Questions" list, and did not find anything, so hopefully this is not a repeat.) First, I think it is important to enumerate the reasons systems/databases use SSNs (note: these are reasons for the de facto current state; I understand that many of them are not good reasons):

    1. Required for interaction with external entities. This is the most valid case: external entities your system interfaces with require an SSN, typically government, tax and financial.
    2. SSN is used to ensure system-wide uniqueness.
    3. SSN has become the default foreign key used internally within the enterprise to perform cross-system joins.
    4. SSN is used for user authentication (e.g., log-on).

    The enterprise solution that seems optimal to me is to create a single SSN repository that is accessed by all applications needing to look up SSN info. This repository substitutes a globally unique, random 9-digit number (ASN) for the true SSN. I see many benefits to this approach. First of all, it is highly backwards-compatible: all your systems "just" have to go through a major, synchronized, one-time data-cleansing exercise where they replace the real SSN with the alternate ASN. Also, it is centralized, so it minimizes the scope for inspection and compliance. (As a negative, it also creates a single point of failure.) This approach would solve issues 2 and 3 without ever requiring lookups to get the real SSN. For issue 1, authorized systems could provide an ASN and be returned the real SSN. This would of course be done over secure connections, and the requesting systems would never persist the full SSN. Also, if the requesting system only needs the last 4 digits of the SSN, then that is all that would ever be passed. Issue 4 could be handled the same way as issue 1, though obviously the best thing would be to move away from having users supply an SSN for log-on. There are a couple of papers on this: UC Berkeley: http://bit.ly/bdZPjQ, Oracle Vault: bit.ly/cikbi1
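
    For illustration only, a minimal sketch of the repository interface described above, with hypothetical names and an in-memory store standing in for what would really be an encrypted, audited database:

        using System;
        using System.Collections.Generic;
        using System.Security.Cryptography;

        // Callers hold only the ASN; resolving it back to a real SSN is a separate call
        // that a production system would restrict and audit.
        public interface ISsnVault
        {
            string GetOrCreateAsn(string ssn);                      // real SSN in, surrogate out
            string ResolveSsn(string asn, string callerSystemId);   // authorized systems only
        }

        public class InMemorySsnVault : ISsnVault
        {
            private readonly Dictionary<string, string> asnBySsn = new Dictionary<string, string>();
            private readonly Dictionary<string, string> ssnByAsn = new Dictionary<string, string>();
            private readonly object gate = new object();

            public string GetOrCreateAsn(string ssn)
            {
                lock (gate)
                {
                    string asn;
                    if (asnBySsn.TryGetValue(ssn, out asn))
                        return asn;
                    do { asn = RandomNineDigits(); } while (ssnByAsn.ContainsKey(asn));
                    asnBySsn[ssn] = asn;
                    ssnByAsn[asn] = ssn;
                    return asn;
                }
            }

            public string ResolveSsn(string asn, string callerSystemId)
            {
                // Authorization checks and audit logging for callerSystemId would go here.
                lock (gate)
                {
                    string ssn;
                    if (!ssnByAsn.TryGetValue(asn, out ssn))
                        throw new KeyNotFoundException("Unknown ASN.");
                    return ssn;
                }
            }

            private static string RandomNineDigits()
            {
                var bytes = new byte[4];
                using (var rng = RandomNumberGenerator.Create())
                    rng.GetBytes(bytes);
                // Slight modulo bias is irrelevant for a surrogate identifier.
                return (BitConverter.ToUInt32(bytes, 0) % 1000000000u).ToString("D9");
            }
        }

    The point of the sketch is the shape of the API: downstream systems key on GetOrCreateAsn results, and only the small set of systems with external reporting obligations ever call ResolveSsn.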

    Read the article

  • What's the best Scala build system?

    - by gatoatigrado
    I've seen questions about IDE's here -- Which is the best IDE for Scala development? and What is the current state of tooling for Scala?, but I've had mixed experiences with IDEs. Right now, I'm using the Eclipse IDE with the automatic workspace refresh option, and KDE 4's Kate as my text editor. Here are some of the problems I'd like to solve: use my own editor IDEs are really geared at everyone using their components. I like Kate better, but the refresh system is very annoying (it doesn't use inotify, rather, maybe a 10s polling interval). The reason I don't use the built-in text editor is because broken auto-complete functionalities cause the IDE to hang for maybe 10s. rebuild only modified files The Eclipse build system is broken. It doesn't know when to rebuild classes. I find myself almost half of the time going to project-clean. Worse, it seems even after it has finished building my project, a few minutes later it will pop up with some bizarre error (edit - these errors appear to be things that were previously solved with a project clean, but then come back up...). Finally, setting "Preferences / Continue launch if project contains errors" to "prompt" seems to have no effect for Scala projects (i.e. it always launches even if there are errors). build customization I can use the "nightly" release, but I'll want to modify and use my own Scala builds, not the compiler that's built into the IDE's plugin. It would also be nice to pass [e.g.] -Xprint:jvm to the compiler (to print out lowered code). fast compiling Though Eclipse doesn't always build right, it does seem snappy -- even more so than fsc. I looked at Ant and Maven, though haven't employed either yet (I'll also need to spend time solving #3 and #4). I wanted to see if anyone has other suggestions before I spend time getting a suboptimal build system working. Thanks in advance! UPDATE - I'm now using Maven, passing a project as a compiler plugin to it. It seems fast enough; I'm not sure what kind of jar caching Maven does. A current repository for Scala 2.8.0 is available [link]. The archetypes are very cool, and cross-platform support seems very good. However, about compile issues, I'm not sure if fsc is actually fixed, or my project is stable enough (e.g. class names aren't changing) -- running it manually doesn't bother me as much. If you'd like to see an example, feel free to browse the pom.xml files I'm using [github]. UPDATE 2 - from benchmarks I've seen, Daniel Spiewak is right that buildr's faster than Maven (and, if one is doing incremental changes, Maven's 10 second latency gets annoying), so if one can craft a compatible build file, then it's probably worth it...

    Read the article

  • Best of both worlds: browser and desktop game?

    - by Ricket
    When considering a platform for a game, I've decided on multi-platform (Win/Lin/Mac) but can't make up my mind as far as browser vs. desktop. As I'm not all too far in development, and now having second thoughts, I'd like your opinion! Browser-based games using Java applets: market penetration is reasonably high (for version 6, it's somewhere around 60% I believe?) using JOGL, 3D performance/quality is decent; certainly good enough to render the crappy 3D graphics that I make there's the (small?) possibility of porting something to Android great for an audience of gamers who switch computers often; can sit down at any computer, load a webpage and play it also great for casual gamers or less knowledgeable gamers who are quite happy with playing games in a browser but don't want to install more things to their computer written in a high-level language which I am more familiar with than C++ - but at the same time, I would like to improve my skills with C++ as it is probably where I am headed in the game industry once I get out of school... easier update process: reload the page. Desktop games using good ol' C++ and OpenGL 100% market penetration, assuming complete cross-platform; however, that number reduces when you consider how many people will go through downloading and installing an executable compared to just browsing to a webpage and hitting "yes" to a security warning. more trouble to maintain the cross-platform; but again, for learning purposes I would embrace the challenge and the knowledge I would gain better performance all around true full screen, whereas browser games often struggle with smooth full screen graphics (especially on Linux, in my experience) can take advantage of distribution platforms such as Steam more likely to be considered a "real" game, whereas browser and Java games are often dismissed as not being real games and therefore not played by "hardcore gamers" installer can be large; don't have to worry so much about download times Is there a way to have the best of both worlds? I love Java applets, but I also really like the reasons to write a desktop game. I don't want to constantly port everything between a Java applet project and a C++ project; that would be twice the work! Unity chose to write their own web player plugin. I don't like this, because I am one of the people that will not install their web player for anything, and I don't see myself being able to convince my audience to install a browser plugin. What are my options? Are there other examples out there besides Unity, of games that have browser and desktop versions? Did I leave out anything in the pro/con lists above?

    Read the article

  • Best practices for displaying large number of images as thumbnails in c#

    - by andySF
    I've got to a point where it's very difficult to get answers by debugging and tracing objects, so I need some help. What I'm trying to do: a history form for my screen-capture pet project. The history must list all images as thumbnails (like Picasa). What I've done: I created a HistoryItem : UserControl. This history item has a few buttons, a check box, a label and a picture box. The buttons are for deleting/editing/copying the image, the check box is used for selecting one or more images, and the label is for some info text. The picture box gets the image from a public property holding a path, and a method creates a proportional thumbnail to display when the control has loaded. This user control has two public events: one for deleting the image and one for bubbling the mouse enter and mouse leave events through all controls, for which I use EventBroadcastProvider. The bubbling is useful because wherever I move the mouse over the control, the buttons appear. The Dispose method has been extended and I manually remove the events. All images are loaded by looping over an XML file that contains the path of every image. For each image in this XML I create a new HistoryItem that is added (after a little code to sort and limit the number of images loaded) to a flow layout panel. The problem: when I launch the history form and the flow layout panel is populated with my HistoryItem custom controls, memory usage increases drastically, from 14 MB to around 100 MB with 100 images loaded. After closing the history form, disposing whatever I could dispose, and even calling GC.Collect(), the memory increase remains. I searched for any object that might not be disposed properly, like an image or event, but everywhere I use them they are disposed. The problem seems to come from multiple sources: one is that the bubbling events are not being removed properly, and the other is the picture box itself. I could see this by cutting the code down to a limited version where only the custom control is loaded, without any image processing or even events. Without the events the memory consumption is reduced by approximately 20%. So my real question is whether this design, flow layout panels and custom controls with picture boxes, is the best solution for displaying large numbers of images as thumbnails. Thank you!
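
    One specific thing worth checking, sketched below under the assumption that each picture box currently keeps a reference to the full-size image: draw a small Bitmap per item and dispose the Image.FromFile result immediately, so only the thumbnail stays alive for the lifetime of the history item. Combined with disposing the thumbnail and unhooking the broadcast events in Dispose, this usually removes most of the growth.

        using System;
        using System.Drawing;
        using System.Drawing.Drawing2D;

        static class ThumbnailLoader
        {
            // Decode the file, scale it down, and dispose the full-size bitmap right away.
            public static Bitmap LoadThumbnail(string path, int maxWidth, int maxHeight)
            {
                using (var full = Image.FromFile(path))
                {
                    double scale = Math.Min((double)maxWidth / full.Width, (double)maxHeight / full.Height);
                    int w = Math.Max(1, (int)(full.Width * scale));
                    int h = Math.Max(1, (int)(full.Height * scale));

                    var thumb = new Bitmap(w, h);
                    using (var g = Graphics.FromImage(thumb))
                    {
                        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                        g.DrawImage(full, 0, 0, w, h);
                    }
                    return thumb;   // caller owns this; dispose it (and clear PictureBox.Image) in Dispose
                }
            }
        }

    A 100-image set of small thumbnails is a few megabytes; 100 full-size screenshots decoded into memory easily explains the jump to ~100 MB, so keeping only the scaled copies is the first thing to verify before redesigning the flow layout panel approach.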

    Read the article

  • What is the best way to create HTML in C# code?

    - by Rodney
    I believe that markup should remain in the markup and not in the code-behind, but I've come to a situation where I think it is acceptable to build the HTML in the code-behind. I'd like to reach some consensus on what the best practices are or should be. When is it acceptable to build HTML in the code-behind? And what is the best method for creating that HTML (for example: strings, StringBuilder, HtmlTextWriter, etc.)?
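
    As one data point for the "best method" part: when the code-behind genuinely has to emit markup, a small sketch using HtmlTextWriter (System.Web.UI) keeps tags balanced and text HTML-encoded, which raw string concatenation does not.

        using System.IO;
        using System.Web.UI;

        static class HtmlSnippets
        {
            // Builds <ul class="tags"><li>...</li>...</ul> without manual string concatenation.
            public static string TagList(params string[] tags)
            {
                using (var sw = new StringWriter())
                using (var writer = new HtmlTextWriter(sw))
                {
                    writer.AddAttribute(HtmlTextWriterAttribute.Class, "tags");
                    writer.RenderBeginTag(HtmlTextWriterTag.Ul);
                    foreach (string tag in tags)
                    {
                        writer.RenderBeginTag(HtmlTextWriterTag.Li);
                        writer.WriteEncodedText(tag);    // encoding guards against markup injection
                        writer.RenderEndTag();           // </li>
                    }
                    writer.RenderEndTag();               // </ul>
                    return sw.ToString();
                }
            }
        }

    StringBuilder is fine for trivial fragments; the writer pays off once nesting, attributes, or user-supplied text are involved.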

    Read the article

  • What's the best Open Source code you've ever seen?

    - by Andrew Theken
    Part of the value of Open Source is to provide great example code to people getting started with a new platform or language. What's the best Open Source code you've encountered, and why do you like your choice? Any language will do, but I'm particularly interested in the best examples of Objective-C you can point out. Obviously this is an open-ended question, so I'll leave the question open for a while and see what kinds of answers we get. Thanks!

    Read the article

  • Unit test: How best to provide an XML input?

    - by TheSilverBullet
    I need to write a unit test which validates the serialization of two attributes of an XML file (size ~30 KB). What is the best way to provide an input for this test? Here are the options I have considered: add the file to the project and use a file reader; pass the contents of the XML as a string; or create the XML through a program and pass it. Which is my best option, and why? If there is another way you think is better, I would love to hear it.
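
    A sketch of the first option with the file compiled in as an embedded resource, so the test needs no file paths or deployment items on the build server; the resource name, attribute names and NUnit syntax here are assumptions for illustration:

        using System.IO;
        using System.Reflection;
        using System.Xml.Linq;
        using NUnit.Framework;

        [TestFixture]
        public class OrderSerializationTests
        {
            private static XDocument LoadTestXml(string resourceName)
            {
                // Set the .xml file's Build Action to "Embedded Resource"; names follow the
                // pattern "<DefaultNamespace>.<Folder>.<FileName>".
                Assembly asm = Assembly.GetExecutingAssembly();
                using (Stream s = asm.GetManifestResourceStream(resourceName))
                using (var reader = new StreamReader(s))
                    return XDocument.Parse(reader.ReadToEnd());
            }

            [Test]
            public void Serializer_preserves_the_two_attributes()
            {
                XDocument doc = LoadTestXml("MyTests.TestData.SampleOrder.xml");

                Assert.AreEqual("42",   (string)doc.Root.Attribute("id"));
                Assert.AreEqual("2010", (string)doc.Root.Attribute("year"));
            }
        }

    At ~30 KB the file is too long to keep readable as an inline string literal, which is the main argument for the resource approach over option two.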

    Read the article

  • BonitaSoft wins an OW2 Best Project Award, a prize that caps an excellent 2012 for the French open-source BPM

    BonitaSoft wins an OW2 Best Project Award 2012, a prize that caps an excellent 2012 for the French open-source BPM. OW2, the international community for open-source middleware and application platforms, has announced the results of its first OW2 Best Project Awards. Among the four innovative open-source projects honored this year, the French BPM Bonita Open Solution appears in the "Market" category. The award caps a year full of success for the French vendor (see also "BonitaSoft passes one and a half million downloads").

    Read the article

  • Best Method to SFTP or FTPS Files via SSIS

    - by Registered User
    What is the best method, using SSIS (SQL Server Integration Services), to upload a file to either a remote SFTP (secure FTP with the SSH2 protocol) or FTPS (FTP over SSL) site? I've used the following methods, but each has shortcomings I would like to avoid:

    COZYROC LIBRARY. Method: install the CozyRoc library on each development and production server and use the SFTP task to upload the files. Pros: easy to use; it looks, smells, and feels like a normal SSIS task; SSIS also recognizes the password as sensitive information and allows you all the normal options for protecting it instead of just storing it in clear text in a non-secure manner; works well with other SSIS tasks such as ForEach Loop containers; errors out when uploads and downloads fail; works well when you don't know the names of the files on the remote FTP site to download, or when you won't know the name of the file to upload until run-time. Cons: costs money to license in a production environment; makes you dependent on the vendor to update their libraries between each version (although they already have a 2008 version, this caused me a problem during the CTPs of 2008); requires installing the libraries on each development and production machine.

    COMMAND LINE SFTP PROGRAM. Method: install a free command-line SFTP application such as Putty and execute it either by running a batch file or an operating system process task. Pros: free, free, and free; you can be sure it is secure if you are using Putty, since numerous GUI FTP clients appear to use Putty under the covers; you DEFINITELY know you are using SSH2 and not SSH. Cons: the two command-line utilities I tried (Putty and Cygwin) required storing the SFTP password in a non-secure location; I haven't found a good way to capture failures or errors when uploading files; the process doesn't look and smell like SSIS, since most of the code is encapsulated in text files instead of SSIS itself; difficult to use if you don't know the exact name of the file you are uploading or downloading.

    A 3RD PARTY C# OR VB.NET LIBRARY. Method: install an SFTP or FTPS library and use a Script Task that references the library to upload the files. (I've never tried this, so I'm going to guess at the pros and cons.) Pros: probably easy to capture errors; should work well with variables, so it would probably be easy to use even when you don't know the exact name of the file you are uploading or downloading. Cons: it's a script task combined with .NET libraries; if you are using SSIS, then you are probably more comfortable with SSIS tasks than .NET code, and script tasks are also difficult to troubleshoot since they don't have the same debugging tools and features as regular .NET projects. It also creates a dependency on 3rd-party code that may not work between different versions of SQL Server (to be fair, it is probably MORE likely to work between different versions of SQL Server than a 3rd-party SSIS task library). Another huge con: I haven't found a free C# or VB.NET library that does this as of yet, so if anyone knows of one, then please let me know!
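
    For the third option, a rough sketch of what the Script Task code can look like with a free SFTP library such as SSH.NET (Renci.SshNet); this assumes the assembly is referenced from the Script Task and that the password is supplied from a sensitive SSIS variable rather than hard-coded:

        using System.IO;
        using Renci.SshNet;

        static class SftpUpload
        {
            public static void Upload(string host, string user, string password,
                                      string localPath, string remotePath)
            {
                using (var client = new SftpClient(host, user, password))
                {
                    client.Connect();
                    using (FileStream fs = File.OpenRead(localPath))
                        client.UploadFile(fs, remotePath);   // throws on failure, so the task fails visibly
                    client.Disconnect();
                }
            }
        }

    Because the library throws on connection or transfer errors, the Script Task fails and SSIS error handling takes over, which addresses the error-capture concern raised for the command-line approach.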

    Read the article

  • Best way to add an extra (nested) form in the middle of a tabbed form

    - by Scharrels
    I've got a web application, consisting mainly of a big form with information. The form is split into multiple tabs, to make it more readable for the user: <form> <div id="tabs"> <ul> <li><a href="#tab1">Tab1</a></li> <li><a href="#tab2">Tab2</a></li> </ul> <div id="tab1">A big table with a lot of input rows</div> <div id="tab2">A big table with a lot of input rows</div> </div> </form> The form is dynamically extended (extra rows are added to the tables). Every 10 seconds the form is serialized and synchronized with the server. I now want to add an interactive form on one of the tabs: when a user enters a name in a field, this information is sent to the server and an id associated with that name is returned. This id is used as an identifier for some dynamically added form fields. A quick sketchup of such a page would look like this: <form action="bigform.php"> <div id="tabs"> <ul> <li><a href="#tab1">Tab1</a></li> <li><a href="#tab2">Tab2</a></li> </ul> <div id="tab1">A big table with a lot of input rows</div> <div id="tab2"> <div class="associatedinfo"> <p>Information for Joe</p> <ul> <li><input name="associated[26][]" /></li> <li><input name="associated[26][]" /></li> </ul> </div> <div class="associatedinfo"> <p>Information for Jill</p> <ul> <li><input name="associated[12][]" /></li> <li><input name="associated[12][]" /></li> </ul> </div> <div id="newperson"> <form action="newform.php"> <p>Add another person:</p> <input name="extra" /><input type="submit" value="Add" /> </form> </div> </div> </div> </form> The above will not work: nested forms are not allowed in HTML. However, I really need to display the form on that tab: it's part of the functionality of that page. I also want the behaviour of a separate form: when the user hits return in the form field, the "Add" submit button is pressed and a submit action is triggered on the partial form. What is the best way to solve this problem?

    Read the article

  • Looking for best practice for writing a serial device communication app in C#

    - by cdotlister
    I am pretty new to serial comms, but would like advice on how best to achieve a robust application which speaks to and listens to a serial device. I have managed to make use of System.IO.Ports.SerialPort, and successfully connected to, sent data to and received data from my device. The way things work is this: my application connects to the COM port and opens it. I then connect my device to the COM port; it detects a connection to the PC and sends a bit of text. It's really just copyright info, as well as the version of the firmware. I don't do anything with that except display it in my 'activity' window. The device then waits. I can then query information by sending a command such as 'QUERY PARAMETER1'. It replies with something like 'QUERY PARAMETER1\r\n\r\n76767\r\n\r\n', which I then process. I can update it by sending 'SET PARAMETER1 12345', and it will reply with 'QUERY PARAMETER1\r\n\r\n12345\r\n\r\n'. All pretty basic. So, what I have done is create a communication class. This class runs in its own thread, sends data back to the main form, and also allows me to send messages to it. Sending data is easy; receiving is a bit more tricky. I have employed the DataReceived event, and whenever data comes in, I echo it to my screen. My problem is this: when I send a command, I feel I am being very dodgy in my handling. Let's say I am sending 'QUERY PARAMETER1'. I send the command to the device, put 'PARAMETER1' into a global variable, and do a Thread.Sleep(100). In the DataReceived handler, I have a bit of logic that checks whether the incoming data CONTAINS the value in the global variable. As the reply may be 'QUERY PARAMETER1\r\n\r\n76767\r\n\r\n', it sees that it contains my parameter, parses the string, and returns the value I am looking for by placing it into another global variable. My sending method sleeps for 100 ms, then wakes and checks the returned global variable. If it has data, then I'm happy and I process it. The problem is that if the sleep is too short, it will fail, and it feels flaky: putting stuff into variables, then waiting. The other option is to use ReadLine instead, but that blocks. So I remove the DataReceived handler and instead just send the data, then call ReadLine(). That may give me better results. Apart from when we connect initially, the device never sends data without me requesting it. So maybe ReadLine will be simpler and safer? Is this what is known as 'blocking' reads? Also, can I set a timeout? Hopefully someone can guide me.
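
    On the last questions: yes, ReadLine() is a blocking read, and SerialPort.ReadTimeout controls how long it blocks before throwing a TimeoutException, so the Sleep-and-check-a-global pattern is not needed. A minimal sketch of the synchronous query style, assuming the echo/blank-line/value reply shape shown above and 9600 baud (retry logic left out):

        using System;
        using System.IO.Ports;

        class DeviceLink : IDisposable
        {
            private readonly SerialPort port;

            public DeviceLink(string portName)
            {
                port = new SerialPort(portName, 9600)
                {
                    NewLine = "\r\n",
                    ReadTimeout = 2000,    // ms: fail loudly instead of guessing a Sleep() length
                    WriteTimeout = 2000
                };
                port.Open();
            }

            // Send "QUERY PARAMETER1" and block until the value line arrives or the timeout fires.
            public string Query(string parameter)
            {
                port.DiscardInBuffer();            // drop any stale unsolicited text (e.g. the banner)
                port.WriteLine("QUERY " + parameter);

                string echo  = port.ReadLine();    // "QUERY PARAMETER1"
                string blank = port.ReadLine();    // ""
                string value = port.ReadLine();    // "76767"
                return value.Trim();
            }

            public void Dispose()
            {
                if (port.IsOpen) port.Close();
                port.Dispose();
            }
        }

    Since the device only ever speaks when spoken to (apart from the connection banner), this request/response style is simpler and more deterministic than juggling DataReceived plus shared variables; a TimeoutException tells you immediately that the device did not answer.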

    Read the article
