Search Results

Search found 11195 results on 448 pages for 'disconnected environment'.


  • Multithreading for loop while maintaining order

    - by David
    I started messing around with multithreading for a CPU-intensive batch process I'm running. Essentially I'm trying to condense multiple single-page tiffs into single PDF documents. This works fine with a foreach loop or standard iteration, but can be very slow for documents of several hundred pages. I tried the following, based on some examples I found, to use multithreading, and it gives significant performance improvements; however it obliterates the page order: instead of 1, 2, 3, 4 it will be 1, 3, 4, 2, 6, 5, depending on which thread completes first. My question is: how would I utilize this technique while maintaining the page order, and if I can, will it negate the performance benefit of the multithreading? Thank you in advance.

        PdfDocument doc = new PdfDocument();
        string mail = textBox1.Text;
        string[] split = mail.Split(new string[] { Environment.NewLine }, StringSplitOptions.None);
        int counter = split.Count();

        // Source must be array or IList.
        var source = Enumerable.Range(0, 100000).ToArray();

        // Partition the entire source array.
        var rangePartitioner = Partitioner.Create(0, counter);
        double[] results = new double[counter];

        // Loop over the partitions in parallel.
        Parallel.ForEach(rangePartitioner, (range, loopState) =>
        {
            // Loop over each range element without a delegate invocation.
            for (int i = range.Item1; i < range.Item2; i++)
            {
                f_prime = split[i].Replace(" ", "");
                PdfPage page = doc.AddPage();
                XGraphics gfx = XGraphics.FromPdfPage(page);
                XImage image = XImage.FromFile(f_prime);
                double x = 0;
                gfx.DrawImage(image, x, 0);
            }
        });
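    A minimal sketch of one way to keep the order, assuming the PDFsharp types used in the question (PdfDocument, XGraphics, XImage), assuming the expensive step is loading and decoding the tiffs rather than drawing them, and assuming XImage.FromFile can safely be called from several threads (worth verifying). The idea: do the independent per-page work in parallel into an array indexed by page number, then add pages to the shared document sequentially, since page order is determined by the order of AddPage calls. This is not the asker's final code; if drawing dominates, or memory becomes a problem for very large documents, the parallel phase buys less.

        // Sketch only: parallelise work that does not touch the shared PdfDocument,
        // then assemble pages in index order so they come out 1, 2, 3, ...
        string[] files = split.Select(s => s.Replace(" ", "")).ToArray();
        XImage[] images = new XImage[files.Length];

        // Parallel phase: each slot i is written exactly once, so order is preserved.
        Parallel.For(0, files.Length, i =>
        {
            images[i] = XImage.FromFile(files[i]);
        });

        // Sequential phase: AddPage fixes the page order, so keep it single-threaded.
        PdfDocument doc = new PdfDocument();
        for (int i = 0; i < images.Length; i++)
        {
            PdfPage page = doc.AddPage();
            using (XGraphics gfx = XGraphics.FromPdfPage(page))
            {
                gfx.DrawImage(images[i], 0, 0);
            }
            images[i].Dispose();
        }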

    Read the article

  • PHP-How to choose XML section based on an attribute?

    - by Vincent
    All, I have a config xml file in the following format:

        <?xml version="1.0"?>
        <configdata>
            <development>
                <siteTitle>You are doing Development</siteTitle>
            </development>
            <test extends="development">
                <siteTitle>You are doing Testing</siteTitle>
            </test>
            <production extends="development">
                <siteTitle>You are in Production</siteTitle>
            </production>
        </configdata>

    To read this config file and apply environment settings, I am currently using the following code in the index.php file:

        $appEnvironment = "production";
        $config = new Zend_Config_Xml('/config/settings.xml', $appEnvironment);

    To deploy this code to multiple environments, a user has to change the index.php file. Instead of doing that, is it possible to maintain an attribute in the xml file, say active="true", based on which Zend_Config_Xml will know which section of the settings file to read? Thanks

    Read the article

  • Phonegap: Will my mobile app 'feel' faster or slower once ported to phonegap?

    - by user15872
    So I'm designing everything in mobile Safari and I know that PhoneGap is essentially a stripped webview but...

    Question: Will my application run better in PhoneGap? (revised below)
    a) I imagine my navigation and core app will load faster as the scripts and images are on the hard drive. Is this true?
    b) I assume, since they've been working on it for 2 years now, that they may have made some optimizations to make it quicker than just an average Safari window. Is this true? (Assuming both html5/js/css code bases are pretty much the same and the app is running on iOS.)

    Update: Sorry, I meant to compare apples to slightly different apples.
    Question 1 revised: Will my app see any performance benefits running within a PhoneGap environment vs standard mobile Safari? (comparing mobile to mobile)
    1b) In what ways, other than loading time, has PhoneGap optimized performance over standard mobile Safari?

    Follow ups:
    1) Are there any pitfalls, other than large libraries, that may cause PhoneGap to suffer a serious performance hit vs standard mobile Safari?
    2) Can I mix native and webview rendering? (i.e. the top half of my app is rendered with html/css/js and the bottom half native)

    Read the article

  • How to skip certain tests with Test::Unit

    - by Daniel Abrahamsson
    In one of my projects I need to collaborate with several backend systems. Some of them are somewhat lacking in documentation, and partly for that reason I have some test code that interacts with some test servers just to see that everything works as expected. However, accessing these servers is quite slow, and therefore I do not want to run these tests every time I run my test suite. My question is how to deal with a situation where you want to skip certain tests. Currently I use an environment variable 'BACKEND_TEST' and a conditional statement which checks if the variable is set for each test I would like to skip. But sometimes I would like to skip all tests in a test file without having to add an extra line at the beginning of each test. The tests which have to interact with the test servers are not many, as I use flexmock in other situations. However, you can't mock yourself away from reality. As you can see from this question's title, I'm using Test::Unit. Additionally, if it makes any difference, the project is a Rails project.

    Read the article

  • Time out when creating a site collection

    - by Daeko
    I am trying to create a site collection programmatically. It has worked for about 6 months, but after the servers were updated (various patches) it doesn't work anymore (we have 3 servers: 1 development, 1 test, 1 production). It is still working in my development environment, which hasn't been updated, but not on the two others. I don't receive any error messages; it just hangs at the code that is supposed to add the site collection (see below). I am using Windows Server 2003 R2 and SharePoint 2007 (version 12.0.0.6421). It doesn't give me any errors, it just hangs until Internet Explorer comes back with a "request timed out" response. If I try to debug the code, it just stops there and nothing happens. No error messages or anything.

        public static string CreateSPAccountSite(string siteName)
        {
            string url = "";
            SPSecurity.RunWithElevatedPrivileges(delegate()
            {
                SPWeb web = SPContext.Current.Web;
                using (SPSite siteCollectionOuter = new SPSite(web.Site.ID))
                {
                    SPWebApplication webApp = siteCollectionOuter.WebApplication;
                    SPSiteCollection siteCollection = webApp.Sites;
                    SPSite site = siteCollection.Add("sites/" + siteName, siteName,
                        "Auto generated Site collection.", 1033, "STS#0",
                        siteCollectionOuter.Owner.LoginName,
                        siteCollectionOuter.Owner.Name,
                        siteCollectionOuter.Owner.Email); // Hangs here
                    site.PortalName = "Portal";
                    site.PortalUrl = mainUrl; // https://www.ourdomain.net
                    url = site.Url;
                }
            });
            return url; // Should be "https://www.ourdomain.net/sites/siteName"
        }

    Read the article

  • Is the Subversion 'stack' a realistic alternative to Team Foundation Server?

    - by Robert S.
    I'm evaluating Microsoft Team Foundation Server for my customer, who currently uses Visual SourceSafe and nothing else. They have explicitly expressed a desire to implement a more rigid and process-driven environment as their application is in production and they have future releases to consider. The particular areas I'm trying to cover are:

    - Configuration management (e.g., source control)
    - Change management (workflow and doco for change requests, tasks)
    - Release management (builds and deployments)
    - Incident and problem management (issues and bugs)
    - Document management (similar to source control, but available via web)
    - Code analysis constraints on check-ins
    - A testing framework
    - Reporting
    - Visual Studio 2008 integration

    TFS does all of these things quite well, but it's expensive and complex to maintain, and the inexpensive Workgroup edition doesn't scale. We don't get TFS as part of our MSDN subscription. Those problems can be overcome, but before I tell my customer to go the TFS route, which in itself isn't a terrible thing, I wanted to evaluate the alternatives. I know Subversion is often suggested for its configuration management/source control, but what about the other areas? Would a combination of Subversion/NUnit/Wiki/CruiseControl/NAnt/something else satisfy all of these requirements? What tools do I need to include in my evaluation? Or should I just bite the bullet and go with TFS since we're already invested in the Microsoft stack?

    Read the article

  • how to seamlessly integrate subversion and git?

    - by mattv
    I'm looking for tips on how to seamlessly integrate subversion and git, for deploying web sites by a small team of web developers. We each have our own development versions of our sites on our local machines. We also have dev, staging, and live servers. As our team has grown, we haven't updated our revision control and deployment strategies accordingly. We had all been checking into the trunk of a shared Subversion repository. Both the dev & staging servers ran from a checkout of the trunk, so updating them involved running "svn update" while the live server ran as an export from trunk which required an "svn export" to get the latest code. In either case, we would often update just certain files by updating or exporting just those files or directories. That worked okay when there was just one or two developers. However, a big downside was that we couldn't point to an individual tag that represented what was currently on live at any given time. In keeping with corporate policy, we'd like to continue to use Subversion to store what we're now calling our "production branch," which will be what goes onto staging and live. However, we would like to use Git on our local and development sites. We especially like the idea of easier merges and being able to "cherry pick" updates that need to go live. We had initially planned on using git-svn, but it doesn't seem to work well in a shared environment such as our dev or staging servers. Anyone else doing something like this? What's the best way to make it work? Or are we making it more difficult than it should be?

    Read the article

  • ASP.NET: how can I compile in DEBUG mode?

    - by Budda
    AFAIK, a usual ASP.NET web site/web application is switched into DEBUG mode when the web/app config "compilation" setting has debug="true". But I don't clearly understand how it works. Let's consider an example: I have <compilation debug="true" /> and I've added the following line to the "Page_Load" method:

        System.Diagnostics.Debug.WriteLine("Page_Load");

    When I launched the web site in 'debug' mode (using the F5 button) it printed 'Page_Load' in the output window. Everything is ok so far. When I change the application compilation mode to non-debug: Will it recompile everything for 'non-debug' mode? Is this enough to go into a "production" environment with this change only? I guess it should be enough for a web site that doesn't use other projects. Otherwise, I would rather switch the whole configuration into "Release" mode; in this case EACH project will be recompiled in "Release" mode. Am I right? Could you please point out if something is wrong here? Thanks a lot!
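    A small illustration (not from the question) of one reason the output-window behaviour differs between builds: the System.Diagnostics.Debug methods are marked [Conditional("DEBUG")], so the compiler removes calls to them whenever the DEBUG symbol is not defined, which is the usual case for a Release build. The <compilation debug="..."> switch is a separate knob that controls whether ASP.NET compiles pages with debugging enabled.

        using System;
        using System.Diagnostics;

        class ConditionalCompilationDemo
        {
            static void Main()
            {
                // Kept only when the DEBUG symbol is defined (a project's Debug
                // configuration, or csc /define:DEBUG). In a Release build without
                // DEBUG, the compiler drops this call entirely.
                Debug.WriteLine("Page_Load");

                // Trace.WriteLine is controlled by the TRACE symbol instead, which is
                // commonly defined for both Debug and Release configurations.
                Trace.WriteLine("Traced whenever TRACE is defined");

        #if DEBUG
                Console.WriteLine("This block only exists in DEBUG builds.");
        #endif
            }
        }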

    Read the article

  • What rules govern cross-version compatibility for .NET applications and the C# language?

    - by John Feminella
    For some reason I've always had trouble remembering the backwards/forwards compatibility guarantees made by the framework, so I'd like to put that to bed forever. Suppose I have two assemblies, A and B. A is older and references .NET 2.0 assemblies; B references .NET 3.5 assemblies. I have the source for A and B, Ax and Bx, respectively; they are written in C# at the 2.0 and 3.0 language levels. (That is, Ax uses no features that were introduced later than C# 2.0; likewise Bx uses no features that were introduced later than 3.0.) I have two environments, C and D. C has the .NET 2.0 framework installed; D has the .NET 3.5 framework installed. Now, which of the following can/can't I do?

    Running:
    - run A on C?
    - run A on D?
    - run B on C?
    - run B on D?

    Compiling:
    - compile Ax on C?
    - compile Ax on D?
    - compile Bx on C?
    - compile Bx on D?

    Rewriting:
    - rewrite Ax to use features from the C# 3 language level, and compile it on D, while having it still work on C?
    - rewrite Bx to use features from the C# 4 language level on another environment E that has .NET 4, while having it still work on D?

    Referencing from another assembly:
    - reference B from A and have a client app on C use it?
    - reference B from A and have a client app on D use it?
    - reference A from B and have a client app on C use it?
    - reference A from B and have a client app on D use it?

    More importantly, what rules govern the truth or falsity of these hypothetical scenarios?

    Read the article

  • SQL Scenario of allocating ids to user

    - by Enjoy coding
    Hi, I have an SQL scenario as follows which I have been trying to improve. There is a table 'Returns' which holds the IDs of the returned goods against a shop for an item. Its structure is as below.

        Returns
        -------------------------
        Return ID | Shop  | Item
        -------------------------
        1           Shop1   Item1
        2           Shop1   Item1
        3           Shop1   Item1
        4           Shop1   Item1
        5           Shop1   Item1

    There is one more table, Supplier, with Shop, Supplier and Item as shown below.

        Supplier
        ---------------------------------
        Supplier | Shop  | Item  | Volume
        ---------------------------------
        supp1      Shop1   Item1   20%
        supp2      Shop1   Item1   80%

    Now as you see, supp1 supplies 20% of the total Item1 volume and supp2 supplies 80% of Item1 to Shop1. And there were 5 returns of items against the same Item1 for the same Shop1. Now I need to allocate any four return IDs to supp1 and the remaining one return ID to supp2. This allocation of numbers is based on the ratio of the supplied volume percentage of the supplier, so the allocation varies depending on the ratio of the volume of supplied items. I have tried a method using RANKs with temp tables: temp table 1 has Shop, Return ID, Item, the total count of return IDs and the rank of the return ID; temp table 2 has Shop, Supplier, Item, the supplier's proportion and the rank of the proportion. Now I am facing difficulty in allocating the top return IDs to the top supplier as illustrated above. As SQL doesn't have loops, how can I achieve this? I have been trying several ways of doing this. Please advise. My environment is Teradata (ANSI SQL is enough). Thanks in advance.

    Read the article

  • How to avoid concurrent execution of a time-consuming task without blocking?

    - by Diego V
    I want to efficiently avoid concurrent execution of a time-consuming task in a heavily multi-threaded environment, without making threads wait for a lock when another thread is already running the task. Instead, in that scenario, I want them to gracefully fail (i.e. skip the attempt to execute the task) as fast as possible. To illustrate the idea, consider this unsafe (it has a race condition!) code:

        private static boolean running = false;

        public void launchExpensiveTask() {
            if (running) return; // Do nothing
            running = true;
            try {
                runExpensiveTask();
            } finally {
                running = false;
            }
        }

    I thought about using a variation of Double-Checked Locking (consider that running is a primitive 32-bit field, hence atomic; it could work fine even for Java below 5 without the need for volatile). It could look like this:

        private static boolean running = false;

        public void launchExpensiveTask() {
            if (running) return; // Do nothing
            synchronized (ThisClass.class) {
                if (running) return;
                running = true;
                try {
                    runExpensiveTask();
                } finally {
                    running = false;
                }
            }
        }

    Maybe I should also use a local copy of the field as well (not sure now, please tell me). But then I realized that anyway I will end up with an inner synchronization block, which could still hold a thread with the right timing at the monitor entrance until the original executor leaves the critical section (I know the odds are usually minimal, but in this case we are thinking of several threads competing for this long-running resource). So, can you think of a better approach?
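    The question's code is Java; sketched below in C# (for consistency with the rest of this page) is the usual non-blocking try-acquire pattern built on an atomic compare-and-swap. In Java, java.util.concurrent.atomic.AtomicBoolean.compareAndSet plays the same role as Interlocked.CompareExchange here: callers that lose the race return immediately instead of parking on a monitor. The class and method names are placeholders.

        using System;
        using System.Threading;

        class ExpensiveTaskGate
        {
            // 0 = idle, 1 = running. Interlocked gives an atomic read-modify-write
            // without ever blocking callers that lose the race.
            private static int running = 0;

            public void LaunchExpensiveTask()
            {
                // Atomically set running to 1 only if it is currently 0. The method
                // returns the original value, so anything other than 0 means another
                // thread is already running the task: fail fast and return.
                if (Interlocked.CompareExchange(ref running, 1, 0) != 0)
                    return;

                try
                {
                    RunExpensiveTask();
                }
                finally
                {
                    // Publish "idle" so the next caller can take the slot.
                    Interlocked.Exchange(ref running, 0);
                }
            }

            private void RunExpensiveTask()
            {
                // Placeholder for the long-running work.
                Thread.Sleep(TimeSpan.FromSeconds(5));
            }
        }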

    Read the article

  • What could the negative effects be of attaching to a process as a debugger?

    - by I_like_traffic_lights
    Background
    A client of mine has a major problem. They have a CRM system which was created by a single person over a period of 9 years. Unfortunately, a few weeks ago, this person died. I believe the company has learned their lesson, and they have started a project to rewrite the CRM system on a modern platform. I have been hired to create a solution in the meantime to make adaptations to the CRM system. I have given up understanding the code, as this would take too long. My solution, therefore, is to make a window and show it on top of the CRM system whenever the CRM system is showing. This part works fine, but my major problem is extracting the data from the CRM system.

    Proposed solution
    After excluding 6 approaches, including runtime code injection, memory searching, and database integration, I have arrived at attaching to the process as a debugger, so that I get notified about events, and using this in combination with reading from process memory. This approach seems to work, but I am worried about possible side effects of it.

    Question
    What are the dangers of using this in a production environment where there are 250 employees utilizing the system? Needless to say, I cannot risk reducing the already shaky stability of the system.

    Read the article

  • Is it a wise decision to go from dev to third line/Tier 3 dev support?

    - by dotnetdev
    Hi, I am an experienced, mid-level developer. However, I recently spotted a job at a company which is small but puts a lot of emphasis on training (beyond the basic technical training, also mentoring, leadership training, etc). The role is 3rd line, so still very technical. It's in app support, so it's post-implementation development rather than pure out-and-out development like I do now (or don't, as the senior devs do all of the interesting work). However, and this is the question: is this sort of career move common? Also, wouldn't a tech support role be a big shock to the system because I've never dealt with customers? I therefore think it's a bad move. Working in dev, I am used to the lack of customer contact, and it is all filtered through by my manager. But in tech support, contacting customers/rude customers could be scary. I don't mind fixing other people's mistakes (better than me making mistakes!) and doing post-implementation dev for production systems (it will give me a lot of discipline), and I do get bored sitting in the same place looking at/talking to the same people in suits (I work in a corporate environment). The company puts A LOT of emphasis on training and prospects, which I don't get at the current (big) company I work at. Any advice on how to handle tech support is appreciated! Thanks

    Read the article

  • Domain model for an online WYSYWG webpage generator / runtime

    - by CharlieBrown
    Hi all, I'm using C#, MVC, NHibernate and StructureMap as my IoC container, and need some ideas regarding my domain model. The application I'm working on has two parts: an Authoring part and a Runtime part. The idea is to allow the user to create a webpage in Authoring (mostly a form, actually) by choosing from a set of predefined controls. That webpage will later be used as a form in a call center environment (the Runtime part), or may be used in an intranet portal, etc. Basically something similar to what a CMS would do. The difference is, of course, that the webpage/form the author generates will be used and filled in at runtime, and that authors should be able to freely create the webpage they want without limitations. I have a draft working model that allows a RunController to iterate over the ScriptPage (my class for the "generated webpage") Controls collection and uses partial views to render each of them. It works kind of fine. Basically I have a common ScriptControl class, and then I can create for example a TextInputControl or a DropDownControl by inheriting from that base class. I can also figure out the Authoring part of the app, although that will surely be fun in itself. :) The biggest problem I have now is persistence. In order to be flexible, I want to be able to add more controls, and template controls (think of an Address composite control), in separate DLLs, so I think having a relational model that handles every possible control is not the way to go. My current thinking is using a kind of ObjectStore: binary-serializing the ScriptPage object that contains the List collection and deserializing it at Runtime, but I'm not sure how well that will work with NHibernate and how good the performance will be. Serializing a small "page" with 10 controls results in 7964 bytes, for example. Any ideas out there? Thanks in advance; excuse the length. ;)
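    A minimal sketch of the "object store" idea described above, under the assumption that the controls are marked [Serializable] and that the page keeps them in a List of ScriptControl; the class names mirror the question, but the properties and the NHibernate mapping details are hypothetical. The page serializes its control list into a single byte array, which NHibernate maps as one BLOB column, and deserializes it at Runtime.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        [Serializable]
        public abstract class ScriptControl
        {
            public string Name { get; set; }   // hypothetical common property
        }

        [Serializable]
        public class TextInputControl : ScriptControl
        {
            public string DefaultValue { get; set; }
        }

        public class ScriptPage
        {
            public virtual int Id { get; set; }
            public virtual List<ScriptControl> Controls { get; set; }

            // Single blob column mapped by NHibernate (e.g. varbinary(max)); mapping not shown.
            public virtual byte[] SerializedControls { get; set; }

            public virtual void PackControls()
            {
                using (MemoryStream ms = new MemoryStream())
                {
                    new BinaryFormatter().Serialize(ms, Controls);
                    SerializedControls = ms.ToArray();
                }
            }

            public virtual void UnpackControls()
            {
                using (MemoryStream ms = new MemoryStream(SerializedControls))
                {
                    Controls = (List<ScriptControl>)new BinaryFormatter().Deserialize(ms);
                }
            }
        }

    One design note: binary serialization is tied to field and type names, so if the control DLLs are expected to evolve independently, an XML or similar text representation is usually more tolerant of version changes than a binary blob.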

    Read the article

  • How can I change the default startup directory for cmd.exe?

    - by Nano HE
    Hi. My procedure yesterday was as follows:

    1. Click Start, Run and type Regedit.exe.
    2. Navigate to the following branch: HKEY_CURRENT_USER \ Software \ Microsoft \ Command Processor.
    3. In the right pane, double-click Autorun and set the startup folder path as its data, preceded by "CD /d ". If the Autorun value is missing, you need to create one, of type REG_EXPAND_SZ or REG_SZ, in the above location. Example: To set the startup directory to D:\learning\perl, set the Autorun value data to CD /d D:\learning\perl.

    Then I clicked Start, Run and typed cmd. It worked successfully, and I could do my Perl practice more conveniently. But today, I find that when I try to build my Visual Studio 2005 solution, which includes some pre-build event commands like this:

        perl.exe MyAppVersion.pl
        perl.exe AttrScan.pl

    it doesn't work. It shows the error: can't find the path. I checked the environment variable settings and found that the path variable and its value, c:\perl\bin\;, still exist. Finally, I tried removing the Regedit.exe "Autorun" value and tested again. The issue was fixed. I only changed the default startup directory for the cmd.exe command. Why was the pre-build event perl command impacted? (I am using WinXP and ActivePerl 5.8)

    Read the article

  • Why does Perl lose foreign characters on Windows; can this be fixed (if so, how)?

    - by Alex R
    Note below how ã changes to a. NOTE 2: Before you blame this on CMD.EXE and Windows pipe weirdness, see Experiment 2 below, which gets a similar problem using File::Find. The particular problem I'm trying to fix involves working with image files stored on a local drive, and manipulating the file names, which may contain foreign characters. The two experiments shown below are intermediate debugging steps. The ã character is common in Latin languages, e.g. http://pt.wikipedia.org/wiki/Cão

    Experiment 1 (screenshot of the piped-input run; not included in this excerpt)

    Experiment 2 (screenshot not included in this excerpt): To get around my particular problem, I tried using File::Find instead of piped input. The issue actually gets worse.

    Debugging update: I tried some of the tricks listed at http://perldoc.perl.org/perlunicode.html, e.g. use utf8, use feature 'unicode_strings', etc, to no avail.

    Environment and Version Info: The OS is Windows 7, 64-bit. The Perl is:

        This is perl 5, version 12, subversion 2 (v5.12.2) built for MSWin32-x64-multi-thread
        (with 8 registered patches, see perl -V for more detail)
        Copyright 1987-2010, Larry Wall
        Binary build 1202 [293621] provided by ActiveState http://www.ActiveState.com
        Built Sep 6 2010 22:53:42

    Read the article

  • Help me convert C# 1.1 Xml validation code to C# 2.0 please.

    - by Hamish Grubijan
    It would be fantastic if you could help me get rid of the warnings below. I have not been able to find a good document. Since the warnings are concentrated in just the private void ValidateConfiguration(XmlNode section) method, hopefully this is not terribly hard to answer if you have encountered it before. Thanks!

    The warnings:

        'System.Configuration.ConfigurationException.ConfigurationException(string)' is obsolete: 'This class is obsolete, to create a new exception create a System.Configuration!System.Configuration.ConfigurationErrorsException'
        'System.Xml.XmlValidatingReader' is obsolete: 'Use XmlReader created by XmlReader.Create() method using appropriate XmlReaderSettings instead. http://go.microsoft.com/fwlink/?linkid=14202'

    The code:

        private void ValidateConfiguration( XmlNode section )
        {
            // throw if there is no configuration node.
            if( null == section )
            {
                throw new ConfigurationException("The configuration section passed within the ... class was null ... there must be a configuration file defined.", section );
            }

            // Validate the document using a schema
            XmlValidatingReader vreader = new XmlValidatingReader( new XmlTextReader( new StringReader( section.OuterXml ) ) );

            // open stream on Resources; the XSD is set as an "embedded resource" so Resource can open a stream on it
            using (Stream xsdFile = XYZ.GetStream("ABC.xsd"))
            using (StreamReader sr = new StreamReader(xsdFile))
            {
                vreader.ValidationEventHandler += new ValidationEventHandler(ValidationCallBack);
                vreader.Schemas.Add(XmlSchema.Read(new XmlTextReader(sr), null));
                vreader.ValidationType = ValidationType.Schema;

                // Validate the document
                while (vreader.Read()) { }

                if (!_isValidDocument)
                {
                    _schemaErrors = _sb.ToString();
                    throw new ConfigurationException("XML Document not valid");
                }
            }
        }

        // Does not cause warnings.
        private void ValidationCallBack( object sender, ValidationEventArgs args )
        {
            // check what KIND of problem the schema validation reader has;
            // on FX 1.0, it gives a warning for "<xs:any...skip" sections. Don't worry about those,
            // only set validation false for real errors
            if( args.Severity == XmlSeverityType.Error )
            {
                _isValidDocument = false;
                _sb.Append( args.Message + Environment.NewLine );
            }
        }
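    A sketch of the .NET 2.0-style replacement those warnings point to, keeping the question's own names (XYZ.GetStream, ValidationCallBack, _isValidDocument, _sb, _schemaErrors): XmlReaderSettings carries the schema set, validation type and event handler, XmlReader.Create replaces XmlValidatingReader, and ConfigurationErrorsException replaces the obsolete ConfigurationException constructor.

        private void ValidateConfiguration(XmlNode section)
        {
            // throw if there is no configuration node.
            if (section == null)
            {
                throw new ConfigurationErrorsException(
                    "The configuration section passed within the ... class was null ... there must be a configuration file defined.",
                    section);
            }

            XmlReaderSettings settings = new XmlReaderSettings();
            settings.ValidationType = ValidationType.Schema;
            settings.ValidationEventHandler += new ValidationEventHandler(ValidationCallBack);

            // The XSD is still read from the embedded resource, as before.
            using (Stream xsdFile = XYZ.GetStream("ABC.xsd"))
            using (StreamReader sr = new StreamReader(xsdFile))
            {
                settings.Schemas.Add(XmlSchema.Read(sr, null));

                // Validate the document by reading it to the end.
                using (XmlReader vreader = XmlReader.Create(new StringReader(section.OuterXml), settings))
                {
                    while (vreader.Read()) { }
                }

                if (!_isValidDocument)
                {
                    _schemaErrors = _sb.ToString();
                    throw new ConfigurationErrorsException("XML Document not valid");
                }
            }
        }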

    Read the article

  • How to update attributes without validation

    - by Brian Roisentul
    I've got a model with its validations, and I found out that I can't update an attribute without validating the object first. I already tried adding the :on => :create syntax at the end of each validation line, but I got the same results. My announcement model has the following validations:

        validates_presence_of :title
        validates_presence_of :description
        validates_presence_of :announcement_type_id
        validate :validates_publication_date
        validate :validates_start_date
        validate :validates_start_end_dates
        validate :validates_category
        validate :validates_province
        validates_length_of :title, :in => 6..255, :on => :save
        validates_length_of :subtitle, :in => 0..255, :on => :save
        validates_length_of :subtitle, :in => 0..255, :on => :save
        validates_length_of :place, :in => 0..50, :on => :save
        validates_numericality_of :vacants, :greater_than_or_equal_to => 0, :only_integer => true
        validates_numericality_of :price, :greater_than_or_equal_to => 0, :only_integer => true

    My rake task does the following:

        task :announcements_expiration => :environment do
          announcements = Announcement.expired
          announcements.each do |a|
            # Gets the user that owns the announcement
            user = User.find(a.user_id)
            puts a.title + '...'
            a.state = 'deactivated'
            if a.update_attributes(:state => a.state)
              puts 'state changed to deactivated'
            else
              a.errors.each do |e|
                puts e
              end
            end
          end
        end

    This throws all the validation exceptions for that model in the output. Does anybody know how to update an attribute without validating the model?

    Read the article

  • How to Store and Retrieve Images Using SQL Server (Server Management Studio)

    - by Joe Majewski
    I am having difficulties when trying to insert files into a SQL Server database. I'll try to break this down as best as I can:

    1. What data type should I be using to store image files (jpeg/png/gif/etc)? Right now my table is using the image data type, but I am curious if varbinary would be a better option.
    2. How would I go about inserting the image into the database? Does Microsoft SQL Server Management Studio have any built-in functions that allow insertions of files into tables? If so, how is that done? Also, how could this be done through the use of an HTML form, with PHP handling the input data and placing it into the table?
    3. How would I fetch the image from the table and display it on the page? I understand how to SELECT the cell's contents, but how would I go about translating that into a picture? Would I have to have a header (Content-type: image/jpeg)?

    I have no problem doing any of these things with MySQL, but the SQL Server environment is still new to me, and I am working on a project for my job that requires the use of stored procedures to grab various data. Any and all help is appreciated.
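    The question's web tier is PHP, so the following is only a hedged illustration of the general pattern rather than a drop-in answer: store the bytes in a varbinary(max) column and move them in and out with a parameterized command. The table and column names below are made up. In PHP the analogue is a prepared statement (sqlsrv or PDO) with a binary parameter, and the page that serves the image sends header('Content-type: image/jpeg') before echoing the bytes, much as the question guesses. From SSMS itself, T-SQL's OPENROWSET(BULK ..., SINGLE_BLOB) can pull a file straight into a varbinary(max) column.

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.IO;

        class ImageStoreSketch
        {
            // Assumes a hypothetical table:
            // CREATE TABLE Images (Id int IDENTITY PRIMARY KEY,
            //                      MimeType nvarchar(100),
            //                      Data varbinary(max));
            static void SaveImage(string connectionString, string path, string mimeType)
            {
                byte[] bytes = File.ReadAllBytes(path);
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "INSERT INTO Images (MimeType, Data) VALUES (@mime, @data)", conn))
                {
                    cmd.Parameters.AddWithValue("@mime", mimeType);
                    cmd.Parameters.Add("@data", SqlDbType.VarBinary, -1).Value = bytes; // -1 means max
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }

            static byte[] LoadImage(string connectionString, int id)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("SELECT Data FROM Images WHERE Id = @id", conn))
                {
                    cmd.Parameters.AddWithValue("@id", id);
                    conn.Open();
                    return (byte[])cmd.ExecuteScalar();
                }
            }
        }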

    Read the article

  • Most Efficient way to deal with multiple CCSprites?

    - by nardsurfer
    I have four different types of objects within my environment (box2d), each type of object having multiple instances of itself, and I would like to find the most efficient way to deal with adding and manipulating all the CCSprites. The sprites are all from different files, so would it be best to create each sprite and add it to a data structure (NSMutableArray), or should I use a CCSpriteBatchNode even though each CCSprite file is different (for each type of object)? Thanks.

        @interface LevelScene : CCLayer
        {
            b2World* world;
            GLESDebugDraw *m_debugDraw;
            CCSpriteBatchNode *ballBatch;
            CCSpriteBatchNode *blockBatch;
            CCSpriteBatchNode *springBatch;
            CCSprite *goal;
        }

        +(id) scene;

        // adds a new sprite at a given coordinate
        -(void) addNewBallWithCoords:(CGPoint)p;

        // loads the objects (blocks, springs, and the goal), returns the Level Object
        -(Level) loadLevel:(int)level;

        @end

    or, using NSMutableArray objects within the Level object...

        @interface zLevel : zThing
        {
            NSMutableArray *springs;
            NSMutableArray *blocks;
            NSMutableArray *balls;
            zGoal *goal;
            int levelNumber;
        }

        @property(nonatomic,retain)NSMutableArray *springs;
        @property(nonatomic,retain)NSMutableArray *blocks;
        @property(nonatomic,retain)NSMutableArray *balls;
        @property(nonatomic,retain)zGoal *goal;
        @property(nonatomic,assign)int levelNumber;

        -(void)initWithLevel:(int)level;
        -(void)loadLevelThings;

        @end

    Read the article

  • How to put data from List<string []> to dataGridView

    - by Kirill
    I'm trying to put some data from a List into a dataGridView, but I'm having some problems with it. Currently I have a method that returns the required List (please see the picture below the code):

        public List<string[]> ReadFromFileBooks()
        {
            List<string> myIdCollection = new List<string>();
            List<string[]> resultColl = new List<string[]>();
            if (chooise == "all")
            {
                if (File.Exists(filePath))
                {
                    using (FileStream fs = new FileStream(filePath, FileMode.Open, FileAccess.Read))
                    {
                        StreamReader sr = new StreamReader(fs);
                        string[] line = sr.ReadToEnd().Split(new string[] { Environment.NewLine }, StringSplitOptions.RemoveEmptyEntries);
                        foreach (string l in line)
                        {
                            string[] result = l.Split(',');
                            foreach (string element in result)
                            {
                                myIdCollection.Add(element);
                            }
                            resultColl.Add(new string[] { myIdCollection[0], myIdCollection[1], myIdCollection[2], myIdCollection[3] });
                            myIdCollection.Clear();
                        }
                        sr.Close();
                        return resultColl;
                    }
                }
                ....

    This returns the required data in the required form (a list of arrays). After this, I try to move it to the dataGridView, which already has 4 columns with names (because I'm sure that no more than 4 columns are required) - please see the pic below.

    I try to put the data into the dataGridView using the following code:

        private void radioButtonViewAll_CheckedChanged(object sender, EventArgs e)
        {
            TxtLibrary myList = new TxtLibrary(filePathBooks);
            myList.chooise = "all";
            //myList.ReadFromFileBooks();
            DataTable table = new DataTable();
            foreach (var array in myList.ReadFromFileBooks())
            {
                table.Rows.Add(array);
            }
            dataGridViewLibrary.DataSource = table;
        }

    But as a result I get an error - "required more rows than exist in dataGridView" - even though, according to what I see (pic above), the quantity of rows (4) equals the quantity of array elements in the List (4). I tried checking the result by adding temporary variables - it's ok - please see the pic below. Where am I wrong? Maybe I'm not using the dataGridView in the correct way?
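    A hedged sketch of one likely fix (the column names here are placeholders, since the grid's real column captions are only visible in the screenshots): DataTable.Rows.Add maps array elements to columns by position, so the table needs four DataColumns defined before any rows are added, and binding the table via DataSource replaces or must match whatever columns were added to the grid in the designer.

        // Sketch: give the DataTable the four columns the string[] rows expect.
        // Column names below are placeholders, not the ones from the question's screenshots.
        DataTable table = new DataTable();
        table.Columns.Add("Title");
        table.Columns.Add("Author");
        table.Columns.Add("Year");
        table.Columns.Add("Genre");

        foreach (string[] array in myList.ReadFromFileBooks())
        {
            // Rows.Add(params object[]) maps values to columns by position,
            // so the array length must not exceed the column count.
            table.Rows.Add(array);
        }

        // Setting DataSource generates columns from the table; if the grid already has
        // designer-defined columns, either remove them or set their DataPropertyName
        // to match the table's column names.
        dataGridViewLibrary.DataSource = table;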

    Read the article

  • wget not completely processing the http call

    - by user578458
    Here is a wget command that executes an HTML/PHP stack report suite hosted by a third party - we don't have control over the PHP or HTML page:

        wget --no-check-certificate --http-user=/myacc --http-password=mypass -O /tmp/myoutput.csv "https://myserver.mydomain.com/mymodule.php?myrepcode=9999&action=exportcsv&admin=myappuserid&password=myappuserpass&startdate=2011-01-16&enddate=2011-01-16&reportby=mypreferredview"

    All the elements are working perfectly:

    --http-user / --http-password: as offered by a browser's standard username and password prompt
    -O /tmp/myoutput.csv: the output file of interest
    https://myserver.mydomain.com/mymodule.php?...: the file generated on the fly by the parameters
    myrepcode=9999: a reference to the report in question
    action=exportcsv: internally written in the function
    admin=myappuserid / password=myappuserpass: the third party operates SSL to access the site, then an internal username and password stored in a database to access the functions of the site
    startdate=2011-01-16 / enddate=2011-01-16: parameters specific to report 9999
    reportby=mypreferredview: an option in the report that facilitates different levels of detail or aggregation

    The problem is that the reportby parameter is a radio button selection in a list of 5 selections (sure enough, the default is the highest level of aggregation; I want the last one, which is the most detailed). Here is a sample of the HTML page code for the options of reportby (the tags were stripped in this excerpt): View by - The Default - My Least Preferred - My Second Least Preferred - My Third Least Preferred - My Preferred. No matter which of the reportby items I select in the wget statement, the default is always executed.

    Questions:
    1) Has anyone come across this notation in HTML (id=inputname[inputelement])? I spoke to a senior web developer and he has never seen this notation for inputs, and w3schools do not appear familiar with it either, based on an extensive search.
    2) Can a wget command select a non-default radio item when executing the command?

    This will probably initially be received with a "Use CURL" response; however, the wget approach works very well in the limited environment I am operating in, particularly as I need to download 10,000 such items. Thanks ahead of response.

    Read the article

  • Why use Entity Framework over Linq2SQL if...

    - by Refracted Paladin
    To be clear, I am not asking for a side-by-side comparison, which has already been asked ad nauseam here on SO. I am also not asking if Linq2Sql is dead, as I don't care. What I am asking is this: I am building internal apps only, for a non-profit organization. I am the only developer on staff. We ALWAYS use SQL Server as our database backend. I design and build the databases as well. I have used L2S successfully a couple of times already. Taking all this into consideration, can someone offer me a compelling reason that I should use EF instead of L2S? I was at Code Camp this weekend and after an hour-long demonstration on EF, all of which I could have done in L2S, I asked this same question. The speaker's answer was, "L2S is dead..." Very well then! NOT! (see here) I understand EF is what MS WANTS us to use in the future (see here) and that it offers many more customization options. What I can't figure out is whether any of that should, or does, matter for me in this environment. One particular issue we have here is that I inherited the core app, which was built on 4 different SQL databases. L2S has great difficulty with this, but when I asked the aforementioned speaker if EF would help me in this regard he said "No!"

    Read the article

  • Team Foundation Server 2010 and Offline development?

    - by Bobby Ortiz
    Did Microsoft add anything to improve offline development? I'm comparing TFS with Mercurial.

    Edit #1: Work Environment Details
    - 20 developers, 1 location.
    - TFS 2005 is already installed, but only being used by 4 developers. Those that use TFS are only using it for source control.
    - Others are using VSS. :(
    - Many small projects (over 50 projects active). Project team size: 1 to 3.
    - Several employees work from home one day a week, but have VPN access.
    - There is a group of our devs that have never used TFS and are still on VSS. They are the ones pushing us to jump ship to Mercurial. Mercurial's offline features are one reason they prefer it. Another reason is they just associate TFS with VSS, regardless of my assertions to the contrary.
    - We do use FogBugz, and everyone agrees that it is great! This kind of excited our love for NON-Microsoft products that are MUCH lighter.

    I don't think it is worth it.

    Read the article

  • Remove php extension from URL on Windows hosting account using web.config

    - by asprin
    I've searched before asking this question. The answered ones were related to Linux hosting accounts, and the ones with Windows hosting accounts didn't match what I was looking for. As you might have guessed, I have a Windows shared hosting account with GoDaddy. My aim is to remove the '.php' extension from the URL. After researching, I found that .htaccess would do exactly what I want. But I also found that .htaccess doesn't work in a Windows environment and that I'll need a web.config file to do the same task. Now I know there are modules through which the code can be generated, but the problem is I don't know how to get them installed on my hosting account. I don't want to go through the process of contacting the people over at GoDaddy, and hence I'm looking to solve this on my own. What I'm looking for is a web.config equivalent of .htaccess. This is what I'm trying to achieve:

    Current URL: www.abcdef.com/contact.php
    Desired URL: www.abcdef.com/contact

    Any help would be greatly appreciated. Thanks, Nisar.
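    A hedged sketch of the kind of rule typically placed in web.config for this, assuming the hosting account has the IIS URL Rewrite module available (worth confirming with the host before relying on it): the rule rewrites an extensionless request to the matching .php file only when no real file or directory exists at that path and the .php file does exist. The rule name is arbitrary; everything else follows the URL Rewrite schema.

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Hide .php extension" stopProcessing="true">
                  <match url="^(.+)$" ignoreCase="true" />
                  <conditions>
                    <!-- Only rewrite when the request is not an existing file or folder -->
                    <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                    <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                    <!-- ...and when the corresponding .php file actually exists -->
                    <add input="{REQUEST_FILENAME}.php" matchType="IsFile" />
                  </conditions>
                  <action type="Rewrite" url="{R:1}.php" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>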

    Read the article
