Search Results

Search found 8219 results on 329 pages for 'less'.


  • FIFOs implementation

    - by nunos
    Consider the following code:

    writer.c

        mkfifo("/tmp/myfifo", 0660);
        int fd = open("/tmp/myfifo", O_WRONLY);
        char *foo, *bar;
        ...
        write(fd, foo, strlen(foo) * sizeof(char));
        write(fd, bar, strlen(bar) * sizeof(char));

    reader.c

        int fd = open("/tmp/myfifo", O_RDONLY);
        char buf[100];
        read(fd, buf, ??);

    My question is: since it is not known beforehand how many bytes foo and bar will contain, how can I know how many bytes to read in reader.c? If I read, say, 10 bytes in the reader and foo and bar together are less than 10 bytes, I will end up with both of them in the same variable, and that I do not want. Ideally I would issue one read call per variable, but again I don't know in advance how many bytes the data will have. I thought about adding another write in writer.c, between the writes for foo and bar, that sends a separator; then I would have no problem splitting the data apart in reader.c. Is this the right way to go about it? Thanks.
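
    A minimal sketch of the separator idea, in Python rather than C (the framing itself carries over unchanged); the helper names and record values are made up for illustration, and it assumes the data never contains the delimiter character:

        import errno
        import os

        FIFO = "/tmp/myfifo"   # same path as in the question

        try:
            os.mkfifo(FIFO, 0o660)
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise

        def write_records(values):
            # writer process: one newline-terminated record per value
            with open(FIFO, "w") as fifo:
                for v in values:
                    fifo.write(v + "\n")

        def read_records():
            # reader process: the delimiter restores the record boundaries
            # (writer and reader must run in separate processes, or the open blocks)
            with open(FIFO) as fifo:
                return [line.rstrip("\n") for line in fifo]

    An alternative that avoids reserving a delimiter character is to prefix each record with its length and have the reader first read the fixed-size length, then exactly that many bytes.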

    Read the article

  • How Can I: Generate 40/64 Bit WEP Key In Python?

    - by Aktariel
    So, I've been beating my head against the wall on this issue for several months now, partly because it's a side interest and partly because I suck at programming. I've searched and researched all across the web, but have not had any luck (except one small bit of success; see below), so I thought I might try asking the experts. What I am trying to do is, as the title suggests, generate a 40/64-bit WEP key from a passphrase, according to the "de facto" standard. (A site such as [http://www.powerdog.com/wepkey.cgi] produces the expected outputs.) I have already written the portions of the script that take inputs and write them to a file; one of the inputs is the passphrase, sanitized to lower case. For the longest time I had no idea what the de facto standard was, much less how to go about implementing it. I finally stumbled across a paper (http://www.lava.net/~newsham/wlan/WEP_password_cracker.pdf) that sheds as much light as I've had yet on the issue (page 18 has the relevant bits). Apparently, the passphrase is "mapped to a 32-bit value with XOR", and the result is then used as the seed for a "linear congruential PRNG" (which of the several PRNGs Python offers fits that description, I don't know); several bits of each PRNG output are then taken to form the key bytes. I have no idea how to go about implementing this, since the description is rather vague. What I need is help writing the generator in Python, and also in understanding how exactly the key is generated. I'm not much of a programmer, so explanations are appreciated as well. (Yes, I know that WEP isn't secure.)
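
    For what it's worth, here is one common reading of that description as a Python sketch: fold the passphrase bytes into a 32-bit seed with XOR, run a linear congruential PRNG, and take bits 16-23 of each output as a key byte. The multiplier/increment constants below are the ones usually quoted for this "de facto" generator, but they are written from memory and should be checked against a known-good tool such as the site linked above:

        def wep40_keys(passphrase):
            # Fold the passphrase into a 32-bit seed: byte i is XORed into
            # byte position (i mod 4) of the seed.
            seed = 0
            for i, ch in enumerate(passphrase):
                seed ^= ord(ch) << ((i % 4) * 8)

            # Linear congruential PRNG; verify these constants against a
            # reference implementation before relying on them.
            keys = []
            key = []
            for _ in range(20):                      # 4 keys x 5 bytes each
                seed = (seed * 0x343FD + 0x269EC3) & 0xFFFFFFFF
                key.append((seed >> 16) & 0xFF)      # take bits 16..23
                if len(key) == 5:
                    keys.append("".join("%02x" % b for b in key))
                    key = []
            return keys

        print(wep40_keys("secret"))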

    Read the article

  • DataTable won't DataBind with a DataTable.NewRow()

    - by David
    Is DataTable.NewRow() insufficient as the only row in a DataTable? I would expect this to work, but it doesn't. It's near the end of my Page_Load, inside my if (!IsPostBack) block. gridCPCP is a GridView.

        DataTable dt = new DataTable();
        dt.Columns.Add("ID", int.MinValue.GetType());
        dt.Columns.Add("Code", string.Empty.GetType());
        dt.Columns.Add("Date", DateTime.MinValue.GetType());
        dt.Columns.Add("Date2", DateTime.MinValue.GetType());
        dt.Columns.Add("Filename", string.Empty.GetType());

        //code to add rows

        if (dt.Rows.Count > 0)
        {
            gridCPCP.DataSource = dt;
            gridCPCP.DataBind();
        }
        else
        {
            dt.Rows.Add(dt.NewRow());
            gridCPCP.DataSource = dt;
            gridCPCP.DataBind();    // EXCEPTION

            int TotalColumns = gridCPCP.Rows[0].Cells.Count;
            gridCPCP.Rows[0].Cells.Clear();
            gridCPCP.Rows[0].Cells.Add(new TableCell());
            gridCPCP.Rows[0].Cells[0].ColumnSpan = TotalColumns;
            gridCPCP.Rows[0].Cells[0].Text = "No Record Found";
        }

    The exception is thrown on gridCPCP.DataBind(), and only when execution reaches the else block. If rows were added above via dt.Rows.Add(new object[] { ... }), binding works.

        System.ArgumentOutOfRangeException: Length cannot be less than zero.
        Parameter name: length

    Read the article

  • JavaScript doesn't update

    - by Trikam
    Hi all, I have a function that receives a parameter which is itself a function, and I then use setTimeout to call that passed-in function. I tried two ways of raising it: setTimeout and function.call(). When the passed-in function is invoked, none of the JavaScript gets updated. Below is the JavaScript I'm using to raise the event and the JavaScript which is supposed to be updated. The function being passed in as [context] is:

        function() {
            ErrorMessageFileSelect('diverrortextchoosechannal',
                'The file chosen is to big, you must choose a file less than 1MB');
        }

        function FileSizeOnLoad(contentLength, context) {
            if (context != null) {
                setTimeout(context, 0);   // or context.call();
            } else {
                $('#inputHiddenFileSizeField').val(contentLength);
                DisplayChoseFileInformation(contentLength);
            }
        }

        // this is where the update should happen
        function ErrorMessageFileSelect(className, errorMessage) {
            $('div.' + className).text(errorMessage);
            alert($('div.' + className).text());
        }

    Is there something I'm missing? Can someone help me with this issue please. Thanks

    Read the article

  • Properties vs. Fields: Need help grasping the uses of Properties over Fields.

    - by pghtech
    First off, I have read through a list of postings on this topic and I don't feel I have grasped properties, because of what I had come to understand about encapsulation and field modifiers (private, public, etc.). One of the main things I have come to learn about C# is the importance of protecting data within your code through encapsulation. I 'thought' I understood that to be possible because of the access modifiers (private, public, internal, protected). However, after learning about properties I am somewhat torn in my understanding, not only of the uses of properties, but of the overall importance/ability of data protection (what I understood as encapsulation) within C#. To be more specific, everything I have read when I got to properties in C# says you should try to use them in place of fields when you can, because:

    1) they allow you to change the underlying data type, which you can't do when code accesses the field directly, and
    2) they add a level of protection to data access.

    However, from what I 'thought' I knew, the field modifiers already did #2, so it seemed to me that properties just generate additional code unless you have some reason to change the type (#1) - because you are (more or less) creating hidden methods to access fields as opposed to accessing them directly. Then there is the fact that modifiers can also be applied to properties, which further complicates my understanding of the need for properties to access data. I have read a number of chapters from different writers on properties and none have really given me a good understanding of properties vs. fields vs. encapsulation (and good programming methods). Can someone explain:

    1) why I would want to use properties instead of fields (especially when it appears I am just adding additional code),
    2) any tips on recognizing the use of properties and not seeing them as simply methods (with the exception of get/set being apparent) when tracing other people's code, and
    3) any general rules of thumb when it comes to good programming methods in relation to when to use what?

    Thanks, and sorry for the long post - I didn't want to just ask a question that has been asked 100x without explaining why I am asking it again.

    Read the article

  • [Python] name 'OptionGroup' is not defined

    - by Cawas
    Ok, so I made the rookie mistake below, but in my defense I was led to it by how the help on this subject reads in the Python docs, which explain how to use optparse. It is actually an error that comes from the gigantic tutorial section. On the other hand, and to my own discredit, I may be one of the very few stupid people who can't read very well and pay close attention to what they do. But since this took me so long to discover, I wanted to "document" it here:

        Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29)
        >>> from optparse import OptionParser
        >>> outputGroup = OptionGroup(parser, 'Output handling')
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        NameError: name 'OptionGroup' is not defined

    This is strictly following examples found in the docs, and you can't find anything about the error anywhere, be it that long, long docs page, Google or Stack Overflow. Plus, reading optparse.py shows OptionGroup is there, so that adds to the confusion. I bet it will take less than 1 minute for someone to spot my error. For that I'll only add proper tags and/or modify the title later on. :)
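
    The error itself is just a missing import: OptionGroup is never brought into the namespace (and parser has to exist before the group is created). A minimal working version of the snippet, with a made-up -o option purely for illustration:

        from optparse import OptionParser, OptionGroup

        parser = OptionParser()

        outputGroup = OptionGroup(parser, 'Output handling')
        outputGroup.add_option('-o', '--output', dest='output',
                               help='where to write the result')
        parser.add_option_group(outputGroup)

        options, args = parser.parse_args()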

    Read the article

  • sql query - how to apply limit within group by

    - by Raj
    hey guys, assuming I have a table named t1 with the following fields: ROWID, CID, PID, Score, SortKey. It has the following data:

        1, C1, P1, 10, 1
        2, C1, P2, 20, 2
        3, C1, P3, 30, 3
        4, C2, P4, 20, 3
        5, C2, P5, 30, 2
        6, C3, P6, 10, 1
        7, C3, P7, 20, 2

    What query do I write so that it applies a group by on CID, but instead of returning a single result per group, it returns a maximum of 2 results per group? The where condition is score >= 20, and I want the results ordered by CID and SortKey. If I had to run my query on the above data, I would expect the following result:

        RESULTS FOR C1 - note: ROWID 1 is not considered as its score < 20
        C1, P2, 20, 2
        C1, P3, 30, 3

        RESULTS FOR C2 - note: ROWID 5 appears before ROWID 4 as ROWID 5 has the lesser SortKey value
        C2, P5, 30, 2
        C2, P4, 20, 3

        RESULTS FOR C3 - note: ROWID 6 does not appear as its score is less than 20, so only 1 record is returned here
        C3, P7, 20, 2

    IN SHORT, I WANT A LIMIT WITHIN A GROUP BY. I want the simplest solution and want to avoid temp tables; subqueries are fine. Also note I am using SQLite for this.
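
    One way to express "at most 2 per group" in plain SQLite is a correlated subquery that counts how many qualifying rows of the same CID sort ahead of the current row; a self-contained sketch (run here through Python's sqlite3, and assuming SortKey is unique within each CID):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE t1 (ROWID INTEGER, CID TEXT, PID TEXT, Score INTEGER, SortKey INTEGER);
            INSERT INTO t1 VALUES
              (1,'C1','P1',10,1),(2,'C1','P2',20,2),(3,'C1','P3',30,3),
              (4,'C2','P4',20,3),(5,'C2','P5',30,2),(6,'C3','P6',10,1),(7,'C3','P7',20,2);
        """)

        # Keep a row only if fewer than 2 qualifying rows of the same CID
        # have a smaller SortKey, i.e. a LIMIT 2 applied inside each group.
        rows = conn.execute("""
            SELECT CID, PID, Score, SortKey
            FROM t1 AS o
            WHERE Score >= 20
              AND (SELECT COUNT(*) FROM t1 AS i
                   WHERE i.CID = o.CID AND i.Score >= 20
                     AND i.SortKey < o.SortKey) < 2
            ORDER BY CID, SortKey
        """).fetchall()

        for row in rows:
            print(row)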

    Read the article

  • MongoDB - proper use of collections?

    - by zmg
    In Mongo, my understanding is that you can have databases and collections. I'm working on a social-type app that will have blogs and comments (among other things) and had previously been using MySQL with pretty heavy partitioning in an attempt to limit possible concurrency issues. With MySQL I've stuffed all my user data into a _user database with several tables to further partition the data (blogs, pages, etc.). My immediate reaction with Mongo would be to create a 'users' database with one collection per user. In this way the blog entries of user 'zach' would go into the 'zach' collection, with the associated comments and such becoming sub-objects in the same collection. Basically like dynamically creating one table per user in MySQL, but apparently without the complexity and limitations that might impose. Of course, since I haven't really used Mongo before, I'm having trouble gauging the (ahem..) quality of this idea and the potential problems it might cause down the road. I'd like user data to be treated a lot like a user's directory in a *nix environment, where user-created/non-shared (mostly) data gets put into one place (currently with MySQL that would be the appname_users database mentioned above). Most of the user data will be specific to the user's page(s). Some of the user data which is queried across all site users (searchable user profiles) is currently kept in a separate database/table, and I expect things like this could be put into an appname_system database and be broken up into collections and/or application-specific databases (appname_profiles). Anyway, since the available documentation on this is currently a little thin and my experience is extremely limited, I thought I might find a little guidance from someone with a better working understanding of the system. On the plus side, I'd really already been attempting to treat MySQL as a schema-less document store, and doing this with Mongo seems much more intuitive/sane/rational, so I'm really looking forward to getting started. Thanks, Zach

    Read the article

  • Excel::Shape object getting released automatically after the count reaches 18 in List<T>

    - by A9S6
    I have an Excel add-in written in C# 2.0 in which I am experiencing strange behavior. Please note that this behavior is only seen in Excel 2003 and NOT in Excel 2007 or 2010.

    Issue: when the user clicks an import command button, a file is read and a number of Shapes are created/added to the worksheet using the Worksheet::Shapes::AddPicture() method. A reference to each of these Shape objects is kept in a generic list:

        List<Excel.Shape> list = new List<Excel.Shape>();

    Everything works fine while the list has fewer than 18 references. When the count reaches 18 and a new Shape reference is added, the first one, i.e. the one at index [0], is released. I am unable to call any method or property on that reference; calling a method/property throws a COMException (0x800A01A8), i.e. "Object Required". If I add one more, then the reference at [1] is not accessible, and so on. Strangely enough, this happens with Shape objects only: if I add one Shape and then 17 nulls to the list, this won't happen until 17 more Shape objects are added. Does anyone have an idea why it happens when the count reaches 18? I thought it might be something to do with the List's default capacity - something like relocating the references, during which they get released - so I initialized it with a capacity of 1000, but still no luck.

        List<Excel.Shape> list = new List<Excel.Shape>(1000);

    Any idea??

    Read the article

  • Getting started with massive data

    - by Max
    I'm a math guy and occasionally do some statistics/machine-learning analysis consulting projects on the side. The data I have access to is usually on the smaller side, at most a couple hundred megabytes (and almost always far less), but I want to learn more about handling and analyzing data on the gigabyte/terabyte scale. What do I need to know, and what are some good resources to learn from?

    Hadoop/MapReduce is one obvious start. Is there a particular programming language I should pick up? (I primarily work now in Python, Ruby, R, and occasionally Java, but it seems like C and Clojure are often used for large-scale data analysis?) I'm not really familiar with the whole NoSQL movement, except that it's associated with big data. What's a good place to learn about it, and is there a particular implementation (Cassandra, CouchDB, etc.) I should get familiar with? Where can I learn about applying machine-learning algorithms to huge amounts of data? My math background is mostly on the theory side, definitely not on the numerical or approximation side, and I'm guessing most of the standard ML algorithms don't really scale. Any other suggestions on things to learn would be great!

    Read the article

  • Generate a set of strings with maximum edit distance

    - by Kevin Jacobs
    Problem 1: I'd like to generate a set of n strings of fixed length m from alphabet s such that the minimum Levenshtein distance (edit distance) between any two strings is greater than some constant c. Obviously, I can use randomization methods (e.g., a genetic algorithm), but was hoping that this may be a well-studied problem in computer science or mathematics with some informative literature and an efficient algorithm or three.

    Problem 2: Same as above, except that adjacent characters cannot repeat; the i'th character in each string may not be equal to the i+1'th character. E.g., 'CAT', 'AGA' and 'TAG' are allowed; 'GAA', 'AAT', and 'AAA' are not.

    Background: The basis for this problem is bioinformatic and involves designing unique DNA tags that can be attached to biologically derived DNA fragments and then sequenced using a fancy second-generation sequencer. The goal is to be able to recognize each tag, allowing for random insertion, deletion, and substitution errors. The specific DNA sequencing technology has a relatively low error rate per base (~1%), but is less precise when a single base is repeated 2 or more times (motivating the additional constraints imposed in problem 2).
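
    Not the well-studied construction asked for, just a sketch of the naive randomized baseline the question already mentions: draw random tags with no adjacent repeats and keep a candidate only if it is far enough from everything kept so far. All function names and parameters below are made up for illustration:

        import random

        def levenshtein(a, b):
            # standard dynamic-programming edit distance
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
                prev = cur
            return prev[-1]

        def random_tag(alphabet, m):
            # random string of length m with no two equal adjacent characters (problem 2)
            tag = [random.choice(alphabet)]
            while len(tag) < m:
                ch = random.choice(alphabet)
                if ch != tag[-1]:
                    tag.append(ch)
            return "".join(tag)

        def greedy_tags(n, m, alphabet="ACGT", c=3, max_tries=100000):
            # rejection sampling: accept a tag only if its distance to every
            # accepted tag is greater than c
            tags = []
            for _ in range(max_tries):
                cand = random_tag(alphabet, m)
                if all(levenshtein(cand, t) > c for t in tags):
                    tags.append(cand)
                    if len(tags) == n:
                        break
            return tags

        print(greedy_tags(10, 8, c=3))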

    Read the article

  • Java OutOfMemoryError message changes when trying to create Arrays of different sizes

    - by Gordon
    In the question by DKSRathore, "How to simulate the Out Of Memory: Requested array size exceeds VM limit", some odd behavior was noted when creating arrays. When creating an array of size Integer.MAX_VALUE, an exception with the error java.lang.OutOfMemoryError: Requested array size exceeds VM limit was thrown. However, when an array was created with a size less than the max but still above the virtual machine memory limit, the error message read java.lang.OutOfMemoryError: Java heap space. Testing further, I managed to narrow down where the error message changes:

        long[] l = new long[2147483645];  // exception message reads "Requested array size exceeds VM limit"
        long[] l = new long[2147483644];  // exception message reads "Java heap space"

    I increased my virtual machine memory and still got the same result. Has anyone any idea why this happens? Some extra info: Integer.MAX_VALUE = 2147483647.

    Edit: Here's the code I used to find the value; it might be helpful.

        int max = Integer.MAX_VALUE;
        boolean done = false;
        while (!done) {
            try {
                max--;
                // Throws an error
                long[] l = new long[max];
                // Exit if an error is no longer thrown
                done = true;
            } catch (OutOfMemoryError e) {
                if (!e.getMessage().contains("Requested array size exceeds VM limit")) {
                    System.out.println("Message changes at " + max);
                    done = true;
                }
            }
        }

    Read the article

  • Custom Calculations in a Matrix - Reporting Services 2005

    - by bfrancis
    I am writing a report to show gas usage (in gallons) by each department. The request is to view each month and the gallons used by each department. A column is required to display each department's target goal, based on the gallons of gas it has used in a past time frame. Each department's target goal is x percent less than the total gallons used for said time frame. I currently have a matrix in Reporting Services with departments making up the rows, months making up the columns, and gallons filling the details. The matrix is being filled by dataset1. I have the data grouping as requested, for each month by each department. My problem is calculating the target goal. My thought was to create a second dataset (dataset2) that returns the gallons used based on the time frame requested. I grouped this data by department. I was hoping I could use the department field in each dataset to make sure the appropriate numbers were used. I added a new column, which shows up next to the gallons field. As I attempted to build the expression, I found out that I could only grab the gallons used from dataset2 if I was summing the gallons field, which gives me the total gallons used by every department combined. I have tried to find resources with similar examples of what I am trying to accomplish but I cannot seem to come across one. I am trying to keep this as detailed as possible without making it too wordy. I would be more than happy to clarify or explain anything written above in further detail if needed. If anyone has links, comments, or suggestions they would be greatly appreciated. A very simple visual of what I am hoping to accomplish is below; the months and departments would expand based on the data returned.

                      months
        -------------------------------------------
        departments | gallons/month | target goal

    Read the article

  • CodeGolf: Find the Unique Paths

    - by st0le
    Here's a pretty simple idea. In this pastebin I've posted some pairs of numbers. These represent nodes of a directed graph. The input to stdin will be of the form (they'll be numbers; I'll be using letters in this example):

        c d
        q r
        a b
        b c
        d e
        p q

    so "x y" means x is connected to y (not vice versa). There are 2 paths in that example: a->b->c->d->e and p->q->r. You need to print all the unique paths from that graph. The output should be of the format:

        a->b->c->d->e
        p->q->r

    Notes: You can assume the numbers are chosen such that one path doesn't intersect another (one node belongs to one path). The pairs are in random order. There is more than 1 path, and the paths can be of different lengths. All numbers are less than 1000. If you need more details, please leave a comment. I'll amend as required.

    Shameless-Plug: For those who enjoy Codegolf, please commit at Area51 for its very own site :) (for those who don't enjoy it, please support it as well, so we'll stay out of your way...)
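
    An un-golfed Python sketch of one way to stitch the chains together: start from the nodes that never appear as a destination and follow the successor map.

        import sys

        def unique_paths(pairs):
            nxt = dict(pairs)                   # x -> y for each edge "x y"
            has_pred = set(nxt.values())        # nodes with an incoming edge
            paths = []
            for start in nxt:
                if start not in has_pred:       # chain heads only
                    chain = [start]
                    while chain[-1] in nxt:
                        chain.append(nxt[chain[-1]])
                    paths.append("->".join(chain))
            return paths

        pairs = [line.split() for line in sys.stdin if line.strip()]
        print("\n".join(unique_paths(pairs)))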

    Read the article

  • Listing serial (COM) ports on Windows?

    - by Eli Bendersky
    Hello, I'm looking for a robust way to list the available serial (COM) ports on a Windows machine. There's this post about using WMI, but I would like something less .NET-specific - I want to get the list of ports from a Python or a C++ program, without .NET. I currently know of two other approaches:

    1. Reading the information in the HARDWARE\\DEVICEMAP\\SERIALCOMM registry key. This looks like a great option, but is it robust? I can't find a guarantee online or in MSDN that this registry cell indeed always holds the full list of available ports.
    2. Trying to call CreateFile on COMN with N a number from 1 to something. This isn't good enough, because some COM ports aren't named COMN. For example, some virtual COM ports are named CSNA0, CSNB0, and so on, so I wouldn't rely on this method.

    Any other methods/ideas/experience to share?

    Edit: by the way, here's a simple Python implementation of reading the port names from the registry:

        import _winreg as winreg
        import itertools

        def enumerate_serial_ports():
            """ Uses the Win32 registry to return an iterator of serial
                (COM) ports existing on this computer.
            """
            path = 'HARDWARE\\DEVICEMAP\\SERIALCOMM'
            try:
                key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path)
            except WindowsError:
                raise IterationError  # note: IterationError is not defined here; StopIteration (or simply returning) is probably what was meant

            for i in itertools.count():
                try:
                    val = winreg.EnumValue(key, i)
                    yield (str(val[1]), str(val[0]))
                except EnvironmentError:
                    break

    Read the article

  • python histogram one-liner

    - by mykhal
    There are many ways to code a histogram in Python. By histogram, I mean a function counting objects in an iterable, resulting in a count table (i.e. a dict). E.g.:

        >>> L = 'abracadabra'
        >>> histogram(L)
        {'a': 5, 'b': 2, 'c': 1, 'd': 1, 'r': 2}

    It can be written like this:

        def histogram(L):
            d = {}
            for x in L:
                if x in d:
                    d[x] += 1
                else:
                    d[x] = 1
            return d

    However, there are far fewer ways to do this in a single expression. If we had "dict comprehensions" in Python, we would write:

        >>> { x: L.count(x) for x in set(L) }

    but we don't have them, so we have to write:

        >>> dict([(x, L.count(x)) for x in set(L)])

    This approach may still be readable, but it is not efficient: L is walked through multiple times, so it won't work for single-life generators. The function should also iterate well through gen(), where:

        def gen():
            for x in L:
                yield x

    We can go with reduce (R.I.P.):

        >>> reduce(lambda d,x: dict(d, x=d.get(x,0)+1), L, {})  # wrong!

    Oops, that does not work: the key name is 'x', not x :( I ended up with:

        >>> reduce(lambda d,x: dict(d.items() + [(x, d.get(x, 0)+1)]), L, {})

    (In py3k, we would have to write list(d.items()) instead of d.items(), but it's hypothetical, since there is no reduce there.) Please beat me with a better one-liner, more readable! ;)
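
    For comparison, a sketch of the single-pass standard-library alternatives (collections.defaultdict, and collections.Counter where it is available) - not one-liners in the lambda sense, but they walk the iterable only once, so they also work on generators:

        from collections import defaultdict

        def histogram(iterable):
            # one pass over the input, works for generators too
            d = defaultdict(int)
            for x in iterable:
                d[x] += 1
            return dict(d)

        # Counter (Python 2.7+ / 3.1+) reduces it to a single expression:
        # >>> from collections import Counter
        # >>> dict(Counter('abracadabra'))
        # {'a': 5, 'r': 2, 'b': 2, 'c': 1, 'd': 1}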

    Read the article

  • How to prevent multiple registrations?

    - by GG.
    I am developing a political survey website where anyone can vote once. Obviously I have to prevent multiple registrations so that the survey remains relevant. I already force every user to log in with their Google, Facebook or Twitter account. But they can authenticate 3 times if they have an account on each platform, or authenticate with multiple accounts on the same platform (I have 3 accounts on Google). So I thought of also storing the IP address, but they can still go through a proxy... I also thought of keeping the HTTP User-Agent with PHP's get_browser(), although they can still change browsers. I can extract the OS with a regex; changing OS is less easy than changing browsers. And there is also geolocation, for example with the Google Maps API. So to summarize, several ideas:

        1 / SSO authentication (I keep the email)
        2 / IP address
        3 / HTTP User-Agent
        4 / Geolocation with an API

    Do you have any other ideas that I did not think of? How should I combine these tests, and in what order should they execute? Have you already deployed this kind of solution?

    Read the article

  • Fast Lightweight Image Comparisson Metric Algorithm

    - by gav
    Hi all, I am developing an application for the Android platform which contains 1000+ image filters that have been 'evolved'. When a user selects a photo I want to present the most relevant filters first. This 'relevance' should be dependent on previous use cases. I have already developed tools that register when a filtered image is saved; this combination of filter and image can be seen as the training data for my system. The issue is that the comparison must occur between selecting an image and the next screen coming up. From a UI point of view I need the whole process to take less than 4 seconds: select an image - obtain a metric to use for similarity - check against use cases - return the 6 closest matches. I figure with 4 seconds I can use animations and progress dialogs to keep the user happy. Due to platform constraints I am fairly limited in the computational expense of the algorithm. I have implemented a technique, adapted from various online tutorials, for running C code on the G1, so that language is available as well.

    Specific constraints:

        Qualcomm® MSM7201A™, 528 MHz processor
        320 x 480 pixel bitmap in 32-bit ARGB
        ~2 seconds computational time for the native method to get the metric
        ~2 seconds to compare the metric of the current image with the training data

    This is an academic project so all ideas are welcome; anything you can think of or have heard about would be of interest to me. My ideas: I want to keep the complexity down (O(n*m)?) by using pixel data only rather than a neighbourhood function. I was looking at using the colour histogram/greyscale histogram/texture/entropy of the image, combining them to make the measure. There will be an obvious loss of information, but I need the resultant metric to be substantially smaller than the memory footprint of the image (~0.512 MB). As I said, any ideas to direct my research would be fantastic. Kind regards, Gavin
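
    A rough desktop-side prototype of the colour-histogram idea, in Python with Pillow assumed (on the device the same loop would presumably live in the native C code mentioned above); the signature is tiny compared with the bitmap, and histogram intersection gives a cheap similarity score. Bin count and thumbnail size are made-up tuning knobs:

        from PIL import Image  # assumes Pillow is installed

        def colour_signature(path, bins_per_channel=8, size=(64, 64)):
            # Coarse RGB histogram: one pass over a downsampled copy,
            # so the cost is O(pixels) and the result is a few KB at most.
            img = Image.open(path).convert("RGB").resize(size)
            step = 256 // bins_per_channel
            hist = [0] * (bins_per_channel ** 3)
            for r, g, b in img.getdata():
                idx = (r // step) * bins_per_channel ** 2 \
                    + (g // step) * bins_per_channel \
                    + (b // step)
                hist[idx] += 1
            total = float(sum(hist))
            return [h / total for h in hist]

        def histogram_intersection(h1, h2):
            # 1.0 = identical distributions, 0.0 = completely disjoint
            return sum(min(a, b) for a, b in zip(h1, h2))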

    Read the article

  • Difference between KeywordQuery, FullTextQuerySearch type for Object Model and Web service Query

    - by Raghu
    Initially I believed these 3 to be doing more or less the same thing, with just the notation being different - until recently, when I noticed that there exists a big difference between the results of the KeywordQuery/FullTextSqlQuery and the web service Query. I used both the KeywordQuery and FullTextSqlQuery methods to search for the value of a custom column XYZ with value (ASDSADA-21312ASD-ASDASD).

    FullTextSqlQuery: when I run the query as

        FullTextSqlQuery myQuery = new FullTextSqlQuery(site);

        // Construct query text
        String queryText = "Select title, path, author, isdocument from scope() where freetext('ASDSADA-21312ASD-ASDASD')";
        myQuery.QueryText = queryText;
        myQuery.ResultTypes = ResultType.RelevantResults;

        // execute the query and load the results into a datatable
        ResultTableCollection queryResults = myQuery.Execute();
        ResultTable resultTable = queryResults[ResultType.RelevantResults];

        // Load table with results
        DataTable queryDataTable = new DataTable();
        queryDataTable.Load(resultTable, LoadOption.OverwriteChanges);

    I get the following result representing the document:

        Title: TestPDF
        path: http://SharepointServer/Shared Documents/Forms/DispForm.aspx?ID=94
        author: null
        isDocument: false

    Do note the path and isDocument fields of the above result.

    Web service method: I then tried the web service Query method. I used the SharePoint Search Service Tool available at http://sharepointsearchserv.codeplex.com/ and ran the same query, i.e. Select title, path, author, isdocument from scope() where freetext('ASDSADA-21312ASD-ASDASD'). This time I got the following results:

        Title: TestPDF
        path: http://SharepointServer/Shared Documents/TestPDF.pdf
        author: null
        isDocument: true

    Again, note the path. While the search results from the 2nd method are useful, as they give me the exact file path, I can't seem to understand why method 1 is not giving me the same results. Why is there a discrepancy between the two results?

    Read the article

  • Replacement for deprecated SQL Server User Defined Type with a bound Rule and Default

    - by Adam Jones
    We have a user-defined data type named YesNo, which is an alias for char(1). The type has a bound Rule (the value must be Y or N) and a Default (N). The aim of this is that when any of the development team creates a new field of type YesNo, the rule and default are automatically bound to the new column. Rules and Defaults have been deprecated and won't be available in a future version of SQL Server; is there another way to achieve the same functionality? I should add that I'm aware I could use CHECK and DEFAULT constraints to replicate the functionality of the bound Rule and Default objects; however, these would have to be applied at each usage of the type, rather than getting the functionality 'for free' by using a UDT which has a bound Rule and Default. The post relates to a database that backs an existing application, rather than a new development, so I'm aware that our use of UDTs is less than optimal. I suspect the answer to the question is 'No'; however, normally when features are deprecated there's an alternative syntax that can be used as a drop-in replacement, so I wanted to pose the question in case someone knows of an alternative.

    Read the article

  • multithreading issue

    - by vbNewbie
    I have written a multithreaded crawler, and the process is simply creating threads and having them access a list of URLs to crawl. They then access the URLs and parse the HTML content. All this seems to work fine. Now, when I need to write to tables in a database is when I experience issues. I have 2 declared ArrayLists that contain the content each thread parses. The first ArrayList is simply the RSS feed links, and the other ArrayList contains the different posts. I then use a For Each loop to iterate over one while sequentially incrementing an index into the other and writing to the database. My problem is that each time a new thread accesses one of the lists the content is changed, and this affects the iteration. I tried using nested loops but that did not work before, and this works fine using a single thread. I hope this makes sense. Here is my code:

        SyncLock dlock
            For Each l As String In links
                finallinks.Add(l)
            Next
        End SyncLock

        SyncLock dlock
            For Each p As String In posts
                finalposts.Add(p)
            Next
        End SyncLock

        ...

        Dim i As Integer = 0
        SyncLock dlock
            For Each rsslink As String In finallinks
                postlink = finalposts.Item(i)
                i = i + 1

    finallinks and finalposts are the two ArrayLists. I did not include the rest of the code, which shows the threads working, but this is the essential part where my error occurs, which is basically here:

        postlink = finalposts.Item(i)
        i = i + 1

    ERROR: Index was out of range. Must be non-negative and less than the size of the collection.

    Is there an alternative?

    Read the article

  • Have I taken a wrong path in programming by being excessively worried about code elegance and style?

    - by Ygam
    I am in a major stump right now. I am a BSIT graduate, but I only started actual programming less than a year ago. I have observed that I have the following attitude in programming:

        - I tend to be more of a purist, scorning inelegant approaches to solving problems with code
        - I tend to look at everything on a large scale, planning everything before I start coding, either in simple flowcharts or complex UML charts
        - I have a really strong impulse to refactor my code, even if I miss deadlines or prolong development times
        - I am obsessed with good directory structures, file naming conventions, and class, method, and variable naming conventions
        - I tend to always want to study something new, even, as I said, at the cost of missing deadlines
        - I tend to see software development as something to engineer, to architect; that is, seeing how things relate to each other and how blocks of code can interact (I am a huge fan of loose coupling), i.e. the OOP way of thinking
        - I tend to combine OOP and procedural coding whenever I see fit
        - I want my code to execute fast (thus the elegant approaches and refactoring)

    This bothers me because I see my colleagues doing much better the other way around (aside from the fact that they have been programming since our first year in college). By the other way around I mean: they fire up the coding and get the job done much faster because they don't have to really look at how clean their code is or how elegant their algorithms are; they don't bother with OOP however big their projects are; they mostly use web APIs, piece them together and voila! Working code! Clients are happy, they get paid fast, at the expense of really unmaintainable or hard-to-read code that lacks structure and conventions, or slow execution of certain actions (the common reasoning against this being that internet connections are much faster these days and hardware is more powerful). The excuse I often receive is that clients don't care how you write the code, but they do care how long it takes you to deliver it. If it works then all is good. Now, did my "purist" approach set me on the wrong path in programming? Should I just dump these purist concepts and code the hell up, because I have seen it: clients don't really care how beautifully coded it is?

    Read the article

  • How can I change some specific carps into croaks in Perl?

    - by sid_com
    I tried to catch a carp warning:

        carp "$start is > $end" if (warnings::enabled());

    with eval {}, but it didn't work, so I looked in the eval documentation and discovered that eval catches only syntax errors, run-time errors or executed die statements. How could I catch a carp warning?

        #!/usr/bin/env perl
        use warnings;
        use strict;
        use 5.012;
        use List::Util qw(max min);
        use Number::Range;

        my @array;
        my $max = 20;
        print "Input (max $max): ";
        my $in = <>;
        $in =~ s/\s+//g;
        $in =~ s/(?<=\d)-/../g;
        eval {
            my $range = new Number::Range( $in );
            @array = sort { $a <=> $b } $range->range;
        };
        if ( $@ =~ /\d+ is > \d+/ ) { die $@ };  # catching the carp warning doesn't work
        die "Input greater than $max not allowed $!" if defined $max and max( @array ) > $max;
        die "Input '0' or less not allowed $!" if min( @array ) < 1;
        say "@array";

    Read the article

  • Defining < for STL sort algorithm - operator overload, functor or standalone function?

    - by Andy
    I have a std::list containing Widget class objects. They need to be sorted according to two members of the Widget class. For the sorting to work, I need to define a less-than comparator comparing two Widget objects. There seems to be a myriad of ways to do it. From what I can gather, one can either:

    a. Define a comparison operator overload in the class:

        bool Widget::operator< (const Widget &rhs) const

    b. Define a standalone function taking two Widgets:

        bool operator<(const Widget& lhs, const Widget& rhs);

    and then make it a friend of the Widget class:

        class Widget {
            // Various class definitions ...
            friend bool operator<(const Widget& lhs, const Widget& rhs);
        };

    c. Define a functor and then pass it as a parameter when calling the sort function:

        class Widget_Less : public binary_function<Widget, Widget, bool> {
            bool operator()(const Widget &lhs, const Widget& rhs) const;
        };

    Does anybody know which method is better? In particular I am interested to know whether I should do (a) or (b). I searched the book Effective STL by Scott Meyers but unfortunately it does not have anything to say about this. Thank you for your reply.

    Read the article

  • Fast, very lightweight algorithm for camera motion detection?

    - by Ertebolle
    I'm working on an augmented-reality app for iPhone that involves a very processor-intensive object recognition algorithm (pushing the CPU to 100%, it can get through maybe 5 frames per second), and in an effort to both save battery power and make the whole thing less "jittery" I'm trying to come up with a way to only run that object recognizer when the user is actually moving the camera around. My first thought was to simply use the iPhone's accelerometer/gyroscope, but in testing I found that very often people would move the iPhone at a consistent enough attitude and velocity that there wouldn't be any way to tell that it was still in motion. So that left the option of analyzing the actual video feed and detecting movement in that. I got OpenCV working and tried running their pyramidal Lucas-Kanade optical flow algorithm, which works well but seems to be almost as processor-intensive as my object recognizer. I can get it to an acceptable framerate if I lower the pyramid depth / downsample the image / track fewer points, but then accuracy suffers and it starts to miss some large movements and trigger on small hand-shaking-y ones.

    So my question is: is there another optical flow algorithm that's faster than Lucas-Kanade if I just want to detect the overall magnitude of camera movement? I don't need to track individual objects, I don't even need to know which direction the camera is moving; all I really need is a way to feed something two frames of video and have it tell me how far apart they are.
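
    Not an optical flow algorithm at all, but if the only question is "did the camera move between these two frames?", plain frame differencing on heavily subsampled grayscale frames is about as cheap as it gets (with the caveat that it cannot distinguish camera motion from subject motion). A NumPy sketch of the idea; the step size and threshold are made-up tuning knobs:

        import numpy as np

        def motion_score(prev_gray, cur_gray, step=8):
            # Mean absolute difference between subsampled grayscale frames.
            # prev_gray, cur_gray: 2-D uint8 arrays of identical shape.
            # step: keep every step-th pixel in each direction (8 -> ~1.5% of pixels).
            a = prev_gray[::step, ::step].astype(np.int16)
            b = cur_gray[::step, ::step].astype(np.int16)
            return float(np.abs(b - a).mean())

        # e.g. run the recognizer only while the score exceeds a tuned threshold:
        # if motion_score(last_frame, frame) > MOTION_THRESHOLD:
        #     run_recognizer(frame)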

    Read the article
