Search Results

Search found 24201 results on 969 pages for 'andrew case'.


  • How to remove the error "Can't find PInvoke DLL SQLite.Interop.dll"

    - by Shailesh Jaiswal
    I am developing a Windows Mobile application that uses an SQLite database. I connect to the database with the following code:

        SQLiteConnection cn = new SQLiteConnection();
        SQLiteDataReader SQLiteDR;
        cn.ConnectionString = @"Data Source=F:\CompNetDB.db3";
        cn.Open();
        SQLiteCommand cmd = new SQLiteCommand();
        cmd.CommandText = "select * from CustomerInfo";
        cmd.CommandType = CommandType.Text;
        cmd.Connection = cn;
        SQLiteDR = cmd.ExecuteReader();

    When this runs I get the error "Can't find PInvoke DLL SQLite.Interop.dll". I have added a reference to System.Data.SQLite from the \SQLite.NET\bin\CompactFramework folder, which is installed by default when SQLite is installed. The same folder also contains a DLL named SQLite.Interop.066.DLL, but when I try to add a reference to it, I get an error saying the DLL cannot be added. Are the two DLLs, SQLite.Interop.dll and SQLite.Interop.066.dll, the same? How do I solve the "Can't find PInvoke DLL SQLite.Interop.dll" error in the code above? Is there a mistake in my code, or am I missing something in my application?


  • F#, Linux and makefiles

    - by rwallace
    I intend to distribute an F# program as both binary and source, so the user has the option of recompiling it if desired. On Windows, I understand how to do this: provide .fsproj and .sln files, which both Visual Studio and MSBuild can understand. On Linux, the traditional solution for C programs is a makefile; this depends on gcc being directly available, which it always is. The F# compiler can be installed on Linux and works under Mono, so that's fine so far. However, as far as I can tell, the installation doesn't create a scenario where fsc alone runs the compiler; instead the command is mono ...path.../fsc.exe. This is also fine, except that I don't know what the path is going to be. So the full command to run the compiler in my case could be

        mono ~/FSharp-2.0.0.0/bin/fsc.exe types.fs tptp.fs main.fs -r FSharp.PowerPack.dll

    except that I'm not sure where fsc.exe will actually be located on the user's machine. Is there a way to find that out within a makefile, or would it be better to fall back on just explaining the above in the documentation and relying on the user to modify the command according to his setup?


  • HTTP POST parameter order / REST URLs

    - by pq
    Let's say that I'm uploading a large file via a POST HTTP request. Let's also say that I have another parameter (other than the file) that names the resource which the file is updating. The resource name cannot be part of the URL the way you would do it with REST (e.g. foo.com/bar/123); let's say this is due to a combination of technical and political reasons. The server needs to ignore the file if the resource name is invalid or if, say, the IP address and/or the logged-in user are not authorized to update the resource. It looks like, if this POST came from an HTML form that contains the resource name field first and the file field second, most (all?) browsers preserve this order in the POST request. But it would be naive to fully rely on that, no? In other words, the order of HTTP parameters is insignificant and a client is free to construct the POST in any order. Isn't that true? Which means that, at least in theory, the server may end up storing the whole large file before it can deny the request. It seems to me that this is a clear case where RESTful URLs have an advantage, since you don't have to look at the POST content to perform certain authorization/error checking on the request. Do you agree? What are your thoughts and experiences?
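    That clients control the field order can be shown by hand-building the multipart body. Here is a minimal Python sketch (the endpoint URL and field names are invented for illustration) that deliberately places the file part before the resource-name part:

        import requests  # assumes the third-party 'requests' package

        boundary = "XXBOUNDARYXX"
        parts = [
            "--" + boundary,
            'Content-Disposition: form-data; name="file"; filename="big.bin"',
            "Content-Type: application/octet-stream",
            "",
            "A" * 1024,  # stand-in for a large upload
            "--" + boundary,
            'Content-Disposition: form-data; name="resource"',
            "",
            "bar123",
            "--" + boundary + "--",
            "",
        ]
        body = "\r\n".join(parts)
        requests.post(
            "http://example.com/upload",  # hypothetical endpoint
            data=body.encode("ascii"),
            headers={"Content-Type": "multipart/form-data; boundary=" + boundary},
        )

    A server that buffers the whole request before reading the trailing resource field would, as the question suspects, receive the entire file before it can reject the request; only a server that parses the stream incrementally can abort early.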


  • HTTP crawler in Erlang

    - by ctp
    I'm writing a simple HTTP crawler, but I have an issue running the code at the bottom. I'm requesting 50 URLs and get the content of only 20+ back. I've generated a few files, 150 kB each, to test the crawler, so I think the 20+ responses are limited by the bandwidth? BUT: how do I tell the Erlang snippet not to quit until the last file has been fetched? The test data server is online, so please try the code out; any hints are welcome :)

        -module(crawler).
        -define(BASE_URL, "http://46.4.117.69/").
        -export([start/0, send_reqs/0, do_send_req/1]).

        start() ->
            ibrowse:start(),
            proc_lib:spawn(?MODULE, send_reqs, []).

        to_url(Id) ->
            ?BASE_URL ++ integer_to_list(Id).

        fetch_ids() ->
            lists:seq(1, 50).

        send_reqs() ->
            spawn_workers(fetch_ids()).

        spawn_workers(Ids) ->
            lists:foreach(fun do_spawn/1, Ids).

        do_spawn(Id) ->
            proc_lib:spawn_link(?MODULE, do_send_req, [Id]).

        do_send_req(Id) ->
            io:format("Requesting ID ~p ... ~n", [Id]),
            Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
            case Result of
                {ok, Status, _H, B} ->
                    io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n",
                              [Id, Status, length(B)]);
                Err ->
                    io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err])
            end.

    That's the output:

        Requesting ID 1 ...
        Requesting ID 2 ...
        ...
        Requesting ID 50 ...
        OK -- ID: 49 -- Status: "200" -- Content length: 150000
        OK -- ID: 47 -- Status: "200" -- Content length: 150000
        OK -- ID: 50 -- Status: "200" -- Content length: 150000
        ...
        OK -- ID: 05 -- Status: "200" -- Content length: 150000

    (all 50 requests are issued, but only 23 OK lines come back before the run ends)

    Update: thanks stemm for the hint with wait_workers. I've combined your code and mine, but the behaviour is the same :(

        -module(crawler).
        -define(BASE_URL, "http://46.4.117.69/").
        -export([start/0, send_reqs/0, do_send_req/2]).

        start() ->
            ibrowse:start(),
            proc_lib:spawn(?MODULE, send_reqs, []).

        to_url(Id) ->
            ?BASE_URL ++ integer_to_list(Id).

        fetch_ids() ->
            lists:seq(1, 50).

        send_reqs() ->
            spawn_workers(fetch_ids()).

        spawn_workers(Ids) ->
            %% collect a reference to each worker
            Refs = [ do_spawn(Id) || Id <- Ids ],
            %% wait for a response from each worker
            wait_workers(Refs).

        wait_workers(Refs) ->
            lists:foreach(fun receive_by_ref/1, Refs).

        receive_by_ref(Ref) ->
            %% receive a message only from the worker with this specific reference
            receive
                {Ref, done} -> done
            end.

        do_spawn(Id) ->
            Ref = make_ref(),
            proc_lib:spawn_link(?MODULE, do_send_req, [Id, {self(), Ref}]),
            Ref.

        do_send_req(Id, {Pid, Ref}) ->
            io:format("Requesting ID ~p ... ~n", [Id]),
            Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
            case Result of
                {ok, Status, _H, B} ->
                    io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n",
                              [Id, Status, length(B)]),
                    %% send a message that the work is done
                    Pid ! {Ref, done};
                Err ->
                    io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err]),
                    %% repeat the request if there was an error while fetching the page,
                    do_send_req(Id, {Pid, Ref})
                    %% or - if you don't want to repeat the request, put there:
                    %% Pid ! {Ref, done}
            end.

    Running the crawler works fine for a handful of files, but then the code doesn't even fetch entire files (each file is 150,000 bytes): the crawler fetches some files only partially, as the following web server log shows :(

        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /10 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /1 HTTP/1.1" 200 150000 "-" "-"
        ... (26 complete 150000-byte responses in total) ...
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /15 HTTP/1.1" 200 17020 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /21 HTTP/1.1" 200 120360 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /40 HTTP/1.1" 200 117600 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /34 HTTP/1.1" 200 60660 "-" "-"

    Any hints are welcome. I have no clue what's going wrong there :(


  • Messaging pattern question

    - by Al Bundy
    Process A is calculating values for objects a1, a2, a3, etc. and is sending the results to the middleware queue (RabbitMQ). Consumers read the queue and process these results further. Periodically, process A has to send a snapshot of these values, so consumers can do some other calculations. Values for these objects might change independently, so the queue might look like this: a1, a1, a2, a1, a2, a2, a3, ... Consumers process each item in the queue. The snapshot has to contain all objects, and consumers will process this message for all objects in one go. So the requirement is to have a queue like this: a1, a1, a3, a2, a2, [snapshot, a1, a2, a3], a3, a1, ... The problem is that these items are of different types: one type for objects like a1 and a2, and another for a snapshot. This suggests they should be processed in different queues, but in that case there is a race condition: consumers might process objects before processing a snapshot. Is there any pattern to solve this (quite common) problem? We are using RabbitMQ for message queueing.
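    One common way to avoid that race (the question leaves the solution open, so this is just one option) is to keep both message kinds on a single queue and tag each message with a type field, so the queue's FIFO ordering keeps the snapshot in sequence with the updates (for a single consumer per queue; competing consumers would reintroduce ordering questions). A minimal Python sketch using the pika RabbitMQ client, with illustrative queue and field names:

        import json
        import pika

        conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        ch = conn.channel()
        ch.queue_declare(queue="values")

        def publish(message):
            ch.basic_publish(exchange="", routing_key="values",
                             body=json.dumps(message))

        # Per-object updates and the periodic snapshot travel on the same
        # queue, so the consumer cannot see them out of order.
        publish({"type": "update", "object": "a1", "value": 42})
        publish({"type": "snapshot", "values": {"a1": 42, "a2": 7, "a3": 9}})

    The consumer then dispatches on the "type" field rather than on which queue the message arrived from.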


  • Are licenses relevant for small code snippets?

    - by Martin
    When I'm about to write a short algorithm, I first check whether the base class library I'm using already implements it. If not, I often do a quick Google search to see if someone has done it before (which is the case 19 times out of 20). Most of the time, I find the exact code I need. Sometimes it's clear what license applies to the source code, sometimes not; it may be GPL, LGPL, BSD or whatever. Sometimes people have posted a code snippet on some random forum which solves my problem. It's clear to me that I can't reuse the code (copy/paste it into my code) without caring about the license if the code is in some way substantial. What is not clear to me is whether I can copy a code snippet containing 5 lines or so without committing a license violation. Can I copy/paste a 5-line code snippet without caring about the license? What about a one-liner? What about 10 lines? Where do I draw the line (no pun intended)? My second problem is this: if I have found a 10-line code snippet which does exactly what I need, but feel that I cannot copy it because it's GPL-licensed and my software isn't, I have already memorized how to implement it, so when I go about implementing the same functionality, my code is almost identical to the GPL-licensed code I saw a few minutes ago. (In other words, the code was copied to my brain, and my brain then copied it into my source code.)


  • Yaml::load_file acting different between development and production (Rails)

    - by James
    Hi, I am completely stumped by the nature of this problem. We export data from our application into a 'cleaned' YAML file (stripping out IDs, created_at, etc.). Then we (will) allow users to import these files back into the application; it is the import that is completely bugging me out. In development, YAML::load_file(params[:uploaded_data].local_path) returns an array of YAML::Object instances (and it doesn't matter which of the number of different ways the file could be loaded), with attributes like:

        {"exception_count"=>"0", "title"=>"Start", "amount"=>"70.00", "colour"=>nil, "repeat_type_id"=>"0", "repeat_interval"=>"1"}

    Which is very nice, as the attributes also include the (associated model) exceptions that you see an exception_count for. However, on production (Rails 2.3.2, running REE 1.8.7 with 1.8.6 for testing, tested on two different production environments, and also running the production environment locally) it returns an array of the actual model objects within the YAML, in this case Event:

        #<Event repeat_type_id: 0, colour: nil, repeat_interval: 1, exception_count: 0, ...>

    Now this would be merely perplexing if it also included the associated model Exception with it; however, it doesn't. Can anyone at all shed some light on why the YAML parser would behave so differently between production and development? I'm on Rails 2.3.2, running REE 1.8.7; however, I've also tested running Ruby 1.8.6 with exactly the same results. Thanks for any help!


  • Destructuring assignment in JavaScript

    - by Anders Rune Jensen
    As can be seen in the Mozilla changelog for JavaScript 1.7, they have added destructuring assignment. Sadly, I'm not very fond of the syntax (why write a and b twice?):

        var a, b;
        [a, b] = f();

    Something like this would have been a lot better:

        var [a, b] = f();

    That would still be backwards compatible. Python-like destructuring would not be backwards compatible. Anyway, the best solution for JavaScript 1.5 that I have been able to come up with is (using jQuery's $.each):

        function assign(array, map) {
            var o = {};
            var i = 0;
            $.each(map, function (e, _) { o[e] = array[i++]; });
            return o;
        }

    Which works like:

        var array = [1, 2];
        var _ = assign(array, { var1: null, var2: null });
        _.var1; // 1
        _.var2; // 2

    But this really sucks, because _ has no meaning; it's just an empty shell to store the names. But sadly it's needed, because JavaScript doesn't have pointers. On the plus side, you can assign default values in case the values are not matched. Also note that this solution doesn't try to slice the array, so you can't do something like {first: 0, rest: 0}. But that could easily be done if one wanted that behavior. What is a better solution?


  • Why is curl in Ruby slower than command-line curl?

    - by Stiivi
    I am trying to download more than 1m pages (URLs ending with a sequence ID). I have implemented a kind of multi-purpose download manager with a configurable number of download threads and one processing thread. The downloader downloads files in batches:

        curl = Curl::Easy.new
        batch_urls.each { |url_info|
          curl.url = url_info[:url]
          curl.perform
          file = File.new(url_info[:file], "wb")
          file << curl.body_str
          file.close
          # ... some other stuff
        }

    I have tried to download an 8000-page sample. When using the code above, I get 1000 pages in 2 minutes. When I write all URLs into a file and do, in a shell:

        cat list | xargs curl

    I get all 8000 pages in two minutes. The thing is, I need to have it in Ruby code, because there is other monitoring and processing code. I have tried: Curl::Multi - it is somehow faster, but misses 50-90% of the files (it does not download them and gives no reason/code); multiple threads with Curl::Easy - around the same speed as single-threaded. Why is a reused Curl::Easy slower than subsequent command-line curl calls, and how can I make it faster? Or what am I doing wrong? I would prefer to fix my download manager code rather than do the downloading for this case in a different way. Before this, I was calling command-line wget, which I provided with a file containing a list of URLs. However, not all errors were handled, and it also was not possible to specify an output file for each URL separately when using a URL list. Now it seems to me that the best way would be to use multiple threads with a system call to the 'curl' command. But why, when I can use Curl directly from Ruby? The code for the download manager is here, if it might help: Download Manager (I have played with timeouts, from not setting them to various values; it did not seem to help.) Any hints appreciated.


  • Just messed up a server misusing chown, how to execute it correctly?

    - by Jack Webb-Heller
    Hi! I'm moving from an old shared host to a dedicated server at MediaTemple. The server is running the Plesk control panel but, as far as I can tell, there's no way via the interface to do what I want to do. On the old shared host, running cPanel, I created a .zip archive of all the website's files. I downloaded this to my computer, then uploaded it with FTP to the new host account I'd set up. Finally, I logged in via SSH, navigated to the directory the zip was stored in (something like var/www/vhosts/mysite.com/httpdocs/) and ran the unzip command on the file sitearchive.zip. This extracted everything just fine, and the site appeared to work. The problem: when I tried to edit a file through FTP, I got "Error - 160: Permission Denied". When I Get Info for the file I'm trying to edit, it says the owner and group is swimwir1. I attempted to use chown at this point to change the owner - and yes, as you may be able to tell, I'm a little inexperienced in SSH ;) Luckily the server was new, since the command I ran -

        chown -R newuser /

    - appeared to mess a load of stuff up. The reason I used / at the end rather than /var/www/vhosts/mysite.com/httpdocs/ was that I'd already cd'd into there, so I presumed the / was relative to where I was working. This may be the case, I have no idea; either way, Plesk was no longer accessible, although Apache and other things continued to work. I realised my mistake, and deciding it wasn't worth the hassle of 1) being an amateur and 2) trying to fix it, I just reprovisioned the server to start afresh. So - what do I do to change the owner of these files correctly? Thanks for helping out a confused beginner! Jack


  • How can I programmatically add more than just one view object to my view controller?

    - by BeachRunnerJoe
    I'm diving into iPhone OS development and I'm trying to understand how I can add multiple view objects to the "Left/Root" view of my SplitView iPad app. I've figured out how to programmatically add a TableView to that view, based on the example code I found in Apple's online documentation:

        RootViewController.h:

        @interface RootViewController : UITableViewController <NSFetchedResultsControllerDelegate, UITableViewDelegate, UITableViewDataSource> {
            DetailViewController *detailViewController;
            UITableView *tableView;
            NSFetchedResultsController *fetchedResultsController;
            NSManagedObjectContext *managedObjectContext;
        }

        RootViewController.m:

        - (void)loadView {
            UITableView *newTableView = [[UITableView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]
                                                                     style:UITableViewStylePlain];
            newTableView.autoresizingMask = UIViewAutoresizingFlexibleHeight | UIViewAutoresizingFlexibleWidth;
            newTableView.delegate = self;
            newTableView.dataSource = self;
            [newTableView reloadData];
            self.view = newTableView;
            [newTableView release];
        }

    But there are a few things I don't understand about it, and I was hoping you veterans could help clear up some confusion. In the statement self.view = newTableView, I assume I'm setting the entire view to a single UITableView. If that's the case, then how can I add additional view objects to that view alongside the table view? For example, if I wanted to have a DatePicker view object and the TableView object instead of just the TableView object, how would I programmatically add that? Referencing the code above, how can I resize the table view to make room for the DatePicker view object that I'd like to add? Thanks so much in advance for your help! I'm going to continue researching these questions right now.


  • Caching sitemaps in django

    - by michuk
    I implemented a simple sitemap class using Django's default sitemap app. As it was taking a long time to execute, I added manual caching:

        class ShortReviewsSitemap(Sitemap):
            changefreq = "hourly"
            priority = 0.7

            def items(self):
                # try to retrieve from cache
                result = get_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews")
                if result is not None:
                    return result
                result = ShortReview.objects.all().order_by("-created_at")
                # store in cache
                set_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews", result)
                return result

            def lastmod(self, obj):
                return obj.updated_at

    The problem is that memcached allows objects of at most 1 MB. This one was bigger than 1 MB, so storing it in the cache failed:

        >7 SERVER_ERROR object too large for cache

    The other problem is that Django has an automated way of deciding when it should divide the sitemap file into smaller ones. According to the docs (http://docs.djangoproject.com/en/dev/ref/contrib/sitemaps/): "You should create an index file if one of your sitemaps has more than 50,000 URLs. In this case, Django will automatically paginate the sitemap, and the index will reflect that." What do you think would be the best way to enable caching of sitemaps? Hacking into the Django sitemaps framework to restrict a single sitemap size to, let's say, 10,000 records seems like the best idea. Why was 50,000 chosen in the first place? Google's advice? A random number? Or maybe there is a way to allow memcached to store bigger objects? Or perhaps, once saved, the sitemaps should be made available as static files? This would mean that instead of caching with memcached I'd have to manually store the results in the filesystem and retrieve them from there the next time the sitemap is requested (perhaps cleaning the directory daily in a cron job). All of those seem very low-level, and I'm wondering if an obvious solution exists...
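    On the "restrict a single sitemap size" idea: no hacking is needed, because Django's Sitemap class exposes a limit attribute for exactly this (and the 50,000 default is not arbitrary; it is the per-file cap in the sitemaps.org protocol that Google and others follow). A sketch combining a smaller limit with Django's standard cache API, reusing the post's ShortReview model; the cache key and timeout are illustrative:

        from django.contrib.sitemaps import Sitemap
        from django.core.cache import cache

        class ShortReviewsSitemap(Sitemap):
            changefreq = "hourly"
            priority = 0.7
            limit = 10000  # paginate into files of 10,000 URLs instead of 50,000

            def items(self):
                result = cache.get("sitemap_short_reviews")
                if result is None:
                    # materialize the queryset so the cached value is stable
                    result = list(ShortReview.objects.all().order_by("-created_at"))
                    cache.set("sitemap_short_reviews", result, 60 * 60)
                return result

            def lastmod(self, obj):
                return obj.updated_at

    Note that limit only shrinks each generated sitemap file; if the cached items list itself exceeds memcached's 1 MB value cap, it still has to be split (for example, caching a list of primary keys per page) or moved to file-based caching.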


  • Combo box in a scrollable panel causing problems

    - by Dennis
    I have a panel with AutoScroll set to true. In it, I am programmatically adding ComboBox controls. If I add enough controls to exceed the viewable size of the panel, a scroll bar appears (so far so good). However, if I open one of the combo boxes near the bottom of the viewable area, the combo list isn't properly displayed and the scrollable area seems to be expanded. This results in all of the controls being "pulled" to the new bottom of the panel, with some new blank space at the top. If I continue to tap on the drop-down at the bottom of the panel, the scrollable area will continue to expand indefinitely. I'm anchoring the controls to the left, right and top, so I don't think anchoring is involved. Is there something obvious that could be causing this? Update: it looks like the problem lies with anchoring the controls to the right. If I don't anchor to the right, then I don't get the strange behavior; however, without right anchoring, the control gets cut off by the scroll bar. Here's a simplified test case I built that shows the issue:

        public Form1() {
            InitializeComponent();

            Panel panel = new Panel();
            panel.Size = new Size(80, 200);
            panel.AutoScroll = true;

            for (int i = 0; i < 10; ++i) {
                ComboBox cb = new ComboBox();
                cb.Anchor = AnchorStyles.Left | AnchorStyles.Right | AnchorStyles.Top;
                cb.Items.Add("Option 1");
                cb.Items.Add("Option 2");
                cb.Items.Add("Option 3");
                cb.Items.Add("Option 4");
                cb.Location = new Point(0, i * 24);
                panel.Controls.Add(cb);
            }

            Controls.Add(panel);
        }

    If you scroll to the bottom of the panel and tap on the combo boxes near the bottom, you'll notice the strange behavior.


  • How do I locate a particular word in a text file using .NET

    - by cmrhema
    I am sending mails (in ASP.NET, C#), using a template in a text file (.txt) like the one below:

        User Name :<User Name>
        Address : <Address>.

    I used to replace the words within the angle brackets in the text file using the code below:

        StreamReader sr;
        sr = File.OpenText(HttpContext.Current.Server.MapPath(txt));
        copy = sr.ReadToEnd();
        sr.Close(); // close the reader

        copy = copy.Replace(word.ToUpper(), "#" + word.ToUpper()); // mark the specified word

        // save the new copy into the existing text file
        FileInfo newText = new FileInfo(HttpContext.Current.Server.MapPath(txt));
        StreamWriter newCopy = newText.CreateText();
        newCopy.WriteLine(copy);
        newCopy.Write(newCopy.NewLine);
        newCopy.Close();

    Now I have a new problem: the user will be adding new words within angle brackets, say for example <Salary>. In that case I have to read the file and find the word <Salary>. In other words, I have to find all the words that are located within angle brackets. How do I do that?
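    Finding every token wrapped in angle brackets is a one-regex job. Here is a quick sketch of the pattern in Python for illustration; the same expression carries over directly to .NET, where the equivalent call is Regex.Matches from System.Text.RegularExpressions:

        import re

        template = "User Name :<User Name>\nAddress : <Address>. Salary: <Salary>"
        placeholders = re.findall(r"<([^<>]+)>", template)
        print(placeholders)  # ['User Name', 'Address', 'Salary']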


  • Detecting Singularities in a Graph

    - by nasufara
    I am creating a graphing calculator in Java as a project for my programming class. There are two main components to this calculator: the graph itself, which draws the line(s), and the equation evaluator, which takes in an equation as a String and... well, evaluates it. To create the line, I create a Path2D.Double instance and loop through the points on the line. To do this, I calculate as many points as the graph is wide (e.g. if the graph is 500px wide, I calculate 500 points) and then scale them to the window of the graph. Now, this works perfectly for almost any line. However, it does not when dealing with singularities. If, while calculating points, the graph encounters a domain error (such as 1/0), the graph closes the shape in the Path2D.Double instance and starts a new line, so that the line looks mathematically correct. But because of the way it scales, sometimes it is rendered correctly and sometimes it isn't. When it isn't, the actual asymptotic line is shown, because within those 500 points it skipped over x = 2.0 in the equation 1 / (x-2), and only evaluated x = 1.98 and x = 2.04, which are perfectly valid in that equation. (In the failing case, I had increased the window on the left and right by one unit each.) My question is: is there a way to deal with singularities using this method of scaling so that the resulting line looks mathematically correct? I have thought of implementing a binary-search-esque method where, if one calculated point is wildly far away from the next, it searches in between those points for a domain error. I had trouble figuring out how to make it work in practice, however. Thank you for any help you may give!
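    The binary-search idea can be made concrete: whenever two consecutive samples differ wildly, bisect the interval between them until either an actual domain error is hit or the interval shrinks below one pixel. A small Python sketch of that refinement step (the iteration count and the sign-flip test are illustrative choices that suit odd poles like 1/(x-2)):

        def refine_jump(f, x0, x1, depth=40):
            """Bisect [x0, x1], homing in on a suspected singularity of f."""
            y0_positive = f(x0) > 0
            for _ in range(depth):
                mid = (x0 + x1) / 2.0
                try:
                    y_mid = f(mid)
                except ZeroDivisionError:
                    return mid  # landed exactly on the domain error
                # Across a pole such as 1/(x - 2) the sign flips, so keep
                # the half-interval in which the flip occurs.
                if (y_mid > 0) != y0_positive:
                    x1 = mid
                else:
                    x0 = mid
                    y0_positive = y_mid > 0
            return (x0 + x1) / 2.0

        print(refine_jump(lambda x: 1.0 / (x - 2.0), 1.98, 2.04))  # ~2.0

    The same threshold test that triggers the refinement is also the natural place to end one subpath of the Path2D.Double and start the next.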


  • How does the last integer promotion rule ever get applied in C?

    - by SiegeX
    6.3.1.8p1: "Otherwise, the integer promotions are performed on both operands. Then the following rules are applied to the promoted operands:

        - If both operands have the same type, then no further conversion is needed.
        - Otherwise, if both operands have signed integer types or both have unsigned integer types, the operand with the type of lesser integer conversion rank is converted to the type of the operand with greater rank.
        - Otherwise, if the operand that has unsigned integer type has rank greater or equal to the rank of the type of the other operand, then the operand with signed integer type is converted to the type of the operand with unsigned integer type.
        - Otherwise, if the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, then the operand with unsigned integer type is converted to the type of the operand with signed integer type.
        - Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type."

    For the last rule to be applied, it would seem you need an unsigned integer type whose rank is less than that of the signed integer type, while the signed integer type still cannot hold all the values of the unsigned integer type. Is there a real-world example of such a case, or is this statement serving as a catch-all to cover all possible permutations?


  • Printing out this week's dates in perl

    - by ABach
    Hi folks, I have the following loop to calculate the dates of the current week and print them out. It works, but I am swimming in the number of date/time possibilities in Perl and want to get your opinion on whether there is a better way. Here's the code I've written:

        #!/usr/bin/env perl
        use warnings;
        use strict;
        use DateTime;

        # Calculate numeric value of today and the
        # target day (Monday = 1, Sunday = 7); the
        # target, in this case, is Monday, since that's
        # when I want the week to start
        my $today_dt = DateTime->now;
        my $today    = $today_dt->day_of_week;
        my $target   = 1;

        # Create DateTime copies to act as the "bookends"
        # for the date range
        my ($start, $end) = ($today_dt->clone(), $today_dt->clone());

        if ($today == $target) {
            # If today is the target, "start" is already set;
            # we simply need to set the end date
            $end->add( days => 6 );
        }
        else {
            # Otherwise, we calculate the Monday preceding today
            # and the Sunday following today
            my $delta = ($target - $today + 7) % 7;
            $start->add( days => $delta - 7 );
            $end->add( days => $delta - 1 );
        }

        # I clone the DateTime object again because, for some reason,
        # I'm wary of using $start directly...
        my $cur_date = $start->clone();

        while ($cur_date <= $end) {
            my $date_ymd = $cur_date->ymd;
            print "$date_ymd\n";
            $cur_date->add( days => 1 );
        }

    As mentioned, this works - but is it the quickest/most efficient/etc.? I'm guessing that quickness and efficiency may not necessarily go together, but your feedback is very much appreciated.


  • How to return proper 404 for google while providing user friendly content to the user?

    - by Marek
    I am bouncing between posting this here and on Superuser; please excuse me if you feel this does not belong here. I am observing the behavior described here: Googlebot is requesting random URLs on my site, like aecgeqfx.html or sutwjemebk.html. I am sure that I am not linking these URLs from anywhere on my site. I suspect this may be Google probing how we handle non-existent content; to cite from an answer to the linked question: "[google is requesting random urls to] see if your site correctly handles non-existent files (by returning a 404 response header)". We have a custom page for non-existent content: a styled page saying "Content not found, if you believe you got here by error, please contact us", with a few internal links, served (naturally) with a 200 OK. The URL is served directly (no redirection to a single URL). I am afraid this may hurt the site's standing with Google: they may not interpret the user-friendly page as a 404 Not Found, and may think we are trying to fake something and serve duplicate content. How should I proceed to ensure that Google will not think the site is bogus, while still providing a user-friendly message when users click dead links by accident?
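    For what it's worth, "user friendly" and "404" are not in conflict: the response body can be any styled page with internal links, as long as the status line says 404 rather than 200. A minimal sketch of the idea in Django (the framework choice and template name are illustrative; most frameworks and servers have an equivalent knob, such as Apache's ErrorDocument directive):

        from django.shortcuts import render

        def page_not_found(request, exception=None):
            # A styled, human-friendly body with internal links is fine;
            # crawlers only look at the status code set below.
            return render(request, "friendly_404.html", status=404)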


  • UIViewController remaining

    - by Guy
    Hi guys. I have a UIViewController named equationVC whose user interface is being programmatically created from another NSObject class called equationCon. Upon loading equationVC, a method called chooseInterface is called from the equationCon class. I have a global variable (globalVar) that points to a user-defined string, and chooseInterface finds a method in the equationCon class that matches the string globalVar points to. In this case, let's say that globalVar points to a string called "methodThatMatches". In methodThatMatches, another view controller needs to show the results of what methodThatMatches did, so methodThatMatches creates a new equationVC that calls upon methodThatMatches2. As a test, each method changes the color of the background. When the application starts up, I get a purple background, but as soon as I navigate back I get another purple screen, which should be yellow. I do not think that I am releasing the view properly. Can anyone help?

        -(void)chooseInterface {
            NSString *equationTemp = [globalVar stringByReplacingOccurrencesOfString:@" " withString:@""];
            equationTemp = [equationTemp stringByReplacingOccurrencesOfString:@"'" withString:@""];
            SEL equationName = NSSelectorFromString(equationTemp);
            NSLog(@"selector! %@", NSStringFromSelector(equationName));
            if ([self respondsToSelector:equationName]) {
                [self performSelector:equationName];
            }
        }

        -(void)methodThatMatches {
            self.equationVC.view.backgroundColor = [UIColor yellowColor];
            [setGlobalVar:@"methodThatMatches2"];
            EquationVC *temp = [[EquationVC alloc] init];
            [[self.equationVC navigationController] pushViewController:temp animated:YES];
            [temp release];
        }

        -(void)methodThatmatches2 {
            self.equationVC.view.backgroundColor = [UIColor purpleColor];
        }


  • Flex wordwrap issue with multiple text instances

    - by Craig Myles
    Hi, I have a scenario where I want to dynamically add words of text to a container so that they form a paragraph of text which is wrapped neatly according to the size of the parent container. Each text element will have differing formatting and differing user-interaction options. For example, imagine the text "[username] has just spoken out about [news article]" (the bracketed placeholders stand in for dynamic values). Each word will be added to the container one at a time, at run time. The username in this case would be bold and, if clicked on, would trigger an event. Same with the news article. The rest is just plain text which, when clicked on, would do nothing. Now, I'm using Flex 3, so I don't have access to the fancy new text formatting tools. I've implemented a solution where the words are plotted onto a canvas, but this means that the words are wrapped at a particular y position (an arbitrary value I've chosen). When the container is resized, the words still wrap at that position, which leaves lots of space. I thought about adding each text element to an ArrayCollection and using this as a data source for a Tile List, but Tile Lists don't support variable column widths (in my limited knowledge), so each word would use the same amount of space, which isn't ideal. Does anyone know how I can plot words onto a container so that I can retain formatting, events and word wrapping at paragraph level, even if the container is resized?


  • python 'with' statement

    - by Stephen
    Hi, I'm using Python 2.5 and trying to use the 'with' statement:

        from __future__ import with_statement

        a = []
        with open('exampletxt.txt', 'r') as f:
            while True:
                a.append(f.next().strip().split())
        print a

    The contents of 'exampletxt.txt' are simple:

        a
        b

    In this case, I get the error:

        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "/tmp/python-7036sVf.py", line 5, in <module>
            a.append(f.next().strip().split())
        StopIteration

    And if I replace f.next() with f.read(), it seems to be caught in an infinite loop. I wonder if I have to write a decorator class that accepts the iterator object as an argument and defines an __exit__ method for it? I know it's more Pythonic to use a for loop for iterators, but I wanted to implement a while loop within a generator that's called by a for loop... something like:

        def g(f):
            while True:
                x = f.next()
                if test(x):
                    a = x
                elif test(x):
                    b = f.next()
                    yield [a, x, b]

        a = []
        with open(filename) as f:
            for x in g(f):
                a.append(x)
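    For reference, the StopIteration above is just the file iterator signalling end-of-file, which the bare while True loop never catches. One way to write the read loop so that EOF ends it cleanly, keeping the post's Python 2.5 idiom, is:

        from __future__ import with_statement

        a = []
        with open('exampletxt.txt', 'r') as f:
            for line in f:  # iteration ends at EOF instead of raising
                a.append(line.strip().split())
        print a  # [['a'], ['b']]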


  • Realizing program with decision tree logics

    - by Vytas999
    The system realizes a game of "Think of an animal". The main use case is:

    1. The system asks the user to think of any animal, and the system will try to guess it.
    2. The system starts asking questions from the root of a decision tree, e.g. "Question: Does it have fur?", and provides the possible answers: yes or no.
    3. If the user answers Yes, the system proceeds to these steps:
       a. The system tries to guess an animal that has that feature, e.g. "My guess: Is it a bear?", and provides the possible answers: yes or no.
       b. If the user's answer is Yes, the system offers to think of another animal.
    4. If the user answers No, the system moves to the No node in the decision tree and returns to step 2 (and starts asking from the new node).
    5. If the system runs out of nodes (i.e., an empty yes or no node was reached):
       a. The system announces that it has given up and asks the user to enter:
          i. what animal he had in mind, and
          ii. what its characteristic feature is.
       b. The user enters the requested data.
       c. The system creates a new node and links it to the yes or no branch of the last active node.

    Where can I get some information and examples for implementing this decision-tree logic in MS SQL Server and C#? Any information will be useful; see also the sketch below. Thanks.
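    As a language-agnostic illustration of the structure those steps imply (sketched in Python here; translating it to a C# class is direct, and in SQL Server the natural mapping is a table like nodes(id, text, is_guess, yes_id, no_id) - all names are illustrative):

        class Node(object):
            def __init__(self, text, is_guess, yes=None, no=None):
                self.text = text          # a question, or a guessed animal
                self.is_guess = is_guess  # leaves hold concrete guesses
                self.yes = yes
                self.no = no

        def play(node, ask):
            """Walk the tree; ask(question) poses a yes/no question and returns a bool."""
            while node is not None:
                if node.is_guess:
                    # step 3a: try the concrete guess at this leaf
                    return ask("My guess: Is it a %s?" % node.text)
                # steps 2/4: follow the yes or no branch of the question node
                node = node.yes if ask("Question: %s" % node.text) else node.no
            return False  # step 5: empty node reached; give up and learn a new animal

        root = Node("Does it have fur?", False,
                    yes=Node("bear", True), no=Node("snake", True))
        # interactive use, e.g.: play(root, lambda q: input(q + " [yes/no] ") == "yes")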


  • Best way for launching html/jsp to communicate with GWT module

    - by h2g2java
    I asked this at the GWT forum, but I'm impatient for the answer and I seem to get rather good responses here. An HTML or JSP file is used to launch xxx.nocache.js, which then decides which browser "permutation" to use:

        <head>
            <meta http-equiv="content-type" content="text/html;charset=UTF-8">
            <title>xxx</title>
            <script type="text/javascript" language="javascript" src="xxx.nocache.js"></script>
        </head>

    In my case, I am using a JSP. When the JSP is executed, it discovers some conditions, and I wish to pass these values to the GWT module being launched. The "elegant" GWT way to pass them would be to persist them as request/memcache attributes and then have the GWT module perform an RPC to retrieve the values. For example, say the JSP discovers that the current user is Whoopy. Shouldn't I simply have the JSP generate JavaScript that stores user = "Whoopy" as a top-level or named-frame-level JavaScript variable, and use JSNI within the module to retrieve the value of user? I have not tried it yet, but I would like to know how anyone might have done it without having to use RPC.


  • Link click does nothing after ajax switching?

    - by Neil
    An odd case I'm trying to figure out here. I'm designing a mailbox system and making some of the options ajax-y. Here's the scenario: we have a page with two tabs, Inbox and Compose. The Inbox is essentially a list of links of the form mailbox.php?msg=xxx. Clicking on the Inbox or Compose tabs does an ajax switch. So, let's say we're on a message page: mailbox.php?msg=123. I click on Compose, and it ajax-switches to a compose form. I change my mind and click on Inbox, and it goes back to a list of messages. Note that the URL has not changed at this point (everything has been done through ajax). I click on the same message as before. It should go back into that message. However, nothing happens! The URL it should go to (mailbox.php?msg=123) IS the URL showing in the address bar but, due to the earlier ajax activity, the page is showing the inbox. Thoughts on how to resolve this? And, out of curiosity, an explanation? Normally, clicking on a link that takes you to the page you're already on will reload the page. Thanks!


  • django powering multiple shops from one code base on a single domain

    - by imanc
    Hey, I am new to Django and Python, and am trying to figure out how to modify an existing app to run multiple shops through a single domain. Django's sites middleware seems inappropriate in this particular case, because it manages different domains rather than sites run through the same domain, e.g.:

        domain.com/uk
        domain.com/us
        domain.com/es

    Each site will need translated content and minor template changes, so the solution needs to be flexible enough to allow for easy modification of templates. The forms will also need to vary a bit, e.g. minor variances in fields and validation for each country-specific shop. In short: the same codebase, but separate country-specific URL files, separate templates and a separate database. I am thinking along the following lines and would love some feedback from experienced Django-ers:

        - Create a middleware class that does IP localisation, determines the country based on the URL, and creates a database connection, e.g. /au/ will point to the au-specific database and so on.
        - In the root urls.py, have routes that point to a separate country-specific routing file, e.g. (r'^au/', include('urls_au')), (r'^es/', include('urls_es')).
        - Use a single template directory, but within that directory have a localised directory structure, e.g. /base.html and /uk/base.html, and write a custom template loader that looks for local templates first (or have a separate directory for each shop and set the template directory path in middleware).
        - Use Django's internationalisation to manage translation strings throughout.
        - Handle slight variances in forms and models (e.g. ZA has an ID field; France has 'door code' and 'floor', etc.). I am unsure how to handle these variations, but I suspect the tables will contain all fields but allow nulls, the models will likewise allow nulls, and the forms will be modified slightly for each shop.

    Anyway, I am keen to get feedback on the best way to achieve this multi-site solution; a middleware sketch follows below. It seems like it would work, but it feels a bit "hackish" and I wonder if there's a more elegant way of getting this solution to work. Thanks, imanc
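    As a concrete starting point for the middleware idea, here is a minimal sketch in the old-style Django middleware form the post's era would use; SHOP_COUNTRIES and the 'uk' default are illustrative settings, and a database router keyed off the same request attribute (via Django's DATABASE_ROUTERS setting) is the usual way to point each shop at its own database:

        from django.conf import settings

        class ShopMiddleware(object):
            """Derive the active shop from the URL prefix, e.g. /uk/..."""

            def process_request(self, request):
                prefix = request.path_info.strip('/').split('/')[0]
                known = getattr(settings, 'SHOP_COUNTRIES', ('uk', 'us', 'es', 'au'))
                # Stash the shop code; views, the custom template loader and
                # a DB router can all key off request.shop.
                request.shop = prefix if prefix in known else 'uk'
                return None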

