Search Results

Search found 9015 results on 361 pages for 'wireless range'.

  • How do I overwrite a file currently being read by Python

    - by Brian
    Hi guys, I am not sure of the best way to word this, but what I want to do is read a PDF file, make various modifications, and save the modified PDF over the original file. As of now I am able to save the modified PDF to a separate file, but I want to replace the original, not create a new file. Here is my current code:

        from pyPdf import PdfFileWriter, PdfFileReader

        output = PdfFileWriter()
        input = PdfFileReader(file('input.pdf', 'rb'))
        blank = PdfFileReader(file('C:\\BLANK.pdf', 'rb'))

        # Copy the input pdf to the output.
        for page in range(int(input.getNumPages())):
            output.addPage(input.getPage(page))

        # Add a blank page if needed.
        if (input.getNumPages() % 2 != 0):
            output.addPage(blank.getPage(0))

        # Write the output to pdf.
        outputStream = file('input.pdf', 'wb')
        output.write(outputStream)
        outputStream.close()

    If I change the outputStream to a different file name it works fine; I just can't save over the input file because it is still being read. I have tried to .close() the stream, but that gave me errors as well. I have a feeling this has a fairly simple solution, I just haven't had any luck finding it. Thanks!
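
    One commonly suggested workaround (not from the original post; a minimal sketch assuming the same pyPdf and Python 2 file() API used above) is to write the new PDF to a temporary file while the original is still open for reading, and only then swap the files:

        import os
        from pyPdf import PdfFileWriter, PdfFileReader

        output = PdfFileWriter()
        inputStream = file('input.pdf', 'rb')      # keep a handle so it can be closed explicitly
        input = PdfFileReader(inputStream)
        for page in range(input.getNumPages()):
            output.addPage(input.getPage(page))

        # Write to a temporary file first; the reader still needs its stream here.
        tempStream = file('input.tmp.pdf', 'wb')
        output.write(tempStream)
        tempStream.close()

        # Now release the original and swap the files.
        inputStream.close()
        os.remove('input.pdf')
        os.rename('input.tmp.pdf', 'input.pdf')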

  • Lucene (.NET) Document structure and performance suggestions

    - by Josh Handel
    Hello, I am indexing about 100M documents that consist of a few string identifiers and a hundred or so numeric terms. I won't be doing range queries, so I haven't dug too deep into NumericField, but I'm not thinking it's the right choice here.

    My problem is that query performance degrades quickly when I start adding OR criteria to my query. All my queries are on specific numeric terms. So a document looks like StringField:[someString] and N DataField:[someNumber]. I then query it with something like:

        DataField:((+1 +(2 3)) (+75 +(3 5 52)) (+99 +88 +(102 155 199)))

    Currently these queries take about 7 to 16 seconds to run on my laptop. I would like to make sure that's really the best they can do. I am open to suggestions on field structure and query structure :-).

    Thanks
    Josh

    PS: I have already read over all the other Lucene performance discussions on here, on the Lucene wiki, and at Lucid Imagination... I'm a bit further down the rabbit hole than that.

  • Writing a JavaScript zip code validation function

    - by mkoryak
    I would like to write a JavaScript function that validates a zip code by checking whether the zip code actually exists. Here is a list of all zip codes: http://www.census.gov/tiger/tms/gazetteer/zips.txt (I only care about the 2nd column).

    This is really a compression problem. I would like to do this for fun. OK, now that that's out of the way, here is a list of optimizations over a straight hashtable that I can think of; feel free to add anything I have not thought of:

    - Break the zip code into 2 parts: the first 2 digits and the last 3 digits. Make a giant if-else statement first checking the first 2 digits, then checking ranges within the last 3 digits.
    - Or, convert the zips into hex and see if I can do the same thing using smaller groups.
    - Find out whether, within the range of all valid zip codes, there are more valid zip codes than invalid ones, and write the above code targeting the smaller group.
    - Break up the hash into separate files and load them via Ajax as the user types in the zip code. So perhaps break it into 2 parts: first for the first 2 digits, second for the last 3.

    Lastly, I plan to generate the JavaScript files using another program, not by hand.

    Edit: performance matters here. I do want to use this, if it doesn't suck. Performance means JavaScript code execution plus download time.

    Edit 2: JavaScript-only solutions please. I don't have access to the application server; plus, that would make this into a whole other problem =)
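
    Since the plan is to generate the JavaScript data with another program, here is a rough sketch of that generator step in Python (my own illustration, not from the post; the parsing assumes a whitespace-separated file with the ZIP in the second column, so adjust it to the actual zips.txt format). It groups the codes by their 2-digit prefix, which maps directly onto the "first 2 digits / last 3 digits" split described above:

        from collections import defaultdict
        import json

        groups = defaultdict(set)
        with open("zips.txt") as f:                 # the census file linked above
            for line in f:
                cols = line.split()
                if len(cols) < 2:
                    continue
                zipcode = cols[1].strip('"')
                groups[zipcode[:2]].add(int(zipcode[2:]))

        # Emit one compact JS object: { "02": [101, 134, ...], ... }
        table = {prefix: sorted(rest) for prefix, rest in groups.items()}
        with open("zips.js", "w") as out:
            out.write("var ZIPS = " + json.dumps(table) + ";")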

  • Returning large collections from a WCF Service

    - by Nate Bross
    I'm trying to determine the best approach for building a WCF service, and the area I'm struggling with most is returning lists of objects. The built-in maxMessageSize of 64k seems pretty high, and I really don't want to bump it up (quick googling finds hundreds of places bumping maxMessageSize up into the multi-gigabyte range, which seems foolish). But when I return a collection of objects (~150 items) I exceed the default 64k.

    I'm almost at the point of returning my own class that implements IEnumerable and has properties for hasNext, hasPrevious and PageSize, so that I can implement paging on the client side -- but this seems like a lot of code. The other option is to jack up the maxMessageSize and hope for the best, but that feels wrong. All other aspects of my service are working great; it's just returning large collections where I'm having issues.

    For background, there are two types of consumers of this service: UI applications, which will be primarily web and/or WPF applications, and data-processing applications -- .NET console apps and maybe some other non-UI apps. For the UI applications I would like to keep them responsive and keep the message size low; for the console apps it doesn't matter as much, as they are just pulling data down to do processing and push it back up to the service.
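
    For what it's worth, the paging wrapper described above need not be much code. A rough sketch of its shape, written in Python purely for brevity (the hasNext/hasPrevious/PageSize names come from the post; everything else is invented):

        from dataclasses import dataclass, field
        from typing import Any, List

        @dataclass
        class PagedResult:
            items: List[Any] = field(default_factory=list)   # one page of objects
            page_size: int = 50
            offset: int = 0
            total: int = 0

            @property
            def has_previous(self) -> bool:
                return self.offset > 0

            @property
            def has_next(self) -> bool:
                return self.offset + self.page_size < self.total

        # The service returns one PagedResult per call; the client requests the next
        # page by sending back offset + page_size.
        page = PagedResult(items=list(range(50)), page_size=50, offset=0, total=150)
        print(page.has_next, page.has_previous)   # True False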

  • Find three numbers that appear only once

    - by shilk
    Take a sequence of length n, where n = 2k+3: there are k distinct numbers that each appear twice, and three numbers that appear only once. The question is: how do you find the three numbers that appear only once?

    For example, in the sequence 1 1 2 6 3 6 5 7 7 the three unique numbers are 2 3 5.

    Note: 3 <= n < 1e6 and the numbers range from 1 to 2e9. Memory limit: 1000KB, which implies that we can't store the whole sequence.

    Method I have tried (memory limit exceeded): I initialize a tree, and when I read in a number I try to remove it from the tree; if the removal returns false (not found), I add it to the tree. Finally, the tree holds the three numbers. It works, but it exceeds the memory limit.

    I know how to find one or two such numbers using bit manipulation, so I wonder if we can find three using the same method (or something similar). Method to find one/two number(s) that appear only once: if there is one number that appears only once, we can XOR the whole sequence to find it. If there are two, we can first XOR the sequence, then separate the sequence into two parts by a bit of the result that is 1, and apply XOR to each of the two parts; this gives the two answers.
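
    For reference, a minimal sketch of the two-unique-number XOR trick described in the last paragraph (this is only the known two-number case, not an answer for three):

        def two_uniques(seq):
            """Every value appears twice except two; return those two."""
            xor_all = 0
            for v in seq:
                xor_all ^= v                  # duplicates cancel, leaving a ^ b

            low_bit = xor_all & -xor_all      # a bit where the two unique values differ

            a = b = 0
            for v in seq:                     # split by that bit, XOR each part
                if v & low_bit:
                    a ^= v
                else:
                    b ^= v
            return a, b

        print(two_uniques([1, 1, 2, 6, 3, 6, 7, 7]))   # (3, 2): the two uniques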

  • haml - if-else with different indentations

    - by egarcia
    Hi everyone, I'm trying to render a calendar with Rails and Haml. The dates used come from a variable called @dates. It is a Date range that contains the first and last days to be presented on the calendar. The first day is always Sunday and the last one is always Monday.

    I'm planning to render a typical calendar, with one column per weekday (Sunday is going to be the first day of the week), using an HTML table. So I need to emit a %tr followed by a %td on Sundays, but on the rest of the days I just need a %td. I'm having trouble modelling that in Haml. It seems to require different levels of indentation, and that's something Haml doesn't like. Here's my failed attempt:

        %table
          %tr
            %th= t('date.day_names')[0] # Sunday
            %th= t('date.day_names')[1]
            %th= t('date.day_names')[2]
            %th= t('date.day_names')[3]
            %th= t('date.day_names')[4]
            %th= t('date.day_names')[5]
            %th= t('date.day_names')[6] # Monday
          - @dates.each do |date|
            - if(date.wday == 0) # if date is sunday
              %tr
                %td=date.to_s
            - else
              %td=date.to_s

    This doesn't work the way I want. The %tds for the non-Sunday days appear outside of the %tr:

        <tr>
          <td>2010-04-24</td>
        </tr>
        <td>2010-04-25</td>
        <td>2010-04-26</td>
        <td>2010-04-27</td>
        <td>2010-04-28</td>
        <td>2010-04-29</td>
        <td>2010-04-30</td>

    I tried adding two more spaces to the else branch, but then Haml complained about improper indentation. What's the best way to do this?

    Note: I'm not interested in rendering the calendar using unordered lists. Please don't suggest that.

  • Scaling Image to multiple sizes for Deep Zoom

    - by AnthonyWJones
    Let's assume I have a bitmap with a square aspect ratio and a width of 2048 pixels. In order to create the set of files needed by Silverlight's DeepZoomImageTileSource, I need to scale this bitmap to 1024, then to 512, then to 256, etc., down to a 1-pixel image. There are two, I suspect naive, approaches:

    1. For each image required, scale the original full-size image to the required size. However, it seems excessive to be scaling the full image down to the very small sizes.
    2. Having scaled from one level to the next, discard the original image and use each successive scaled image as the source of the next smaller image. However, I suspect this would generate images in the 256-64 range with poorer fidelity than option 1.

    Note that, unlike with the Deep Zoom Composer, this tool is expected to act in an on-demand fashion, hence it needs to complete in a reasonable timeframe (tops 30 seconds). On the plus side, I'm only creating a single multiscale image, not a pyramid of multiple high-res images.

    I am outside my comfort zone here -- any graphics experts got any advice? Am I wrong about point 2? Is point 1 reasonably performant and I'm worrying about nothing? Option 3?
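
    A rough sketch of option 2 (successive halving), written with Python and Pillow just to make the loop concrete -- this is not the Silverlight-side code, and the file names are invented:

        from PIL import Image

        img = Image.open("source.png")              # assume a 2048 x 2048 square bitmap
        size = img.size[0]
        level = img
        while size >= 1:
            level.save("tile_%d.png" % size)
            if size == 1:
                break
            size //= 2
            # each level is produced from the previous level, not from the original
            level = level.resize((size, size), Image.LANCZOS)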

  • Culture Sensitive GetHashCode

    - by user114928
    Hi, I'm writing a C# application that will process some text and provide basic query functions. In order to ensure the best possible support for other languages, I am allowing the users of the application to specify the System.Globalization.CultureInfo (via the "en-GB" style code) and also the full range of collation options using the System.Globalization.CompareOptions flags enum.

    For regular string comparison I'm then using a combination of:

    a) the String.Compare overload that accepts the culture and options;
    b) for some bulk processes, caching the byte data (KeyData) from CompareInfo.GetSortKey (the overload that accepts the options) and using a byte-by-byte comparison of the KeyData.

    This seemed fine (although please comment if you think these two methods shouldn't be mixed), but then I had reason to use the HashSet<T> class, which only has an overload accepting an IEqualityComparer<T>. MS documentation seems to suggest that I should use StringComparer (which implements both IEqualityComparer<string> and IComparer<string>), but this only seems to support the "IgnoreCase" option from CompareOptions and not "IgnoreKanaType", "IgnoreSymbols", "IgnoreWidth", etc. I'm assuming that a StringComparer that ignores these other options could produce different hashcodes for two strings that might be considered the same under my other comparison options, and I'd therefore get incorrect results from my application.

    My only thought at the moment is to create my own IEqualityComparer<string> that generates a hashcode from the SortKey.KeyData and compares equality using the String.Compare overload. Any suggestions?

  • Using Partitions for a large MySQL table

    - by user293594
    An update on my attempts to implement a 505,000,000-row table in MySQL on my MacBook Pro. Following the advice given, I have partitioned my table, tr:

        i UNSIGNED INT NOT NULL,
        j UNSIGNED INT NOT NULL,
        A FLOAT(12,8) NOT NULL,
        nu BIGINT NOT NULL,
        KEY (nu),
        key (A)

    with a range on nu. nu ought to be a real number, but because I only have 6-d.p. accuracy and the maximum value of nu is 30000, I multiplied it by 10^8 and made it a BIGINT -- I gather one can't use FLOAT or DOUBLE values to PARTITION a MySQL table. Anyway, I have 15 partitions (p0: nu < 25,000,000,000, p1: nu < 50,000,000,000, etc.).

    I was thinking that this should speed up a typical SELECT such as:

        SELECT * FROM tr WHERE nu > 95000000000 AND nu < 100000000000 AND A > 1

    to something of the order of the same query on a table consisting of only the data in the relevant partition (< 30 secs). But it's taking 30 minutes or more to return rows for queries within a partition, and double that if the query is for rows spanning two (contiguous) partitions.

    I realise I could just have 15 different tables and query them separately, but is there a way to do this 'automatically' with partitions? Has anyone got any suggestions?

  • Creating an SQL variable character column > 255 characters supporting multiple databases

    - by Piers
    I have an application that stores data through an ODBC data source of the user's choosing. So far it has worked well on a range of database systems (e.g. JET, Oracle, SQL Server), as the SQL syntax is fairly simple. Now I am running into a problem where I need to store more than 255 characters in my strings. Previously I created the table using column type VARCHAR(255). Now if I try to create a table using, e.g., VARCHAR(512), it falls over on Access databases. I know that I can use the MEMO type for Access, but this is non-standard SQL and will thus likely fail on other database systems (e.g. Oracle).

    Is there any widely supported SQL standard for creating text columns wider than 255 characters, or do I need to find another solution? The alternatives seem to me to be:

    1) Profile the database system and customise the SQL CREATE TABLE command based on the database system. I don't like this as it defeats the purpose of using ODBC.
    2) Add extra columns of 255 chars as required (e.g. LONGSTRING1, LONGSTRING2, ...) and concatenate after reading. I don't like this because it means the number of columns can vary between tables and it complicates reads/writes.

    Are there any other viable alternatives to these two options? Or is it possible to write an SQL-compliant CREATE TABLE command, supported by the majority of database vendors, that supports strings longer than 255 chars?

  • Reversible pseudo-random sequence generator

    - by user350651
    I would like some sort of method to create a fairly long sequence of random numbers that I can flip through backwards and forwards. Like a machine with "next" and "previous" buttons that will give you random numbers. Something like 10-bit resolution (i.e. positive integers in the range 0 to 1023) is enough, and a sequence of 100k numbers. It's for a simple game-type app; I don't need encryption-strength randomness or anything, but I want it to feel fairly random. I have a limited amount of memory available, though, so I can't just generate a chunk of random data and step through it.

    I need to get the numbers in "interactive time" -- I can easily spend a few ms thinking about the next number, but not comfortably much more than that. Eventually it will run on some sort of microcontroller, probably just an Arduino.

    I could do it with a simple linear congruential generator (LCG). Going forwards is simple; to go backwards I'd have to cache the most recent numbers and store some points at intervals so I can recreate the sequence from there. But maybe there IS some pseudo-random generator that allows you to go both forwards and backwards? It should be possible to hook up two linear feedback shift registers (LFSRs) to roll in different directions, no? Or maybe I can just get by with garbling the index number using a hash function of some sort? I'm going to try that first. Any other ideas?
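
    A minimal sketch of the "garble the index with a hash function" idea from the last paragraph (Python here only for illustration; SEED is an invented constant, and on an Arduino you would swap SHA-256 for something far cheaper):

        import hashlib

        SEED = 12345

        def value_at(index):
            """Deterministic 10-bit pseudo-random value for a given position."""
            digest = hashlib.sha256(b"%d:%d" % (SEED, index)).digest()
            return int.from_bytes(digest[:2], "big") & 0x3FF   # 0..1023

        # "next" and "previous" are simply index + 1 and index - 1:
        print([value_at(i) for i in range(5)])
        print(value_at(3))   # the same value every time index 3 is revisited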

  • Threading is slow and unpredictable?

    - by Jake
    I've created the basis of a ray tracer. Here's my testing function for drawing the scene:

        public void Trace(int start, int jump, Sphere testSphere)
        {
            for (int x = start; x < scene.SceneWidth; x += jump)
            {
                for (int y = 0; y < scene.SceneHeight; y++)
                {
                    Ray fired = Ray.FireThroughPixel(scene, x, y);
                    if (testSphere.Intersects(fired))
                        sceneRenderer.SetPixel(x, y, Color.Red);
                    else
                        sceneRenderer.SetPixel(x, y, Color.Black);
                }
            }
        }

    SetPixel simply sets a value in a single-dimensional array of colours. If I call the function directly it runs at a constant 55 fps. If I do:

        Thread t1 = new Thread(() => Trace(0, 1, testSphere));
        t1.Start();
        t1.Join();

    it runs at a constant 50 fps, which is fine and understandable. But when I do:

        Thread t1 = new Thread(() => Trace(0, 2, testSphere));
        Thread t2 = new Thread(() => Trace(1, 2, testSphere));
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();

    it runs all over the place, rapidly moving between 30-40 fps and sometimes going outside that range, up to 50 or down to 20; it's not constant at all. Why is it running slower than it would if I ran the whole thing on a single thread? I'm running on a quad-core i5 2500K.

  • Should I define a single "DataContext" and pass references to it around, or define multiple "DataContexts"?

    - by Nate Bross
    I have a Silverlight application that consists of a MainWindow and several classes which update and draw images on the MainWindow. I'm now expanding this to keep track of everything in a database. Without going into specifics, let's say I have a structure like this:

        MainWindow
          Drawing-Surface
          Class1 -- Supports Drawing
            DataContext + DataServiceCollection<T> w/events
          Class2 -- Manages "transactions" (add/delete objects from drawing)
          Class3

    Each class is passed a reference to the drawing surface so they can interact with it independently. I'm starting to use WCF Data Services in Class1 and it's working well; however, the other classes are also going to need access to the WCF Data Services. (Should I define my "DataContext" in MainWindow and pass a reference to each child class?) Class1 will need READ access to the "transactions" data, and Class2 will need READ access to some of the drawing data.

    So my question is: where does it make the most sense to define my DataContext? Does it make sense to:

    - define a "global" WCF Data Service "Context" object and pass references to that to all of my subsequent classes;
    - define an instance of the "Context" for each of Class1, Class2, etc.; or
    - have each method that requires access to data define its own instance of the "Context" and use closures to handle the async load/complete events?

    Would a structure like the following make more sense? Is there any danger in keeping an active "DataContext" open for an extended period of time? A typical use case of this application could range from 1 minute to 40+ minutes.

        MainWindow
          Drawing-Surface
          DataContext
          Class1 -- Supports Drawing
            DataServiceCollection<DrawingType> w/events
          Class2 -- Manages "transactions" (add/delete objects from drawing)
            DataServiceCollection<TransactionType> w/events
          Class3
            DataServiceCollection<T> w/events

  • How to secure login and member area with SSL certificate?

    - by citronas
    Background: I have an ASP.NET web application project that should contain a public area and a member area. Now I want to use SSL to secure communication between the client and the server. (At the university we have an unsecured wireless network where you can use a WLAN sniffer to read usernames/passwords. I do not want to have this security problem in my application, so I thought of SSL.) The application is running on IIS 7.5.

    Is it possible to have one web app that has unsecured pages (like the public area) and a secured area (like the member area, which requires a login)? If yes, how can I realise the communication between these two areas?

    Example: my web app is hosted at http://foo.abc. I have pages like http://foo.abc/default.aspx and http://foo.abc/foo.aspx. In the same project there is a page like /member/default.aspx which is protected by a login on the page http://foo.abc/login.aspx. So I would need to implement SSL for the page /login.aspx and all pages in /member/.

    How can I do that? I have just found out how to create SSL certificates in IIS 7.5 and how to add such a binding to a web app. But how can I tell my web app which pages should be called with https and not with http? What is the best practice here?

  • Remove values from an array in PHP

    - by LiveEn
    I have an array of links, and I have another array which contains certain values I would like to filter out of the list, e.g.:

        http://www.liquidshredder.co.uk/shop%3Fkw%3Dsurfboards%26fl%3D330343%26ci%3D3889610385%26network%3Ds
        http://www.bournemouth-surfing.co.uk/index.php%3FcPath%3D32
        http://www.stcstores.co.uk/wetsuit-range-sizing--pricing-info-1-w.asp
        http://www.kingofwatersports.com/wetsuit-sale-c227.html
        http://www.uk.best-price.com/search/landing/query/bodyboards/s/google/altk/Surf%2Band/koid/1944273223/
        http://www.surfinghardware.co.uk/Results.cfm%3Fcategory%3D20%26kw%3Dbodyboards%26fl%3D11407%26ci%3D3326979552%26network%3Ds
        http://www.teste.co.uk/adtrack/baod.html
        http://www.teste.co.uk/bodyboards/
        www.sandskater.co.uk/
        www.sandskater.co.uk/bodyboards/+Bodyboards&sa=X&ei=GwSWS-KaGM24rAeF-vCKDA&ved=0CBMQHzAKOAo
        http://www.extremesportstrader.co.uk/buy/water/bodyboarding/
        www.extremesportstrader.co.uk/buy/water/bodyboarding/+Bodyboards&sa=X&ei=GwSWS-KaGM24rAeF-vCKDA&ved=0CBYQHzALOAo
        www.circle-one.co.uk/+Bodyboards&sa=X&ei=GwSWS-KaGM24rAeF-vCKDA&ved=0CBkQHzAMOAo
        http://www.teste.co.uk/bodyboards/p1
        http://www.teste.co.uk/bodyboards/p2
        http://www.amazon.co.uk/s/%3Fie%3DUTF8%26keywords%3Dbodyboards%26tag%3Dgooghydr-21%26index%3Daps%26hvadid%3D4764625891%26ref%3Dpd_sl_2lyzsfw1ar_e
        http://www.teste.co.uk/bodyboards/p3
        www.extremesportstrader.co.uk/buy/water/

    I would like to remove all the instances of "http://www.teste.co.uk". I tried the code below but it doesn't work :(

        $remove = array("teste.co.uk","127.0.0.1","localhost","wikipedia.org","gmail.com","answers.yahoo.com");
        foreach ($list[0] as $key => $clean) {
            if (in_array($clean, $remove)) {
                unset($list[0][$key]);
            }
            echo $clean;
            echo '<br>';
        }
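
    The goal here is substring matching (does any blacklisted domain appear anywhere in the URL?) rather than the exact matching that in_array performs. A rough sketch of that idea, written in Python only to illustrate the logic (the sample list is shortened):

        remove = ["teste.co.uk", "127.0.0.1", "localhost",
                  "wikipedia.org", "gmail.com", "answers.yahoo.com"]

        links = [
            "http://www.teste.co.uk/bodyboards/p1",
            "http://www.kingofwatersports.com/wetsuit-sale-c227.html",
        ]

        # Keep a link only if none of the blacklisted strings occur inside it.
        kept = [url for url in links if not any(bad in url for bad in remove)]
        print(kept)   # ['http://www.kingofwatersports.com/wetsuit-sale-c227.html']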

  • Ranking/weighting search results

    - by biso
    I am trying to build an application that has a smart, adaptive search engine (let's say for cars). If I search for 4x4, the DB will return all the 4x4 cars I have (100 cars) -- but as time goes by and I start checking out cars, liking them, commenting on them, etc., the order of the search results should be different. That means that one month later, when searching for 4x4, I should get the same result set ordered differently based on my previous interaction with the site. If I have mainly been liking and commenting on German cars, a BMW should be at the top and a Land Cruiser should be further down.

    This ranking should be based on attributes that I captured during user interaction (e.g. car origin, user age, user location, car type [4x4, coupe, hatchback], price range). So for each car result I get, I will be weighing it based on how well it performs on the 5 attributes above. I intend to use the DB just as a repository and do the ranking and the thinking on the server.

    My question is: what kind of algorithm should I be using to weigh/rank my search results? Thanks.
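
    One simple family of approaches is a weighted linear score per result across the five attributes listed above. A very rough sketch (all attribute names, values and weights below are invented for illustration):

        # Per-attribute weights, e.g. derived from how often the user interacted
        # with cars having that attribute value.
        profile = {
            "origin":     {"Germany": 0.9, "Japan": 0.4},
            "car_type":   {"4x4": 0.7, "coupe": 0.2, "hatchback": 0.1},
            "price_band": {"mid": 0.6, "high": 0.3, "low": 0.1},
        }

        def score(car):
            """Sum the user's affinity for each of the car's attribute values."""
            return sum(profile.get(attr, {}).get(value, 0.0)
                       for attr, value in car.items() if attr != "name")

        results = [
            {"name": "BMW X5", "origin": "Germany", "car_type": "4x4", "price_band": "high"},
            {"name": "Toyota Land Cruiser", "origin": "Japan", "car_type": "4x4", "price_band": "high"},
        ]

        # Order the result set by the personalised score, highest first.
        for car in sorted(results, key=score, reverse=True):
            print(car["name"], round(score(car), 2))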

  • php fopen function dies, though I have file permissions set to read and write

    - by Matthew Robert Keable
    I'm following a tutorial on PHP and am having difficulty getting this to work. I set the appropriate directory permissions to read and write, but every time I run this, I get the die string. The code is:

        $ourFileName = "testFile.txt";
        $ourFileHandle = fopen($ourFileName, 'w') or die("can't open file");
        fclose($ourFileHandle);

    As far as my basic understanding goes, if "testFile.txt" does not exist, fopen should create that file (I have basic knowledge of Python, and remember this same principle in that language). But it doesn't. Even if I create the aforementioned file and upload it, that line of code still returns the die string.

    My hosting account does not give me permission to execute -- is this a problem? My server runs on Windows. I am using Dreamweaver CS5 on OS X 10.5.8. I've done some searching on this and see other people having similar issues, but none of them matched exactly my range of problems. Being a beginner, I feel it might be something I'm overlooking. Thanks!!

  • Multiple elements with the same name with SimpleXML and Java

    - by LouieGeetoo
    I'm trying to use SimpleXML to parse an XML document (an ItemLookupResponse for a book from the Amazon Product Advertising API) which contains the following element:

        <ItemAttributes>
          <Author>Shane Conder</Author>
          <Author>Lauren Darcey</Author>
          <Manufacturer>Pearson Educacion</Manufacturer>
          <ProductGroup>Book</ProductGroup>
          <Title>Android Wireless Application Development: Barnes & Noble Special Edition</Title>
        </ItemAttributes>

    My problem is that I don't know how to deal with the multiple possible Author elements. Here's what I have right now for the corresponding POJO (Plain Old Java Object), keeping in mind that it does not handle the case of multiple Authors:

        @Element
        public class ItemAttributes {
            @Element
            public String Author;
            @Element
            public String Manufacturer;
            @Element
            public String Title;
        }

    (I don't care about the ProductGroup, so it's not in the class -- I'm just setting SimpleXML's strict mode to off to allow for that.) I couldn't find an example in the documentation that corresponds to such a case. Using an ElementList with (inline=true) seemed along the right lines, but I didn't see how to do it for String (as opposed to a separate Author class, which I have no need for and don't see how it would even work).

    Here's a similar question and answer, but for PHP: php - simpleXML how to access a specific element with the same name as others? I don't know what the Java equivalent would be to the accepted answer. Thanks in advance.

  • EC2 persistence of machine

    - by Seagull
    I want to 'persist' my Amazon EC2 images. My scenario:

    - I have a range of Windows and Linux machines. Some machines are EBS-backed, whereas others are S3-backed.
    - I need to be able to persist a machine (put it to sleep), preferably keeping all the settings it had active while it was running.
    - I need to be able to quickly wake a machine from sleep (ideally with an SLA of less than 2 minutes to turn on, if such an SLA is available with Amazon).

    Here's the stuff that confuses me:

    - AWS allows me to put EBS-backed machines to sleep, but not S3-backed ones.
    - I believe I can put S3-backed machines into some sort of persistence mode, but this involves shutting down the machine, writing it to S3 storage and then recovering from there (not a real sleep mode, but at least I don't continue to get billed for CPU).
    - S3 backing seems to take a long time either to write a machine to disk or to recover (turn on) a machine.
    - I can't immediately tell which machines are EBS-backed and which are S3-backed. It seems like I can instantiate either type, but it's not immediately clear how Amazon decides whether a given machine should be EBS- or S3-backed.

    Advice?

  • Losing URI segments when paginating with CodeIgniter

    - by Danny Herran
    I have a /payments interface where the user should be able to filter by price range, bank, and other stuff. Those filters are standard select boxes. When I submit the filter form, all the post data goes to another method called payments/search. That method performs the validation, saves the post values into session flashdata, and redirects the user back to /payments, passing the flashdata name via the URL. So my standard pagination links with no filters are exactly like this:

        payments/index/20/
        payments/index/40/
        payments/index/60/

    And if you submit the filter form, the returning URL is:

        payments/index/0/b48c7cbd5489129a337b0a24f830fd93

    This works just great. If I change the zero to something else, it paginates just fine. The only issue, however, is that the << 1 2 3 4 page links won't keep the hash after the pagination offset. CodeIgniter is generating the page links ignoring that additional URI segment. My uri_segment config is already set to 3:

        $config['uri_segment'] = 3;

    I cannot set the page offset to segment 4 because that hash may or may not exist. Any ideas how I can solve this? Is it mandatory for CI to have the offset as the last segment in the URI? Maybe I am trying an incorrect approach, so I am all ears. Thank you folks.

  • Problem with evolutionary algorithms degrading into simulated annealing: mutation too small?

    - by Schnalle
    I have a problem understanding evolutionary algorithms. I have tried using this technique several times, but I always ran into the same problem: degeneration into simulated annealing. Let's say my initial population, with fitness in brackets, is:

        A (7), B (9), C (14), D (19)

    After mating and mutation I have the following children:

        AB (8.3), AC (12.2), AD (14.1), BC (11), BD (14.7), CD (17)

    After elimination of the weakest, we get A, AB, B, AC. Next turn, AB will mate again with a result around 8, pushing AC out. Next turn, AB again, pushing B out (assuming mutation changes fitness mostly in the 1 range). Now, after only a few turns, the pool is populated with the originally fittest candidates (A, B) and mutations of those two (AB). This happens regardless of the size of the initial pool; it just takes a bit longer. Say, with an initial population of 50 it takes 50 turns, then all others are eliminated, turning the whole setup into a more complicated simulated annealing. In the beginning I also mated candidates with themselves, worsening the problem.

    So, what am I missing? Are my mutation rates simply too small, and will it go away if I increase them?

    Here's the project I'm using it for: http://stefan.schallerl.com/simuan-grid-grad/ -- yeah, the code is buggy and the interface sucks, but I'm too lazy to fix it right now, and be careful, it may lock up your browser. Better use Chrome, even though Firefox is not slower than Chrome for once (probably the tracing for the image comparison pays off, yay!). If anyone is interested, the code can be found here. Here I just dropped the evolutionary-algorithm idea and went for simulated annealing.

    PS: I'm not even sure about simulated annealing -- it is like evolutionary algorithms, just with a population size of one, right?

  • Segmentation in Linux: segmentation & paging are redundant?

    - by claws
    Hello, I'm reading "Understanding the Linux Kernel". This is the snippet that explains how Linux uses segmentation, which I didn't understand:

        Segmentation has been included in 80 x 86 microprocessors to encourage programmers to split their applications into logically related entities, such as subroutines or global and local data areas. However, Linux uses segmentation in a very limited way. In fact, segmentation and paging are somewhat redundant, because both can be used to separate the physical address spaces of processes: segmentation can assign a different linear address space to each process, while paging can map the same linear address space into different physical address spaces. Linux prefers paging to segmentation for the following reasons:

        - Memory management is simpler when all processes use the same segment register values, that is, when they share the same set of linear addresses.
        - One of the design objectives of Linux is portability to a wide range of architectures; RISC architectures in particular have limited support for segmentation.

        All Linux processes running in User Mode use the same pair of segments to address instructions and data. These segments are called the user code segment and user data segment, respectively. Similarly, all Linux processes running in Kernel Mode use the same pair of segments to address instructions and data: they are called the kernel code segment and kernel data segment, respectively. Table 2-3 shows the values of the Segment Descriptor fields for these four crucial segments.

    I'm unable to understand the first and last paragraphs.

  • TortoiseSVN lists files as modified, but they are identical

    - by BJ Safdie
    I am merging a hot fix from our QA branch back into our Dev branch. Five files have changed. I do a fresh checkout of the Dev branch. I then do a merge (range of revisions) from QA into the Dev working copy. It brings in five files, and there is a conflict on an externals and an ignore property, which I resolve by "using local" (Dev).

    When I check for modifications or commit, I expect to see the five files I merged as the only changes. However, I get close to 700 "modified" files showing up in the commit dialog. If I select one of these files and "Compare with base", WinMerge comes up and says the files are identical. I have tried this with the file dates set to "last committed" and not.

    Why are all of these files showing up as modified when they are identical? What in the merge is causing this? How do I prevent SVN/TortoiseSVN from getting confused this way in the future?

  • Making a binned boxplot in matplotlib with numpy and scipy in Python

    - by user248237
    I have a 2-d array containing pairs of values, and I'd like to make a boxplot of the y-values by different bins of the x-values. I.e. if the array is:

        my_array = array([[1, 40.5], [4.5, 60], ...])

    then I'd like to bin my_array[:, 0] and then, for each bin, produce a boxplot of the corresponding my_array[:, 1] values that fall into it. So in the end I want the plot to contain as many box plots as there are bins. I tried the following:

        min_x = min(my_array[:, 0])
        max_x = max(my_array[:, 1])
        num_bins = 3
        bins = linspace(min_x, max_x, num_bins)
        elts_to_bins = digitize(my_array[:, 0], bins)

    However, this gives me values in elts_to_bins that range from 1 to 3. I thought I should get 0-based indices for the bins, and I only wanted 3 bins. I'm assuming this is due to some trickiness with how bins are represented in linspace vs. digitize.

    What is the easiest way to achieve this? I want num_bins equally spaced bins, with the first bin containing the lower half of the data and the upper bin containing the upper half... i.e., I want each data point to fall into some bin, so that I can make a boxplot. Thanks.
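
    A minimal sketch of one way to do the grouping (the sample data is invented; the key detail is that num_bins bins need num_bins + 1 edges from linspace, and digitizing against the interior edges gives 0-based indices):

        import numpy as np
        import matplotlib.pyplot as plt

        # Invented sample data standing in for my_array.
        rng = np.random.default_rng(0)
        my_array = np.column_stack([rng.uniform(0, 10, 200), rng.normal(50, 10, 200)])

        x, y = my_array[:, 0], my_array[:, 1]
        num_bins = 3
        edges = np.linspace(x.min(), x.max(), num_bins + 1)    # 4 edges -> 3 bins
        idx = np.digitize(x, edges[1:-1])                      # 0-based indices 0..num_bins-1

        groups = [y[idx == i] for i in range(num_bins)]
        labels = ["%.1f-%.1f" % (edges[i], edges[i + 1]) for i in range(num_bins)]
        plt.boxplot(groups, labels=labels)
        plt.show()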

  • Why does adding Crossover to my Genetic Algorithm give me worse results?

    - by MahlerFive
    I have implemented a genetic algorithm to solve the Traveling Salesman Problem (TSP). When I use only mutation, I find better solutions than when I add in crossover. I know that normal crossover methods do not work for TSP, so I implemented both the Ordered Crossover and the PMX crossover methods, and both suffer from bad results.

    Here are the other parameters I'm using:

    - Mutation: single swap mutation or inverted subsequence mutation (as described by Tiendil here), with mutation rates tested between 1% and 25%.
    - Selection: roulette wheel selection.
    - Fitness function: 1 / distance of tour.
    - Population size: tested 100, 200, 500; I also run the GA 5 times so that I have a variety of starting populations.
    - Stop condition: 2500 generations.

    With the same dataset of 26 points, I usually get results of about 500-600 distance using purely mutation with high mutation rates. When adding crossover, my results are usually in the 800 distance range. The other confusing thing is that I have also implemented a very simple hill-climbing algorithm to solve the problem, and when I run that 1000 times (faster than running the GA 5 times) I get results around 410-450 distance, and I would expect to get better results using a GA.

    Any ideas as to why my GA performs worse when I add crossover? And why is it performing much worse than a simple hill-climbing algorithm, which should get stuck on local maxima as it has no way of exploring once it finds a local max?
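
    For comparison, here is a minimal sketch of one common Ordered Crossover (OX) variant in Python -- not the poster's implementation, just the textbook shape of the operator:

        import random

        def ordered_crossover(parent1, parent2):
            """OX: copy a random slice from parent1, then fill the remaining
            positions with parent2's cities in the order they appear there."""
            size = len(parent1)
            a, b = sorted(random.sample(range(size), 2))
            child = [None] * size
            child[a:b + 1] = parent1[a:b + 1]
            taken = set(child[a:b + 1])
            fill = (city for city in parent2 if city not in taken)
            for i in range(size):
                if child[i] is None:
                    child[i] = next(fill)
            return child

        random.seed(1)
        p1 = [0, 1, 2, 3, 4, 5, 6, 7]
        p2 = [3, 7, 5, 1, 6, 0, 2, 4]
        print(ordered_crossover(p1, p2))   # a valid tour containing each city exactly once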
