Search Results

Search found 2041 results on 82 pages for 'distribution'.

Page 72 of 82

  • Technical choices in unmarshaling hash-consed data

    - by Pascal Cuoq
    There seems to be quite a bit of folklore knowledge floating about in restricted circles about the pitfalls of hash-consing combined with marshaling-unmarshaling of data. I am looking for citable references to these tidbits. For instance, someone once pointed me to the aterm library and mentioned that the authors had clearly thought about this and that the representation on disk was bottom-up (children of a node come before the node itself in the data stream). This is indeed the right way to do things when you need to re-share each node (with a possibly identical node already in memory). This re-sharing pass needs to be done bottom-up, so the unmarshaling itself might as well be too, so that it's possible to do everything in a single pass. I am in the process of describing difficulties encountered in our own context, and the solutions we found. I would appreciate any citable reference to the aforementioned kind of folklore knowledge. Some people have obviously encountered the problems before (the aterm library is only one example), but I didn't find anything in writing. Even the little piece of information I have about aterm is hearsay. I am not worried that it's unreliable (you can't make this up), but "personal communication" and "look how it's done in the source code" are considered poor form in citations. I have enough references on hash-consing alone; I am only interested in references where it interferes with other aspects of programming, such as marshaling or distribution.
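
    A minimal sketch of the bottom-up re-sharing pass described above, in Python for illustration; the record layout and the hash-cons table are made up here, not taken from aterm or any particular library.

        # Nodes are (tag, children) tuples. Because the stream is bottom-up,
        # every child has already been interned when its parent arrives, so a
        # single table lookup per node restores maximal sharing in one pass.
        hashcons = {}   # (tag, ids of already-interned children) -> shared node

        def intern(tag, children):
            key = (tag, tuple(id(c) for c in children))
            if key not in hashcons:
                hashcons[key] = (tag, tuple(children))
            return hashcons[key]

        def unmarshal(records):
            # each record is (tag, child_indices); children precede parents
            nodes = []
            for tag, child_indices in records:
                nodes.append(intern(tag, [nodes[i] for i in child_indices]))
            return nodes[-1]    # the last record is the root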

    Read the article

  • How to choose light version of database system

    - by adopilot
    I am starting a POS (point of sale) project. The target system is going to be written in C# .NET 2 WinForms, and as the main database server we are going to use MS SQL Server. As we have a lot of POS devices in a chain for one store, I would love to have a local backend database system on each POS device. The scenario is the following: when the main server goes down, the POS application should continue working "off-line" with the local database until the connection to the main server comes up again. Now I am in a dilemma about which local database is going to suit me best. Here are some notes to help point me in the right direction: it has to be light ("my POS devices are usually old and suffer performance-wise"); it has to be free ("I have a lot of devices and I do not want additional costs beside the main SQL Server"); one day I would love to try porting all of it to Mono and Linux. Here is what I've researched so far: simple XML ("light, but I am afraid of performance; my main items table averages 10K records"); SQL Express ("I am afraid my POS devices' hardware is too poor for SQL Express, and it is also hard to install and configure on each device"); the less known Advantage Database Server, which has a free distribution of its offline ADT system; DBF with an extended library ("respect for the good old DBFs, but that era is behind me, with Clipper and DBFs"); MS Access; SQLite ("my favourite for now, but I am afraid of how it will pair with MS SQL: do they have the same data types?"). I know there is a lot of subjective data in this question, but can someone at least recommend some other lite database systems, or the things I should pay the most attention to before I choose a database?

    Read the article

  • Licensing iPhone apps per user in existing system

    - by Alxandr
    I've been asked by my job to write an iPhone app for an existing system for managing work tasks. This system is proprietary and costs money, so in order to log in you need to be a customer. Now, I've got two questions about the legality of licensing iPhone apps with this system: 1. My company would like to be able to sell the app for profit, not as a one-time payment but as a subscription fee added to the already existing one. Is it legal for us (according to the terms of distributing an iPhone app on the Apple App Store) to do this? That way we'd just add another field to the users database saying whether or not iPhone access is enabled for them, and distribute the app as a free app on the App Store. 2. If the previous option is not allowed, we'd like to just create a free app and distribute it as part of the existing system. In other words, no extra fee for the users for using the iPhone app, but still free distribution through the App Store. Because our company is not American and has no office in the U.S. at all, an enterprise account is not an option. Please let me know if there is anything wrong with either of the above approaches.

    Read the article

  • Building a ctypes-"based" C library with distutils

    - by Robie Basak
    Following this recommendation, I have written a native C extension library to optimise part of a Python module via ctypes. I chose ctypes over writing a CPython-native library because it was quicker and easier (just a few functions, with all the tight loops inside). I've now hit a snag. If I want my work to be easily installable using distutils, with python setup.py install, then distutils needs to be able to build my shared library and install it (presumably into /usr/lib/myproject). However, this is not a Python extension module, and as far as I can tell, distutils cannot do this. I've found a few references to other people with this problem: someone on numpy-discussion with a hack back in 2006; somebody asking on distutils-sig and not getting an answer; somebody asking on the main python list and being pointed to the innards of an existing project. I am aware that I can do something native and not use distutils for the shared library, or indeed use my distribution's packaging system. My concern is that this will limit usability, as not everyone will be able to install it easily. So my question is: what is the current best way of distributing a shared library with distutils that will be used by ctypes but otherwise is OS-native and not a Python extension module? Feel free to answer with one of the hacks linked to above if you can expand on it and justify why that is the best way. If there is nothing better, at least all the information will be in one place.
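
    For what it's worth, a minimal sketch of the most common workaround, assuming a single C file at src/mylib.c and a package called myproject (both names invented): declare the C code as an Extension so distutils builds and installs it, then open the resulting file with ctypes; the output is an ordinary shared object even though distutils thinks of it as an extension module.

        # setup.py
        from distutils.core import setup, Extension

        setup(
            name='myproject',
            version='0.1',
            packages=['myproject'],
            ext_modules=[Extension('myproject._mylib',
                                   sources=['src/mylib.c'])],
        )

        # myproject/loader.py
        import ctypes, glob, os

        _here = os.path.dirname(os.path.abspath(__file__))
        # the extension suffix varies by platform (.so, .pyd, ...), so glob for it
        lib = ctypes.CDLL(glob.glob(os.path.join(_here, '_mylib*'))[0])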

    Read the article

  • Working with complex objects in Prevayler commands

    - by alexantd
    The demos included in the Prevayler distribution show how to pass a couple of strings (or something similarly simple) into a command constructor in order to create or update an object. The problem is that I have an object called MyObject that has a lot of fields. If I had to pass all of them into the CreateMyObject command manually, it would be a pain. So an alternative I thought of is to pass my business object itself into the command, but to hang onto a clone of it (keeping in mind that I can't store the BO directly in the command). Of course, after executing this command, I would need to make sure to dispose of the original copy that I passed in.

        public class CreateMyObject implements TransactionWithQuery {
            private MyObject object;
            public CreateMyObject(MyObject business_obj) {
                this.object = (MyObject) business_obj.clone();
            }
            public Object executeAndQuery(...) throws Exception { ... }
        }

    The Prevayler wiki says: Transactions can't carry direct object references (pointers) to business objects. This has become known as the baptism problem because it's a common beginner pitfall. Direct object references don't work because once a transaction has been serialized to the journal and then deserialized for execution, its object references no longer refer to the intended objects - any objects they may have referred to at first will have been copied by the serialization process! Therefore, a transaction must carry some kind of string or numeric identifiers for any objects it wants to refer to, and it must look up the objects when it is executed. I think that by cloning the passed-in object I will be getting around the "direct object pointer" problem, but I still don't know whether or not this is a good idea...

    Read the article

  • Plot vectors with labels in matlab

    - by mad
    I have an Nx62 matrix with N 62-dimensional vectors and an Nx1 vector with the labels for those vectors. I am trying to plot these vectors together with their labels, because I want to see how the classes behave when plotted in a 62-dimensional space. The vectors belong to three classes, according to the labels in the Nx1 vector mentioned before. How do I do that in MATLAB? When I do plot(vector,classes), the result is very weird to analyse; how do I put labels in the graph? The code I am using to get the labels and vectors and to plot them is the following:

        % labels is a vector with labels, vectors is a matrix where each row is a vector
        [labels,vectors]=libsvmread('features-im1.txt');

    When I plot a single three-dimensional vector it is simple: a=[1,2,3]; plot(a), and I get the result. But now I have a set of vectors and a set of labels, and I want to see their distribution; I want to plot each of them, but also identify their classes. How do I do that in MATLAB? EDIT: This code is almost working. The problem is that the plot will assign a colour to each vector and class; I just want three colours and three labels, one per class.

        [class,vector]=libsvmread('features-im1.txt');
        % the plot doesn't allow negative and 0 values in the label
        class=class+2;
        labels = {'class -1','class 0','class 1'};
        h = plot(vector);
        legend(h,labels{class})
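
    A sketch of the one-colour-per-class grouping, shown here in Python with matplotlib and random stand-in data purely for illustration; the same idea (one plot call and one legend entry per class) carries over to the MATLAB code above.

        import numpy as np
        import matplotlib.pyplot as plt

        vectors = np.random.randn(30, 62)                # stand-in for the Nx62 matrix
        classes = np.random.choice([-1, 0, 1], size=30)  # stand-in for the Nx1 labels

        for c, colour in zip((-1, 0, 1), ('r', 'g', 'b')):
            rows = vectors[classes == c]
            plt.plot(rows.T, colour, alpha=0.3)              # one colour per class
            plt.plot([], [], colour, label='class %d' % c)   # one legend entry per class
        plt.legend()
        plt.show()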

    Read the article

  • How to generate a number in arbitrary range using random()={0..1} preserving uniformness and density?

    - by psihodelia
    Generate a random number in the range [x..y], where x and y are any arbitrary floating point numbers. Use the function random(), which returns a random floating point number in the range [0..1] from P uniformly distributed numbers (call this the "density"). The uniform distribution must be preserved, and P must be scaled as well. I think there is no easy solution to such a problem. To simplify it a bit, I ask you how to generate a number in the interval [-0.5 .. 0.5], then in [0 .. 2], then in [-2 .. 0], preserving uniformity and density. Thus, for [0 .. 2] it must generate a random number from P*2 uniformly distributed numbers. The obvious simple solution random() * (x - y) + y will not generate all possible numbers, because of the lower density in all cases where abs(x-y) > 1.0; many possible values will be missed. Remember that random() returns only a number out of P possible numbers. Then, if you multiply such a number by Q, it will give you only one of P possible values scaled by Q, but you have to scale the density P by Q as well.
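
    A sketch of one way to read the requirement, assuming random() effectively picks one of P equally spaced grid points in [0..1]: rather than scaling a [0..1] sample, draw an integer index over a grid scaled to the new interval, so the spacing stays 1/P. (Real floating point numbers are not an evenly spaced grid, so this only models the problem; it is not a drop-in fix.)

        import random

        def uniform_dense(x, y, P=2**53):
            lo, width = min(x, y), abs(y - x)
            n = int(round(P * width))   # number of 1/P-sized steps needed to cover [x, y]
            if n == 0:
                return lo
            k = random.randint(0, n)    # uniformly chosen grid index
            return lo + width * k / n   # spacing is width/n == 1/P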

    Read the article

  • Online file storage similar to Amazon S3

    - by Joel G
    I am looking to code a file storage application in Perl, similar to Amazon S3. I already have an Amazon S3 clone that I found online, called parkplace, but it's written in Ruby, it's old, and it isn't built for high loads. I am not really sure which modules and programs I should use, so I'd like some help picking them out. My requirements are listed below (yes, I know there are lots, but I could start simple and then add more once I get it going): an easy API implementation for client-side apps (maybe RESTful, but with extras like mkdir and cp?); a centralized database server for the user DB (maybe PostgreSQL?); logging of all connections, bandwidth used, well, pretty much everything, to a centralized server (maybe PostgreSQL again?); easy server-side configuration (config file(s) stored on the servers); a web-based control panel for admin(s) and user(s) to show logs (this could work by just running queries against the databases); fast; high uptime; low memory usage; some sort of load distribution/load balancer (maybe DNS-based, or Pound, or Perlbal, or something else?); maybe a cache of some sort (memcached or Perlbal or something else?). Thanks in advance

    Read the article

  • MATLAB setting matrix values in an array

    - by user324994
    I'm trying to write some code to calculate a cumulative distribution function in MATLAB. When I try to actually put my results into an array, it yells at me.

        tempnum = ordered1(1);
        k=2;
        while(k<538)
            count = 1;
            while(ordered1(k)==tempnum)
                count = count + 1;
                k = k + 1;
            end
            if(ordered1(k)~=tempnum)
                output = [output;[(count/537),tempnum]];
                k = k + 1;
                tempnum = ordered1(k);
            end
        end

    The errors I'm getting look like this:

        ??? Error using ==> vertcat
        CAT arguments dimensions are not consistent.

        Error in ==> lab8 at 1164
        output = [output;[(count/537),tempnum]];

    The line that adds to the output matrix was given to me by my TA. He didn't teach us much syntax throughout the year, so I'm not really sure what I'm doing wrong. Any help is greatly appreciated.

    Read the article

  • Did I implement clock drift properly?

    - by David Titarenco
    I couldn't find any clock drift RNG code for Windows anywhere so I attempted to implement it myself. I haven't run the numbers through ent or DIEHARD yet, and I'm just wondering if this is even remotely correct...

        void QueryRDTSC(__int64* tick) {
            __asm {
                xor eax, eax
                cpuid
                rdtsc
                mov edi, dword ptr tick
                mov dword ptr [edi], eax
                mov dword ptr [edi+4], edx
            }
        }

        __int64 clockDriftRNG() {
            __int64 CPU_start, CPU_end, OS_start, OS_end;
            // get CPU ticks -- uses RDTSC on the Processor
            QueryRDTSC(&CPU_start);
            Sleep(1);
            QueryRDTSC(&CPU_end);
            // get OS ticks -- uses the Motherboard clock
            QueryPerformanceCounter((LARGE_INTEGER*)&OS_start);
            Sleep(1);
            QueryPerformanceCounter((LARGE_INTEGER*)&OS_end);
            // CPU clock is ~1000x faster than mobo clock
            // return raw
            return ((CPU_end - CPU_start)/(OS_end - OS_start));
            // or
            // return a random number from 0 to 9
            // return ((CPU_end - CPU_start)/(OS_end - OS_start)%10);
        }

    If you're wondering why I Sleep(1), it's because if I don't, OS_end - OS_start returns 0 consistently (because of the bad timer resolution, I presume). Basically, (CPU_end - CPU_start)/(OS_end - OS_start) always returns around 1000 with a slight variation based on the entropy of CPU load, maybe temperature, quartz crystal vibration imperfections, etc. Anyway, the numbers have a pretty decent distribution, but this could be totally wrong. I have no idea.

    Read the article

  • Assign sage variable values into R objects via sagetex and Sweave

    - by sheed03
    I am writing a short Sweave document that outputs into a Beamer presentation, in which I am using the sagetex package to solve an equation for two parameters of the beta binomial distribution, and I need to assign the parameter values into the R session so I can do additional processing on those values. The following code excerpt shows how I am interacting with sage:

        <<echo=false,results=hide>>=
        mean.raw <- c(5, 3.5, 2)
        theta <- 0.5
        var.raw <- mean.raw + ((mean.raw^2)/theta)
        @

        \begin{frame}[fragile]
        \frametitle{Test of Sage 2}
        \begin{sagesilent}
        var('a1, b1, a2, b2, a3, b3')
        eqn1 = [1000*a1/(a1+b1)==\Sexpr{mean.raw[1]}, ((1000*a1*b1)*(1000+a1+b1))/((a1+b1)^2*(a1+b1+1))==\Sexpr{var.raw[1]}]
        eqn2 = [1000*a2/(a2+b2)==\Sexpr{mean.raw[2]}, ((1000*a2*b2)*(1000+a2+b2))/((a2+b2)^2*(a2+b2+1))==\Sexpr{var.raw[2]}]
        eqn3 = [1000*a3/(a3+b3)==\Sexpr{mean.raw[3]}, ((1000*a3*b3)*(1000+a3+b3))/((a3+b3)^2*(a3+b3+1))==\Sexpr{var.raw[3]}]
        s1 = solve(eqn1, a1,b1)
        s2 = solve(eqn2, a2,b2)
        s3 = solve(eqn3, a3,b3)
        \end{sagesilent}
        Solutions of Beta Binomial Parameters:
        \begin{itemize}
        \item $\sage{s1[0]}$
        \item $\sage{s2[0]}$
        \item $\sage{s3[0]}$
        \end{itemize}
        \end{frame}

    Everything compiles just fine, and on that slide I am able to see the solutions for each equation's parameters in the itemized list (for example, the first item in the itemized list from that Beamer slide is output as [a1=(328/667), b1=(65272/667)]; I am not able to post an image of the slide, but I hope you get the idea). I would like to save the parameter values a1, b1, a2, b2, a3, b3 into R objects so that I can use them in simulations. I cannot find any documentation in the sagetex package on how to save output from sage commands into variables for use with other programs (in this case R). Any suggestions on how to get these values into R?

    Read the article

  • How to scale a PHP application (servers, mysql, memcache)

    - by Stéphane Goetz
    Hi, I'm currently creating a website for a social project in Switzerland, and before there is an overflow of users I want to prepare the application to scale. I have answered many questions for myself already, but some are left. Let me explain what I want to do. First, at the beginning the application will have only one server (for a short time) with DNS, PHP, MySQL, data and memcache. Second, then I will split them in two: DNS, MySQL, memcache / data, PHP. Third, here is the problem: I don't know exactly how to do it from there while keeping the application running well. I could do: Front: load balancer, memcache, DNS; Web 1: PHP, data; Web 2: PHP, data; MySQL. This would be the scheme; all PHP sessions are kept in the DB. BUT, how do I sync the data? Do I run rsync to keep the copies up to date? Do I put them on a separate (network) disk to be sure? But in that case, how do I handle user uploads? And if the website gets more successful and we have to move to bigger structures, wouldn't that create some latency on updates? Or would it be a good thing to go directly to Amazon's web services? Some info: I use CodeIgniter as the framework; I use Linux as the web server (distribution not chosen yet, but it should be Debian). Thanks in advance for your answers.
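
    A hedged sketch of the "separate network disk" option raised above: export one shared uploads directory over NFS and mount it on both web nodes, so a file uploaded through either PHP server is immediately visible to the other (host names and paths below are made up).

        # /etc/exports on the storage host
        /srv/uploads    web1(rw,sync,no_subtree_check)  web2(rw,sync,no_subtree_check)

        # /etc/fstab on web1 and web2
        storage:/srv/uploads   /var/www/app/uploads   nfs   defaults,_netdev   0  0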

    Read the article

  • Xcode "Build and Archive" from command line

    - by Dan Fabulich
    Xcode 3.2 provides an awesome new feature under the Build menu, "Build and Archive" which generates an .ipa file suitable for Ad Hoc distribution. You can also open the Organizer, go to "Archived Applications," and "Submit Application to iTunesConnect." Is there a way to use "Build and Archive" from the command line (as part of a build script)? I'd assume that xcodebuild would be involved somehow, but the man page doesn't seem to say anything about this. UPDATE Michael Grinich requested clarification; here's what exactly you can't do with command-line builds, features you can ONLY do with Xcode's Organizer after you "Build and Archive." You can click "Share Application..." to share your IPA with beta testers. As Guillaume points out below, due to some Xcode magic, this IPA file does not require a separately distributed .mobileprovision file that beta testers need to install; that's magical. No command-line script can do it. For example, Arrix's script (submitted May 1) does not meet that requirement. More importantly, after you've beta tested a build, you can click "Submit Application to iTunes Connect" to submit that EXACT same build to Apple, the very binary you tested, without rebuilding it. That's impossible from the command line, because signing the app is part of the build process; you can sign bits for Ad Hoc beta testing OR you can sign them for submission to the App Store, but not both. No IPA built on the command-line can be beta tested on phones and then submitted directly to Apple. I'd love for someone to come along and prove me wrong: both of these features work great in the Xcode GUI and cannot be replicated from the command line.

    Read the article

  • Using boost asio for pub/sub style tcp in a game loop

    - by unohoo
    I have been reading through the boost asio documentation for a couple of hours now, and while I think the documentation is really great, I am still left a bit confused about how to implement the system that I need. I have to stream info from a game engine to a list of computers over TCP. One snag is that, unlike traditional pub/sub, the computer that does the distribution of info is actually the computer that has to connect to the subscribers (instead of the subscribers registering with the publisher). This is done via a config file: a list of IPs/ports along with the data that each of them requires. The subscribers listen, but do not know the IP of the publisher. (As a side note, I'm quite new to network programming, so maybe I'm missing something... but it's strange that I do not find much information regarding this style of "inverted" client-server model.) I am looking for suggestions for the implementation of such a system using boost asio. Of course I have to integrate the networking into an already existing engine, so with regard to that: What would be a good way to handle messages being sent to multiple computers every frame? Use async_write, call io_service.run and then reset every frame? Would it be better to run io_service in its own thread? Or should I just use threads and blocking writes?

    Read the article

  • C headers: compiler specific vs library specific?

    - by leonbloy
    Is there some clear-cut distinction between standard C *.h header files that are provided by the C compiler, as opposed to those which are provided by a standard C library? Is there some list, or some standard locations? Motivation: in this answer I got a while ago, regarding a missing unistd.h in the latest TinyC compiler, the author argued that unistd.h (contrary to sys/unistd.h) should not be provided by the compiler but by your C library. I could not make much sense of that response (for one thing, shouldn't that also apply to, say, stdio.h?), but I'm still wondering about it. Is that correct? Where is some authoritative reference for this? Looking at other compilers, I see that other "self contained" POSIX C compilers that are hosted on Windows (like the GCC toolchain that comes with MinGW, in several incarnations, or the Digital Mars compiler) include all header files. And in a standard Linux distribution (say, CentOS 5.10) I see that the gcc package provides a few header files (e.g., stdbool.h, syslimits.h) in /usr/lib/gcc/i386-redhat-linux/4.1.1/include/, and the glibc-headers package provides the majority of the headers in /usr/include/ (including stdio.h, /usr/include/unistd.h and /usr/include/sys/unistd.h). So in neither case do I see support for the above claim.
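
    One quick way to check, on a given system, which component actually provides a header (a shell snippet, assuming gcc): preprocess a one-line include and read the line markers, and ask gcc where its own private include directory lives.

        # shows the path the preprocessor resolved unistd.h to
        echo '#include <unistd.h>' | gcc -E -x c - | grep unistd.h | head -n 3

        # prints gcc's compiler-owned include directory (stdbool.h and friends)
        gcc -print-file-name=include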

    Read the article

  • Connecting to an RMI object without registry

    - by Mark Probst
    I think I need to connect to a remote RMI object without going through the registry, but I don't know how. My situation is this: I'm implementing a simple job distribution service which consists of one distributor and multiple workers. The distributor has a registered RMI object to which clients connect to send jobs, and workers connect to accept jobs. Unfortunately the distributor and worker hosts are behind a firewall. To get to the distributor host I am tunneling two ports (one for the registry, one for the distributor object) via SSH, so I can get to the registry and the distributor from outside the firewall. To make that work I have to set "-Djava.rmi.server.hostname=localhost" on the distributor JVM so that the clients connect to their local, tunneled port, instead of the port on the actual distributor host, which is blocked. This creates a problem for the workers, though, because they need to connect to the distributor directly, but because of the "localhost" redirection they behave like clients and try to connect to a port on their own host, which is not available, because I'm not tunneling on the workers (it is impractical). Now, if I could connect to a remote object directly by giving the hostname and port, I could do away both with the registry on the distributor and the "localhost" hack, and make the workers connect properly. How do I do that? Or is there a different solution to this problem?

    Read the article

  • MySQLPython is ignoring my my.cnf file. Where does it get its information?

    - by ?????
    When I try to use MySQLPython (via SQLAlchemy) I get the error

        File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/MySQL_python-1.2.3c1-py2.6-macosx-10.6-x86_64.egg/MySQLdb/connections.py", line 188, in __init__
          super(Connection, self).__init__(*args, **kwargs2)
        sqlalchemy.exc.OperationalError: (OperationalError) (2002, "Can't connect to local MySQL server through socket '/opt/local/var/run/mysql5/mysqld.sock' (2)") None None

    yet every other MySQL client on my machine connects just fine! My my.cnf file states:

        [client]
        port   = 3306
        socket = /tmp/mysql/mysql.sock

        [safe_mysqld]
        socket = /tmp/mysql/mysql.sock

        [mysqld_safe]
        socket = /tmp/mysql/mysql.sock

        [mysqld]
        socket = /tmp/mysql/mysql.sock
        port   = 3306

    and the mysql.sock file is, indeed, located in /tmp/mysql. I verified that ~/.my.cnf and /var/lib/mysql/my.cnf aren't overriding it. The mysql5 client program, etc., has no trouble connecting, and neither does a Groovy/Grails installation on the same machine using a jdbc/mysql connection:

        thrilllap-2:~ swirsky$ mysql5
        Welcome to the MySQL monitor.  Commands end with ; or \g.
        Your MySQL connection id is 6
        Server version: 5.1.47 Source distribution

        Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.
        This software comes with ABSOLUTELY NO WARRANTY. This is free software,
        and you are welcome to modify and redistribute it under the GPL v2 license

        Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

        mysql> show databases;
        +--------------------+
        | Database           |
        +--------------------+
        | information_schema |
        | test               |
        +--------------------+
        2 rows in set (0.00 sec)

        mysql>

    Why can't MySQLdb for Python figure this out? Where would it look, if not the my.cnf files?
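
    For what it's worth, MySQLdb only reads option files when explicitly asked to (otherwise it falls back to the socket path compiled into the client library, which explains the /opt/local/var/run path in the error). A sketch of two ways to hand it the right socket, with placeholder credentials:

        import MySQLdb

        # either tell MySQLdb to read the option file itself...
        conn = MySQLdb.connect(read_default_file='/etc/my.cnf',
                               read_default_group='client', db='test')

        # ...or pass the socket through SQLAlchemy's URL as a connect argument
        from sqlalchemy import create_engine
        engine = create_engine(
            'mysql://user:password@localhost/test'
            '?unix_socket=/tmp/mysql/mysql.sock')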

    Read the article

  • How to install previously-archived apps from xcode organizer to my iphone

    - by Ben Clayton
    Hi all. Xcode keeps an archive of all the versions of my apps that I've submitted to the App Store in the 'Archived Applications' section. I assumed that using this I could install an old version of an app on my device, in order to reproduce any problems my client may have had with that particular version. However, when I try to do this I get an error: 'this executable was signed with invalid entitlements, the entitlements specified in your application's code signing entitlements do not match those specified in your provisioning profile'. The original app was signed using our App Store distribution certificate, and I use the Organizer interface to re-sign it using our developer profile: select the archived app, select the version I want to test, click 'Share', select 'iPhone Developer' next to Identity, save to disk (this saves the ipa file), then copy the ipa to the device using the little + button you see next to 'Applications' on the screen you get when you select the connected device. Then I get the error, and the app isn't installed. Is there something obvious I'm doing wrong here? Or is there a different process to re-install an archived app onto my device? Thanks,

    Read the article

  • Python on Mac: Fink? MacPorts? Builtin? Homebrew? Binary installer?

    - by BastiBechtold
    For the last few days, I have been trying to use Python for some audio development. The thing is, Mac OSX does not handle uninstalling stuff well. Actually, there is no way to uninstall anything. Once it is on your system, you better pray that it didn't do any funny stuff. Hence, I don't really want to rely on installer packages for Python. So I turn to Homebrew and install Python using Homebrew. Works fabulously. Using pip, Numpy, SciPy, Matplotlib were no (big) problem, either. Now I want to play audio. There is a host of different packages out there, but pip does not seem willing to install any. But, there is a binary distribution for PyGame, which I guess should work with the built-in Python. Hence my question: What would you do? Would you just install the binary distributions and hope that they interoperate well and never need uninstalling? Would you hack your way through whichever package control management system you prefer and deal with its problems? Something else?

    Read the article

  • Producing a static HTML site from XML content

    - by Skilldrick
    I have a long document in XML from which I need to produce static HTML pages (for distribution via CD). I know (to varying degrees) JavaScript, PHP and Python. The current options I've considered are listed here: I'm not ruling out JavaScript, so one option would be to use ajax to dynamically load the XML content into HTML pages. Learn some basic XSLT and produce HTML to the correct spec this way. Produce the site with PHP (for example) and then generate a static site. Write a script (in Python for example) to convert the XML into HTML; this is similar to the XSLT option but without having to learn XSLT. Useful information: the XML will likely change at some point, so I'd like to be able to easily regenerate the site. I'll have to produce some kind of menu for jumping around the document (so I'll need to produce some kind of index of the content). I'd like to know if anyone has any better ideas that I haven't thought of. If not, I'd like you to tell me which of my options seems the most sensible. I think I know what I'm going to do, but I'd like a second opinion. Thanks.
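
    A minimal sketch of the last option, assuming a made-up <document>/<section title="...">/<para> layout (the real element names would come from the actual XML); it writes one page per section plus a simple index page for the menu.

        import xml.etree.ElementTree as ET
        from xml.sax.saxutils import escape

        tree = ET.parse('content.xml')
        index = []
        for i, section in enumerate(tree.findall('.//section')):
            title = section.get('title', 'Section %d' % i)
            body = ''.join('<p>%s</p>' % escape(p.text or '')
                           for p in section.findall('para'))
            name = 'section%03d.html' % i
            with open(name, 'w') as out:
                out.write('<html><head><title>%s</title></head><body><h1>%s</h1>%s</body></html>'
                          % (escape(title), escape(title), body))
            index.append('<li><a href="%s">%s</a></li>' % (name, escape(title)))
        with open('index.html', 'w') as out:
            out.write('<html><body><ul>%s</ul></body></html>' % ''.join(index))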

    Read the article

  • Uploading Binary iPhone App "The signature was invalid" again again and again...

    - by user338386
    Hello! I'm going crazy! I'm trying to upload the binary of my first application, but I always get the same error: "The binary you uploaded was invalid. The signature was invalid, or it was not signed with an Apple submission certificate." I did everything, EVERYTHING!! I created the certificate request, used it for both the developer and distribution certificates, created the provisioning profile (12 times!!!), always cleaning my keychain and my Xcode, deleting the old certificates and profiles. I rebooted the machine, restarted Xcode, the log is correct, but... I can't upload my app!!!! I checked whether my iPhone is connected (I tried with the iPhone disconnected too). I checked the certificate in both my project settings' "Distribution" configuration (a duplicate of the "Release" configuration) and in my target settings. Reveal in Finder, compress the app and send the zip... I tried with Application Loader and with iTunes Connect online... but nothing! NOTHING!! I've spent 8 hours, and I still can't get my app uploaded!!! I'm really going crazy! Can anyone help me pleeease? Thx!

    Read the article

  • Splitting a set of object into several subsets of 'similar' objects

    - by doublep
    Suppose I have a set of objects, S. There is an algorithm f that, given a set S, builds a certain data structure D on it: f(S) = D. If S is large and/or contains vastly different objects, D becomes large, to the point of being unusable (i.e. not fitting in allotted memory). To overcome this, I split S into several non-intersecting subsets: S = S1 + S2 + ... + Sn, and build a Di for each subset. Using n structures is less efficient than using one, but at least this way I can fit into memory constraints. Since the size of f(S) grows faster than S itself, the combined size of the Di is much less than the size of D. However, it is still desirable to reduce n, i.e. the number of subsets, or to reduce the combined size of the Di. For this, I need to split S in such a way that each Si contains "similar" objects, because then f will produce a smaller output structure if the input objects are "similar enough" to each other. The problem is that while "similarity" of objects in S and the size of f(S) do correlate, there is no way to compute the latter other than just evaluating f(S), and f is not quite fast. The algorithm I currently have iteratively adds each next object from S into one of the Si, choosing the one that results in the least possible (at this stage) increase in combined Di size:

        for x in S:
            i = the i for which size(f(Si + {x})) - size(f(Si)) is minimal
            Si = Si + {x}

    This gives practically useful results, but is certainly pretty far from the optimum (i.e. the minimal possible combined size). Also, this is slow. To speed it up somewhat, I compute size(f(Si + {x})) - size(f(Si)) only for those i where x is "similar enough" to the objects already in Si. Is there any standard approach to such kinds of problems? I know of the branch and bound algorithm family, but it cannot be applied here because it would be prohibitively slow. My guess is that it is simply not possible to compute the optimal distribution of S into the Si in reasonable time. But is there some common iteratively improving algorithm?
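
    A direct Python rendering of that greedy pass, with f_size standing in for "build Di and measure it" and an explicit memory budget (the budget parameter is an addition for illustration, not part of the question):

        def greedy_partition(S, f_size, max_size):
            subsets, sizes = [], []                  # the S_i and cached size(f(S_i))
            for x in S:
                best, best_cost = None, None
                for i, (si, cur) in enumerate(zip(subsets, sizes)):
                    cost = f_size(si + [x]) - cur    # marginal growth of D_i
                    if best_cost is None or cost < best_cost:
                        best, best_cost = i, cost
                if best is None or sizes[best] + best_cost > max_size:
                    subsets.append([x])              # open a new subset
                    sizes.append(f_size([x]))
                else:
                    subsets[best].append(x)
                    sizes[best] += best_cost
            return subsets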

    Read the article

  • How should I configure grub for booting linux kernel from a USB hard drive?

    - by skolima
    I have a laptop hard drive in an external enclosure which I use as a large pendrive. For an added twist, I have installed Linux on it, so I can boot any machine with my distribution of choice (e.g. for data recovery, repairing a b0rked system, or just using a borrowed laptop without destroying the preinstalled Windows). The problem is that, depending on the hardware configuration, the USB hard drive may be visible under different paths. For the grub configuration I just use (hd0,0), as it is relative to the device grub was launched from. I have UUID entries in /etc/fstab. I also specify rootwait in the kernel parameters so that the kernel waits for the USB subsystem to settle down before trying to mount the device. What should I pass to the kernel as root= ? Currently I boot from the pendrive once, check the debug messages to see which /dev/sdX device the kernel has assigned to the USB drive, then reboot and edit the grub configuration. I can't change anything on the PC besides enabling "boot from USB hard drive" in the BIOS and setting it to a higher priority than the internal hard drives. There are various initrd-generating scripts which include support for UUIDs in the root device path; unfortunately the Gentoo native one (genkernel) does not support rootwait, and I had no luck trying to use others. The boot process goes like this (it is quite similar in Windows):

    1. The BIOS chooses the boot device and loads whatever is in its MBR (which happens to be grub stage 1).
    2. Grub loads its configuration and stage-2 files from the device it has set as root, using (hd0) for the device it was loaded from by the BIOS.
    3. Grub loads and starts a kernel (still the same numbering, so I can use (hd0,0) again).
    4. The kernel initializes all built-in devices (rootwait does its magic now).
    5. The kernel mounts the partition it was passed as root (this is a kernel parameter, not a grub parameter).
    6. init.d starts the userland booting process, including mounting things from /etc/fstab.

    Step 5 is the one giving me problems.

    Read the article

  • CakePHP: Interaction between different files/classes

    - by Alexx Hardt
    Hey, I'm cloning a commercial student management system. Students use the frontend to apply for lectures; uni staff can modify events (time, room, etc.). The core of the app will be the algorithm which distributes the seats to students. I already asked about it here: How to implement a seat distribution algorithm for uni lectures. Now, I found a class for that algorithm here: http://www.phpclasses.org/browse/file/10779.html. I put the 'class GA' into app/vendors. I need to write a 'class Solution', which represents one object (a child, and later a parent, in the evolutionary process). I'll also have to write the functions mutate(), crossover() and fitness(). fitness() calculates a score for a solution, based on whether there are overbooked courses etc.; crossover() is the crazy monkey sex function which produces a child from two parents; and mutate() modifies a child after crossover. Now, the fitness() function needs to access a few related models and their find() functions. It evaluates a solution's fitness by checking e.g. whether there are overbooked courses or unfulfilled wishes, and penalizes that. Where would I put ga.php, solution.php and the three functions? ga.php has to access the functions, but the functions have to access the models. I also don't want to call any App::import()s from within the fitness() function, because it gets called many thousands of times when the algorithm runs. Hope someone can help me. Thanks in advance =)

    Read the article

  • Generate random number from an arbitrary weighted list

    - by Fernando
    Here's what I need to do; I'll be doing this both in PHP and JavaScript. I have a list of numbers that will range from 1 to 300-500 (I haven't set the limit yet). I will be running a drawing where 10 numbers will be picked at random from the given range. Here's the tricky part: I want some numbers to be less likely to be drawn. A small subset of those 300-500 numbers will be flagged as "lucky numbers". For example, out of 100 drawings, most numbers have equal chances of being drawn, except for a few that will only be picked once every 30-50 drawings. Basically I need to artificially set the probability of certain numbers being picked while maintaining an even distribution among the rest of the numbers. The only similar thing I've found so far is this question: Generate A Weighted Random Number. The problem is that my spec has considerably more numbers (up to 500), so the weights would get very small, and supposedly this could be a problem with that solution (rejection sampling). I'm still trying it, though, but I wonder if there are other solutions. Math is not my thing, so I appreciate any input. Thanks.
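
    A sketch of the idea in Python (the asker needs PHP and JavaScript, but the algorithm ports directly): give every number weight 1 except the flagged ones, which get some smaller, tunable weight, then repeatedly walk the cumulative weights and remove each winner so that 10 distinct numbers come out.

        import random

        def draw(numbers, lucky, picks=10, lucky_weight=0.1):
            weights = dict((n, lucky_weight if n in lucky else 1.0) for n in numbers)
            pool, chosen = list(numbers), []
            for _ in range(picks):
                total = sum(weights[n] for n in pool)
                r, acc = random.uniform(0, total), 0.0
                for idx, n in enumerate(pool):
                    acc += weights[n]
                    if r <= acc:
                        chosen.append(pool.pop(idx))  # remove so it can't repeat
                        break
            return chosen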

    Read the article
