Search Results

Search found 3551 results on 143 pages for 'canonical sources'.


  • How to pass dynamic parameters to .pde file

    - by Kalpana
    class Shape contains two methods, drawCircle() and drawTriangle(), and each takes a different set of arguments. At present I invoke this by calling the .pde file directly. How do I pass these arguments from an HTML file if I want to control the arguments being passed to the draw functions?

    1) Example.html (current version) has:

        <script src="processing-1.0.0.min.js"></script>
        <canvas data-processing-sources="example.pde"></canvas>

    2) Example.pde has:

        class Shape {
          void drawCircle(int x, int y, int radius) {
            ellipse(x, y, radius, radius);
          }
          void drawTriangle(int x1, int y1, int x2, int y2, int x3, int y3) {
            triangle(x1, y1, x2, y2, x3, y3);
          }
        }
        Shape shape = new Shape();
        shape.drawCircle(10, 40, 70);

    I am looking to do something like the following in my HTML files, so that I can move all the functions into a separate file and call them with different arguments to draw different shapes (much as you would do it in Java):

        A.html:
        <script>
          Shape shape = new Shape();
          shape.drawCircle(10, 10, 3);
        </script>

        B.html:
        <script>
          Shape shape = new Shape();
          shape.drawTriangle(30, 75, 58, 20, 86, 75);
        </script>

    3) I am also using Example2.pde, which has:

        void setup() {
          size(200, 200);
          background(125);
          fill(255);
        }
        void rectangle(int x1, int y1, int x2, int y2) {
          rect(x1, y1, x2, y2);
        }

    and my Example2.html has:

        var processingInstance;
        processingInstance.rectangle(30, 20, 55, 55);

    but this is not working. How can I pass these parameters dynamically from the HTML?

    Read the article

  • Botan linking error on Windows MSVC

    - by Jake Petroules
    I am trying to compile a library that links against the version of Botan shipped with the Qt Creator sources, using MSVC 2008, but I am receiving the following error. MinGW compiles and links it fine. What is the issue?

        databasecrypto.obj:-1: error: LNK2019: unresolved external symbol
        "public: static unsigned int const Botan::Pipe::DEFAULT_MESSAGE" (?DEFAULT_MESSAGE@Pipe@Botan@@2IB)
        referenced in function
        "private: static class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >
        __cdecl DatabaseCrypto::b64_encode(class Botan::SecureVector<unsigned char> const &)"
        (?b64_encode@DatabaseCrypto@@CA?AV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@ABV?$SecureVector@E@Botan@@@Z)

    The function in question:

        /*!
            Encodes the Botan byte array \a in as a base 64 string.
            \param in The Botan byte array to encode.
         */
        std::string DatabaseCrypto::b64_encode(const SecureVector<Botan::byte> &in)
        {
            Pipe pipe(new Base64_Encoder);
            pipe.process_msg(in);
            return pipe.read_all_as_string(); // <-- the default argument here is Botan::Pipe::DEFAULT_MESSAGE
        }

    Read the article

  • Copying a subset of data to an empty database with the same schema

    - by user193655
    I would like to export part of a database full of data to an empty database. Both databases have the same schema, and I want to maintain referential integrity. Simplified, my case looks like this:

    MainTable has the following fields:
    1) MainID integer PK
    2) Description varchar(50)
    3) ForeignKey integer, FK to MainID of SecondaryTable

    SecondaryTable has the following fields:
    4) MainID integer PK (referenced by (3))
    5) AnotherDescription varchar(50)

    The goal I'm trying to accomplish is "export all records from MainTable matching a WHERE condition", for example all records where MainID < 100. To do it manually I would first export all the data from SecondaryTable contained in this select:

        select *
        from SecondaryTable ST
        left outer join MainTable MT on ST.MainID = MT.ForeignKey

    then export the needed records from MainTable:

        select * from MainTable where MainID < 100

    This is the manual approach, OK. Of course my real case is much, much more complex: I have 200+ tables and many cascading FKs, so doing it manually is painful or impossible. Is there a way to force the copy of the main table only while "enforcing referential integrity", so that my query is something like:

        select * from MainTable where MainID < 100 WITH "COPYING ALL FK sources"

    In that case field (5) would also be copied. Is there a syntax or a tool to do this? Table per table I'd like to specify conditions (MainID < 100 applies only to MainTable; I also have other tables with their own conditions).

    Read the article

  • autotools: no rule to make target all

    - by Raffo
    I'm trying to port an application I'm developing to autotools. I'm not an expert at writing makefiles, and being able to use autotools is a requirement for me. The structure of the project is the following:

        ..
        ../src/Main.cpp
        ../src/foo/
        ../src/foo/x.cpp
        ../src/foo/y.cpp
        ../src/foo/A/k.cpp
        ../src/foo/A/Makefile.am
        ../src/foo/Makefile.am
        ../src/bar/
        ../src/bar/z.cpp
        ../src/bar/w.cpp
        ../src/bar/Makefile.am
        ../inc/foo/
        ../inc/bar/
        ../inc/foo/A
        ../configure.in
        ../Makefile.am

    The root folder of the project contains a "src" folder with the main of the program and a number of subfolders containing the other sources of the program. The root of the project also contains an "inc" folder with the .h files, which are nothing more than the definitions of the classes in "src", so "inc" mirrors the structure of "src". I have written the following configure.in in the root:

        AC_INIT([PNAME], [1.0])
        AC_CONFIG_SRCDIR([src/Main.cpp])
        AC_CONFIG_HEADER([config.h])
        AC_PROG_CXX
        AC_PROG_CC
        AC_PROG_LIBTOOL
        AM_INIT_AUTOMAKE([foreign])
        AC_CONFIG_FILES([Makefile src/Makefile src/foo/Makefile src/foo/A/Makefile src/bar/Makefile])
        AC_OUTPUT

    The following is ../Makefile.am:

        SUBDIRS = src

    and this is ../src/Makefile.am, where the main of the project is contained:

        bin_PROGRAMS = pname
        gsi_SOURCES = Main.cpp
        AM_CPPFLAGS = -I../../inc/foo \
                      -I../../inc/foo/A \
                      -I../../inc/bar/
        pname_LDADD = foo/libfoo.a bar/libbar.a
        SUBDIRS = foo bar

    and in ../src/foo:

        noinst_LIBRARIES = libfoo.a
        libfoo_a_SOURCES = \
            x.cpp \
            y.cpp
        AM_CPPFLAGS = \
            -I../../inc/foo \
            -I../../inc/foo/A \
            -I../../inc/bar

    with the analogous in src/bar. The problem is that after calling automake and autoconf, "make" fails. In particular, make enters the directory src, then foo, and creates libfoo.a, but the same fails for libbar.a with the following error:

        Making all in bar
        make[3]: Entering directory `/user/Raffo/project/src/bar'
        make[3]: *** No rule to make target `all'.  Stop.

    I have read the autotools documentation, but I haven't been able to find an example similar to the one I am working on. Unfortunately I can't change the directory structure, as this is a fixed requirement of the project I'm working on. I don't know if you can help me or give me a hint, but maybe you can guess the error or point me to a similarly structured example. Thank you.

    Read the article

  • How can I pass a const array or a variable array to a function in C?

    - by CSharperWithJava
    I have a simple function Bar that uses a set of values from a data set that is passed in as an array of data structures. The data can come from two sources: a constant, initialized array of default values, or a dynamically updated cache. The calling function determines which data set is used and passes it to Bar. Bar doesn't need to edit any of the data and in fact should never do so. How should I declare Bar's data parameter so that I can pass in data from either set?

        typedef union Foo {
            long _long;
            int  _int;
        } Foo;

        static const Foo DEFAULTS[8] = {1, 10, 100, 1000, 10000, 100000, 1000000, 10000000};
        static Foo Cache[8] = {0};

        void Bar(Foo* dataSet, int len); /* example function prototype */

    Note, this is C, NOT C++, if that makes a difference.

    Edit: One more thing. When I use the example prototype I get a type-qualifier-mismatch warning (because I'm passing a pointer to a const array where a pointer to mutable data is expected?). What do I have to change to fix that? (A sketch of one option follows below.)
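
    One option, sketched below, is to declare the parameter as a pointer to const: a non-const array converts implicitly to const Foo *, so both sources can be passed without casts, and the qualifier warning goes away. This is a minimal, self-contained illustration rather than the original code; the sum() helper and the shortened 3-element arrays are made up for the example.

        #include <assert.h>
        #include <stddef.h>

        typedef union Foo {
            long _long;
            int  _int;
        } Foo;

        /* Read-only view of the data: both const and non-const arrays can be passed. */
        static long sum(const Foo *dataSet, size_t len)
        {
            long total = 0;
            for (size_t i = 0; i < len; ++i)
                total += dataSet[i]._long;
            return total;
        }

        static const Foo DEFAULTS[3] = { {1}, {10}, {100} };  /* constant source */
        static Foo Cache[3];                                   /* mutable source  */

        int main(void)
        {
            Cache[0]._long = 5;
            assert(sum(DEFAULTS, 3) == 111); /* const array: no cast, no warning */
            assert(sum(Cache, 3) == 5);      /* mutable array: implicit Foo* -> const Foo* */
            return 0;
        }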

    Read the article

  • "You have already activated" message even when using bundle exec

    - by juanpastas
    I am installing gems in my Gemfile in the shared path, as Capistrano does by default, and when I run:

        bundle exec rake assets:precompile RAILS_ENV=production

    I get:

        You have already activated rake 0.9.2.2, but your Gemfile requires rake 10.0.4.
        Using bundle exec may solve this.

    See that:

        cat Gemfile.lock | grep rake

    returns:

        rake (>= 0.8.7)
        rake (10.0.4)

    This is my gem environment output:

        - RUBYGEMS VERSION: 1.8.24
        - RUBY VERSION: 1.9.3 (2013-06-27 patchlevel 448) [x86_64-linux]
        - INSTALLATION DIRECTORY: /home/bitnami/my_app/shared/bundle/ruby/1.9.1/
        - RUBY EXECUTABLE: /opt/bitnami/ruby/bin/ruby
        - EXECUTABLE DIRECTORY: /home/bitnami/my_app/shared/bundle/ruby/1.9.1/bin
        - RUBYGEMS PLATFORMS:
          - ruby
          - x86_64-linux
        - GEM PATHS:
          - /home/bitnami/my_app/shared/bundle/ruby/1.9.1/
        - GEM CONFIGURATION:
          - :update_sources => true
          - :verbose => true
          - :benchmark => false
          - :backtrace => false
          - :bulk_threshold => 1000
          - "gemhome" => "/home/bitnami/my_app/shared/bundle/ruby/1.9.1/"
          - "gempath" => ["/home/bitnami/my_app/shared/bundle/ruby/1.9.1/"]
        - REMOTE SOURCES:
          - http://rubygems.org/

    Update:

        which -a rake
        /opt/bitnami/rvm/bin/rake
        /opt/bitnami/ruby/bin/rake

    Update 2: I tried giving the full path to rake, but same problem.

    Read the article

  • Ideas Related to Subset Sum with 2,3 and more integers

    - by rolandbishop
    I've been struggling with this problem just like everyone else, and I'm quite sure there have been more than enough posts explaining it. However, to understand it fully, I wanted to share my thoughts and get more efficient solutions from all the great people in here related to the Subset Sum problem. I've searched the Internet and there are actually a lot of sources, but I'm really willing to re-implement an algorithm, or find my own, in order to understand it fully.

    The key thing I'm struggling with is efficiency, considering the set size will be large (I do not have a limit, just conceptually large). The two phases I'm trying to implement ideas on are: finding two numbers that sum to a given integer T, finding three numbers, and eventually K numbers.

    Some ideas I've thought about: for the two-integer part, basically sorting the array, O(n log n), and for each element in the array searching for its complement (e.g. if T is 0 and the array element is 3, searching for -3). Maybe including a hash table would be better, providing O(1) lookup of the complement? (A small sketch of that idea follows below.) For three or more integers I've found an amazing blog post: http://www.skorks.com/2011/02/algorithms-a-dropbox-challenge-and-dynamic-programming/. However, even the author states that it is not applicable for large numbers.

    So, for 2, 3 and more integers, what ideas could be applied to the subset problem? I'm struggling with setting up a dynamic programming method that will be efficient for large inputs as well.
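
    As a rough illustration of the hash-table idea for the pair case (and a naive extension to triples), here is a minimal Python sketch. The function names and sample inputs are made up for the example, and this is not the general subset-sum dynamic program, just the K = 2 and K = 3 special cases:

        def two_sum(nums, target):
            """Return a pair (a, b) with a + b == target, or None. O(n) expected time."""
            seen = set()
            for x in nums:
                if target - x in seen:      # complement already seen -> done
                    return (target - x, x)
                seen.add(x)
            return None

        def three_sum(nums, target):
            """Return a triple summing to target, or None. O(n^2) expected time."""
            for i, x in enumerate(nums):
                pair = two_sum(nums[i + 1:], target - x)
                if pair is not None:
                    return (x,) + pair
            return None

        assert two_sum([3, -3, 7], 0) == (3, -3)
        assert three_sum([1, 2, 4, 9], 7) == (1, 2, 4)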

    Read the article

  • F# and statically checked union cases

    - by Johan Jonasson
    Soon my brother-in-arms Joel and I will release version 0.9 of Wing Beats. It's an internal DSL written in F#, with which you can generate XHTML. One of the sources of inspiration has been the XHTML.M module of the Ocsigen framework. I'm not used to the OCaml syntax, but I do understand that XHTML.M somehow statically checks whether the attributes and children of an element are of valid types. We have not been able to statically check the same thing in F#, and now I wonder if someone has any idea of how to do it.

    My first naive approach was to represent each element type in XHTML as a union case. But unfortunately you cannot statically restrict which cases are valid as parameter values, as in XHTML.M. Then I tried to use interfaces (each element type implements an interface for each valid parent) and type constraints, but I didn't manage to make it work without explicit casting in a way that made the solution cumbersome to use, and it didn't feel like an elegant solution anyway. Today I've been looking at Code Contracts, but they seem to be incompatible with F# Interactive: when I hit Alt+Enter it freezes.

    Just to make my question clearer, here is a super simple artificial example of the same problem:

        type Letter =
            | Vowel of string
            | Consonant of string

        let writeVowel =
            function
            | Vowel str -> sprintf "%s is a vowel" str

    I want writeVowel to accept only Vowels statically, and not, as above, check it at runtime. How can we accomplish this? Does anyone have any idea? There must be a clever way of doing it. If not with union cases, maybe with interfaces? I've struggled with this, but I'm trapped in the box and can't think outside of it.

    Read the article

  • .NET consumer of ActiveX throwing TargetParameterCountException

    - by DevSolo
    I have a .NET (3.5 with Dev Studio 2008) app that hosts a visual ActiveX control (written in C++ with Dev Studio 2003). I have access to all sources, but can't easily move the ActiveX control up to 2008. This has worked fine in the past. I made some changes to the ActiveX control and now, when calling one method on it, I'm getting a TargetParameterCountException 100% of the time.

    The signature of the ActiveX method is:

        LONG CMyActive::License(LPCTSTR string1, LPCTSTR string2, LONG long1, LPCTSTR string3, LPCTSTR string4);

    When viewing the method in the object browser or Reflector, .NET sees it as:

        public virtual int License(string string1, string string2, int long1, string string3, string string4)

    I renamed the parameters for demonstration purposes (the boss gets twitchy about any code), but I left the method name, as it could be relevant. There are method calls prior to this one that work; I just can't seem to figure out why I'm suddenly getting this exception. The HRESULT is 0x8002000E (DISP_E_BADPARAMCOUNT), and a quick search seems to indicate that's a general one. Thanks to all for reading.

    Read the article

  • Cocoa framework development: sharing between projects

    - by e.James
    I am currently developing a handful of similar Cocoa desktop apps. In an effort to share code between them, I have identified a set of core classes and functions that can be common across all of these applications. I would like to bundle this common code into a framework which all of my current applications (and any future ones) can link against.

    Now, here's the hard part: I'm going to be developing this framework as I go, so I need each of my desktop apps to have a reference to it, but I want to be able to edit the framework source code from within each of the app projects and have the framework automatically rebuilt as required. For example, let's say I have the Xcode project for DesktopAppNumberOne open, and I decide that one of my framework classes needs to be changed. I would like to:

    1. Open and edit the source file for that framework class without having to open the framework project in Xcode.
    2. Hit "build" on DesktopAppNumberOne, and see the framework rebuilt first (because one of its sources has changed), then see parts of DesktopAppNumberOne rebuilt (because one of the frameworks it links against has changed).

    I can see how to do this with only one app and one framework, but I'm having trouble figuring out how to do it with multiple apps that share a single framework. Has anyone had success with this approach? Am I perhaps going about this the wrong way? Any help would be appreciated.

    Read the article

  • Servlet receiving data both in ISO-8859-1 and UTF-8. How to URL-decode?

    - by AJPerez
    I have a web application (well, in fact just a servlet) which receives data from 3 different sources:

    - Source A is an HTML document written in UTF-8 and sends the data via <form method="get">.
    - Source B is written in ISO-8859-1 and sends the data via <form method="get">, too.
    - Source C is written in ISO-8859-1 and sends the data via <a href="http://my-servlet-url?param=value&param2=value2&etc">.

    The servlet receives the request params and URL-decodes them using UTF-8. As you can expect, A works without problems, while B and C fail (you can't URL-decode in UTF-8 something that's encoded in ISO-8859-1...). I can make slight modifications to B and C, but I am not allowed to change them from ISO-8859-1 to UTF-8, which would solve all the problems.

    In B, I've been able to solve the problem by adding accept-charset="UTF-8" to the <form>, so the form sends the data in UTF-8 even though the page is ISO. What can I do to fix C? Alternatively, is there any way to determine the charset in the servlet, so I can call URL-decode with the right encoding in each case?

    Read the article

  • Is the Java classpath final after JVM startup?

    - by Jens
    Hi, I have read a lot about the Java class-loading process lately. Often I came across texts that claimed that it is not possible to add classes to the classpath during runtime and load them without class-loader hackery (URLClassLoaders etc.).

    As far as I know, classes are loaded dynamically: their bytecode representation is only loaded and transformed into a java.lang.Class object when needed. So shouldn't it be possible to add a JAR or *.class file to the classpath after the JVM has started and load those classes, provided they haven't been loaded yet? (To be clear: in this case the classpath is a simple folder on the filesystem; "adding a JAR or *.class file" simply means dropping it into this folder.) And if not, does that mean that the classpath is searched on JVM startup and all fully qualified names of the found classes are cached in an internal "list"?

    It would be nice if you could point me to some sources in your answers, preferably the official Sun documentation: the Sun JVM Spec. I have read the spec but could not find anything about the classpath and whether it's finalized on JVM startup.

    P.S. This is a theoretical question. I just want to know if it is possible; there is nothing practical I want to achieve. There is just my thirst for knowledge :)

    Read the article

  • Why should I override hashCode() when I override equals() method?

    - by Bragaadeesh
    Ok, I have heard from many places and sources that whenever I override the equals() method, I need to override the hashCode() method as well. But consider the following piece of code:

        package test;

        public class MyCustomObject {

            int intVal1;
            int intVal2;

            public MyCustomObject(int val1, int val2) {
                intVal1 = val1;
                intVal2 = val2;
            }

            public boolean equals(Object obj) {
                return (((MyCustomObject) obj).intVal1 == this.intVal1)
                    && (((MyCustomObject) obj).intVal2 == this.intVal2);
            }

            public static void main(String a[]) {
                MyCustomObject m1 = new MyCustomObject(3, 5);
                MyCustomObject m2 = new MyCustomObject(3, 5);
                MyCustomObject m3 = new MyCustomObject(4, 5);

                System.out.println(m1.equals(m2));
                System.out.println(m1.equals(m3));
            }
        }

    Here the output is true, false, exactly the way I want it to be, and I don't care about overriding the hashCode() method at all. This means that overriding hashCode() is an option rather than being mandatory, as everyone says. I want a second confirmation. (A sketch of where this breaks down follows below.)
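
    For what it's worth, the equals/hashCode contract only starts to matter once hash-based collections are involved: HashSet and HashMap pick a bucket via hashCode() first and call equals() only within that bucket. Below is a small, self-contained sketch (the Point class and values are invented for illustration) of two "equal" objects that a HashSet fails to find, because Object's identity-based hashCode() is still in effect:

        import java.util.HashSet;
        import java.util.Set;

        public class HashCodeDemo {

            static class Point {
                final int x, y;
                Point(int x, int y) { this.x = x; this.y = y; }

                @Override public boolean equals(Object obj) {
                    if (!(obj instanceof Point)) return false;
                    Point p = (Point) obj;
                    return p.x == x && p.y == y;
                }
                // Deliberately NOT overriding hashCode(): Object.hashCode() is identity-based.
            }

            public static void main(String[] args) {
                Set<Point> set = new HashSet<>();
                set.add(new Point(3, 5));

                // equals() says these are equal...
                System.out.println(new Point(3, 5).equals(new Point(3, 5))); // true
                // ...but the set lookup misses, because the two instances almost certainly
                // hash to different buckets.
                System.out.println(set.contains(new Point(3, 5)));           // false (in practice)
            }
        }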

    Read the article

  • eclipse show errors but compiles when external makefile

    - by Anthony
    I have some C++ code using the CImg and Eigen libraries. In the C++ code I define a plugin like this:

        #define cimg_plugin1 "my_plugin.h"
        #include "CImg.h"

    The plugin contains many method definitions that are used in the C++ code. I also have a makefile that, when called from the command line (./make), allows me to compile everything and generates an executable. I want to import this code into a new Eclipse project, and I do it so:

    1. New > Project > C++ project > Makefile project > Empty project
    2. Unmark "Use default location" and select the folder containing my code on the filesystem
    3. Project > Properties > C/C++ Build: unmark "Use default build command" and set it to use my makefile
    4. Also in Project Properties > C/C++ General > Paths and Symbols: add paths to the folders containing Eigen and CImg
    5. Rebuild index
    6. Clean project
    7. Restart Eclipse

    When I build the project, Eclipse tells me I have more than 1000 errors in "my_plugin.h", but it is capable of generating the executable. Even so, I would like to get rid of these errors, because they are annoying. Also, if I want to open the declaration of CImg methods used in the plugin, I can't. I know this has been asked before, but none of the solutions I found were satisfactory for me (most of them are enumerated in the previous list). The sources I visited are the following, and I would be really happy if you could find and tell me others I didn't see:

    - Eclipse shows errors but project compile fine
    - eclipse C project shows errors (Symbol could not be resolved) but it compiles
    - Eclipse CDT shows some errors, but the project is successfully built
    - http://www.eclipse.org/forums/index.php/t/247954/

    Read the article

  • forcing stack w/i 32bit when -m64 -mcmodel=small

    - by chaosless
    I have C sources that must compile in 32-bit and 64-bit for multiple platforms. A structure takes the address of a buffer, and that address needs to fit in a 32-bit value. Obviously, where possible, these structures will use natural-sized void * or char * pointers; however, for some parts an API specifies the size of these pointers as 32 bit.

    On x86_64 Linux with -m64 -mcmodel=small, both static data and malloc()'d data fit within the 2 GB range. Data on the stack, however, still starts in high memory. So given a small utility _to_32() such as:

        int _to_32(long l) {
            int i = l & 0xffffffff;
            assert(i == l);
            return i;
        }

    then:

        char *cp = malloc(100);
        int a = _to_32(cp);

    will work reliably, as would:

        static char buff[100];
        int a = _to_32(buff);

    but:

        char buff[100];
        int a = _to_32(buff);

    will fail the assert(). Does anyone have a solution for this without writing custom linker scripts? Or any ideas on how to arrange the linker section for stack data? It appears the data is being put in this section in the linker script:

        .lbss :
        {
            *(.dynlbss)
            *(.lbss .lbss.* .gnu.linkonce.lb.*)
            *(LARGE_COMMON)
        }

    Thanks!

    Read the article

  • What are the advantages of using J2EE over ASP.net?

    - by m_oLogin
    We are currently planning to launch a couple of internal web projects in the future. Our company's dev teams are mostly experienced in J2EE and have worked with it for years. Today, we have the choice of launching a couple of our projects on .NET. I have checked out a couple of sources on the net, and it seems like the "J2EE vs ASP.NET" combat brings out as much discord as the oft-seen "Apple vs Microsoft" or "free Eclipse vs Visual Studio"...

    Nevertheless, I have been quite impressed with ASP.NET's ability to create great things with huge simplicity (for example, the ASP.NET AJAX demos). No more tons of XML to play with, no more tons of frameworks to configure (we usually use the famous struts/spring/hibernate combo)... It just seems to me that ASP.NET has some good advantages over J2EE, but then again, I may be speaking out of ignorance. What I want to know is this:

    - What are the real advantages of using J2EE over ASP.NET?
    - Is there anything that cannot be done in ASP.NET that can be done in J2EE?
    - Once the frameworks are all in place and configured, is it faster to develop apps in J2EE than in .NET?
    - Are applications generally easier to maintain in J2EE than in ASP.NET?
    - Is it worth it for some developers to leave their J2EE knowledge on the side and move on to ASP.NET if it does exactly the same thing?

    Read the article

  • Entity Framework 4 and SQL Compact 4: How to generate database?

    - by David Veeneman
    I am developing an app with Entity Framework 4 and SQL Compact 4.0, using a model-first approach. I have created my EDM, and now I want to generate a SQL Compact 4.0 database to act as a data store for the model. I bring up the Generate Database Wizard and click the New Connection button to create a connection for the generated file. The Choose Data Source dialog appears, but SQL Compact 4.0 is not listed among the available data sources.

    I am running VS 2010 SP1 (beta) and I have installed the VS 2010 Tools for SQL Compact 4.0. I can create a SQL Compact 4.0 data connection from the Server Explorer; it is only in the Generate Database Wizard that the 4.0 option doesn't appear. By the way, my SQL Compact 4.0 installation does include System.Data.SqlServerCe.Entity.dll, so I should have the SQL Compact components I need. Am I doing something incorrectly, or is this a bug? Does anyone have a fix or a workaround? Thanks for your help.

    Read the article

  • jQuery's draggable grid

    - by Art
    It looks like the 'grid' option in the constructor of Draggable is bound relative to the original coordinates of the element being dragged. Simply put, if you have three draggable divs with their top set respectively to 100, 200 and 254 pixels relative to their parent:

        <div class="parent-div" style="position: relative;">
          <div id="div1" class="draggable" style="top: 100px; position: absolute;"></div>
          <div id="div2" class="draggable" style="top: 200px; position: absolute;"></div>
          <div id="div3" class="draggable" style="top: 254px; position: absolute;"></div>
        </div>

    and all of them are enabled for dragging with 'grid' set to [1, 100]:

        draggables = $('.draggable');
        $.each(draggables, function(index, elem) {
          $(elem).draggable({
            containment: $('#parent-div'),
            opacity: 0.7,
            revert: 'invalid',
            revertDuration: 300,
            grid: [1, 100],
            refreshPositions: true
          });
        });

    the problem is that as soon as you drag div3 down, its top is increased by 100, moving it to 354px instead of being increased by just 46px (254 + 46 = 300), which would put it at the next stop in the grid, 300px, if we treat the parent-div as the point of reference and grid holder.

    I had a look at the draggable sources and it seems to be a built-in limitation: all the calculations are done relative to the original position of the draggable element. I would like to avoid monkey-patching the code of the draggable library; what I am really looking for here is a way to make Draggable calculate the grid positions relative to the containing parent. However, if monkey-patching is unavoidable, I guess I'll have to live with it. Thanks!

    Read the article

  • How do I find all paths through a set of given nodes in a DAG?

    - by Hanno Fietz
    I have a list of items (the blue nodes in the original diagram) which are categorized by the users of my application. The categories themselves can be grouped and categorized themselves. The resulting structure can be represented as a directed acyclic graph (DAG), where the items are sinks at the bottom of the graph's topology and the top categories are sources. Note that while some of the categories might be well defined, a lot are going to be user defined and might be very messy.

    On that structure, I want to perform the following operations:

    1. find all items (sinks) below a particular node (all items in Europe)
    2. find all paths (if any) that pass through all of a set of n nodes (all items sent via SMTP from example.com)
    3. find all nodes that lie below all of a set of nodes (intersection: goyish brown foods)

    The first seems quite straightforward: start at the node, follow all possible paths to the bottom and collect the items there (a small sketch of this follows below). However, is there a faster approach? Remembering the nodes I have already passed through probably helps avoid unnecessary repetition, but are there more optimizations?

    How do I go about the second one? It seems that the first step would be to determine the height of each node in the set, so as to determine at which one(s) to start, and then find all paths below that which include the rest of the set. But is this the best (or even a good) approach?

    The graph traversal algorithms listed on Wikipedia all seem to be concerned with either finding a particular node, or with the shortest or otherwise most effective route between two nodes. I think neither is what I want, or did I just fail to see how this applies to my problem? Where else should I read?
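
    For the first operation, a plain depth-first collection with memoization is usually enough, since shared sub-DAGs are then visited only once. Below is a rough Python sketch; the adjacency-dict representation and the "Europe"/item names are made up for illustration:

        # Operation 1: collect all sinks (items) reachable below a start node.
        def sinks_below(graph, start, _cache=None):
            if _cache is None:
                _cache = {}
            if start in _cache:                  # memoize: shared sub-DAGs are visited once
                return _cache[start]
            children = graph.get(start, [])
            if not children:                     # no outgoing edges -> this node is an item (sink)
                result = {start}
            else:
                result = set()
                for child in children:
                    result |= sinks_below(graph, child, _cache)
            _cache[start] = result
            return result

        graph = {
            "Europe": ["Germany", "France"],
            "Germany": ["item1", "item2"],
            "France": ["item2", "item3"],
        }
        assert sinks_below(graph, "Europe") == {"item1", "item2", "item3"}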

    Read the article

  • Python optimization problem?

    - by user342079
    Alright, I had this homework recently (don't worry, I've already done it, but in C++) but I got curious how I could do it in Python. The problem is about 2 light sources that emit light; I won't get into details though. Here's the code (that I've managed to optimize a bit in the latter part):

        import math, array
        import numpy as np
        from PIL import Image

        size = (800, 800)
        width, height = size
        s1x = width * 1. / 8
        s1y = height * 1. / 8
        s2x = width * 7. / 8
        s2y = height * 7. / 8
        r, g, b = (255, 255, 255)

        arr = np.zeros((width, height, 3))
        hy = math.hypot
        print 'computing distances (%s by %s)' % size,
        for i in xrange(width):
            if i % (width / 10) == 0: print i,
            if i % 20 == 0: print '.',
            for j in xrange(height):
                d1 = hy(i - s1x, j - s1y)
                d2 = hy(i - s2x, j - s2y)
                arr[i][j] = abs(d1 - d2)
        print ''

        arr2 = np.zeros((width, height, 3), dtype="uint8")
        for ld in [200, 116, 100, 84, 68, 52, 36, 20, 8, 4, 2]:
            print 'now computing image for ld = ' + str(ld)
            arr2 *= 0
            arr2 += abs(arr % ld - ld / 2) * (r, g, b) / (ld / 2)
            print 'saving image...'
            ar2img = Image.fromarray(arr2)
            ar2img.save('ld' + str(ld).rjust(4, '0') + '.png')
            print 'saved as ld' + str(ld).rjust(4, '0') + '.png'

    I have managed to optimize most of it, but there's still a huge performance gap in the part with the two for loops, and I can't seem to think of a way to bypass that using common array operations... I'm open to suggestions :D
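
    One way to close that gap is to compute the whole distance field with NumPy array operations instead of the per-pixel Python loop. The following is an illustrative sketch of just that part, reusing the same variable names and assuming the same 800x800 size and light-source positions as above; it is not the original poster's code:

        import numpy as np

        width, height = 800, 800
        s1x, s1y = width * 1. / 8, height * 1. / 8
        s2x, s2y = width * 7. / 8, height * 7. / 8

        # Index grids: i[a, b] == a and j[a, b] == b, matching the loop variables above.
        i, j = np.meshgrid(np.arange(width), np.arange(height), indexing='ij')
        d1 = np.hypot(i - s1x, j - s1y)
        d2 = np.hypot(i - s2x, j - s2y)
        dist = np.abs(d1 - d2)                              # shape (width, height), no Python loop

        # Same layout as the original arr: the scalar distance copied into all 3 channels.
        arr = np.repeat(dist[:, :, np.newaxis], 3, axis=2)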

    Read the article

  • How to optimize Conway's game of life for CUDA?

    - by nlight
    I've written this CUDA kernel for Conway's game of life:

        __global__ void gameOfLife(float* returnBuffer, int width, int height) {
            unsigned int x = blockIdx.x*blockDim.x + threadIdx.x;
            unsigned int y = blockIdx.y*blockDim.y + threadIdx.y;
            float p = tex2D(inputTex, x, y);
            float neighbors = 0;
            neighbors += tex2D(inputTex, x+1, y);
            neighbors += tex2D(inputTex, x-1, y);
            neighbors += tex2D(inputTex, x, y+1);
            neighbors += tex2D(inputTex, x, y-1);
            neighbors += tex2D(inputTex, x+1, y+1);
            neighbors += tex2D(inputTex, x-1, y-1);
            neighbors += tex2D(inputTex, x-1, y+1);
            neighbors += tex2D(inputTex, x+1, y-1);
            __syncthreads();
            float final = 0;
            if (neighbors < 2)
                final = 0;
            else if (neighbors > 3)
                final = 0;
            else if (p != 0)
                final = 1;
            else if (neighbors == 3)
                final = 1;
            __syncthreads();
            returnBuffer[x + y*width] = final;
        }

    I am looking for errors/optimizations. Parallel programming is quite new to me and I am not sure if I get how to do it right. The rest of the app is: memcpy the input array to a 2D texture inputTex stored in a CUDA array; the output is memcpy-ed from global memory to the host and then dealt with. As you can see, a thread deals with a single pixel. I am unsure if that is the fastest way, as some sources suggest doing a row or more per thread. If I understand correctly, NVIDIA themselves say that the more threads, the better. I would love advice on this from someone with practical experience.
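
    One small observation, sketched below: the two __syncthreads() calls aren't needed here (no thread reads anything another thread wrote within the same launch), and the four-way branch can be collapsed into a single branch-free expression, since the next state is simply "three neighbours, or two neighbours and currently alive". The sketch reads from plain global memory with wrap-around borders; the unsigned char cell encoding and buffer layout are assumptions of this sketch, not the poster's texture-based setup:

        // Minimal sketch of the same update rule, branch-free, with bounds checks.
        __global__ void gameOfLifeSimple(const unsigned char* in, unsigned char* out,
                                         int width, int height)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x >= width || y >= height) return;

            int n = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    if (dx == 0 && dy == 0) continue;
                    int nx = (x + dx + width) % width;    // wrap-around borders (assumption)
                    int ny = (y + dy + height) % height;
                    n += in[ny * width + nx];
                }

            unsigned char alive = in[y * width + x];
            // Equivalent to the original if/else chain: <2 dies, >3 dies, ==3 lives, ==2 keeps state.
            out[y * width + x] = (n == 3) || (n == 2 && alive);
        }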

    Read the article

  • How to setup relationships in contact Database

    - by Walt
    I am currently embarking on a new venture to learn PHP and MySQL. I have done some simple databases in the past using Access, but this one is to be a web-centric database for tracking a myriad of data, including contacts and project information. I will need to link the various tables in various relationships, and I am not sure of the best way to do that. Since I am just starting out with PHP/MySQL, I am researching online sources to learn as much as possible. If anyone has recommendations for books or websites, I would appreciate it.

    In setting up my tables, one major area I am concerned with is contacts. I will have a variety of contacts, including employees, clients, vendors, subcontractors, etc., and a single contact can be of multiple types, with each type having various additional fields that pertain to it. My thought was to have one contacts table that links to other tables for the various contact types. I'm not sure which field types or table setup would be best... Thoughts? This scenario will likely play out in other areas of the database as well, for projects and products. Any pointers/direction would be appreciated. WES

    Read the article

  • how to use a parameterized function for the Default Binding of a Sql Server column

    - by Walt Gaber
    I have a table that catalogs selected files from multiple sources. I want to record whether a file is a duplicate of a previously cataloged file at the time the new file is cataloged. I have a column in my table ("primary_duplicate") to record each entry as 'P' (primary) or 'D' (duplicate). I would like to provide a Default Binding for this column that would check for other occurrences of this file (i.e. name, length, timestamp) at the time the new file is being recorded.

    I have created a function that performs this check (see "GetPrimaryDuplicate" below), but I don't know how to bind this function, which requires three parameters, to the table's "primary_duplicate" column as its Default Binding. I would like to avoid using a trigger. I currently have a stored procedure used to insert new records that performs this check, but I would like to ensure that the flag is set correctly if an insert is performed outside of this stored procedure. How can I call this function with values from the row that is being inserted?

        USE [MyDatabase]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[FileCatalog](
            [id] [uniqueidentifier] NOT NULL,
            [catalog_timestamp] [datetime] NOT NULL,
            [primary_duplicate] nchar(1) NOT NULL,
            [name] nvarchar(255) NULL,
            [length] [bigint] NULL,
            [timestamp] [datetime] NULL
        ) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[FileCatalog] ADD CONSTRAINT [DF_FileCatalog_id]
            DEFAULT (newid()) FOR [id]
        GO
        ALTER TABLE [dbo].[FileCatalog] ADD CONSTRAINT [DF_FileCatalog_catalog_timestamp]
            DEFAULT (getdate()) FOR [catalog_timestamp]
        GO
        ALTER TABLE [dbo].[FileCatalog] ADD CONSTRAINT [DF_FileCatalog_primary_duplicate]
            DEFAULT (N'GetPrimaryDuplicate(name, length, timestamp)') FOR [primary_duplicate]
        GO

        USE [MyDatabase]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE FUNCTION [dbo].[GetPrimaryDuplicate]
        (
            @name nvarchar(255),
            @length bigint,
            @timestamp datetime
        )
        RETURNS nchar(1)
        AS
        BEGIN
            DECLARE @c int
            SELECT @c = COUNT(*)
            FROM FileCatalog
            WHERE name = @name AND length = @length AND timestamp = @timestamp
              AND primary_duplicate = 'P'
            IF @c > 0
                RETURN 'D' -- Duplicate
            RETURN 'P' -- Primary
        END
        GO

    Read the article

  • ASP.NET SqlDataSource update and create FK reference

    - by William
    The short version: I have a GridView bound to a data source whose SelectCommand contains a left join, because the FK can be null. On Update I want to create a record in the FK table if the FK is null, and then update the parent table with the new record's ID. Is this possible to do with just SqlDataSources?

    The detailed version: I have two tables, Company and Address. The column Company.AddressId can be null. On my ascx page I am using a SqlDataSource to select a left join of Company and Address, and a GridView to display the results. By having the UpdateCommand and DeleteCommand of the SqlDataSource execute two statements separated by a semicolon, I am able to use the GridView's Edit and Delete functionality to update both tables simultaneously.

    The problem I have is when Company.AddressId is null. What I need is for the data source to create a record in the Address table, update the Company table with the new Address.ID, and then proceed with the update as usual. I would like to do this with just data sources if possible, for consistency/simplicity's sake. Is it possible to have my data source do this, or perhaps add a second data source to the page to handle some of it? Once I have that working I can probably figure out how to make it work with the InsertCommand as well, but if you are on a roll and have an answer for that too, feel free to provide it. Thanks.

    Read the article

  • Java-Eclipse-Spring 3.1 - the fastest way to get familiar with this set

    - by Leron
    I know almost all of you, at some point in your life as a programmer, get to the point where you know (more or less) different technologies/languages/IDEs, and a time comes when you want to put them together and start using them at once: first, more efficiently, and second, closer to the real-life situation, where just knowing Java, or having some experience with Eclipse, means nothing on its own, and what makes you a programmer worth something is the ability to work with the combination of two or more of them.

    With this in mind, here is my question: what do you think is the optimal way of getting into the Java + Eclipse + Spring 3.1 world? I've read, and I've read a lot. I started writing real code, but almost every step is reinventing the wheel again and again, wondering how to do things you know are somewhat trivial, because you've missed that one article where the topic was discussed, and so on.

    I don't mind paying for a good tutorial. For example, after a bit of research I decided that instead of losing a lot of time getting the different parts together, I'd rather pay for the videos at http://knpuniversity.com/screencast/starting-in-symfony2-tutorial and save myself a lot of time (I hope) and get to writing real code as fast as possible, instead of wondering what does what and so on. But I find it much more difficult to find such sources of information for something more specific like this, and that's the reason I'm asking this question.

    I know a lot of you went through it the hard way, and I won't give up if I have to do the same, but to be honest I really hope to get posts with good tutorials on the subject (paid or not), because in my situation time is literally money. Thanks, Leron

    Read the article
