Search Results

Search found 17068 results on 683 pages for 'array merge'.


  • 'array bound is not an integer constant' when defining size of array in class, using an element of a const array

    - by user574733
        #ifndef QWERT_H
        #define QWERT_H
        const int x [] = {1, 2,};
        const int z = 3;
        #endif

        #include <iostream>
        #include "qwert.h"

        class Class
        {
            int y [x[0]]; // error: array bound is not an integer constant
            int g [z];    // no problem
        };

        int main ()
        {
            int y [x[0]]; // no problem
            Class a_class;
        }

    I can't figure out why this doesn't work. Other people with this problem seem to be trying to dynamically allocate arrays. Any help is much appreciated.
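
    A plausible explanation, with a hedged workaround: in C++03 an element of a const array (x[0]) is not an integral constant expression, even though a plain const int (z) is, so it cannot size a class member array; inside main the compiler is likely accepting int y [x[0]] only as a variable-length-array extension. A minimal sketch of a fix, assuming a C++11 compiler is available:

        // qwert.h: constexpr (C++11) makes x[0] a constant expression
        constexpr int x [] = {1, 2};
        constexpr int z = 3;

        class Class
        {
            int y [x[0]]; // OK now
            int g [z];    // still fine
        };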

    Read the article

  • Parallelism in .NET – Part 11, Divide and Conquer via Parallel.Invoke

    - by Reed
    Many algorithms are easily written to work via recursion.  For example, most data-oriented tasks where a tree of data must be processed are much more easily handled by starting at the root, and recursively “walking” the tree.  Some algorithms work this way on flat data structures, such as arrays, as well.  This is a form of divide and conquer: an algorithm design which is based around breaking up a set of work recursively, “dividing” the total work in each recursive step, and “conquering” the work when the remaining work is small enough to be solved easily.

    Recursive algorithms, especially ones based on a form of divide and conquer, are often very good candidates for parallelization.  This is apparent from a common sense standpoint.  Since we’re dividing up the total work in the algorithm, we have an obvious, built-in partitioning scheme.  Once partitioned, the data can be worked upon independently, so there is good, clean isolation of data.

    Implementing this type of algorithm is fairly simple.  The Parallel class in .NET 4 includes a method suited for this type of operation: Parallel.Invoke.  This method works by taking any number of delegates defined as an Action, and operating them all in parallel.  The method returns when every delegate has completed:

        Parallel.Invoke(
            () => { Console.WriteLine("Action 1 executing in thread {0}",
                        Thread.CurrentThread.ManagedThreadId); },
            () => { Console.WriteLine("Action 2 executing in thread {0}",
                        Thread.CurrentThread.ManagedThreadId); },
            () => { Console.WriteLine("Action 3 executing in thread {0}",
                        Thread.CurrentThread.ManagedThreadId); }
        );

    Running this simple example demonstrates the ease of using this method.  For example, on my system, I get three separate thread IDs when running the above code.  By allowing any number of delegates to be executed directly, concurrently, the Parallel.Invoke method provides us an easy way to parallelize any algorithm based on divide and conquer.  We can divide our work in each step, and execute each task in parallel, recursively.

    For example, suppose we wanted to implement our own quicksort routine.  The quicksort algorithm can be designed based on divide and conquer.  In each iteration, we pick a pivot point, and use that to partition the total array.  We swap the elements around the pivot, then recursively sort the lists on each side of the pivot.
    For example, let’s look at this simple, sequential implementation of quicksort:

        public static void QuickSort<T>(T[] array) where T : IComparable<T>
        {
            QuickSortInternal(array, 0, array.Length - 1);
        }

        private static void QuickSortInternal<T>(T[] array, int left, int right)
            where T : IComparable<T>
        {
            if (left >= right)
            {
                return;
            }

            SwapElements(array, left, (left + right) / 2);

            int last = left;
            for (int current = left + 1; current <= right; ++current)
            {
                if (array[current].CompareTo(array[left]) < 0)
                {
                    ++last;
                    SwapElements(array, last, current);
                }
            }

            SwapElements(array, left, last);

            QuickSortInternal(array, left, last - 1);
            QuickSortInternal(array, last + 1, right);
        }

        static void SwapElements<T>(T[] array, int i, int j)
        {
            T temp = array[i];
            array[i] = array[j];
            array[j] = temp;
        }

    Here, we implement the quicksort algorithm in a very common, divide and conquer approach.  Running this against the built-in Array.Sort routine shows that we get the exact same answers (although the framework’s sort routine is slightly faster).  On my system, for example, I can use the framework’s sort to sort ten million random doubles in about 7.3s, and this implementation takes about 9.3s on average.

    Looking at this routine, though, there is a clear opportunity to parallelize.  At the end of QuickSortInternal, we recursively call into QuickSortInternal with each partition of the array after the pivot is chosen.  This can be rewritten to use Parallel.Invoke by simply changing it to:

        // Code above is unchanged...
        SwapElements(array, left, last);

        Parallel.Invoke(
            () => QuickSortInternal(array, left, last - 1),
            () => QuickSortInternal(array, last + 1, right)
        );
        }

    This routine will now run in parallel.  When executing, we now see the CPU usage across all cores spike while it executes.  However, there is a significant problem here – by parallelizing this routine, we took it from an execution time of 9.3s to an execution time of approximately 14 seconds!  We’re using more resources, as seen in the CPU usage, but the overall result is a dramatic slowdown in overall processing time.

    This occurs because parallelization adds overhead.  Each time we split this array, we spawn two new tasks to parallelize this algorithm!  This is far, far too many tasks for our cores to operate upon at a single time.  In effect, we’re “over-parallelizing” this routine.  This is a common problem when working with divide and conquer algorithms, and leads to an important observation: when parallelizing a recursive routine, take special care not to add more tasks than necessary to fully utilize your system.

    This can be done with a few different approaches, in this case.  Typically, the way to handle this is to stop parallelizing the routine at a certain point, and revert back to the serial approach.  Since the first few recursions will all still be parallelized, our “deeper” recursive tasks will be running in parallel, and can take full advantage of the machine.  This also dramatically reduces the overhead added by parallelizing, since we’re only adding overhead for the first few recursive calls.  There are two basic approaches we can take here.  The first approach would be to look at the total work size, and if it’s smaller than a specific threshold, revert to our serial implementation.  In this case, we could just check right - left, and if it’s under a threshold, call the methods directly instead of using Parallel.Invoke.
    The second approach is to track how “deep” in the “tree” we currently are, and if we are below some number of levels, stop parallelizing.  This approach is more general-purpose, since it works on routines which parse trees as well as routines working off of a single array, but may not work as well if a poor partitioning strategy is chosen or the tree is not balanced evenly.

    This can be written very easily.  If we pass a maxDepth parameter into our internal routine, we can restrict the number of times we parallelize by changing the recursive call to:

        // Code above is unchanged...
        SwapElements(array, left, last);

        if (maxDepth < 1)
        {
            QuickSortInternal(array, left, last - 1, maxDepth);
            QuickSortInternal(array, last + 1, right, maxDepth);
        }
        else
        {
            --maxDepth;
            Parallel.Invoke(
                () => QuickSortInternal(array, left, last - 1, maxDepth),
                () => QuickSortInternal(array, last + 1, right, maxDepth));
        }

    We no longer allow this to parallelize indefinitely – only to a specific depth, at which time we revert to a serial implementation.  By starting the routine with a maxDepth equal to Environment.ProcessorCount, we can restrict the total amount of parallel operations significantly, but still provide adequate work for each processing core.

    With this final change, my timings are much better.  On average, I get the following timings:

        Framework via Array.Sort:                  7.3 seconds
        Serial Quicksort Implementation:           9.3 seconds
        Naive Parallel Implementation:             14 seconds
        Parallel Implementation Restricting Depth: 4.7 seconds

    Finally, we are now faster than the framework’s Array.Sort implementation.
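
    The updated public entry point is not shown in the article; a minimal sketch of how it might look, assuming the depth budget is seeded with Environment.ProcessorCount as described above:

        // Sketch only: seeds the recursion with a depth budget, so at most
        // 2^maxDepth parallel subtrees are ever created.
        public static void QuickSort<T>(T[] array) where T : IComparable<T>
        {
            QuickSortInternal(array, 0, array.Length - 1, Environment.ProcessorCount);
        }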

    Read the article

  • Merge FDF data into a PDF file using PHP

    - by James
    Is it possible to merge FDF data with a PDF file using PHP alone? Or is there no option but to use a 3rd party command line tool to achieve this? If that is the case, can someone point me in the direction of one? I am currently outputting the FDF file to the browser in the hope that it will redirect the user to the filled-in PDF, but for some people that is not the case. The FDF contents are being output to the screen, even though I am using header('Content-type: application/vnd.fdf');
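
    One commonly cited command-line route, should PHP alone prove insufficient: the pdftk utility can fill a PDF form from an FDF file, and PHP can invoke it and stream back the result. A minimal sketch (all file names are hypothetical):

        // Assumes pdftk is installed on the server.
        $cmd = 'pdftk form.pdf fill_form data.fdf output filled.pdf flatten';
        exec($cmd, $out, $status);
        if ($status === 0) {
            header('Content-Type: application/pdf');
            readfile('filled.pdf'); // serve the merged PDF instead of raw FDF
        }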

    Read the article

  • qsort on an array of pointers to Objective-C objects

    - by ElBueno
    I have an array of pointers to Objective-C objects. These objects have a sort key associated with them. I'm trying to use qsort to sort the array of pointers to these objects. However, the first time my comparator is called, the first argument points to the first element in my array, but the second argument points to garbage, giving me an EXC_BAD_ACCESS when I try to access its sort key. Here is my code (paraphrased):

        - (void)foo:(int)numThingies {
            Thingie **array;
            array = malloc(sizeof(deck[0]) * numThingies);
            for (int i = 0; i < numThingies; i++) {
                array[i] = [[Thingie alloc] initWithSortKey:(float)random()/RAND_MAX];
            }
            qsort(array[0], numThingies, sizeof(array[0]), thingieCmp);
        }

        int thingieCmp(const void *a, const void *b) {
            const Thingie *ia = (const Thingie *)a;
            const Thingie *ib = (const Thingie *)b;
            if (ia.sortKey > ib.sortKey)
                return 1; // ib points to garbage, so ib.sortKey produces the EXC_BAD_ACCESS
            else
                return -1;
        }

    Any ideas why this is happening?
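
    A sketch of the likely fix: qsort expects the base of the array (array, not array[0]), and it hands the comparator pointers to the elements; since each element is itself a Thingie *, each argument is really a Thingie * const *:

        // Pass the array itself, not its first element.
        qsort(array, numThingies, sizeof(array[0]), thingieCmp);

        int thingieCmp(const void *a, const void *b) {
            Thingie *ia = *(Thingie * const *)a; // dereference once: each element is a Thingie *
            Thingie *ib = *(Thingie * const *)b;
            if (ia.sortKey > ib.sortKey)
                return 1;
            if (ia.sortKey < ib.sortKey)
                return -1;
            return 0;
        }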

    Read the article

  • Merge items in nanoc

    - by Gordon Potter
    I have been trying to use nanoc for generating a static website. I need to organize a complex arrangement of pages, and I want to keep my content DRY. How does the concept of includes or merges work within the nanoc system? I have read the docs but I can't seem to find what I want. For example: how can I take two partial content items and merge them together into a new content item? In staticmatic you can do something like the following inside your page:

        = partial('partials/shared/navigation')

    How would a similar convention work within nanoc?
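
    For what it's worth, the closest nanoc 3 equivalent appears to be its Rendering helper, which lets a layout be rendered inline as a partial; a sketch, assuming an ERB-filtered page and a layout named partials/navigation:

        # lib/helpers.rb
        include Nanoc3::Helpers::Rendering

        # then, inside a page:
        # <%= render 'partials/navigation' %>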

    Read the article

  • Receiving "MERGE" 200 OK error when committing using trac-post-commit-hook

    - by Lyon Blecher
    When running a commit with the trac-post-commit-hook, I receive a MERGE 200 OK error. I understand that this means the commit has succeeded on the server but the file status has not updated on my local machine, but I can't find any way to fix this issue. Would this be a problem with my setup or something in the script? I'm using the stock standard script from the trac site, and I'm committing through TortoiseSVN to VisualSVN Server, which is hosted on a Windows 2008 server. When I run the script through a command line I receive no errors; I only receive this error through TortoiseSVN.

    Read the article

  • Atomic INSERT/SELECT in HSQLDB

    - by PartlyCloudy
    Hello, I have the following hsqldb table, in which I map UUIDs to auto incremented IDs:

        SHORT_ID (BIGINT, PK, auto incremented) | UUID (VARCHAR, unique)

    Create command:

        CREATE TABLE table (
            SHORT_ID BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
            UUID VARCHAR(36) UNIQUE
        )

    In order to add new pairs concurrently, I want to use the atomic MERGE INTO statement. So my (prepared) statement looks like this:

        MERGE INTO table
        USING (VALUES(CAST(? AS VARCHAR(36)))) AS v(x)
        ON ID_MAP.UUID = v.x
        WHEN NOT MATCHED THEN INSERT VALUES v.x

    When I execute the statement (setting the placeholder correctly), I always get a

        Caused by: org.hsqldb.HsqlException: row column count mismatch

    Could you please give me a hint as to what is going wrong here? Thanks in advance.
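
    A hedged reading of the error: the target table has two columns (SHORT_ID, UUID), but the INSERT branch supplies only one value, hence the row column count mismatch; naming the target column explicitly should align the counts (identifiers as in the question, assuming the table is actually named ID_MAP as the ON clause suggests):

        MERGE INTO ID_MAP
        USING (VALUES(CAST(? AS VARCHAR(36)))) AS v(x)
        ON ID_MAP.UUID = v.x
        WHEN NOT MATCHED THEN INSERT (UUID) VALUES (v.x)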

    Read the article

  • how to merge xml string to main xml document object

    - by CliffC
    How can I merge the following xml string

        <employee>
            <name>cliff</name>
        </employee>

    into my existing xml document object?

        XmlDocument xmlDoc = new XmlDocument();
        XmlElement xmlCompany = xmlDoc.CreateElement("Company");

    The final output should look like:

        <Company>
            <employee>
                <name>cliff</name>
            </employee>
        </Company>

    Thanks
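
    A minimal sketch of one way to do this, using XmlDocumentFragment to parse the loose string and graft it into the existing document:

        XmlDocument xmlDoc = new XmlDocument();
        XmlElement xmlCompany = xmlDoc.CreateElement("Company");
        xmlDoc.AppendChild(xmlCompany);

        // Parse the string into a fragment owned by xmlDoc...
        XmlDocumentFragment fragment = xmlDoc.CreateDocumentFragment();
        fragment.InnerXml = "<employee><name>cliff</name></employee>";

        // ...then attach it under <Company>.
        xmlCompany.AppendChild(fragment);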

    Read the article

  • Subclipse > Accidental Merge Conflict Resolution

    - by DTS
    I'm trying to merge changes from one branch into another using Subclipse. On a particular file in a particular subdirectory, I had a file conflict and edited the conflicts via the context menu option for this. However, when I went to resolve the conflict I apparently chose the wrong option and was left with the original unmerged file in my branch. Since then, I can no longer get this file back into a conflicted state so I can resolve this issue properly. I've tried deleting the file and the directory that contains it, to no avail. Any ideas?

    Read the article

  • Need to merge multiple pdf's into a single PDF with Table Of Contents sections

    - by Jason
    We will have 50-100 single PDFs that we'll be generating with a PHP script. PDFs are generally grouped into groups of 10-20. Each group needs to have its own Table of Contents or Index, and then there also needs to be a Master Table of Contents or Index at the beginning. Or, if that is too difficult, we could get away with a single Table of Contents at the beginning. What's the best way to go about this? Will we need to create the Table of Contents, export that to PDF, append it to the beginning, and mash the rest of the files after that? Or is there a better solution? And what's the best tool for us to merge the PDFs? This will be running on a Linux server.
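
    For the concatenation step itself, one widely used Linux tool is pdftk; a sketch of stitching a generated TOC in front of each group, then the groups together (file names are examples only):

        # Generate toc1.pdf from PHP first, then assemble a group behind it.
        pdftk toc1.pdf doc01.pdf doc02.pdf doc03.pdf cat output group1.pdf

        # Same idea for the master document.
        pdftk master-toc.pdf group1.pdf group2.pdf cat output final.pdf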

    Read the article

  • MERGE -v- UPSERT

    - by Kevin Ross
    Hi, I have an application I’m writing in Access with a SQL Server backend. One of the most heavily used parts is where the user selects an answer to a question; a stored procedure is then fired which sees if an answer has already been given: if it has, an UPDATE is executed, if not, an INSERT is executed. This works just fine, but now that we have upgraded to SQL Server 2008 Express, I was wondering if it would be better/quicker/more efficient to rewrite this SP to use the new MERGE command. Does anyone have any idea if this is faster than doing a SELECT followed by either an INSERT or UPDATE?
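
    For reference, a minimal sketch of what the MERGE version of such a procedure might look like (table and column names are hypothetical, not from the question):

        MERGE dbo.Answers AS target
        USING (SELECT @QuestionId AS QuestionId, @UserId AS UserId, @Answer AS Answer) AS src
            ON target.QuestionId = src.QuestionId AND target.UserId = src.UserId
        WHEN MATCHED THEN
            UPDATE SET Answer = src.Answer
        WHEN NOT MATCHED THEN
            INSERT (QuestionId, UserId, Answer)
            VALUES (src.QuestionId, src.UserId, src.Answer);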

    Read the article

  • Can't find how to import as one object or how to merge

    - by Aaron
    I need to write a script in Blender that creates some birds which fly around some obstacles. The problem is that I need to import a pretty large Collada model (a building) which consists of multiple objects. The import works fine, but the building is not seen as one object. I need to resize and move this building, but I can only get the last object in the building (which is a camera)... Does anyone know how to merge this building into one object, group, variable... so I can resize and move it correctly? Part of the code I used:

        bpy.ops.wm.collada_import(filepath="C:\\Users\\me\\building.dae")
        building = bpy.context.object
        building.scale = (100, 100, 100)
        building.name = "building"
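
    A sketch of one approach, assuming a pre-2.8 Blender API (as in the question's era) and that the Collada importer leaves everything it created selected: keep only the meshes selected, make one of them active, and join the selection into a single object:

        import bpy

        bpy.ops.wm.collada_import(filepath="C:\\Users\\me\\building.dae")

        # Keep only mesh objects in the selection (drop the camera etc.).
        meshes = [o for o in bpy.context.selected_objects if o.type == 'MESH']
        for o in bpy.context.selected_objects:
            o.select = (o.type == 'MESH')

        # Join all selected meshes into the active object.
        bpy.context.scene.objects.active = meshes[0]
        bpy.ops.object.join()

        building = bpy.context.scene.objects.active
        building.scale = (100, 100, 100)
        building.name = "building"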

    Read the article

  • How to merge two different Makefiles?

    - by martijnn2008
    I have done some reading on "Merging Makefiles"; one suggestion is that I should leave the two Makefiles separate, in different folders [1]. To me this looks counterintuitive, because I have the following situation: I have 3 source files (main.cpp, flexibility.cpp, constraints.cpp), and one of them (flexibility.cpp) makes use of the COIN-OR Linear Programming library (Clp). Installing this library on my computer produces sample Makefiles; I have adjusted one of them, and it currently builds a good working binary:

        # Copyright (C) 2006 International Business Machines and others.
        # All Rights Reserved.
        # This file is distributed under the Eclipse Public License.

        # $Id: Makefile.in 726 2006-04-17 04:16:00Z andreasw $

        ##########################################################################
        # You can modify this example makefile to fit for your own program.     #
        # Usually, you only need to change the five CHANGEME entries below.     #
        ##########################################################################

        # To compile other examples, either change the following line, or
        # add the argument DRIVER=problem_name to make
        DRIVER = main

        # CHANGEME: This should be the name of your executable
        EXE = clp

        # CHANGEME: Here is the name of all object files corresponding to the source
        # code that you wrote in order to define the problem statement
        OBJS = $(DRIVER).o constraints.o flexibility.o

        # CHANGEME: Additional libraries
        ADDLIBS =

        # CHANGEME: Additional flags for compilation (e.g., include flags)
        ADDINCFLAGS =

        # CHANGEME: Directory to the sources for the (example) problem definition
        # files
        SRCDIR = .

        ##########################################################################
        # Usually, you don't have to change anything below.  Note that if you   #
        # change certain compiler options, you might have to recompile the      #
        # COIN package.                                                         #
        ##########################################################################

        COIN_HAS_PKGCONFIG = TRUE
        COIN_CXX_IS_CL = #TRUE
        COIN_HAS_SAMPLE = TRUE
        COIN_HAS_NETLIB = #TRUE

        # C++ Compiler command
        CXX = g++

        # C++ Compiler options
        CXXFLAGS = -O3 -pipe -DNDEBUG -pedantic-errors -Wparentheses -Wreturn-type -Wcast-qual -Wall -Wpointer-arith -Wwrite-strings -Wconversion -Wno-unknown-pragmas -Wno-long-long -DCLP_BUILD

        # additional C++ Compiler options for linking
        CXXLINKFLAGS = -Wl,--rpath -Wl,/home/martijn/Downloads/COIN/coin-Clp/lib

        # C Compiler command
        CC = gcc

        # C Compiler options
        CFLAGS = -O3 -pipe -DNDEBUG -pedantic-errors -Wimplicit -Wparentheses -Wsequence-point -Wreturn-type -Wcast-qual -Wall -Wno-unknown-pragmas -Wno-long-long -DCLP_BUILD

        # Sample data directory
        ifeq ($(COIN_HAS_SAMPLE), TRUE)
          ifeq ($(COIN_HAS_PKGCONFIG), TRUE)
            CXXFLAGS += -DSAMPLEDIR=\"`PKG_CONFIG_PATH=/home/martijn/Downloads/COIN/coin-Clp/lib64/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/share/pkgconfig: pkg-config --variable=datadir coindatasample`\"
            CFLAGS += -DSAMPLEDIR=\"`PKG_CONFIG_PATH=/home/martijn/Downloads/COIN/coin-Clp/lib64/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/share/pkgconfig: pkg-config --variable=datadir coindatasample`\"
          else
            CXXFLAGS += -DSAMPLEDIR=\"\"
            CFLAGS += -DSAMPLEDIR=\"\"
          endif
        endif

        # Netlib data directory
        ifeq ($(COIN_HAS_NETLIB), TRUE)
          ifeq ($(COIN_HAS_PKGCONFIG), TRUE)
            CXXFLAGS += -DNETLIBDIR=\"`PKG_CONFIG_PATH=/home/martijn/Downloads/COIN/coin-Clp/lib64/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/share/pkgconfig: pkg-config --variable=datadir coindatanetlib`\"
            CFLAGS += -DNETLIBDIR=\"`PKG_CONFIG_PATH=/home/martijn/Downloads/COIN/coin-Clp/lib64/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/share/pkgconfig: pkg-config --variable=datadir coindatanetlib`\"
          else
            CXXFLAGS += -DNETLIBDIR=\"\"
            CFLAGS += -DNETLIBDIR=\"\"
          endif
        endif

        # Include directories (we use the CYGPATH_W variables to allow compilation with Windows compilers)
        ifeq ($(COIN_HAS_PKGCONFIG), TRUE)
          INCL = `PKG_CONFIG_PATH=/home/martijn/Downloads/COIN/coin-Clp/lib64/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/share/pkgconfig: pkg-config --cflags clp`
        else
          INCL =
        endif
        INCL += $(ADDINCFLAGS)

        # Linker flags
        ifeq ($(COIN_HAS_PKGCONFIG), TRUE)
          LIBS = `PKG_CONFIG_PATH=/home/martijn/Downloads/COIN/coin-Clp/lib64/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/share/pkgconfig: pkg-config --libs clp`
        else
          ifeq ($(COIN_CXX_IS_CL), TRUE)
            LIBS = -link -libpath:`$(CYGPATH_W) /home/martijn/Downloads/COIN/coin-Clp/lib` libClp.lib
          else
            LIBS = -L/home/martijn/Downloads/COIN/coin-Clp/lib -lClp
          endif
        endif

        # The following is necessary under cygwin, if native compilers are used
        CYGPATH_W = echo

        # Here we list all possible generated objects or executables to delete them
        CLEANFILES = clp \
                main.o \
                flexibility.o \
                constraints.o

        all: $(EXE)

        .SUFFIXES: .cpp .c .o .obj

        $(EXE): $(OBJS)
            bla=;\
            for file in $(OBJS); do bla="$$bla `$(CYGPATH_W) $$file`"; done; \
            $(CXX) $(CXXLINKFLAGS) $(CXXFLAGS) -o $@ $$bla $(LIBS) $(ADDLIBS)

        clean:
            rm -rf $(CLEANFILES)

        .cpp.o:
            $(CXX) $(CXXFLAGS) $(INCL) -c -o $@ `test -f '$<' || echo '$(SRCDIR)/'`$<

        .cpp.obj:
            $(CXX) $(CXXFLAGS) $(INCL) -c -o $@ `if test -f '$<'; then $(CYGPATH_W) '$<'; else $(CYGPATH_W) '$(SRCDIR)/$<'; fi`

        .c.o:
            $(CC) $(CFLAGS) $(INCL) -c -o $@ `test -f '$<' || echo '$(SRCDIR)/'`$<

        .c.obj:
            $(CC) $(CFLAGS) $(INCL) -c -o $@ `if test -f '$<'; then $(CYGPATH_W) '$<'; else $(CYGPATH_W) '$(SRCDIR)/$<'; fi`

    The other Makefile compiles a lot of code and makes use of bison and flex. It was also written by someone else, but I am able to alter it when I want to add some code. It also produces a binary:

        CFLAGS=-Wall
        LDLIBS=-LC:/GnuWin32/lib -lfl -lm

        LSOURCES=lex.l
        YSOURCES=grammar.ypp
        CSOURCES=debug.cpp esta_plus.cpp heap.cpp main.cpp stjn.cpp timing.cpp tmsp.cpp token.cpp chaining.cpp flexibility.cpp exceptions.cpp
        HSOURCES=$(CSOURCES:.cpp=.h) includes.h
        OBJECTS=$(LSOURCES:.l=.o) $(YSOURCES:.ypp=.tab.o) $(CSOURCES:.cpp=.o)

        all: solver

        solver: CFLAGS+=-g -O0 -DDEBUG
        solver: $(OBJECTS) main.o debug.o
            g++ $(CFLAGS) -o $@ $^ $(LDLIBS)

        solver.release: CFLAGS+=-O5
        solver.release: $(OBJECTS) main.o
            g++ $(CFLAGS) -o $@ $^ $(LDLIBS)

        %.o: %.cpp
            g++ -c $(CFLAGS) -o $@ $<

        lex.cpp: lex.l grammar.tab.cpp grammar.tab.hpp
            flex -o$@ $<

        %.tab.cpp %.tab.hpp: %.ypp
            bison --verbose -d $<

        ifneq ($(LSOURCES),)
        $(LSOURCES:.l=.cpp): $(YSOURCES:.y=.tab.h)
        endif

        -include $(OBJECTS:.o=.d)

        clean:
            rm -f $(OBJECTS) $(OBJECTS:.o=.d) $(YSOURCES:.ypp=.tab.cpp) $(YSOURCES:.ypp=.tab.hpp) $(YSOURCES:.ypp=.output) $(LSOURCES:.l=.cpp) solver solver.release 2>/dev/null

        .PHONY: all clean debug release

    Both of these Makefiles are hard for me to understand; I don't know exactly what they do. What I want is to merge the two of them so that I get only one binary, and the code compiled by the second Makefile should be the result. I want to add flexibility.cpp and constraints.cpp to the second Makefile, but when I do, I get the following problem:

        flexibility.h:4:26: fatal error: ClpSimplex.hpp: No such file or directory
        #include "ClpSimplex.hpp"

    So the compiler can't find the Clp library. I also tried to copy-paste more code from the first Makefile into the second, but it still gives me that same error.

    Q: Can you please help me with merging the two Makefiles, or point out a more elegant way?

    Q: In this case, is it indeed better to merge the two Makefiles?

    I also tried to use cmake, but I gave up on that one quickly, because I don't know much about flex and bison.
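
    A hedged sketch of the likely fix, short of a full merge: the second Makefile never passes the Clp include or link flags to the compiler, which is exactly what the first Makefile obtains from pkg-config. Reusing that mechanism (paths as in the first Makefile) may be all that is needed:

        # Additions to the second Makefile (a sketch, assuming Clp's pkg-config
        # files live at the path used in the first Makefile).
        CLP_PKGCONFIG=/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig
        CLP_CFLAGS=`PKG_CONFIG_PATH=$(CLP_PKGCONFIG) pkg-config --cflags clp`
        CLP_LIBS=`PKG_CONFIG_PATH=$(CLP_PKGCONFIG) pkg-config --libs clp`

        # Mirror the first Makefile's -DCLP_BUILD define.
        CFLAGS+=$(CLP_CFLAGS) -DCLP_BUILD
        LDLIBS+=$(CLP_LIBS)

    With these in place, adding flexibility.cpp and constraints.cpp to CSOURCES should let flexibility.o compile (the -I flags from pkg-config now locate ClpSimplex.hpp) and let solver link against libClp.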

    Read the article

  • How to merge two tables based on common column and sort the results by date

    - by techiepark
    Hello friends, I have two MySQL tables and I want to merge the results of these two tables based on the common column rev_id. The merged results should be sorted by the date columns of the two tables. Please help me.

        CREATE TABLE `reply` (
            `id` int(3) NOT NULL auto_increment,
            `name` varchar(25) NOT NULL default '',
            `member_id` varchar(45) NOT NULL,
            `rev_id` int(3) NOT NULL default '0',
            `description` text,
            `post_date` timestamp NOT NULL default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP,
            `flag` char(2) NOT NULL default 'N',
            PRIMARY KEY (`id`),
            KEY `member_id` (`member_id`)
        ) ENGINE=MyISAM;

        CREATE TABLE `comment` (
            `com_id` int(8) NOT NULL auto_increment,
            `rev_id` int(5) NOT NULL default '0',
            `member_id` varchar(50) NOT NULL,
            `comm_desc` text NOT NULL,
            `date_created` timestamp NOT NULL default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP,
            PRIMARY KEY (`com_id`),
            KEY `member_id` (`member_id`)
        ) ENGINE=MyISAM;
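
    A sketch of one way to interleave the two tables by date with UNION ALL, aligning the differing column names (add a WHERE rev_id = ... to each branch if only one rev_id is wanted):

        SELECT rev_id, member_id, description AS body, post_date AS created_at
        FROM reply
        UNION ALL
        SELECT rev_id, member_id, comm_desc AS body, date_created AS created_at
        FROM comment
        ORDER BY created_at;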

    Read the article

  • How to merge two FBOs?

    - by DevDevDev
    OK so I have 4 buffers, 3 FBOs and a render buffer. Let me explain. I have a view FBO, which will store the scene before I render it to the render buffer. I have a background buffer, which contains the background of the scene. I have a user buffer, which the user manipulates. When the user makes some action I draw to the user buffer, using some blending. Then to redraw the whole scene what I want to do is clear the view buffer, draw the background buffer to the view buffer, change the blending, then draw the user buffer to the view buffer. Finally render the view buffer to the render buffer. However I can't figure out how to draw a FBO to another FBO. What I want to do is essentially merge and blend two FBOs, but I can't figure out how! I'm very new to OpenGL ES, so thanks for all the help.
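
    A sketch of the usual pattern, assuming each FBO has a texture attached as its color buffer and ES 2.0-style calls; drawTexturedQuad and all the *Fbo/*Texture names are hypothetical:

        // Compose into the view FBO.
        glBindFramebuffer(GL_FRAMEBUFFER, viewFbo);
        glClear(GL_COLOR_BUFFER_BIT);

        // Draw the background FBO's color texture, no blending.
        glDisable(GL_BLEND);
        glBindTexture(GL_TEXTURE_2D, backgroundTexture);
        drawTexturedQuad(); // hypothetical fullscreen-quad helper

        // Draw the user FBO's color texture on top, with blending.
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glBindTexture(GL_TEXTURE_2D, userTexture);
        drawTexturedQuad();

        // Finally, draw the view texture into the FBO that owns the render buffer.
        glBindFramebuffer(GL_FRAMEBUFFER, renderFbo);
        glBindTexture(GL_TEXTURE_2D, viewTexture);
        drawTexturedQuad();

    The key point: an FBO cannot be drawn into another FBO directly, but a texture attached to one FBO can be sampled while a different FBO is bound as the render target.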

    Read the article

  • @OneToMany property null in Entity after (second) merge

    - by iNPUTmice
    Hi, I'm using JPA (with Hibernate) and Gilead in a GWT project. On the server side I have this method, and I'm calling it twice with the same "campaign". On the second call it throws a NullPointerException in line 4, at "campaign.getTextAds()":

        public List<WrapperTextAd> getTextAds(WrapperCampaign campaign) {
            campaign = em.merge(campaign);
            System.out.println("getting textads for " + campaign.getName());
            for (WrapperTextAd textad : campaign.getTextAds()) {
                // do nothing
            }
            return new ArrayList<WrapperTextAd>(campaign.getTextAds());
        }

    The code in the WrapperCampaign entity looks like this:

        @OneToMany(mappedBy = "campaign")
        public Set<WrapperTextAd> getTextAds() {
            return this.textads;
        }
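
    One direction sometimes suggested when a lazy collection comes back null after a detach/merge round trip (a guess, not a confirmed diagnosis for this case): fetch the collection eagerly, so it is populated before the entity is detached:

        // Sketch: force the collection to load together with the entity.
        @OneToMany(mappedBy = "campaign", fetch = FetchType.EAGER)
        public Set<WrapperTextAd> getTextAds() {
            return this.textads;
        }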

    Read the article

  • Recover history from foolish git-svn merge

    - by Gregg Lind
    The players:

        master:   the svn branch (actual, not local tracking)
        mybranch: a local branch

    My mistake:

        [master] git svn rebase
        [master] git merge mybranch
        [master] git svn dcommit

    I did this twice. Is there a way I can remedy all this? I was thinking something like:

        git checkout --hard [commit before the merging]
        git dcommit # that to the svn?
        git rebase mybranch
        git dcommit

    But this doesn't seem to work. (I know I should have a. been working from a local tracking branch and b. rebased rather than merged.) I'm in the frantic / willing to send beer to respondents stage :)
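
    A hedged sketch of one recovery route, assuming the pre-merge commit is still reachable locally (the SHA is a placeholder, found via the reflog); note that history already dcommitted to the svn server cannot be rewritten this way:

        git reflog                       # locate the commit just before the merges
        git checkout master
        git reset --hard <pre-merge-sha> # move master back to it
        git rebase mybranch              # replay the branch work linearly
        git svn dcommit                  # publish the linearized commits to svn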

    Read the article

  • Merge overlapping triangles into a polygon

    - by nornagon
    I've got a bunch of overlapping triangles from a 3D model projected into a 2D plane. I need to merge each island of touching triangles into a closed, non-convex polygon. The resultant polygons shouldn't have any holes in them (since the source data doesn't). Many of the source triangles share (floating point identical) edges with other triangles in the source data. What's the easiest way to do this? Performance isn't particularly important, since this will be done at design time.
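
    A sketch of one common approach, leaning on the fact that shared edges are floating-point identical: an edge belonging to exactly one triangle lies on an island's boundary, so count edges, keep the singletons, and chain them into closed loops:

        from collections import defaultdict

        def boundary_polygons(triangles):
            """triangles: iterable of ((x, y), (x, y), (x, y)) vertex tuples."""
            # Interior edges are shared by two triangles; boundary edges by one.
            counts = defaultdict(int)
            for a, b, c in triangles:
                for e in ((a, b), (b, c), (c, a)):
                    counts[frozenset(e)] += 1

            # Keep boundary edges directed, so loops come out consistently oriented.
            nxt = {}
            for a, b, c in triangles:
                for p, q in ((a, b), (b, c), (c, a)):
                    if counts[frozenset((p, q))] == 1:
                        nxt[p] = q

            # Chain the boundary edges into closed polygons.
            polygons = []
            while nxt:
                start = next(iter(nxt))
                loop = [start]
                cur = nxt.pop(start)
                while cur != start:
                    loop.append(cur)
                    cur = nxt.pop(cur)
                polygons.append(loop)
            return polygons

    This assumes each boundary vertex has exactly one outgoing boundary edge, i.e. the islands are manifold and do not pinch at a vertex; per the question, holes are not expected in the input.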

    Read the article

  • In PHP... best way to turn string representation of a folder structure into nested array

    - by Greg Frommer
    Hi everyone, I looked through the related questions but I wasn't seeing quite what I need; pardon me if this has already been answered. In my database I have a list of records that I want represented to the user as files inside of a folder structure. So for each record I have a VARCHAR column called "FolderStructure" that identifies that record's place in the folder structure. The series of those flat FolderStructure string columns will create my tree structure, with the folders separated by slashes (naturally). I didn't want to add another table just to represent a folder structure... The 'file' name is stored in a separate column, so that if the FolderStructure column is empty, the file is assumed to be at the root folder. What is the best way to turn a collection of these records into a series of HTML UL/LI tags, where each LI represents a file and each folder structure is an UL embedded inside its parent? So for example:

        file - folderStructure
        foo  -
        bar  - firstDir
        blue - firstDir/subdir

    would produce the following HTML:

        <ul>
            <li>foo</li>
            <ul>
                <li> bar </li>
                <ul>
                    <li> blue </li>
                </ul>
            </ul>
        </ul>

    Thanks
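
    A minimal sketch of one way to do it in PHP, following the example above (the input shape is hypothetical): explode each path into segments, build a nested array keyed by folder name, then emit the list recursively:

        <?php
        $rows = array(
            array('file' => 'foo',  'folderStructure' => ''),
            array('file' => 'bar',  'folderStructure' => 'firstDir'),
            array('file' => 'blue', 'folderStructure' => 'firstDir/subdir'),
        );

        // Build the tree: subfolders are keyed entries, files collect under '_files'.
        $tree = array();
        foreach ($rows as $row) {
            $node = &$tree;
            if ($row['folderStructure'] !== '') {
                foreach (explode('/', $row['folderStructure']) as $folder) {
                    if (!isset($node[$folder])) {
                        $node[$folder] = array();
                    }
                    $node = &$node[$folder];
                }
            }
            $node['_files'][] = $row['file'];
            unset($node); // break the reference before the next row
        }

        // Emit nested UL/LI markup from the tree.
        function render_tree(array $node) {
            echo '<ul>';
            foreach ($node as $key => $value) {
                if ($key === '_files') {
                    foreach ($value as $file) {
                        echo '<li>' . htmlspecialchars($file) . '</li>';
                    }
                } else {
                    echo '<li>' . htmlspecialchars($key);
                    render_tree($value); // subfolder becomes an UL inside this LI
                    echo '</li>';
                }
            }
            echo '</ul>';
        }

        render_tree($tree);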

    Read the article

  • linq merge elements with orderby

    - by Sanja Melnichuk
    Hi all. I need to select all terminals, sorted by Attributes[2].

        // get all query
        IQueryable<ITerminal> itt = repository.Terminals.GetQuery();

        // get all with Attributes[2]
        IQueryable<ITerminal> itt2 = itt
            .Select(item => new { attrs = item.Attributes[2], term = item })
            .OrderBy(x => x.attrs)
            .Select(x => x.term);

    Now, how do I select from itt all terminals, but ordered by the itt2 results? How do I order them by itt2? Example:

        itt  = c, d, r, a, w, b
        itt2 = a, b, c
        // after select and after merge: a, b, c, d, r, w

    Thanks all
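
    A sketch of one way to get that combined ordering in a single query, assuming terminals missing the attribute should simply sort after those that have it, and that the indexer yields null rather than throwing (names as in the question):

        var ordered = itt
            .OrderBy(t => t.Attributes[2] == null ? 1 : 0) // terminals with the attribute first
            .ThenBy(t => t.Attributes[2]);                 // then by the attribute's value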

    Read the article

  • How do I merge a local branch into TFS

    - by Johnny
    hi, I did a stupid thing and branched my project on my local disk instead of doing it on the TFS. So now I have two projects on my disk: the old one which has TFS bindings and the new, which doesn't. I want to merge those changes back into the TFS project. How would I go about doing that? I can't do Compare because my local branch has no TFS bindings. There should be some way to compare the differences between the two projects locally and then meld the differences into the old project and check-in, but I can't find an easy way of doing that. Any other solutions?

    Read the article

  • Reverting after a merge, bad idea?

    - by Clean
    Hi, I'm a newcomer to subversion. Recently, I've done some development in two different branches, where one branch was a branch of the other. I've merged down some changes from the first branch to the trunk. However, when trying to merge down changes from the other branch to trunk, everything went haywire. That is, I had a lot of conflicts, some of which I resolved (but did not commit) and some of which are still unresolved. Worse, a lot of the changes I made to the branch were for some reason not merged into the trunk. Now, my only question is: can I just do a revert on my working copy to return the trunk to its previous state? That is, will I mess something up by doing this? My thought is to start all over again and do it more carefully "by hand". Thanx!
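
    If reverting is the chosen route: svn revert only touches the working copy, never committed history, so it is safe in that sense. A sketch:

        # From the root of the trunk working copy: recursively discard all local
        # modifications, including half-resolved conflict files.
        svn revert -R .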

    Read the article

  • Merge two bytes in java/android

    - by Shane
    Hi, I have a frame of 22 bytes. The frame is the input stream from an accelerometer via bluetooth. The accelerometer readings are a 16-bit number split over two bytes. When I try to merge the bytes with buffer[1] + buffer[2], rather than adding the bytes it just puts the results side by side, so 1 + 2 = 12. Could someone tell me how to combine these two bytes to obtain the original number? (By the way, the bytes are sent little endian.) Thanks
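
    A sketch of the usual bit arithmetic in Java, assuming buffer[1] is the low byte and buffer[2] the high byte (little endian, per the question); the "side by side" result suggests string concatenation was happening instead of addition:

        // Mask with 0xFF to undo Java's sign extension when a byte widens to int.
        int low  = buffer[1] & 0xFF;
        int high = buffer[2] & 0xFF;

        int unsignedValue = (high << 8) | low; // 0..65535

        // If the accelerometer reading is a signed 16-bit quantity:
        short signedValue = (short) unsignedValue;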

    Read the article

  • Git: how do you merge with remote repo?

    - by Marco
    Please help me understand how git works. I clone my remote repository on two different machines. I edit the same file on both machines. I successfully commit and push the update from the first machine to the remote repository. I then try to push the update from the second machine, but get an error:

        ! [rejected] master -> master (non-fast-forward)

    I understand why I received the error. How can I merge my changes into the remote repo? Do I need to pull the remote repo first?
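
    Yes: the remote commits have to be merged (or rebased) into the local branch before the push can succeed. A sketch of the usual sequence, assuming the remote is named origin:

        git pull origin master   # fetch the remote commits and merge them locally
        # resolve any conflicts, commit the merge, then:
        git push origin master

        # the same thing in two explicit steps:
        git fetch origin
        git merge origin/master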

    Read the article
