Search Results

Search found 5564 results on 223 pages for 'multi gpu'.

Page 27 of 223

  • 2D Engine scrolling on OpenGL via hardware?

    - by drudru
    I'm using OpenGL as the bottom end for a 2D tiling engine. When everything is 2D, it is simple to optimize certain issues. For example, scrolling. If I know a certain section of the screen needs to scroll off the bottom, then I can just blit over that portion. I'm even moving more than 1 pixel at a time. Without explicit hardware support (think old Nintendo hardware), this requires a lot of pixel writes. An on-chip bitblt would be the next best thing. Essentially, I'm looking at how I can optimize my GL calls to use VRAM texture renders as efficient hardware blits. Is it possible to have GL scroll the framebuffer, or should I just resign myself to double-buffering and re-rendering the entire scene each frame? Thanks.
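
    A minimal sketch of one way to get a hardware scroll in modern OpenGL, assuming the scene lives in an offscreen framebuffer object: blit the still-valid region into a second FBO shifted by the scroll offset, then redraw only the strip of tiles that scrolled into view. The two FBO handles and the drawNewTileRows helper are illustrative assumptions, not taken from the question; glBlitFramebuffer needs GL 3.0 or the framebuffer_blit extension.

        // Sketch: "scroll" the scene texture by dy pixels with a GPU-side blit,
        // then render only the newly exposed rows of tiles.
        void scrollScene(GLuint fboSrc, GLuint fboDst, int width, int height, int dy)
        {
            glBindFramebuffer(GL_READ_FRAMEBUFFER, fboSrc);
            glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboDst);

            // Copy the surviving region, shifted by dy (no CPU pixel writes).
            glBlitFramebuffer(0, dy, width, height,       // source rows dy..height
                              0, 0,  width, height - dy,  // dest rows 0..height-dy
                              GL_COLOR_BUFFER_BIT, GL_NEAREST);

            // Only the strip that scrolled into view needs fresh tile rendering.
            glBindFramebuffer(GL_FRAMEBUFFER, fboDst);
            drawNewTileRows(height - dy, dy);  // hypothetical helper: rows [height-dy, height)
        }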

  • thrust::unique_by_key eating up last element

    - by Programmer
    Please consider the simple code below: thrust::device_vector<int> positions(6); thrust::sequence(positions.begin(), positions.end()); thrust::pair<thrust::device_vector<int>::iterator, thrust::device_vector<int>::iterator > end; //copyListOfNgramCounteachdoc contains: 0,1,1,1,1,3 end.first = copyListOfNgramCounteachdoc.begin(); end.second = positions.begin(); for(int i =0 ; i < numDocs; i++){ end= thrust::unique_by_key(end.first, end.first + 3,end.second); } int length = end.first - copyListOfNgramCounteachdoc.begin() ; cout<<"the value of end -s is: "<<length; for(int i =0 ; i< length ; i++){ cout<<copyListOfNgramCounteachdoc[i]; } I expected the output of this code to be 0,1,1,3; however, the output is 0,1,1. Can anyone let me know what I am missing? Note: the contents of copyListOfNgramCounteachdoc are 0,1,1,1,1,3. Also, the type of copyListOfNgramCounteachdoc is thrust::device_vector<int>.
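
    One likely explanation, sketched under the assumption that numDocs is 2 and each document owns three consecutive entries: thrust::unique_by_key compacts its range in place and returns iterators just past the compacted data, so after the first pass end.first points two elements in, and the second pass re-reads positions 2-4 of the original buffer instead of 3-5; the trailing 3 is never visited. Stepping each chunk from the original offsets, and copying each compacted run out, avoids that:

        #include <thrust/device_vector.h>
        #include <thrust/sequence.h>
        #include <thrust/unique.h>
        #include <thrust/copy.h>

        // Assumes two documents of three counts each: {0,1,1 | 1,1,3}.
        const int numDocs = 2, perDoc = 3;
        int h[6] = {0, 1, 1, 1, 1, 3};
        thrust::device_vector<int> keys(h, h + 6);
        thrust::device_vector<int> vals(6);
        thrust::sequence(vals.begin(), vals.end());
        thrust::device_vector<int> outKeys(6), outVals(6);

        int outLen = 0;
        for (int i = 0; i < numDocs; ++i) {
            // Chunk boundaries come from the original layout, not the shrinking end iterator.
            auto ends = thrust::unique_by_key(keys.begin() + i * perDoc,
                                              keys.begin() + i * perDoc + perDoc,
                                              vals.begin() + i * perDoc);
            int runLen = ends.first - (keys.begin() + i * perDoc);
            thrust::copy(keys.begin() + i * perDoc, ends.first, outKeys.begin() + outLen);
            thrust::copy(vals.begin() + i * perDoc,
                         vals.begin() + i * perDoc + runLen, outVals.begin() + outLen);
            outLen += runLen;
        }
        // outKeys[0..outLen) is 0,1,1,3 for this input.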

  • CUDA threads for inner loop

    - by Manolete
    I've got this kernel __global__ void kernel1(int keep, int include, int width, int* d_Xco, int* d_Xnum, bool* d_Xvalid, float* d_Xblas) { int i, k; i = threadIdx.x + blockIdx.x * blockDim.x; if(i < keep){ for(k = 0; k < include ; k++){ int val = (d_Xblas[i*include + k] >= 1e5); int aux = d_Xnum[i]; d_Xblas[i*include + k] *= (!val); d_Xco[i*width + aux] = k; d_Xnum[i] +=val; d_Xvalid[i*include + k] = (!val); } } } launched with int keep = 9000; int include = 23000; int width = 0.2*include; int threads = 192; int blocks = keep+threads-1/threads; kernel1 <<< blocks,threads >>>( keep, include, width, d_Xco, d_Xnum, d_Xvalid, d_Xblas ); This kernel1 works fine but it is obviously not totally optimized. I thought it would be straightforward to eliminate the inner loop k, but for some reason it doesn't work correctly. My first idea was: __global__ void kernel2(int keep, int include, int width, int* d_Xco, int* d_Xnum, bool* d_Xvalid, float* d_Xblas) { int i, k; i = threadIdx.x + blockIdx.x * blockDim.x; k = threadIdx.y + blockIdx.y * blockDim.y; if((i < keep) && (k < include) ) { int val = (d_Xblas[i*include + k] >= 1e5); int aux = d_Xnum[i]; d_Xblas[i*include + k] *= (float)(!val); d_Xco[i*width + aux] = k; atomicAdd(&d_Xnum[i], val); d_Xvalid[i*include + k] = (!val); } } launched with a 2D grid: int keep = 9000; int include = 23000; int width = 0.2*include; int th = 32; dim3 threads(th,th); dim3 blocks (keep+threads.x-1/threads.x, include+threads.y-1/threads.y); kernel2 <<< blocks,threads >>>( keep, include, width, d_Xco, d_Xnum, d_Xvalid, d_Xblas ); Although I believe the idea is fine, it does not work and I am running out of ideas. Could you please help me out? I also think the problem could be in d_Xco, which stores the position k in a smaller array, so the order matters, but I can't think of any other way of doing it...
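
    A side note on the launch configuration, offered as a general idiom rather than a verified fix for kernel2: division binds tighter than subtraction and 1/threads is integer division, so keep + threads - 1/threads evaluates to keep + threads instead of a rounded-up block count. The usual round-up form, reusing the question's variables (assumed declared as in the question), looks like this:

        // Round-up integer division for the block counts (the parentheses matter).
        int threads = 192;
        int blocks  = (keep + threads - 1) / threads;                 // 1D launch
        kernel1<<<blocks, threads>>>(keep, include, width, d_Xco, d_Xnum, d_Xvalid, d_Xblas);

        dim3 threads2(32, 32);
        dim3 blocks2((keep    + threads2.x - 1) / threads2.x,         // 2D launch
                     (include + threads2.y - 1) / threads2.y);
        kernel2<<<blocks2, threads2>>>(keep, include, width, d_Xco, d_Xnum, d_Xvalid, d_Xblas);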

  • Testing for Adjacent Cells In a Multi-level Grid

    - by Steve
    I'm designing an algorithm to test whether cells on a grid are adjacent or not. The catch is that the cells are not on a flat grid. They are on a multi-level grid such as the one drawn below. Level 1 (Top Level) | - - - - - | | A | B | C | | - - - - - | | D | E | F | | - - - - - | | G | H | I | | - - - - - | Level 2 | -Block A- | -Block B- | | 1 | 2 | 3 | 1 | 2 | 3 | | - - - - - | - - - - - | | 4 | 5 | 6 | 4 | 5 | 6 | ... | - - - - - | - - - - - | | 7 | 8 | 9 | 7 | 8 | 9 | | - - - - - | - - - - - | | -Block D- | -Block E- | | 1 | 2 | 3 | 1 | 2 | 3 | | - - - - - | - - - - - | | 4 | 5 | 6 | 4 | 5 | 6 | ... | - - - - - | - - - - - | | 7 | 8 | 9 | 7 | 8 | 9 | | - - - - - | - - - - - | . . . . . . This diagram is simplified from my actual need but the concept is the same. There is a top level block with many cells within it (level 1). Each block is further subdivided into many more cells (level 2). Those cells are further subdivided into level 3, 4 and 5 for my project but let's just stick to two levels for this question. I'm receiving inputs for my function in the form of "A8, A9, B7, D3". That's a list of cell Ids where each cell Id has the format (level 1 id)(level 2 id). Let's start by comparing just 2 cells, A8 and A9. That's easy because they are in the same block. private static RelativePosition getRelativePositionInTheSameBlock(String v1, String v2) { RelativePosition relativePosition; if( v1-v2 == -1 ) { relativePosition = RelativePosition.LEFT_OF; } else if (v1-v2 == 1) { relativePosition = RelativePosition.RIGHT_OF; } else if (v1-v2 == -BLOCK_WIDTH) { relativePosition = RelativePosition.TOP_OF; } else if (v1-v2 == BLOCK_WIDTH) { relativePosition = RelativePosition.BOTTOM_OF; } else { relativePosition = RelativePosition.NOT_ADJACENT; } return relativePosition; } An A9 - B7 comparison could be done by checking if A is a multiple of BLOCK_WIDTH and whether B is (A-BLOCK_WIDTH+1). Either that or just check naively if the A/B pair is 3-1, 6-4 or 9-7 for better readability. For B7 - D3, they are not adjacent but D3 is adjacent to A9 so I can do a similar adjacency test as above. So, getting away from the little details and focusing on the big picture: is this really the best way to do it, keeping in mind the following points? I actually have 5 levels not 2, so I could potentially get a list like "A8A1A, A8A1B, B1A2A, B1A2B". Adding a new cell to compare still requires me to compare all the other cells before it (seems like the best I could do for this step is O(n)). The cells aren't all 3x3 blocks, they're just that way for my example. They could be MxN blocks with different M and N for different levels. In my current implementation above, I have separate functions to check adjacency if the cells are in the same blocks, if they are in separate horizontally adjacent blocks or if they are in separate vertically adjacent blocks. That means I have to know the position of the two blocks at the current level before I call one of those functions for the layer below. Judging by the complexity of having to deal with multiple functions for different edge cases at different levels, and having 5 levels of nested if statements, I'm wondering if another design is more suitable. Perhaps a more recursive solution, use of other data structures, or perhaps map the entire multi-level grid to a single-level grid (my quick calculations give me about 700,000+ atomic cell ids). Even if I go that route, mapping from multi-level to single level is a non-trivial task in itself.
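
    One alternative worth sketching, assuming the block dimensions at every level are known up front: translate each multi-level ID into absolute (row, column) coordinates once, after which adjacency at any depth is a Manhattan-distance-of-1 test and no per-level edge cases remain. The ID parsing is omitted and the 3x3 dimensions are placeholders; with MxN blocks you would only change the dimension table.

        #include <cstdlib>
        #include <vector>

        struct LevelDim { int rows, cols; };
        // Outermost level first; illustrative 3x3 blocks at both levels.
        static const std::vector<LevelDim> kLevels = { {3, 3}, {3, 3} };

        // path holds zero-based per-level indices, e.g. A8 -> {0, 7}, B7 -> {1, 6}.
        void toAbsolute(const std::vector<int>& path, long& row, long& col)
        {
            row = 0; col = 0;
            for (size_t i = 0; i < path.size(); ++i) {
                row = row * kLevels[i].rows + path[i] / kLevels[i].cols;
                col = col * kLevels[i].cols + path[i] % kLevels[i].cols;
            }
        }

        // Cells at the same depth are adjacent iff they differ by exactly one
        // step horizontally or vertically in absolute coordinates.
        bool isAdjacent(const std::vector<int>& a, const std::vector<int>& b)
        {
            long r1, c1, r2, c2;
            toAbsolute(a, r1, c1);
            toAbsolute(b, r2, c2);
            return std::labs(r1 - r2) + std::labs(c1 - c2) == 1;
        }

    For the examples in the question this puts A9 at (2,2), B7 at (2,3) and D3 at (3,2), so A9 is adjacent to both, matching the expected result.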

  • Yii: Multi-language website - best practices.

    - by michal
    Hi, I find Yii a great framework, and the example website created with yiic shell is a good starting point... however it doesn't cover the topic of multi-language websites, unfortunately. The docs cover translating short messages, but not storing multi-lingual content. I'm about to start working on a website which needs to be in at least two languages, and I'm wondering what the best way to keep the content for that is. The problem is that the content is mixed extensively with common elements (like embedded video files), and I need to avoid duplicating those common parts. So far I have used an array of arrays containing texts (usually no more than 1-2 short paragraphs); the view file then just rendered the text from the array. Now I'd like to avoid keeping it in arrays (which requires some care with double quotation marks and is inconvenient in general). So, what is the best way to keep those short paragraphs? Should I keep them in the DB in a table like (id | msg_id | language | content) and then select them by msg_id & language? That still requires me to create some msg_ids and embed them into the view file. Is there any recommended paradigm for which Yii has some solutions? Thanks, m.

  • C# XMLRPC Multi Threaded newPost something is not right

    - by user1047463
    I recently decided to get my feet wet with C# multithreading and my progress was good until now. My program uses XML-RPC to create new posts on WordPress. When I run my app against localhost everything works fine and I am able to create new posts on WordPress using a multi-threaded approach. However, when I try to do the same with a WordPress installation on a remote server, new posts won't appear, so when I go to see all posts, the total number of posts remains unchanged. Another thing to note is that the function that creates a new post does return the ID of the supposedly created post. But as I mentioned previously, the new posts are not visible when I try to access them in the admin menu. Since I'm new to multithreading, I was hoping some of you could check whether I'm doing anything wrong. Here's the code that creates the threads and calls the create-post function. foreach (DataRow row in dt.Rows) { Thread t = null; t = new Thread(createPost); lock (t) { t.Start(); } } public void createPost() { string postid; try { icp = (IcreatePost)XmlRpcProxyGen.Create(typeof(IcreatePost)); clientProtocol = (XmlRpcClientProtocol)icp; clientProtocol.Url = xmlRpcUrl; postid = icp.NewPost(1, blogLogin, blogPass, newBlogPost, 1); currentThread--; threadCountLabel.Invoke(new MethodInvoker(delegate { threadCountLabel.Text = currentThread.ToString(); })); } catch (Exception ex) { MessageBox.Show("createPost ERROR -> " + ex.Message); } }

  • SQL Server - Multi-Column substring matching

    - by hamlin11
    One of my clients is hooked on multi-column substring matching. I understand that Contains and FreeText search for words (and at least in the case of Contains, word prefixes). However, based upon my understanding of this MSDN book, neither of these nor their variants are capable of searching substrings. I have used LIKE rather extensively (Select * from A where A.B Like '%substr%') Sample table A: ID | Col1 | Col2 | Col3 | ------------------------------------- 1 | oklahoma | colorado | Utah | 2 | arkansas | colorado | oklahoma | 3 | florida | michigan | florida | ------------------------------------- The following code will give us row 1 and row 2: select * from A where Col1 like '%klah%' or Col2 like '%klah%' or Col3 like '%klah%' This is rather ugly, probably slow, and I just don't like it very much. Probably because the implementations that I'm dealing with have 10+ columns that need to be searched. The following may be a slight improvement as far as code readability goes, but as far as performance, we're still in the same ball park. select * from A where (Col1 + ' ' + Col2 + ' ' + Col3) like '%klah%' I have thought about adding insert, update, and delete triggers that simply add the concatenated version of the above columns into a separate table that shadows this table. Sample Shadow_Table: ID | searchtext | --------------------------------- 1 | oklahoma colorado Utah | 2 | arkansas colorado oklahoma | 3 | florida michigan florida | --------------------------------- This would allow us to perform the following query to search for '%klah%' select * from Shadow_Table where searchtext like '%klah%' I really don't like having to remember that this shadow table exists and that I'm supposed to use it when I am performing multi-column substring matching, but it probably yields pretty quick reads at the expense of write and storage space. My gut feeling tells me that there is an existing solution built into SQL Server 2008. However, I don't seem to be able to find anything other than research papers on the subject. Any help would be appreciated.

  • Django's post_save signal behaves weirdly with models using multi-table inheritance

    - by hekevintran
    I am noticing an odd behavior in the way Django's post_save signal works when using a model that has multi-table inheritance. I have these two models: class Animal(models.Model): category = models.CharField(max_length=20) class Dog(Animal): color = models.CharField(max_length=10) I have a post_save callback called echo_category: def echo_category(sender, **kwargs): print "category: '%s'" % kwargs['instance'].category post_save.connect(echo_category, sender=Dog) I have this fixture: [ { "pk": 1, "model": "animal.animal", "fields": { "category": "omnivore" } }, { "pk": 1, "model": "animal.dog", "fields": { "color": "brown" } } ] In every part of the program except in the post_save callback the following is true: from animal.models import Dog Dog.objects.get(pk=1).category == u'omnivore' # True When I run syncdb and the fixture is installed, the echo_category function is run. The output from syncdb is: $ python manage.py syncdb --noinput Installing json fixture 'initial_data' from '~/my_proj/animal/fixtures'. category: '' Installed 2 object(s) from 1 fixture(s) The weird thing here is that the dog object's category attribute is an empty string. Why is it not 'omnivore' like it is everywhere else? As a temporary (hopefully) workaround I reload the object from the database in the post_save callback: def echo_category(sender, **kwargs): instance = kwargs['instance'] instance = sender.objects.get(pk=instance.pk) print "category: '%s'" % instance.category post_save.connect(echo_category, sender=Dog) This works, but I don't like it because I must remember to do it whenever the model inherits from another model, and it hits the database again. The other weird thing is that I must use instance.pk to get the primary key. The normal 'id' attribute does not work (I cannot use instance.id). I do not know why this is. Maybe this is related to the reason why the category attribute is not doing the right thing?

  • SQL Server Multi-statement UDF - way to store data temporarily required

    - by Kharlos Dominguez
    Hello, I have a relatively complex query, with several self joins, which works on a rather large table. For that query to perform faster, I thus need to only work with a subset of the data. Said subset of data can range between 12 000 and 120 000 rows depending on the parameters passed. More details can be found here: http://stackoverflow.com/questions/3054843/sql-server-cte-referred-in-self-joins-slow As you can see, I was using a CTE to return the data subset before, which caused some performance problems as SQL Server was re-running the Select statement in the CTE for every join instead of simply running it once and reusing its data set. The alternative, using temporary tables, worked much faster (while testing the query in a separate window outside the UDF body). However, when I tried to implement this in a multi-statement UDF, I was harshly reminded by SQL Server that multi-statement UDFs do not support temporary tables for some reason... UDFs do allow table variables however, so I tried that, but the performance is absolutely horrible as it takes 1m40 for my query to complete whereas the CTE version only took 40 minutes. I believe table variables are slow for reasons listed in this thread: http://stackoverflow.com/questions/1643687/table-variable-poor-performance-on-insert-in-sql-server-stored-procedure The temporary table version takes around 1 second, but I can't make it into a function due to the SQL Server restrictions, and I have to return a table back to the caller. Considering that CTEs and table variables are both too slow, and that temporary tables are rejected in UDFs, what are my options in order for my UDF to perform quickly? Thanks a lot in advance.

  • multi-part identifier could not be bound error

    - by vishal Shah
    Here is my query: IF OBJECT_ID('NPWAS1513.dbo.usp_MSPEX_QLK_Billing_Fact_Load') IS NOT NULL DROP PROCEDURE dbo.usp_MSPEX_QLK_Billing_Fact_Load; GO CREATE PROCEDURE usp_MSPEX_QLK_Billing_Fact_Load @create_timestamp datetime, @update_timestamp datetime, @create_user varchar(50), @update_user varchar(50), @dbProdServ varchar(50) AS print 'dbProdServ is:'+ @dbProdServ; print 'current_user is:' +@current_user; DECLARE @sSQL AS VARCHAR(MAX); SET @sSQL = '' SET @sSQL = 'set identity_insert ' + @dbProdServ + '.mspex_qlk_billing_fact ON' EXEC(@sSQL); SET @sSQL = 'INSERT INTO ' + @dbProdServ +'.mspex_qlk_billing_fact (project_id, billing_year, billing_month, billing_month_desc, billing_date_id, projected_bill_amount, billed_year_to_date_amount, billed_inception_to_date_amount, remaining_bill_amount, actual_billed_amount, current_billing_percent, previous_billing_percent, billing_pct_diff, billing_type, final_bill_ind, last_in_progress_date, current_record_ind, load_time_stamp, total_contract_period, contract_period_current_year, partial_bill, create_timestamp, create_user, update_timestamp, update_user) SELECT project_dim.project_id, billing_final_data.billingyear, billing_final_data.billingmonth, billing_final_data.billingmonthdesc, Time_Dim.Time_ID, billing_final_data.projected_bill_amount, billing_final_data.billed_year_todate_amount, billing_final_data.billed_inception_todate_amount, billing_final_data.remaining_bill_amount, billing_final_data.actual_billed_amount, billing_final_data.current_billing_percent, billing_final_data.previous_billing_percent, billing_final_data.billing_pct_diff, billing_final_data.billing_type, billing_final_data.final_bill_ind, billing_final_data.last_in_progress_date, billing_final_data.current_record_ind, billing_final_data.load_time_stamp, billing_final_data.[Total Contract Period], billing_final_data.[Contract Period Current Year], billing_final_data.[Partial Bill],'''+ CAST(@create_timestamp as varchar(max)) + ''',''' + @create_user + ''','''+ CAST(@update_timestamp as varchar(50)) +''','''+ @update_user + ''' FROM '+ @dbProdServ +'.mspex_qlk_project_dim project_dim,'+ @dbProdServ +'.mspex_rpt_billing_final_data billing_final_data,'+ @dbProdServ + '.MSPEX_QLK_Time_Dim Time_Dim WHERE project_dim.myproject_project_uid = billing_final_data.projectuid AND'''+ convert(datetime, cast(billing_final_data.[BillingMonth] as nvarchar(2)) + '''/01/''' + cast(billing_final_data.[billingyear] as nvarchar(4)), 101) +''' + = Time_Dim.Time_Date'; BEGIN TRANSACTION EXEC(@sSQL) COMMIT TRANSACTION I get the error msg: Msg 4104, Level 16, State 1, Procedure usp_MSPEX_QLK_Billing_Fact_Load, Line 23 The multi-part identifier "mspex_rpt_billing_final_data.BillingMonth" could not be bound. Msg 4104, Level 16, State 1, Procedure usp_MSPEX_QLK_Billing_Fact_Load, Line 23 The multi-part identifier "billing_final_data.billingyear" could not be bound. Msg 207, Level 16, State 1, Procedure usp_MSPEX_QLK_Billing_Fact_Load, Line 83 Invalid column name 'BillingMonth'. Msg 207, Level 16, State 1, Procedure usp_MSPEX_QLK_Billing_Fact_Load, Line 84 Invalid column name 'billingyear'. I checked the column names, etc. and things are fine. In fact, I directly dragged the table name and column name to ensure that it is correct. Please help ASAP. Calling me at cell at 630-338-9427 would be great. But an URGENT response is absolutely necessary. Thanks guys.

  • In .NET, What Is Fastest Way to Initialize Multi-Dimensional Array to Non-Default Value

    - by AMissico
    How do I initialize a multi-dimensional array of a primitive type as fast as possible? I am stuck with using multi-dimensional arrays. My problem is performance. The following routine initializes a 100x100 array in approx. 500 ticks. Removing the int.MaxValue initialization results in approx. 180 ticks just for the looping. Approximately 100 ticks to create the array without looping and without initializing to int.MaxValue. Routines similar to this are called a few tens of thousands to several million times. I am open to suggestions on how to optimize this non-default initialization of an array. One idea I had is to use a smaller primitive type when available. For instance, using byte instead of int saves 100 ticks. I would be happy with this, but I am hoping that I don't have to change the primitive data type. public int[,] CreateArray(Size size) { int[,] array = new int[size.Width, size.Height]; for (int x = 0; x < size.Width; x++) { for (int y = 0; y < size.Height; y++) { array[x, y] = int.MaxValue; } } return array; }

  • Partially constructed object / Multi threading

    - by reto
    Heya! I'm using Joda due to its good reputation regarding multithreading. It goes to great lengths to make multi-threaded date handling efficient, for example by making all Date/Time/DateTime objects immutable. But here's a situation where I'm not sure if Joda is really doing the right thing. It probably is correct, but I'd be very interested to see the explanation for it. When toString() of a DateTime is called, Joda does the following: /* org.joda.time.base.AbstractInstant */ public String toString() { return ISODateTimeFormat.dateTime().print(this); } All formatters are thread-safe, as they are also read-only. But what about the formatter factory: private static DateTimeFormatter dt; /* org.joda.time.format.ISODateTimeFormat */ public static DateTimeFormatter dateTime() { if (dt == null) { dt = new DateTimeFormatterBuilder() .append(date()) .append(tTime()) .toFormatter(); } return dt; } This is a common pattern in single-threaded applications. I see the following dangers: Race condition during the null check -- worst case: two objects get created. No problem, as this is solely a helper object (unlike a normal singleton pattern situation); one gets saved in dt, the other is lost and will be garbage collected sooner or later. The static variable might point to a partially constructed object before the object has finished initialization (before calling me crazy, read about a similar situation in this Wikipedia article). So how does Joda ensure that no partially constructed formatter gets published in this static variable? Thanks for your explanations! Reto

  • Maven3 Issues with building a multi-module enterprise project

    - by Sujit K
    I just migrated from Maven2 to Maven3 and I'm able to build each module individually or all the modules in one shot by calling mvn clean install. However, in Maven2, since we have multi-module enterprise project, we build multiple ear's and each ear is built as its own module with its own child pom. To build an individual ear with its dependents, the below command works fine in Maven2 but not in Maven3. Let me explain the issue in Maven3 a bit later. mvn -pl ear_module -rf first_dependent_module -am clean install In Maven2 when the reactor lists the build order, I see first_dependent_module second_dependent_module ear_module End of the day I have my ear module also part of the reactor which is how it should be. The reason we call -rf is we don't want to delete the target folder at the main ${project.basedir} (so not to delete the output created in target from building the other ear modules). With Maven3, however, this is all I see when the reactor lists the build order: first_dependent_module second_dependent_module Maven3 totally ignores the argument (ear_module) set to -pl flag to be also built after its dependents have been. Not sure what I'm missing here. Any help/tips would be greatly appreciated. P.S: The build I'm making is similar to the one below.... Build specific module in multi-module project Thanks, SK

  • Multi-reader IPC solution?

    - by gct
    I'm working on a framework in C++ (just for fun for now) that lets the user write plugins that use a standard API to stream data between each other. There are going to be three basic transport mechanisms for the data: files, sockets, and some kind of IPC piping system. The system is set up so that for the non-file transports, each stream can have multiple readers, i.e. once a server socket is set up, multiple computers can connect and stream the data. I'm a little stuck at the multi-reader IPC system though. All my plugins run in threads so they live in the same address space, so some kind of shared memory system would work fine. I was thinking I'd write my own circular buffer with a write pointer and read pointers chasing it around the buffer, but I have my doubts that I can achieve the same performance as something like Linux pipes. I'm curious what people would suggest for a multi-reader solution to something like this? Is the overhead for pipes or domain sockets low enough that I could just open a connection to each reader and issue separate writes to each reader? This is intended to handle significant volumes of data (tens of mega-samples/sec), so performance is a must.
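
    A minimal sketch of the shared-memory route for plugins in one address space, under the assumption that a single writer broadcasts to all readers and a slow reader may be overrun rather than block the writer: a ring buffer with one atomic write sequence and a privately owned cursor per reader. This is illustrative only; a production version would also re-check the head after copying a slot to detect a torn read when the writer can lap a reader.

        #include <atomic>
        #include <cstddef>

        // Single-writer, multi-reader broadcast ring for in-process streams.
        // Capacity must be a power of two; each reader owns its own cursor.
        template <typename T, std::size_t Capacity>
        class BroadcastRing {
            static_assert((Capacity & (Capacity - 1)) == 0, "capacity must be a power of two");
        public:
            void push(const T& item) {                        // writer thread only
                std::size_t seq = head_.load(std::memory_order_relaxed);
                buffer_[seq & (Capacity - 1)] = item;
                head_.store(seq + 1, std::memory_order_release);  // publish the slot
            }

            // Returns false when no new data; advances `cursor` past lost samples
            // if this reader fell more than Capacity items behind.
            bool pop(std::size_t& cursor, T& out) {           // one cursor per reader
                std::size_t head = head_.load(std::memory_order_acquire);
                if (head - cursor > Capacity) cursor = head - Capacity;  // overrun
                if (cursor == head) return false;
                out = buffer_[cursor & (Capacity - 1)];
                ++cursor;
                return true;
            }

        private:
            T buffer_[Capacity];
            std::atomic<std::size_t> head_{0};
        };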

  • Noise with multi-threaded raytracer

    - by herber88
    This is my first multi-threaded implementation, so it's probably a beginner's mistake. The threads handle the rendering of every second row of pixels (so all rendering is handled within each thread). The problem persists if the threads render the upper and lower parts of the screen respectively. Both threads read from the same variables; can this cause any problems? From what I've understood, only writing can cause concurrency problems... Can calling the same functions cause any concurrency problems? And again, from what I've understood this shouldn't be a problem... The only time both threads write to the same variable is when saving the calculated pixel color. This is stored in an array, but they never write to the same indices in that array. Can this cause a problem? Multi-threaded rendered image (spam prevention stops me from posting images directly). P.S. I use exactly the same implementation in both cases; the ONLY difference is one vs. two threads created for the rendering.
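
    For what it's worth, a frequent source of exactly this symptom is shared mutable helper state rather than the pixel array itself: a single random number generator (rand() keeps hidden global state), a reused intersection record, or a scene object with a mutable cache. Below is a minimal sketch of giving each rendering thread its own generator and interleaved rows; renderRow is a hypothetical hook standing in for the tracer's own per-row code, not part of the question.

        #include <random>
        #include <thread>
        #include <vector>

        // Assumed to exist in the tracer: renders one row using only local state
        // plus the thread's own RNG.
        void renderRow(int row, int width, std::mt19937& rng);

        void renderInterleaved(int width, int height, int threadCount)
        {
            std::vector<std::thread> workers;
            for (int t = 0; t < threadCount; ++t) {
                workers.emplace_back([=]() {
                    std::mt19937 rng(1234u + static_cast<unsigned>(t));  // per-thread generator
                    for (int row = t; row < height; row += threadCount)  // every Nth row
                        renderRow(row, width, rng);
                });
            }
            for (auto& w : workers) w.join();
        }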

  • Windows 7 pro remote desktop true multi monitor support patch or hack

    - by Ryan D
    After hours of research, the closest I came to any patch that could fix this was a concurrent sessions patch, which is not what I want. I have two machines, both Windows 7 Professional. It's not an option to upgrade either to Ultimate, as the computer being remoted is in another state at our corp office. I need dual monitor support and would have used the multimon i:1 edit, however the other tower is not Win 7 Ultimate. I have read over and over how it is not supported except with Ultimate, Enterprise and 2008 Server. What have they put into Windows 7 Ultimate that is not in Win 7 Pro, and how can I get it or patch Win 7 Pro to give it the same functionality? I would have paid for a software patch had I been able to find it anywhere. Summary: I am looking for the ability to remote into a Win 7 Pro computer from another Win 7 computer while being able to use dual monitors. Is there anyone that has the skill or balls to help with this? Most respectfully, Ryan

  • Multi Svn Repositories in Apache

    - by fampinheiro
    I have a setup with Apache and Subversion. In the Apache configuration I have <Location /svn> DAV svn SVNParentPath c:/svn </Location> Now I have multiple repositories a a_b a_c a_b_c a_b_d b and I want to map them as a/svn a/b/svn a/c/svn a/b/c/svn a/b/d/svn b/svn. To do this without adding directives and restarting Apache, I thought of making these rules: RewriteEngine On RewriteCond $1 !=svn RewriteCond $2 !=svn RewriteRule ^/([^/]+)/(.*?)/svn/(.*)$ /$1_$2/svn/$3 [N] RewriteRule ^/([^/]+)/svn/(.*)$ /svn/$1/$2 [L,PT] This way I rewrite them to /svn/a /svn/a_b /svn/a_c /svn/a_b_c /svn/a_b_d /svn/b. The objective is that the client has no notion of this happening. When an access is made to a folder without a trailing slash, mod_dav returns a redirect to the folder with the trailing slash, exposing my internal URL. Can I rewrite the outgoing URL?

  • EC2 hosted service multi-tenant dynamic DNS solution

    - by accidental admin
    I want to change the model of my EC2-hosted service to have a separate subdomain for each tenant under example.com. My primary DNS is now with dnsmadeeasy.com, but their dynamic DNS offering seems pretty weak: it requires the API to use my full dnsmadeeasy.com account credentials (I would rather have the API use a limited-privilege credential that can only add/remove/modify subdomain records), and from what I gather it only allows modifying existing records and does not allow me to dynamically add/remove records for new tenant subdomains. My question: what are my alternatives? Is there something in the dnsmadeeasy API offering I misunderstood, so that I should just use them? Is there some other similar DNS service with a DDNS offering that satisfies my requirements? Or should I just bite the bullet and host my own DNS (my fear is not configuration/learning/know-how, my fear is reliability)? If you recommend the latter, can you detail the necessary steps or point me to a good tutorial?

  • Unlimited and multi-computer online storage solutions with automatic backup

    - by JRL
    As the title says, what are the existing online storage solutions that provide: unlimited storage automatic backup and allow for an unlimited number of computers (use not tied to a single computer)? There are several existing questions on this site related to online storage solutions, but none that is specifically targeted to what I want, so I thought I'd ask the question. This wikipedia article lists some of them, are there others? How do they compare in terms of price, feature set and ease of use? Update: Kinda disappointed no one has any answers to this so far. JungleDisk looks promising, anyone have experience with it? Update 2: To answer the comments, what I'm looking for definitely DOES exist. These solutions all seem to fit the bill: BackMii CrashPlan DataPreserve Humyo JungleDisk KeepVault SpiderOak And some of them are quite cheap (CrashPlan is $100 a year). For unlimited space and computers, I'd say that's pretty good. Does anyone have experience with CrashPlan or any other of the above solutions?

  • setting up synergy (multi-computer shared input devices) new to mac

    - by pedalpete
    I've just gotten a Mac and the first thing I'm trying to do in setting it up for dev is to set up Synergy. I've run into countless issues, and am making progress, but that 'Macs just work' theory is getting tired really fast. I'd prefer running the Mac as the server, as it's a desktop; why use my PC (laptop) as the server? So I downloaded Synergy and tried to edit the 'synergy.conf' file, first with TextMate, then with AppleScript. Neither of these applications will open the file, even after changing TextMate to use plain text format and switching off a few other things to make it into a plain text editor. No good. Uh, isn't that what these script/text editors are for? Then I found SynergyKM, which is a package to run Synergy on OS X. It took LOTS of fiddling, but I think I finally kinda got it figured out. I've got my PC & Mac both running Synergy. However, my PC will only connect to Synergy using the IP address. No problem; however, my Mac won't connect to itself as a Synergy server using the IP address. If I use the IP address as a screen in the server configuration file, I get an 'Error: unknown screen name mymac'. I know this shouldn't be super complicated, and this is possibly not the place to find answers about Synergy, but I couldn't find a better place, and there has been some talk of Synergy here. Any chance somebody can help with this?

  • Best solution for Multi-WAN failover (inside & out)?

    - by Sean O
    Looking for a way to set up 2 ISPs in failover mode, for both incoming & outgoing traffic, for our small (<100 devices) network. The leading contender for now seems to be the Peplink Balance 310. However, a reseller I spoke with said it's great for 100% outgoing connectivity, but didn't seem to be confident in its abilities to handle incoming traffic. This is important as we host our own web site, Exchange e-mail, and virtual desktops (RDP). Do any Peplink owners use this for failover of incoming traffic? Are there other devices I should be considering? We're currently using a Cisco 1800 series router & ASA 5500 series firewall, with Comcast & T-1 lines (the goal being to replace the T with DSL/FiOS {whenever that becomes available}). Price range: ~$1000 - $2500 USD. Thanks.

  • Multi-Boot through IPMI possible?

    - by yoursort
    Hi, I want to set up a cluster. On each machine Windows XP and Linux should be installed, and the OS should be selectable by a boot loader (e.g. GRUB). All machines have IPMI cards. Is it possible, when starting the machines over IPMI, to also select the OS to boot? And if so, how? Thanks!
