Search Results

Search found 20313 results on 813 pages for 'batch size'.


  • C++ STL Map vs Vector speed

    - by sub
    In the interpreter for my experimental programming language I have a symbol table. Each symbol consists of a name and a value (the value can be, e.g., of type string, int, or function). At first I represented the table with a vector and iterated through the symbols, checking whether the given symbol name matched. Then I thought that using a map, in my case map<string,symbol>, would be better than iterating through the vector all the time. But:

    It's a bit hard to explain this part, but I'll try. When a variable is retrieved for the first time in a program in my language, its position in the symbol table has to be found (using a vector now). If I iterated through the vector every time the line gets executed (think of a loop), it would be terribly slow (as it currently is, nearly as slow as Microsoft's batch). So I could use a map to retrieve the variable: SymbolTable[ myVar.Name ]. But consider the following: if the variable, still using a vector, is found the first time, I can store its exact integer position in the vector along with it. That means the next time it is needed, my interpreter knows that it has been "cached" and doesn't search the symbol table for it, but does something like SymbolTable.at( myVar.CachedPosition ).

    Now my (rather hard?) question: Should I use a vector for the symbol table, together with caching the position of the variable in the vector? Should I rather use a map? Why? How fast is the [] operator? Should I use something completely different?
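
    Whichever container wins, the caching idea in the question is language-neutral: pay for one lookup, then keep a direct reference to the resolved entry. A minimal sketch of that pattern, in Java purely for illustration (Symbol and VarRef are invented names):

        import java.util.Map;

        // Symbol is a table entry; VarRef is one occurrence of a variable
        // in the interpreted program.
        class Symbol {
            final String name;
            Object value;
            Symbol(String name, Object value) { this.name = name; this.value = value; }
        }

        class VarRef {
            final String name;
            private Symbol cached;  // null until the first successful lookup
            VarRef(String name) { this.name = name; }

            // First call pays for one map lookup; later calls reuse the
            // resolved entry directly, with no search at all.
            Symbol resolve(Map<String, Symbol> table) {
                if (cached == null) {
                    cached = table.get(name);
                }
                return cached;
            }
        }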


  • XML Doc to JSP to TIFF

    - by SPD
    We have around 100 Word templates. Every time a user gets a business request, he or she goes into a shared folder, selects the desired template, enters the information, and saves it as a TIFF; these TIFFs are later processed by some batch program. I am trying to automate this process. So I defined an XML document which holds the template information, like:

        <Template id="1">
          <Section id="1">
            <fieldName id="1">Date</fieldName>
            <fieldValue></fieldValue>
            <fieldType></fieldType>
            <fieldProperty>textField</fieldProperty>
          </Section>
          <Section id="2">
            <fieldName id="2">Claim#</fieldName>
            <fieldValue></fieldValue>
            <fieldType></fieldType>
            <fieldProperty>textField</fieldProperty>
          </Section>
        </Template>

    Based on the template values I generate the JSP on the fly. Now I would like to generate a TIFF file out of it in a specified format. I am not sure how to handle this requirement. *edited the original question.
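
    For the first step - turning the template XML into data a JSP generator can use - the JDK's built-in DOM parser is enough. A minimal sketch, assuming the element names from the XML above and a hypothetical file name; the TIFF half (rendering the filled page to an image) is a separate problem this sketch leaves open:

        import java.io.File;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.NodeList;

        public class TemplateReader {
            public static void main(String[] args) throws Exception {
                // The path is hypothetical; element names match the XML above.
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder()
                        .parse(new File("template1.xml"));

                NodeList sections = doc.getElementsByTagName("Section");
                for (int i = 0; i < sections.getLength(); i++) {
                    Element section = (Element) sections.item(i);
                    String name = section.getElementsByTagName("fieldName")
                                         .item(0).getTextContent();
                    String property = section.getElementsByTagName("fieldProperty")
                                             .item(0).getTextContent();
                    // Each (name, property) pair drives one field in the generated JSP.
                    System.out.println(name + " -> " + property);
                }
            }
        }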


  • Obtaining command line arguments in a Qt application

    - by morpheous
    The following snippet is from a little app I wrote using the Qt framework. The idea is that the app can be run in batch mode (i.e. called by a script) or interactively. It is important, therefore, that I am able to parse command line arguments in order to know which mode to run in, etc.

    [Edit] I am debugging using Qt Creator 1.3.1 on Ubuntu Karmic. The arguments are passed in the normal way (i.e. by adding them via the 'Project' settings in the Qt Creator IDE). When I run the app, it appears that the arguments are not being passed to the application. The code below is a snippet of my main() function:

        int main(int argc, char *argv[])
        {
            //Q_INIT_RESOURCE(application);
            try {
                QApplication the_app(argc, argv);

                // trying to get the arguments into a list
                QStringList cmdline_args = QCoreApplication::arguments();

                // Code continues ...
            } catch (const MyCustomException &e) {
                return 1;
            }
            return 0;
        }

    [Update] I have identified the problem - for some reason, although argc is correct, the elements of argv are empty strings. I added this little code snippet to print out the argv items - and was horrified to see that they were all empty:

        for (int i = 0; i < argc; i++) {
            std::string s(argv[i]); // required so I can see the damn variable in the debugger
            std::cout << s << std::endl;
        }

    Does anyone know what on earth is going on (or a hammer)?


  • DefaultStyledDocument.styleChanged(Style style) may not run in a timely manner?

    - by Paul Reiners
    I'm experiencing an intermittent problem with a class that extends javax.swing.text.DefaultStyledDocument. This document is being sent to a printer. Most of the time the formatting of the document looks correct, but once in a while it doesn't. It looks like some of the changes in the formatting have not been applied. I took a look at the DefaultStyledDocument.styleChanged(Style style) code:

        /**
         * Called when any of this document's styles have changed.
         * Subclasses may wish to be intelligent about what gets damaged.
         *
         * @param style The Style that has changed.
         */
        protected void styleChanged(Style style) {
            // Only propagate change updated if have content
            if (getLength() != 0) {
                // lazily create a ChangeUpdateRunnable
                if (updateRunnable == null) {
                    updateRunnable = new ChangeUpdateRunnable();
                }
                // We may get a whole batch of these at once, so only
                // queue the runnable if it is not already pending
                synchronized (updateRunnable) {
                    if (!updateRunnable.isPending) {
                        SwingUtilities.invokeLater(updateRunnable);
                        updateRunnable.isPending = true;
                    }
                }
            }
        }

        /**
         * When run this creates a change event for the complete document
         * and fires it.
         */
        class ChangeUpdateRunnable implements Runnable {
            boolean isPending = false;

            public void run() {
                synchronized (this) {
                    isPending = false;
                }
                try {
                    writeLock();
                    DefaultDocumentEvent dde = new DefaultDocumentEvent(
                            0, getLength(), DocumentEvent.EventType.CHANGE);
                    dde.end();
                    fireChangedUpdate(dde);
                } finally {
                    writeUnlock();
                }
            }
        }

    Does the fact that SwingUtilities.invokeLater(updateRunnable) is called, rather than invokeAndWait(updateRunnable), mean that I can't count on my formatting changes appearing in the document before it is rendered? If that is the case, is there a way to ensure that I don't proceed with rendering until the updates have occurred?
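
    On the second question: since invokeLater only queues the change event, a print started immediately afterwards can race it. One way to be sure the pending update has run first is to queue a no-op with invokeAndWait from a non-EDT thread (never from the EDT itself); it executes only after everything queued earlier, including a pending ChangeUpdateRunnable. A sketch of that idea - printJob is a hypothetical stand-in for the rendering call:

        import javax.swing.SwingUtilities;

        public final class EdtDrain {
            private EdtDrain() {}

            // Call from a worker thread, never from the EDT.
            public static void drainThenRun(Runnable printJob) throws Exception {
                // This no-op runs after all earlier invokeLater() submissions.
                SwingUtilities.invokeAndWait(() -> { /* drain the queue */ });
                printJob.run(); // e.g. the document print call
            }
        }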


  • Browser timing out attempting to load images

    - by notJim
    I've got a page on a web app that has about 13 images that are generated by my application, which is written in the Kohana PHP framework. The images are actually graphs. They are cached so they are only generated once, but the first time the user visits the page, and the images all have to be generated, about half of the images don't load in the browser. Once the page has been requested once and the images are cached, they all load successfully.

    Doing some ad-hoc testing, if I load an individual image in the browser, it takes 450-700 ms to load with an empty cache (I checked this using Google Chrome's resource tracking feature). For reference, it takes around 90-150 ms to load a cached image. Even if the image cache is empty, I have the data and some of the application's startup tasks cached, so that after the first request, none of that data needs to be fetched.

    My questions are: Why are the images failing to load? It seems like the browser just decides not to download the image after a certain point, rather than waiting for them all to finish loading. What can I do to get them to load the first time, with an empty cache? Obviously one option is to decrease the load times, and I could figure out how to do that by profiling the app, but are there other options? As I mentioned, the app is in the Kohana PHP framework, and it's running on Apache.

    As an aside, I've solved this problem for now by fetching the page as soon as the data is available (it comes from a batch process), so that the images are always cached by the time the user sees them. That feels like a kludgey solution to me, though, and I'm curious about what's actually going on.


  • Preventing a child process (HandbrakeCLI) from causing the parent script to exit

    - by Chris
    I have a batch conversion script to turn .mkvs of various dimensions into iPod/iPhone-sized .mp4s, cropping/scaling to suit. Determining the original dimensions, the required crop, and the output file is all working fine. However, on successful completion of the first conversion, HandbrakeCLI causes the parent script to exit. Why would this be? And how can I stop it? The code, as it currently stands:

        #!/bin/bash
        find . -name "*.mkv" | while read FILE
        do
            # What would the output file be?
            DST=../Touch/$(dirname "$FILE")
            MKV=$(basename "$FILE")
            MP4=${MKV%%.mkv}.mp4

            # If it already exists, don't overwrite it
            if [ -e "$DST/$MP4" ]
            then
                echo "NOT overwriting $DST/$MP4"
            else
                # Stuff to determine dimensions/cropping removed for brevity
                HandbrakeCLI --preset "iPhone & iPod Touch" --vb 900 --crop $crop \
                    -i "$FILE" -o "$DST/$MP4" > /dev/null 2>&1
                if [ $? != 0 ]
                then
                    echo "$FILE had problems" >> errors.log
                fi
            fi
        done

    I have additionally tried it with a trap, but that didn't change the behaviour (although the last trap did fire):

        trap "echo Handbrake SIGINT-d" SIGINT
        trap "echo Handbrake SIGTERM-d" SIGTERM
        trap "echo Handbrake EXIT-d" EXIT
        trap "echo Handbrake 0-d" 0


  • What are your Programming Fallacies/Myths?

    - by pms1969
    I recently started a new job, and as is typical of all jobs, if you've left, you get blamed for everything. Not long after I started, there was a change required for an app (web based) that we maintain, and it was quickly pointed out that the actual code for this site had been lost a long time ago, and the only changes we could make to it were ones that required changes to mark-up [it was a pre-compiled site]. Being new, I needed a little help finding my way around the code, and enlisted the services of one of my colleagues. I made my changes, and then re-enlisted his help to deploy it. While prepping for the deployment (getting the app on the QA server) we discovered that there were actually two different, very similarly named, folders in our source repository. It transpired that for the last year or so, mark-up changes had been made to the site directly, and these were the only differences from the code in the slightly incorrectly named folder in source control. So we did have all the code, and can now properly support the site.

    This put me in mind of a trick we played on a junior programmer once in a previous job, where we told him he couldn't/shouldn't do a certain thing in code as this would likely bring the server to its knees and cost the company thousands of pounds (a gag that lasted months :-). And another one in the first programming job I took on - the batch commission run was just going to crash once a month and there was nothing to be done about it, causing a call out, and call-out compensation for the on-call guy (a bug I fixed as soon as I became the on-call guy - 2am call outs don't work for me).

    So I was wondering... What other programming fallacies/myths are out there that are worth sharing?


  • Scalably processing large amounts of complicated database data in PHP, many times a day.

    - by Eph
    I'm soon to be working on a project that poses a problem for me. It's going to require, at regular intervals throughout the day, processing tens of thousands of records, potentially over a million. Processing is going to involve several (potentially complicated) formulas and the generation of several random factors, writing some new data to a separate table, and updating the original records with some results. This needs to occur for all records, ideally, every three hours. Each new user of the site will be adding between 50 and 500 records that need to be processed in such a fashion, so the number will not be steady.

    The code hasn't been written yet, as I'm still in the design process, mostly because of this issue. I know I'm going to need to use cron jobs, but I'm concerned that processing records of this size may cause the site to freeze up, perform slowly, or just piss off my hosting company every three hours. I'd like to know if anyone has any experience or tips on similar subjects? I've never worked at this magnitude before, and for all I know, this will be trivial to the server and not pose much of an issue.

    As long as ALL records are processed before the next three-hour period occurs, I don't care if they aren't processed simultaneously (though, ideally, all records belonging to a specific user should be processed in the same batch), so I've been wondering if I should process in batches every 5 minutes, 15 minutes, hour, whatever works, and how best to approach this (and make it scalable in a way that is fair to all users)?
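
    For what it's worth, the usual shape for this kind of job is a worker that claims a bounded chunk per run (oldest records first), so each cron invocation does a predictable amount of work. A rough sketch of that shape in Java/JDBC - every table and column name here is invented, and the real project would do this in PHP:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        // One cron invocation processes one bounded chunk, so no single run
        // can monopolize the server.
        public class BatchWorker {
            static final int BATCH_SIZE = 1000;

            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:mysql://localhost/app", "user", "secret");
                     PreparedStatement pick = conn.prepareStatement(
                         "SELECT id, value FROM records ORDER BY processed_at LIMIT ?");
                     PreparedStatement store = conn.prepareStatement(
                         "UPDATE records SET value = ?, processed_at = NOW() WHERE id = ?")) {
                    pick.setInt(1, BATCH_SIZE);
                    try (ResultSet rs = pick.executeQuery()) {
                        while (rs.next()) {
                            store.setDouble(1, compute(rs.getDouble("value")));
                            store.setLong(2, rs.getLong("id"));
                            store.addBatch();   // queue the write-back
                        }
                    }
                    store.executeBatch();       // flush all updates at once
                }
            }

            // Stand-in for the real formulas and random factors.
            static double compute(double v) { return v * 1.01 + Math.random(); }
        }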


  • newbie hibernate first level cache confusion

    - by Bruce
    Hi all. I'm just getting to grips with Hibernate, and I'm a little bit confused. I just wanted to watch the operation of the first-level cache, which I understood to batch up queries until the end of the session. But if I create an object, Hibernate saves it immediately, so that when I later update it in the same transaction, it has to do an update too:

        Session session = factory.getCurrentSession();
        session.beginTransaction();

        Test1 test1 = new Test1();
        test1.setName("Test 1");
        test1.setValue(10);

        // Touch it
        session.save(test1);

        System.out.println("At checkpoint 1");

        test1.setValue(20);

        session.getTransaction().commit();

    I see the SQL for the save, then 'At checkpoint 1', then the SQL for the update. Do I have something set up wrong, or am I misunderstanding Hibernate's first-level cache? Is there a good document on the first-level cache? I didn't find anything in the Hibernate docs, but I could easily have missed it. Thanks!
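
    One distinction may help here: the first-level cache is primarily a read/identity cache, not a write buffer. With identity-style id generators, save() has to hit the database at once to obtain the generated key, so the INSERT cannot be deferred. What the cache does guarantee is sketched below, continuing the snippet above (assumes a Test1 row with id 1L exists):

        Session session = factory.getCurrentSession();
        session.beginTransaction();

        Test1 first  = (Test1) session.get(Test1.class, 1L); // issues one SELECT
        Test1 second = (Test1) session.get(Test1.class, 1L); // no SQL: cache hit
        System.out.println(first == second);                 // true - same instance

        session.getTransaction().commit();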


  • Database source control with Oracle

    - by borjab
    I have been looking for hours for a way to check a database into source control. My first idea was a program for calculating database diffs and asking all the developers to implement their changes as new diff scripts. Now, I find that if I can dump a database into a file, I could check it in and use it as just another type of file. The main conditions are:

    - Works for Oracle 9iR2.
    - Human readable, so we can use diff to see the differences (.dmp files don't seem readable).
    - All tables in a batch. We have more than 200 tables.
    - It stores BOTH STRUCTURE AND DATA.
    - It supports CLOB and RAW types.
    - It stores procedures, packages and their bodies, functions, tables, views, indexes, constraints, sequences and synonyms.
    - It can be turned into an executable script to rebuild the database on a clean machine.
    - Not limited to really small databases (supports at least 200,000 rows).

    It is not easy. I have downloaded a lot of demos that fail in one way or another.

    EDIT: I wouldn't mind alternative approaches, provided they allow us to check a working system against our released DATABASE STRUCTURE AND OBJECTS + DATA in a batch mode. By the way, our project has been developed for years. Some approaches can be easily implemented when you make a fresh start, but seem hard at this point.

    EDIT: To understand the problem better, let's say that some users can sometimes make changes to the config data in the production environment. Or developers might create a new field or alter a view without notice in the release branch. I need to be aware of these changes or it will be complicated to merge the changes into production.
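
    For the structure half of the requirement, Oracle's DBMS_METADATA package (present in 9iR2) can emit readable DDL one object at a time, which diffs nicely. A rough sketch of dumping every table via JDBC - connection details are placeholders, and the same idea would be repeated for the other object types:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        // Dump a CREATE TABLE statement for every table in the schema as
        // plain, diff-friendly text.
        public class SchemaDump {
            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:oracle:thin:@dbhost:1521:ORCL", "scott", "tiger");
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery(
                         "SELECT DBMS_METADATA.GET_DDL('TABLE', table_name) " +
                         "FROM user_tables ORDER BY table_name")) {
                    while (rs.next()) {
                        // One readable DDL statement per table
                        System.out.println(rs.getString(1) + ";");
                    }
                }
            }
        }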


  • Hibernate saveOrUpdate fails when I execute it on an empty table.

    - by Vladimir
    I'm trying to insert or update a db record with the following code:

        Category category = new Category();
        category.setName("catName");
        category.setId(1L);
        categoryDao.saveOrUpdate(category);

    When there is a category with id=1 already in the database, everything works. But if there is no record with id=1, I get the following exception:

        org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1

    Here is my Category class (setters, getters and constructors omitted for clarity):

        @Entity
        public class Category {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;

            private String name;

            @ManyToOne
            private Category parent;

            @OneToMany(fetch = FetchType.LAZY, mappedBy = "parent")
            private List<Category> categories = new ArrayList<Category>();
        }

    In the console I see this Hibernate query:

        update Category set name=?, parent_id=? where id=?

    So it looks like Hibernate tries to update the record instead of inserting a new one. What am I doing wrong here?
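
    A likely explanation: with a @GeneratedValue id, a non-null id makes saveOrUpdate treat the object as an existing (detached) row, so it issues an UPDATE. Two sketches that avoid the mismatch - categoryDao comes from the question, and the session.get() call is a stand-in for however your DAO loads entities:

        // New row: leave the id unset and let Hibernate generate it.
        Category fresh = new Category();
        fresh.setName("catName");
        categoryDao.saveOrUpdate(fresh);      // results in an INSERT

        // Existing row: load it, then modify the managed instance.
        Category existing = (Category) session.get(Category.class, 1L);
        if (existing != null) {
            existing.setName("catName");      // UPDATE is issued at flush
        }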


  • Packaging Java apps for the Windows/Linux desktop.

    - by alexmcchessers
    I am writing an application in Java for the desktop, using the Eclipse SWT library for GUI rendering. I think SWT helps Java get over the biggest hurdle for acceptance on the desktop: namely providing a Java application with a consistent, responsive interface that looks like that belonging to any other app on your desktop.

    However, I feel that packaging an application is still an issue. OS X natively provides an easy mechanism for wrapping Java apps in native application bundles, but producing an app for Windows/Linux that doesn't require the user to run an ugly batch file or click on a .jar is still a hassle. Possibly that's not such an issue on Linux, where the user is likely to be a little more tech-savvy, but on Windows I'd like to have a regular .exe for him/her to run.

    Has anyone had any experience with any of the .exe generation tools for Java that are out there? I've tried JSmooth but had various issues with it. Is there a better solution before I crack out Visual Studio and roll my own?

    Edit: I should perhaps mention that I am unable to spend a lot of money on a commercial solution.


  • Multithreading for loop while maintaining order

    - by David
    I started messing around with multithreading for a CPU-intensive batch process I'm running. Essentially I'm trying to condense multiple single-page TIFFs into single PDF documents. This works fine with a foreach loop or standard iteration, but can be very slow for several-hundred-page documents. I tried the following, based on some examples I found, to use multithreading, and it has significant performance improvements. However, it obliterates the page order: instead of 1,2,3,4 it will be 1,3,4,2,6,5, based on which thread completes first. My question is how would I utilize this technique while maintaining the page order, and if I can, will it negate the performance benefit of the multithreading? Thank you in advance.

        PdfDocument doc = new PdfDocument();
        string mail = textBox1.Text;
        string[] split = mail.Split(new string[] { Environment.NewLine }, StringSplitOptions.None);
        int counter = split.Count();

        // Source must be array or IList.
        var source = Enumerable.Range(0, 100000).ToArray();

        // Partition the entire source array.
        var rangePartitioner = Partitioner.Create(0, counter);

        double[] results = new double[counter];

        // Loop over the partitions in parallel.
        Parallel.ForEach(rangePartitioner, (range, loopState) =>
        {
            // Loop over each range element without a delegate invocation.
            for (int i = range.Item1; i < range.Item2; i++)
            {
                f_prime = split[i].Replace(" ", "");
                PdfPage page = doc.AddPage();
                XGraphics gfx = XGraphics.FromPdfPage(page);
                XImage image = XImage.FromFile(f_prime);
                double x = 0;
                gfx.DrawImage(image, x, 0);
            }
        });
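
    The usual way to keep order while parallelizing is to parallelize only the expensive per-page work, keyed by index, and then do the order-sensitive assembly sequentially. An analogous sketch in Java, for illustration - loadAndProcess() and addPageToPdf() are hypothetical stand-ins for the PDF library calls:

        import java.util.stream.IntStream;

        // Do the CPU-heavy work in parallel, storing each result at its own
        // index; then assemble pages sequentially so order is preserved.
        public class OrderedPages {
            public static void main(String[] args) {
                String[] files = {"p1.tif", "p2.tif", "p3.tif", "p4.tif"};
                byte[][] processed = new byte[files.length][];

                // Parallel phase: result i always lands in slot i,
                // regardless of which thread finishes first.
                IntStream.range(0, files.length).parallel()
                         .forEach(i -> processed[i] = loadAndProcess(files[i]));

                // Sequential phase: cheap, order-sensitive assembly.
                for (byte[] page : processed) {
                    addPageToPdf(page);
                }
            }

            static byte[] loadAndProcess(String f) { return f.getBytes(); } // stand-in
            static void addPageToPdf(byte[] p) { /* stand-in */ }
        }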


  • Security strategies for storing password on disk

    - by Mike
    I am building a suite of batch jobs that require regular access to a database, running on a Solaris 10 machine. Because of (unchangeable) design constraints, we are required to use a certain program to connect to it. Said interface requires us to pass a plain-text password over a command line to connect to the database. This is a terrible security practice, but we are stuck with it.

    I am trying to make sure things are properly secured on our end. Since the processing is automated (i.e., we can't prompt for a password), and I can't store anything outside the disk, I need a strategy for storing our password securely. Here are some basic rules:

    - The system has multiple users.
    - We can assume that our permissions are properly enforced (i.e., if a file is chmod'd to 600, it won't be publicly readable).
    - I don't mind anyone with superuser access looking at our stored password.

    Here is what I've got so far:

    - Store the password in password.txt.
    - $ chmod 600 password.txt
    - The process reads from password.txt when it's needed.
    - The buffer is overwritten with zeros when it's no longer needed.

    Although I'm sure there is a better way.
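
    A minimal sketch of the read-then-scrub step in Java (file name as above; connect() is a hypothetical stand-in for the real client, and the byte-to-char copy assumes an ASCII password):

        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.Arrays;

        // Read the password, hand it to the connector, then scrub both
        // in-memory copies once they are no longer needed.
        public class PasswordReader {
            public static void main(String[] args) throws Exception {
                byte[] raw = Files.readAllBytes(Paths.get("password.txt"));
                char[] password = new char[raw.length];
                for (int i = 0; i < raw.length; i++) {
                    password[i] = (char) raw[i];
                }
                Arrays.fill(raw, (byte) 0);      // scrub the byte copy
                try {
                    connect(password);
                } finally {
                    Arrays.fill(password, '\0'); // scrub when done
                }
            }

            static void connect(char[] pw) { /* stand-in for the real client */ }
        }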


  • Python class structure ... prep() method?

    - by Adam Nelson
    We have a metaclass, a class, and a child class for an alert system:

        class AlertMeta(type):
            """
            Metaclass for all alerts

            Reads attrs and organizes AlertMessageType data
            """
            def __new__(cls, base, name, attrs):
                new_class = super(AlertMeta, cls).__new__(cls, base, name, attrs)
                # do stuff to new_class
                return new_class

        class BaseAlert(object):
            """
            BaseAlert objects should be instantiated in order to create new
            AlertItems. Alert objects have classmethods for dequeue (to batch
            AlertItems) and register (for associating a user with an AlertType
            and AlertMessageType).

            If the __init__ function receives 'dequeue=True' as a kwarg, then
            all other arguments will be ignored and the Alert will check for
            messages to send.
            """
            __metaclass__ = AlertMeta

            def __init__(self, **kwargs):
                dequeue = kwargs.pop('dequeue', None)
                if kwargs:
                    raise ValueError('Unexpected keyword arguments: %s' % kwargs)
                if dequeue:
                    self.dequeue()
                else:
                    # Do normal init stuff
                    pass

            def dequeue(self):
                """
                Pop batched AlertItems
                """
                # Dequeue from a custom queue

        class CustomAlert(BaseAlert):
            def __init__(self, **kwargs):
                # prepare custom init data
                super(BaseAlert, self).__init__(**kwargs)

    We would like to be able to make child classes of BaseAlert (CustomAlert) that allow us to run dequeue and to be able to run their own __init__ code. We think there are three ways to do this:

    1. Add a prep() method that returns True in the BaseAlert and is called by __init__. Child classes could define their own prep() methods.
    2. Make dequeue() a class method - however, a lot of what dequeue() does requires non-class methods - so we'd have to make those class methods as well.
    3. Create a new class for dealing with the queue. Would this class extend BaseAlert?

    Is there a standard way of handling this type of situation?


  • Facebook / Offline Permission - Trying to perform an action on a set of offline users.

    - by blueigloo
    Hi there,

    We're building an app which, as part of its functionality, tries to capture the number of likes associated with a particular video owned by a user. Users of the app are asked for extended offline access and we capture the key for each user. The format is like this:

        2.hg2QQuYeftuHx1R84J1oGg__.XXXX.1272394800-nnnnnn

    Each user has their offline/infinite key stored in a table in a DB. The object_id which we're interested in is also stored in the DB. At a later stage (offline) we try to run a batch job which reads the number of likes for each user's video. (See the attached code.)

    For some reason, however, after the first iteration of the loop - which yields the likes correctly - we get a failure with the oh so familiar message: "Session key is invalid or no longer valid". Any insight would be most appreciated. Thanks, B

        List<DVideo> videoList = db.SelectVideos();

        foreach (DVideo video in videoList)
        {
            long userId = 0;
            ConnectSession fbSession = new ConnectSession(APPLICATION_KEY, SECRET_KEY);

            // session key is attached to the video object for now
            fbSession.SessionKey = video.UserSessionKey;
            fbSession.SessionExpires = false;

            string fbuid = video.FBUID;
            long.TryParse(fbuid, out userId);

            if (userId > 0)
            {
                fbSession.UserId = userId;
                fbSession.Login();

                Api fbApi = new Facebook.Rest.Api(fbSession);
                string xmlQueryResult = fbApi.Fql.Query(
                    "SELECT user_id FROM like WHERE object_id = " + video.FBVID);

                XmlDocument xmlDoc = new XmlDocument();
                xmlDoc.Load(new StringReader(xmlQueryResult));
                int likesCount = xmlDoc.GetElementsByTagName("user_id").Count;

                // Write entry in VideoWallLikes
                if (likesCount > 0)
                {
                    db.CountWallLikes(video.ID, likesCount);
                }

                fbSession.Logout();
            }
            fbSession = null;
        }


  • updating a column in a table only if after the update it won't be negative and identifying all updat

    - by Azeem
    Hello all. I need some help with a SQL query. Here is what I need to do; I'm lost on a few aspects, as outlined below.

    I have four relevant tables:

    - Table A has the price per unit for all resources. I can look up the price using a resource id.
    - Table B has the funds available to a given user.
    - Table C has the resource production information for a given user (including the number of units to produce every day).
    - Table D has the number of units ever produced by any given user (identified by user id and resource id).

    Having said that, I need to run a batch job on a nightly basis to do the following:

    a. For all users, identify whether they have the funds needed to produce the number of resources specified in table C, and deduct the funds from table B if they are available (calculating the cost using table A).
    b. Start the process to produce resources and, after the resource production is complete, update table D using values from table C.

    I figured the second part can be done by using an UPDATE with a subquery. However, I'm not sure how I should go about doing part a. I can only think of using a cursor to fetch each row, examine and update. Is there a single SQL statement that will help me avoid having to process each row manually? Additionally, if any rows weren't updated, the part b SQL should not produce resources for that user.

    Basically, I'm attempting to modify the SQL used for this logic, which currently lives in a stored procedure, into something that will run a lot faster (and won't process each row separately). Please let me know any ideas and thoughts. Thanks! - Azeem
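
    One set-based shape for part a is a guarded UPDATE whose WHERE clause re-checks the funds, so only affordable users are charged. A hedged sketch via JDBC - all table and column names are invented, since the real schema isn't shown, and dialect details vary:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        // Set-based version of part (a): deduct the production cost only
        // where the balance covers it, in a single statement.
        public class NightlyRun {
            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:oracle:thin:@dbhost:1521:ORCL", "user", "password");
                     Statement stmt = conn.createStatement()) {
                    int charged = stmt.executeUpdate(
                        "UPDATE funds f " +
                        "SET f.balance = f.balance - " +
                        "  (SELECT SUM(c.units * a.unit_price) " +
                        "   FROM production c JOIN prices a ON a.resource_id = c.resource_id " +
                        "   WHERE c.user_id = f.user_id) " +
                        "WHERE f.balance >= " +
                        "  (SELECT SUM(c.units * a.unit_price) " +
                        "   FROM production c JOIN prices a ON a.resource_id = c.resource_id " +
                        "   WHERE c.user_id = f.user_id)");
                    // 'charged' counts the users who could afford the run;
                    // part (b) would then produce only for those users.
                    System.out.println(charged + " users charged");
                }
            }
        }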


  • Perl Capture and Modify STDERR before it prints to a file

    - by MicrobicTiger
    I have a Perl script which performs multiple external commands and prints the output from STDERR and STDOUT to a logfile, along with a series of my own print statements to act as documentation on the process. My problem is that STDERR repeats near-identical prints, as in the example below. I'd like to capture this before it prints and replace it with the final result for each of the commands I run.

        blocks evaluated : 0
        blocks evaluated : 10000
        blocks evaluated : 20000
        blocks evaluated : 30000
        ...
        blocks evaluated : 3420000
        blocks evaluated : 3428776

    Here's how I'm getting STDOUT and STDERR:

        my $logfile = "Logfile.log";    # log file name

        #--- Open log file for append if specified ---
        if ( $logfile ) {
            open ( OLDOUT, ">&", STDOUT ) or die "ERROR: Can't backup STDOUT location.\n";
            close STDOUT;
            open ( STDOUT, ">", $logfile ) or die "ERROR: Logfile [$logfile] cannot be opened.\n";
        }
        if ( $logfile ) {
            open ( OLDERR, ">&", STDERR ) or die "ERROR: Can't backup STDERR location.\n";
            close STDERR;
            open ( STDERR, '>&STDOUT' ) or die "ERROR: failed to pass STDERR to STDOUT.\n";
        }

    and closing them:

        close STDERR;
        open ( STDERR, ">&", OLDERR ) or die "ERROR: Can't fix that first thing you broke!\n";
        close STDOUT;
        open ( STDOUT, ">&", OLDOUT ) or die "ERROR: Can't fix that other thing you broke!\n";

    How do I access STDERR when each print is occurring, to do the replace? Or prevent it from printing if it isn't the last of the batch? Many thanks in advance.


  • Slowing process creation under Java?

    - by oconnor0
    I have a single, large-heap (up to 240GB, though in the 20-40GB range for most of this phase of execution) JVM [1] running under Linux [2] on a server with 24 cores. We have tens of thousands of objects that have to be processed by an external executable, and we then load the data created by those executables back into the JVM. Each executable produces about half a megabyte of data (on disk) that, when read right in after the process finishes, is, of course, larger.

    Our first implementation was to have each executable handle only a single object. This involved spawning twice as many executables as we had objects (since we called a shell script that called the executable). Our CPU utilization would start off high, but not necessarily 100%, and slowly worsen. As we began measuring to see what was happening, we noticed that the process creation time [3] continually slows. While starting at sub-second times, it would eventually grow to take a minute or more. The actual processing done by the executable usually takes less than 10 seconds.

    Next we changed the executable to take a list of objects to process, in an attempt to reduce the number of processes created. With batch sizes of a few hundred (~1% of our current sample size), the process creation times start out around 2 seconds and grow to around 5-6 seconds.

    Basically, why is it taking so long to create these processes as execution continues?

    [1] Oracle JDK 1.6.0_22
    [2] Red Hat Enterprise Linux Advanced Platform 5.3, Linux kernel 2.6.18-194.26.1.el5 #1 SMP
    [3] Creation of the ProcessBuilder object, redirecting the error stream, and starting it.
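
    A classic suspect with heaps this size is that spawning a child can require the kernel to duplicate the parent's page tables, which grows with the address space. To rule the JVM's own overhead in or out, it can help to time exactly the three steps from footnote [3] in isolation as the heap grows; a small sketch (the command is arbitrary):

        import java.io.IOException;

        // Time just the steps from footnote [3]: build the ProcessBuilder,
        // redirect stderr, and start the process.
        public class SpawnTiming {
            public static void main(String[] args) throws IOException, InterruptedException {
                for (int i = 0; i < 100; i++) {
                    long t0 = System.nanoTime();
                    ProcessBuilder pb = new ProcessBuilder("/bin/true");
                    pb.redirectErrorStream(true);
                    Process p = pb.start();
                    long elapsedMs = (System.nanoTime() - t0) / 1000000L;
                    p.waitFor();
                    System.out.println("spawn " + i + ": " + elapsedMs + " ms");
                }
            }
        }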


  • Using a single texture image unit with multiple sampler uniforms

    - by bcrist
    I am writing a batching system which tracks currently bound textures in order to avoid unnecessary glBindTexture() calls. I'm not sure if I need to keep track of which textures have already been used by a particular batch, so that if a texture is used twice, it will be bound to a different TIU for the second sampler which requires it.

    Is it acceptable for an OpenGL application to use the same texture image unit for multiple samplers within the same shader stage? What about samplers in different shader stages? For example:

    Fragment shader:

        ...
        uniform sampler2D samp1;
        uniform sampler2D samp2;

        void main() { ... }

    Main program:

        ...
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, tex_id);
        glUniform1i(samp1_location, 0);
        glUniform1i(samp2_location, 0);
        ...

    I don't see any reason why this shouldn't work, but what about if the shader program also included a vertex shader like this?

    Vertex shader:

        ...
        uniform sampler2D samp1;

        void main() { ... }

    In this case, OpenGL is supposed to treat both instances of samp1 as the same variable, and exposes a single location for them. Therefore, the same texture unit is being used in the vertex and fragment shaders. I have read that using the same texture in two different shader stages counts doubly against GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, but this would seem to contradict that.

    In a quick test on my hardware (HD 6870), all of the following scenarios worked as expected:

    - 1 TIU used for 2 sampler uniforms in the same shader stage
    - 1 TIU used for 1 sampler uniform which is used in 2 shader stages
    - 1 TIU used for 2 sampler uniforms, each occurring in a different stage

    However, I don't know if this is behavior that I should expect on all hardware/drivers, or if there are performance implications.


  • Extending the CI_DB_active_record class in codeigniter 2.0

    - by ctrane
    I am writing my first program with CodeIgniter, and have run into a problem. I will start with a focused description of the problem and can broaden it if I need to.

    I need to write a multi-dimensional array to the DB and want to use the insert_batch function from the CI_DB_active_record class to do so. The problem is that I need to write empty values as NULL for some fields, while other fields need to be empty strings. The current function wraps all values with single quotes, and I cannot find a way to write null values to the database for specified fields. I would also like to increase the number of records per batch.

    I see how to extend models, libraries, etc., but is there a way to extend the CI_DB_active_record class without modifying core classes? The minimal amount of core class modification to make this work that I have found is modifying the following lines in the DB.php file (changing the require_once file to the new file that extends the CI_DB_active_record class, and changing the CI_DB_active_record class name to the new class name):

        require_once(BASEPATH.'database/DB_active_rec'.EXT);

        if ( ! class_exists('CI_DB'))
        {
            eval('class CI_DB extends CI_DB_active_record { }');
        }

    Can I do better?


  • C# WPF Unable to control Textboxes

    - by Bo0m3r
    I'm a beginner at coding in C#. When I launch a process, I can't control my textboxes. I found some answers on this forum, but the explanation is a bit too difficult for me to implement for my problem. I created a small program that runs a batch file to make a backup. While the backup is running I can't modify my textboxes, disable buttons, etc. I already saw that this is normal, but I don't know how to implement the solutions. My last attempt was with Dispatcher.Invoke, as you can see below.

        public partial class MainWindow : Window
        {
            public MainWindow()
            {
                InitializeComponent();
                tb_Status.Text = "Ready";
            }

            public void status()
            {
                Dispatcher.Invoke(DispatcherPriority.Send,
                    new Action(() => { tb_Status.Text = "The backup is running!"; }));
            }

            public void process()
            {
                try
                {
                    Process p = new Process();
                    p.StartInfo.WindowStyle = ProcessWindowStyle.Minimized;
                    p.StartInfo.CreateNoWindow = true;
                    p.StartInfo.UseShellExecute = false;
                    p.StartInfo.RedirectStandardOutput = true;
                    p.StartInfo.FileName = "Robocopy.bat";
                    p.Start();
                    string output = p.StandardOutput.ReadToEnd();
                    p.WaitForExit();
                    tb_Output.Text = File.ReadAllText("Backup\\log.txt");
                }
                catch (Exception ex)
                {
                    tb_Status.Text = ex.Message.ToString();
                }
            }

            private void Bt_Start_Click(object sender, EventArgs e)
            {
                status();
                Directory.CreateDirectory("Backup");
                process();
                tb_Status.Text = "The backup finished";
                File.Delete("Backup\\log.txt");
            }
        }

    Any help is appreciated!
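
    The underlying issue is toolkit-agnostic: the backup runs on the UI thread (inside the button handler), so the window cannot repaint or respond until it finishes. The usual fix is to run the work on a background thread and touch controls only when it completes. A sketch of that pattern in Java Swing, purely for illustration (SwingWorker plays the role a BackgroundWorker or Task plus Dispatcher would play in WPF; runBackup() is a stand-in for launching the batch file):

        import javax.swing.JLabel;
        import javax.swing.SwingWorker;

        class BackupStarter {
            static void startBackup(JLabel status) {
                status.setText("The backup is running!");
                new SwingWorker<Void, Void>() {
                    @Override
                    protected Void doInBackground() {
                        runBackup();            // heavy work, off the UI thread
                        return null;
                    }

                    @Override
                    protected void done() {     // called back on the UI thread
                        status.setText("The backup finished");
                    }
                }.execute();
                // startBackup() returns immediately, so buttons and
                // textboxes stay responsive while the backup runs.
            }

            static void runBackup() { /* launch the external process here */ }
        }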


  • Can't push to GitHub

    - by John
    I just completed chapter one of the Ruby on Rails Tutorial by Hartl (I posted about one minor hitch previously). Now I've started chapter two. I swear I did everything by the book, but when I try:

        git push -u origin master

    I get the following messages after entering my passphrase:

        ERROR: repository not found
        fatal: could not read from remote repository

        Please make sure you have the correct access rights and that the repository exists.

    When I downloaded the Heroku tools, I think it installed a second version of Ruby on my machine. In any case, I now have two versions listed under All Programs. Could this have screwed things up? The two versions are Ruby 1.9.2-p290 and 1.9.3-p327. Also, when I open the command prompt using 1.9.2 there is a weird thing at the top before I do anything:

        'C:\Program' is not recognized as an internal or external command, operable program or batch file.

    This is then followed by the normal prompt on the next line. I'm wondering if the use of my public keys has somehow gotten screwed up. Any help would be appreciated.


  • Question about <foreach> task and the failonerror attribute?

    - by Mike M
    Hi guys. I have made a build file for the automated compilation of Oracle Forms files. An excerpt of the code is as follows:

        <target name="build" description="compiles the source code">
          ...
          <foreach item="File" property="filename" failonerror="false">
            <in>
              <items basedir="${source.directory}\${project.type}\Forms">
                <include name="*.fmb" />
              </items>
            </in>
            <do>
              <exec program="${forms.path}"
                    workingdir="${source.directory}\${project.type}\Forms"
                    commandline="module=${filename} userid=${username}/${password}@${database} batch=yes module_type=form compile_all=yes window_state=minimize" />
            </do>
          </foreach>
          ...
        </target>

    The build file navigates to the directory containing the forms that the user desires to compile and attempts to compile each form. The failonerror attribute is set to false so that the build file does not exit if a compilation error occurs. Unfortunately, however, though this prevents the build file from exiting when a compilation error occurs, it also appears to make the build file exit the <foreach> task. This is a problem because, unless the form that does not compile successfully is the last to be tested (based on the filename of the form, in alphanumerical descending order), there will be one or more forms that the build file does not attempt to compile.

    So, for example, if the folder containing the forms to be compiled contains 10 forms and the first form does not compile successfully, the build file will not attempt to compile the remaining 9 forms (i.e., it exits the <foreach> task). Is there a way to make the build file attempt to compile the remaining forms after it fails to compile a form? Thanks in advance!


  • Parallelism in .NET – Part 10, Cancellation in PLINQ and the Parallel class

    - by Reed
    Many routines are parallelized because they are long-running processes. When writing an algorithm that will run for a long period of time, it's typically a good practice to allow that routine to be cancelled. I previously discussed terminating a parallel loop from within, but have not demonstrated how a routine can be cancelled from the caller's perspective. Cancellation in PLINQ and the Task Parallel Library is handled through a new, unified cooperative cancellation model introduced with .NET 4.0.

    Cancellation in .NET 4 is based around a new, lightweight struct called CancellationToken. A CancellationToken is a small, thread-safe value type which is generated via a CancellationTokenSource. There are many goals which led to this design. For our purposes, we will focus on a couple of specific design decisions:

    - Cancellation is cooperative. A calling method can request a cancellation, but it's up to the processing routine to terminate - it is not forced.
    - Cancellation is consistent. A single method call requests a cancellation on every copied CancellationToken in the routine.

    Let's begin by looking at how we can cancel a PLINQ query. Suppose we wanted to provide the option to cancel our query from Part 6:

        double min = collection
            .AsParallel()
            .Min(item => item.PerformComputation());

    We would rewrite this to allow for cancellation by adding a call to ParallelEnumerable.WithCancellation as follows:

        var cts = new CancellationTokenSource();

        // Pass cts here to a routine that could,
        // in parallel, request a cancellation

        try
        {
            double min = collection
                .AsParallel()
                .WithCancellation(cts.Token)
                .Min(item => item.PerformComputation());
        }
        catch (OperationCanceledException e)
        {
            // Query was cancelled before it finished
        }

    Here, if the user calls cts.Cancel() before the PLINQ query completes, the query will stop processing, and an OperationCanceledException will be raised. Be aware, however, that cancellation will not be instantaneous. When cts.Cancel() is called, the query will only stop after the current item.PerformComputation() elements all finish processing. cts.Cancel() will prevent PLINQ from scheduling a new task for a new element, but will not stop items which are currently being processed.

    This goes back to the first goal I mentioned - cancellation is cooperative. Here, we're requesting the cancellation, but it's up to PLINQ to terminate. If we wanted to allow cancellation to occur within our routine, we would need to change our routine to accept a CancellationToken, and modify it to handle this specific case:

        public void PerformComputation(CancellationToken token)
        {
            for (int i = 0; i < this.iterations; ++i)
            {
                // Add a check to see if we've been canceled.
                // If a cancel was requested, we'll throw here.
                token.ThrowIfCancellationRequested();

                // Do our processing now
                this.RunIteration(i);
            }
        }

    With this overload of PerformComputation, each internal iteration checks to see if a cancellation request was made, and will throw an OperationCanceledException at that point, instead of waiting until the method returns. This is good, since it allows us, as developers, to plan for cancellation, and terminate our routine in a clean, safe state. This is handled by changing our PLINQ query to:

        try
        {
            double min = collection
                .AsParallel()
                .WithCancellation(cts.Token)
                .Min(item => item.PerformComputation(cts.Token));
        }
        catch (OperationCanceledException e)
        {
            // Query was cancelled before it finished
        }

    PLINQ is very good about handling this exception, as well. There is a very good chance that multiple items will raise this exception, since the entire purpose of PLINQ is to have multiple items be processed concurrently. PLINQ will take all of the OperationCanceledException instances raised within these methods, and merge them into a single OperationCanceledException in the call stack. This is done internally because we added the call to ParallelEnumerable.WithCancellation. If, however, a different exception is raised by any of the elements, the OperationCanceledException as well as the other Exception will be merged into a single AggregateException.

    The Task Parallel Library uses the same cancellation model, as well. Here, we supply our CancellationToken as part of the configuration. The ParallelOptions class contains a property for the CancellationToken. This allows us to cancel a Parallel.For or Parallel.ForEach routine in a very similar manner to our PLINQ query. As an example, we could rewrite our Parallel.ForEach loop from Part 2 to support cancellation by changing it to:

        try
        {
            var cts = new CancellationTokenSource();
            var options = new ParallelOptions() { CancellationToken = cts.Token };

            Parallel.ForEach(customers, options, customer =>
            {
                // Run some process that takes some time...
                DateTime lastContact = theStore.GetLastContact(customer);
                TimeSpan timeSinceContact = DateTime.Now - lastContact;

                // Check for cancellation here
                options.CancellationToken.ThrowIfCancellationRequested();

                // If it's been more than two weeks, send an email, and update...
                if (timeSinceContact.Days > 14)
                {
                    theStore.EmailCustomer(customer);
                    customer.LastEmailContact = DateTime.Now;
                }
            });
        }
        catch (OperationCanceledException e)
        {
            // The loop was cancelled
        }

    Notice that here we use the same approach taken in PLINQ. The Task Parallel Library will automatically handle our cancellation in the same manner as PLINQ, providing a clean, unified model for cancellation of any parallel routine. The TPL performs the same aggregation of the cancellation exceptions as PLINQ, as well, which is why a single exception handler for OperationCanceledException will cleanly handle this scenario. This works because we're using the same CancellationToken provided in the ParallelOptions. If a different exception was thrown by one thread, or a CancellationToken from a different CancellationTokenSource was used to raise our exception, we would instead receive all of our individual exceptions merged into one AggregateException.

