Search Results

Search found 7651 results on 307 pages for 'execution plan'.

Page 256/307 | < Previous Page | 252 253 254 255 256 257 258 259 260 261 262 263  | Next Page >

  • Creating a global variable on the fly. [PHP ENCRYPTION]

    - by stormdrain
    Is there a way to dynamically create constant variables on the fly? The idea is that upon logging into the system, a user would be asked to upload a small text file, which would be read with fread and assigned to a variable accessible throughout the system. If this is possible, just to be clear, would this variable then be accessible only to that user, and only while the session is alive? Security being the main concern here, would it be more practical to store the value in a session variable?

    The plan: data in the db will be encrypted via mcrypt, and the key will be stored on USB thumbdrives. The user will insert the thumbdrive when going to access the system. Upon logging in, the app will prompt the user to upload the key. They will navigate to the thumbdrive and select the key file. Via fopen and fread, the key will be assigned to a global variable, which will then allow access to the encrypted data and will be used to encrypt new info being entered into the db. When the user logs out, or the session times out, the global variable will become empty. Thanks!
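
    A minimal sketch of the session-variable approach (the key_file field name and the $_SESSION key are hypothetical, not from the question; session data lives server-side and is scoped to one user's live session):

        <?php
        session_start();

        // On the key-upload POST: read the uploaded key file into the session.
        if (isset($_FILES['key_file']) && $_FILES['key_file']['error'] === UPLOAD_ERR_OK) {
            $_SESSION['mcrypt_key'] = file_get_contents($_FILES['key_file']['tmp_name']);
        }

        // Anywhere else in the app, while the session is alive:
        $key = isset($_SESSION['mcrypt_key']) ? $_SESSION['mcrypt_key'] : null;

        // On logout or session timeout the key disappears with the session
        // (calling session_destroy() on logout makes that explicit).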

    Read the article

  • CIE XYZ colorspace: do I have RGBA or XYZA?

    - by Tronic
    I plan to write a painting program based on linear combinations of the xy-plane points (0,1), (1,0) and (0,0). Such a system works identically to RGB, except that the primaries are not within the gamut but at the corners of a triangle that encloses the entire gamut. I have seen the three points referred to as X, Y and Z (upper case) somewhere, but I cannot find the page anymore (I marked them on the picture myself).

    My pixel format stores the intensity of each of those three components the same way RGB does, together with an alpha value. This allows using pretty much any image manipulation operation designed for RGBA without modifying the code. What is my format called? Is it XYZA, RGBA or something else? Google doesn't seem to know of XYZA, and RGBA will get confused with sRGB + alpha (which I also need to use in the same program). Notice that the primaries X, Y and Z and their intensities have little to do with the x, y and z coordinates (lower case) that are more commonly used.

    Read the article

  • CCNet exception during build of vs2010 project

    - by sonee
    We have two build machines. Lately, we've migrated our projects from VS2005 to VS2010, but now one of the machines errors during the build while the other works fine. The differences between the machines are the OS and the hardware: the working machine runs Windows Server 2003, and the failing one runs Windows 7. The error message is:

        unhandled exception: System.NullReferenceException:
          Microsoft.VisualStudio.Shell.ThreadHelper.InvokeOnUIThread(InvokableBase invokable)
          Microsoft.VisualStudio.Shell.ThreadHelper.Invoke(Action action)
          Microsoft.VisualStudio.Project.VS.Implementation.VSShellServices.InvokeOnUIThread(Action method)
          Microsoft.VisualStudio.Project.VisualC.VCProjectEngine.ApartmentMarshaler.Invoke(Action method)
          Microsoft.VisualStudio.Project.VisualC.VCProjectEngine.VCConfigBuildJob.BuildCompleted(BuildSubmission ar)
          Microsoft.VisualStudio.Project.Contracts.Implementation.BuildProjectBase.BuildCompletedCallbackManager.BuildCompleted(BuildSubmission buildSubmission)
          Microsoft.Build.Execution.BuildSubmission.<CheckForCompletion>b__0(Object state)
          System.Threading.QueueUserWorkItemCallback.WaitCallback_Context(Object state)
          System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx)
          System.Threading.QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()
          System.Threading.ThreadPoolWorkQueue.Dispatch()
          System.Threading._ThreadPoolWaitCallback.PerformWaitCallback()

    Curiously enough, when I build the project from the command line on the failing machine, it works well; it only errors when the build is launched by CCNet. I've installed the latest version of CCNet on all machines. Has anybody faced a problem like this?

    Read the article

  • Endianness and C APIs: specifically OpenSSL

    - by Hassan Syed
    I have an algorithm that uses the following OpenSSL calls:

        HMAC_Update() / HMAC_Final()             // RIPEMD-160
        EVP_CipherUpdate() / EVP_CipherFinal()   // CBC Blowfish

    These algorithms take an unsigned char * into the "plain text". My input data comes from a C++ std::string::c_str(), which originates from a protocol buffer object as an encoded UTF-8 string. UTF-8 strings are meant to be endian neutral. However, I'm a bit paranoid about how OpenSSL may perform operations on the data. My understanding is that encryption algorithms work on 8-bit blocks of data, and if an unsigned char * is used for pointer arithmetic when the operations are performed, the algorithms should be endian neutral and I do not need to worry about anything.

    My uncertainty is compounded by the fact that I am working on a little-endian machine and have never done any real cross-architecture programming. My beliefs/reasoning are based on the following two properties:

        1. std::string (not wstring) internally uses an 8-bit pointer, and the resulting c_str() pointer will iterate the same way regardless of the CPU architecture.
        2. Encryption algorithms are either by design, or by implementation, endian neutral.

    I know the best way to get a definitive answer is to use QEMU and do some cross-platform unit tests (which I plan to do). My question is a request for comments on my reasoning, and perhaps it will assist other programmers faced with similar problems.
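
    A sketch of the byte-oriented usage in question (OpenSSL 1.0-era HMAC interface; the wrapper function is invented for illustration). Because the digest consumes the buffer one unsigned char at a time, in memory order, host endianness never enters the picture:

        #include <openssl/hmac.h>
        #include <openssl/evp.h>

        /* HMAC-RIPEMD160 over a UTF-8 byte buffer. */
        int hmac_utf8(const unsigned char *key, int keylen,
                      const unsigned char *data, size_t datalen,
                      unsigned char out[EVP_MAX_MD_SIZE], unsigned int *outlen)
        {
            HMAC_CTX ctx;
            HMAC_CTX_init(&ctx);
            HMAC_Init_ex(&ctx, key, keylen, EVP_ripemd160(), NULL);
            HMAC_Update(&ctx, data, datalen);   /* walks the bytes in memory order */
            HMAC_Final(&ctx, out, outlen);
            HMAC_CTX_cleanup(&ctx);
            return 1;
        }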

    Read the article

  • "Special case" records for foreign key constraints

    - by keithjgrant
    Let's say I have a MySQL table, called foo, with a foreign key option_id constrained to the option table. When I create a foo record, the user may or may not have selected an option, and 'no option' is a viable selection. What is the best way to differentiate between null (i.e. the user hasn't made a selection yet) and 'no option' (i.e. the user selected 'no option')?

    Right now, my plan is to insert a special record into the option table. Let's say that winds up with an id of 227 (this table already has a number of records at this point, so '1' isn't available). I have no need to access this record at a database level; it would act as nothing more than a placeholder that the foreign key in the foo table can reference. So do I just hard-code '227' in my codebase when I'm creating foo records where the user has selected 'no option'? The hard-coded id seems sloppy and leaves room for error as the code is maintained down the road, but I'm not really sure of another approach.
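
    One common alternative (a sketch; the code column is an assumption, not part of the question's schema) is to tag the sentinel row with a stable symbolic key and resolve the id at runtime instead of hard-coding 227:

        -- One-time setup: give the sentinel row a stable code.
        ALTER TABLE `option` ADD COLUMN code VARCHAR(32) NULL;
        CREATE UNIQUE INDEX idx_option_code ON `option` (code);
        UPDATE `option` SET code = 'NO_OPTION' WHERE id = 227;

        -- At runtime: resolve the id by its code, never by a magic number.
        SELECT id FROM `option` WHERE code = 'NO_OPTION';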

    Read the article

  • iPad SQLite Push and Pull Data from external MS SQL Server DB

    - by MattyD
    This carries on from my previous post (http://stackoverflow.com/questions/4182664/ipad-app-pull-and-push-relational-data). My plan is that when the iPad application starts, I am going to pull data (config data, i.e. Departments, Types, etc.: relational data that is used across the system) from a web-hosted MS SQL Server DB via a web service and populate it into a SQLite DB on the iPad. Then when I load a listing, I will pull the data over the line again via a web service and populate it into the SQLite db on the iPad (then just run select commands to populate the listing). My questions are:

        1. What is the most efficient way to transfer data across the line via the web? Everyone seems to do it a different way. My idea is that I will have a web service for each type of data pull (e.g. RetrieveContactListing) that will query the db and then convert that data into "something" to send across the line. My question really is: what is the "something" it should be converted into?
        2. Everyone talks about OData services. Are they suited to applications where complex reads and writes are needed?

    I've created a simple iPhone app before that talked to a SQL Server db (I just sent my own structured XML across the line), but with this app the data calls are going to be a lot larger, so efficiency is key.
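
    For question 1, the usual candidates are XML (as in the previous app) or JSON. A hypothetical RetrieveContactListing response might look like the sketch below (field names invented for illustration); JSON is typically more compact on the wire and cheap to parse on the device:

        {
          "contacts": [
            { "id": 17, "name": "J. Smith", "departmentId": 3, "typeId": 1 },
            { "id": 18, "name": "A. Jones", "departmentId": 3, "typeId": 2 }
          ]
        }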

    Read the article

  • Sybase - values from one table that aren't in another, on opposite ends of a 3-table join

    - by Lazy Bob
    Hypothetical situation: I work for a custom sign-making company, and some of our clients have submitted more sign designs than they're currently using. I want to know which signs have never been used. There are 3 tables involved:

    Table A - signs for a company:

        sign_pk (unique) | company_pk | sign_description
        -----------------+------------+------------------
        1                | 1          | small
        2                | 1          | large
        3                | 2          | medium
        4                | 2          | jumbo
        5                | 3          | banner

    Table B - company locations:

        company_pk | company_location (unique)
        -----------+---------------------------
        1          | 987
        1          | 876
        2          | 456
        2          | 123

    Table C - signs at locations (it's a bit of a stretch, but each row can have 2 signs, and it's a one-to-many relationship from company location to signs at locations):

        company_location | front_sign | back_sign
        -----------------+------------+-----------
        987              | 1          | 2
        987              | 2          | 1
        876              | 2          | 1
        456              | 3          | 4
        123              | 4          | 3

    So, a.company_pk = b.company_pk and b.company_location = c.company_location. What I want to find is how to query and get back that sign_pk 5 isn't at any location. Querying each sign_pk against all of the front_sign and back_sign values is a little impractical, since all the tables have millions of rows. Table A is indexed on sign_pk and company_pk, table B on both fields, and table C only on company locations. The way I'm trying to write it is along the lines of "each sign belongs to a company, so find the signs that are not the front or back sign at any of the locations that belong to the company tied to that sign." My original plan was:

        select a.sign_pk
        from a, b, c
        where a.company_pk = b.company_pk
          and b.company_location = c.company_location
          and a.sign_pk *= c.front_sign
        group by a.sign_pk
        having count(c.front_sign) = 0

    just to do the front sign, and then repeat for the back, but that won't run because c is an inner member of an outer join and also in an inner join. This whole thing is fairly convoluted, but if anyone can make sense of it, I'll be your best friend.
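
    A sketch of one standard approach (treating the question's a/b/c aliases as actual table names, which is an assumption): for each sign, ask whether any location of its company uses it on either face, and keep the signs for which no such row exists. This sidesteps the outer-join restriction entirely:

        select a.sign_pk
        from a
        where not exists (
            select 1
            from b, c
            where b.company_pk = a.company_pk
              and b.company_location = c.company_location
              and (c.front_sign = a.sign_pk or c.back_sign = a.sign_pk)
        )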

    Read the article

  • Finding cause of memory leaks in large PHP stacks

    - by Mike B
    I have a CLI script that runs over several thousand iterations between runs, and it appears to have a memory leak. I'm using a tweaked version of Zend Framework with Smarty for view templating, and each iteration uses several MB worth of code. The first run immediately uses nearly 8MB of memory (which is fine), but every following run adds about 80KB. My main loop looks like this (very simplified):

        $users = UsersModel::getUsers();
        foreach ($users as $user) {
            $obj = new doSomethingAwesome();
            $obj->run($user);
            $obj = null;
            unset($obj);
        }

    The point is that everything in scope should be unset and the memory freed. My understanding is that PHP runs its garbage collection process at its own discretion, but does so at the end of functions/methods/scripts. So something must be leaking memory inside doSomethingAwesome(), but as I said, it is a huge stack of code. Ideally, I would love to find some sort of tool that displays all my variables, no matter the scope, at some point during execution: some sort of symbol-table viewer for PHP. Does anything like that, or any other tool that could help nail down memory leaks in PHP, exist?
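
    A cheap first step (a generic sketch, not specific to Zend Framework or Smarty) is to log the delta of memory_get_usage() on each iteration to confirm where the growth happens:

        $users = UsersModel::getUsers();
        $last = memory_get_usage();
        foreach ($users as $user) {
            $obj = new doSomethingAwesome();
            $obj->run($user);
            unset($obj);

            $now = memory_get_usage();
            // A steady positive delta here points at state accumulating
            // somewhere inside doSomethingAwesome().
            fwrite(STDERR, sprintf("delta: %d bytes\n", $now - $last));
            $last = $now;
        }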

    Read the article

  • Providing custom database functionality to custom asp.net membership provider

    - by IrfanRaza
    Hello friends, I am creating a custom membership provider for my ASP.NET application. I have also created a separate class, DBConnect, that provides database functionality such as executing SQL statements, executing SPs, and executing SPs or queries that return a SqlDataReader, and so on. I create an instance of the DBConnect class within Session_Start of Global.asax and store it in a session. Later, using a static class, I provide the database functionality throughout the application using that same single session. In short, I am providing a single point for all database operations from any ASP.NET page. I know that I can write my own code to connect/disconnect the database and execute SPs within the methods I need to override. Please look at the code below:

        public class SGI_MembershipProvider : MembershipProvider
        {
            ......

            public override bool ChangePassword(string username, string oldPassword, string newPassword)
            {
                if (!ValidateUser(username, oldPassword))
                    return false;

                ValidatePasswordEventArgs args = new ValidatePasswordEventArgs(username, newPassword, true);
                OnValidatingPassword(args);

                if (args.Cancel)
                {
                    if (args.FailureInformation != null)
                    {
                        throw args.FailureInformation;
                    }
                    else
                    {
                        throw new Exception("Change password canceled due to new password validation failure.");
                    }
                }
                .....
                // Database connectivity and code execution to change the password.
            }
            ....
        }

    MY PROBLEM: Now what I need is to execute the database part within all these overridden methods from the same database point described at the top. That is, I have to pass the instance of DBConnect existing in the session to this class, so that I can access its methods. Could anyone provide a solution for this? There might be better techniques I am not aware of, and the approach I am using might be wrong. Your suggestions are always welcome. Thanks for sharing your valuable time.
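
    One way to reach the session-held instance from inside the provider (a sketch; it assumes the session key "DBConnect" and only works when the provider runs inside a request that has session state available):

        using System;
        using System.Web;

        private DBConnect GetDb()
        {
            // HttpContext.Current is null outside a request, and Session can be
            // null before session state is initialized, so guard both.
            HttpContext ctx = HttpContext.Current;
            DBConnect db = (ctx != null && ctx.Session != null)
                ? ctx.Session["DBConnect"] as DBConnect
                : null;
            if (db == null)
                throw new InvalidOperationException("DBConnect is not available in this context.");
            return db;
        }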

    Read the article

  • Shrinking the transaction log of a mirrored SQL Server 2005 database

    - by Peter Di Cecco
    I've been looking all over the internet and I can't find an acceptable solution to my problem; I'm wondering if there even is a solution without a compromise... I'm not a DBA, but I'm a one-man team working on a huge web site with no extra funding for extra bodies, so I'm doing the best I can.

    Our backup plan sucks, and I'm having a really hard time improving it. Currently, there are two servers running SQL Server 2005. I have a mirrored database (no witness) that seems to be working well. I do a full backup at noon and at midnight. These get backed up to tape by our service provider nightly, and I burn the backup files to DVD weekly to keep old records on hand. Eventually I'd like to switch to log shipping, since mirroring seems kind of pointless without a witness server.

    The issue is that the transaction log is growing non-stop. From the research I've done, it seems that I can't truncate the log file of a mirrored database. So how do I stop the file from growing!? Based on this web page, I tried this:

        USE dbname
        GO
        CHECKPOINT
        GO
        BACKUP LOG dbname TO DISK='NULL' WITH NOFORMAT, INIT,
            NAME = N'dbnameLog Backup', SKIP, NOREWIND, NOUNLOAD
        GO
        DBCC SHRINKFILE('dbname_Log', 2048)
        GO

    But that didn't work. Everything else I've found says I need to disable the mirror before running the backup log command in order for it to work.

    My Question (TL;DR): How can I shrink my transaction log file without disabling the mirror?
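
    For reference, a sketch of the approach that is compatible with a mirror (assuming the FULL recovery model; the backup path is invented): take genuine log backups to a real destination so the inactive portion of the log becomes reusable, then shrink.

        -- Back up the log to an actual file (not DISK='NULL'); this frees
        -- inactive virtual log files without touching the mirror.
        BACKUP LOG dbname TO DISK = N'E:\Backups\dbname_log.trn'
        GO
        USE dbname
        GO
        -- Now the shrink has reclaimed log space to release.
        DBCC SHRINKFILE(N'dbname_Log', 2048)
        GO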

    Read the article

  • PHP file upload issue

    - by Varun
    I am working on a PHP-based ticket management system. While creating a ticket, one can upload an attachment. I want to put a limit (say 10 MB) per file upload. To implement this I plan the following:

        1. In php.ini, set post_max_size = 10M.
        2. In the PHP script that receives the POST: since the file is larger than post_max_size, $_FILES[] will be empty, but I can still check the Content-Length header and discard the upload if the size is more than 10M.

    While testing this I tried uploading a file of 1 GB and analysed the HTTP traffic, and this is what I found:

        - The entire 1 GB of data is first uploaded to the server temporarily and discarded once the HTTP request completes. I couldn't find out exactly where the file was getting saved (it was not in the temporary directory on the server), but my HTTP traffic analyzer showed that the browser did send 1 GB of data to the server.
        - The PHP script execution started only after completion of the HTTP request (i.e. after uploading the entire 1 GB).

    Now I have 2 concerns:

        a) People may exploit my server bandwidth by trying to upload large files, which I will have to discard anyway.
        b) Even worse, if someone starts uploading a huge file (say 100 GB), the entire 100 GB of data is first uploaded to the server temporarily, which means that for that period it will consume that much memory on my server.

    What's the common solution for this? Am I missing something here?
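
    For the server-side check in step 2, a minimal sketch (the 10 MB limit is hard-coded for illustration):

        <?php
        // Reject oversized posts before doing any other work. The web server
        // sets $_SERVER['CONTENT_LENGTH'] even when PHP has discarded the body.
        $limit = 10 * 1024 * 1024; // 10 MB

        if (isset($_SERVER['CONTENT_LENGTH']) && (int)$_SERVER['CONTENT_LENGTH'] > $limit) {
            header('HTTP/1.1 413 Request Entity Too Large');
            exit('Attachment exceeds the 10 MB limit.');
        }

    Note that by the time PHP runs, the body has already crossed the wire; stopping the transfer earlier has to happen at the web-server layer (e.g. a request-size limit in Apache or nginx).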

    Read the article

  • Do database engines other than SQL Server behave this way?

    - by Yishai
    I have a stored procedure that goes something like this (pseudo code):

        storedprocedure param1, param2, param3, param4
        begin
            if (param4 = 'Y')
            begin
                select * from SOME_VIEW order by somecolumn
            end
            else if (param1 is null)
            begin
                select * from SOME_VIEW
                where (param2 is null or param2 = SOME_VIEW.Somecolumn2)
                  and (param3 is null or param3 = SOME_VIEW.SomeColumn3)
                order by somecolumn
            end
            else
                select somethingcompletelydifferent
        end

    All ran well for a long time. Suddenly, the query started running forever if param4 was 'Y'. Changing the code to this:

        storedprocedure param1, param2, param3, param4
        begin
            if (param4 = 'Y')
            begin
                set param2 = null
                set param3 = null
            end
            if (param1 is null)
            begin
                select * from SOME_VIEW
                where (param2 is null or param2 = SOME_VIEW.Somecolumn2)
                  and (param3 is null or param3 = SOME_VIEW.SomeColumn3)
                order by somecolumn
            end
            else
                select somethingcompletelydifferent
        end

    ...it runs again within expected parameters (15 seconds or so for 40,000+ records). This is with SQL Server 2005. The gist of my question is whether this particular "feature" is specific to SQL Server, or common among RDBMSs in general:

        - Queries that ran fine for two years just stop working as the data grows.
        - The "new" execution plan destroys the ability of the database server to execute the query, even though a logically equivalent alternative runs just fine.

    This may seem like a rant against SQL Server, and I suppose to some degree it is, but I really do want to know if others experience this kind of reality with Oracle, DB2 or any other RDBMS. Although I have some experience with others, I have only seen this kind of volume and complexity on SQL Server, so I'm curious whether others with large, complex databases have similar experiences in other products.
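
    One mitigation available in SQL Server 2005 for this kind of catch-all query, sketched here against the question's names (an assumption), is to request a fresh plan per execution so the optimizer compiles for the actual parameter values instead of reusing a plan sniffed from earlier ones:

        select * from SOME_VIEW
        where (@param2 is null or @param2 = SOME_VIEW.Somecolumn2)
          and (@param3 is null or @param3 = SOME_VIEW.SomeColumn3)
        order by somecolumn
        option (recompile)  -- compile a plan for this call's parameter values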

    Read the article

  • How to see external libraries' code when debugging

    - by Sanva
    Hello!! First of all... this is my message #1 in this place, so... please be nice to me ;)

    I recently started to study Gnome apps/libraries, and I found that debuggers are an excellent way to learn, because seeing the code running helps a lot in understanding the structure of the program. But I have a problem. For example, debugging gnome-panel I found a lot of calls to external functions (basically the GTK+ functions). While pretending to see all the code of every function such an application calls would be crazy, there are a lot that would be very interesting to see in action. The problem is that the debugger hasn't loaded the code of those libraries and can't show it to me; at most it shows the line number where the execution is.

    I'm using Nemiver, and when it tries to enter an external function it complains because it can't find a file that is supposed to be somewhere. For example, trying to enter gtk_window_set_default_icon_name it tries to load /build/buildd/gtk+2.0-2.16.1/gtk/gtkwindow.c, and calling XSetIOErrorHandler, ../../src/ErrHndlr.c. So now I think that I'm doing something wrong... Why is Nemiver looking for those source files in those places? My system does not even have the /build/buildd/ folders... and I don't know if I'm doing something wrong, or need to install something, or what.

    Any suggestion? How do you debug this kind of application? Best regards and thanks a lot for your time, and forgive me if my English is bad.
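
    (Those /build/buildd/... paths are where the distribution compiled the library; the debug info embeds them. The usual fix is to install the library's debug symbols, fetch its source locally, and remap the recorded path. A sketch of the idea with gdb, which Nemiver drives underneath; the local source path is invented:)

        # after installing the gtk+ debug symbols and fetching its source:
        (gdb) directory /home/me/src/gtk+2.0-2.16.1/gtk
        (gdb) set substitute-path /build/buildd/gtk+2.0-2.16.1 /home/me/src/gtk+2.0-2.16.1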

    Read the article

  • What is the most common way to use middleware in node with express and connect

    - by Bernhard
    I'm thinking about the correct way to make use of middleware in a node.js web project using express and connect, a project which is growing at the moment. Of course there are middlewares that have to pass through or extend requests globally, but in a lot of cases there are special jobs, like preparing incoming data, where the middleware should only apply to a set of HTTP methods and routes. I have a component-based architecture, and each component brings its own middleware layer, which can implement middleware for the requests that component handles. On app startup, every required component is loaded and prepared.

    Is it a good idea to bind middleware execution to URLs to keep CPU load lower, or is it better to use middleware only for global purposes? Here's a dummy of what a URL-related middleware looks like:

        app.use(function(req, res, next) {
            // Check if the requested route is part of the current component,
            // or if the middleware should be applied to every request
            if (APP.controller.groups.Component.isExpectedRoute(req) ||
                APP.controller.groups.Component.getConfig().MIDDLEWARE_PASS_ALL === true) {
                // Execute the middleware code here
                console.log('This is a route which should be affected by middleware');
                ...
                next();
            } else {
                next();
            }
        });
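
    For comparison, connect/express can also scope middleware by mount path, so the router does the URL check instead of the function body (a sketch; the /component prefix is invented):

        app.use('/component', function(req, res, next) {
            // runs only for requests whose path starts with /component
            console.log('component-scoped middleware');
            next();
        });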

    Read the article

  • multiple clients - one server connection with sockets tcp/ip c# .net

    - by jagse
    Hello guys, I need to develop a client-server system where I can have multiple clients communicating with one server at the same time. I want to communicate XML-serialized objects and also need to send and receive other commands to invoke methods. Now, I am just starting with socket programming in C# and .NET, and found that asynchronous I/O is the way to go so that the methods don't block the execution of code. There are also many examples of how to make a simple client-server system, so I have a basic understanding of how that works.

    Anyway, what is still not clear to me is how I can set up a server which can manage connections to multiple clients. Can I just create a new socket per connection and then store those in some kind of list? Do I need some kind of multiplexing to achieve this? Do I have to listen on multiple ports? What's the best way here?

    The other thing is whether I need to develop my own protocol to differentiate between what I am actually sending over the network: an XML-serialized object, or a command which might be just a string encoded in ASCII or something. Or would I develop my own protocol just to send these commands?

    Any kind of help is appreciated! If someone knows a good book which covers this sort of stuff, let me know. Cheers.

    I forgot to mention that some of the clients which are supposed to communicate with my server will be PDAs, and I therefore use the Compact Framework... so this might bring in some restrictions...
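
    On the server side the usual shape is one listening socket plus one connected socket per client, accepted asynchronously; no extra ports or multiplexing are needed. A minimal sketch (desktop .NET; the port number is arbitrary):

        using System;
        using System.Collections.Generic;
        using System.Net;
        using System.Net.Sockets;

        class Server
        {
            private readonly Socket listener =
                new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            private readonly List<Socket> clients = new List<Socket>();

            public void Start()
            {
                listener.Bind(new IPEndPoint(IPAddress.Any, 9000));
                listener.Listen(100);
                listener.BeginAccept(OnAccept, null); // accept without blocking
            }

            private void OnAccept(IAsyncResult ar)
            {
                Socket client = listener.EndAccept(ar); // one new socket per client
                lock (clients) { clients.Add(client); }
                // Kick off an async receive loop for this client here...
                listener.BeginAccept(OnAccept, null);   // keep accepting the next client
            }
        }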

    Read the article

  • ASP.NET Web Optimization - confusion about loading order

    - by Ciel
    Using the ASP.NET Web Optimization framework, I am attempting to load some JavaScript files. It works fine, except that I am running into a peculiar situation with either the loading order, the loading speed, or the execution. I cannot figure out which.

    Basically, I am using the Ace code editor for JavaScript, and I also want to include its autocompletion package. This requires two files:

        /ace.js
        /ext-language_tools.js

    This isn't an issue: if I load both of these files the normal way (with <script> tags) it works fine. But when I try to use the Web Optimization bundles, it seems as if something goes wrong. Trying this out...

        bundles.Add(new ScriptBundle("~/bundles/js")
            .Include("~/js/ace.js")
            .Include("~/js/ext-language_tools.js"));

    ...and then in the view:

        @Scripts.Render("~/bundles/js")

    I get the error:

        ace is not defined

    This means that the ace.js file hasn't run, or hasn't loaded, because if I break it apart into two bundles, it starts working:

        bundles.Add(new ScriptBundle("~/bundles/js")
            .Include("~/js/ace.js"));
        bundles.Add(new ScriptBundle("~/bundles/js/language_tools")
            .Include("~/js/ext-language_tools.js"));

    Can anyone explain why this would behave in this fashion?

    Read the article

  • Personalized UIView created with Interface Builder

    - by Malox
    I need to create a personalized UIView with a UIImageView and 3 UILabels. I need to allocate more than one of this view, because I want to put them into a UIScrollView. I would like to avoid generating the view programmatically, because designing it that way is difficult and boring. My idea is to create a new class that extends UIView and design it with Interface Builder. For example, my personalized view's code is like this:

        #import <UIKit/UIKit.h>

        @interface PersonalizedPreview : UIView {
            IBOutlet UIImageView *image;
            IBOutlet UILabel *first_label;
            IBOutlet UILabel *second_label;
            IBOutlet UILabel *third_label;
        }

        -(void) setImage:(UIImage *)image;

        @property (nonatomic, retain) IBOutlet UIImageView *image;
        @property (nonatomic, retain) IBOutlet UILabel *label;
        ....

        @end

    I would create an associated xib file for this view and initialize it simply by specifying the xib file. Note that I don't want to create a specific view controller for this view, and PersonalizedPreview is instantiated at runtime, not when the app launches; moreover, I don't know how many of these views I will instantiate, as it depends on runtime execution. Can anyone help me? Thank you very much.
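
    One common pattern (a sketch; it assumes the view is the first top-level object in a nib named PersonalizedPreview.xib and that its outlets are wired to the view itself):

        // Load a fresh instance of the view from the nib each time one is needed.
        NSArray *objects = [[NSBundle mainBundle] loadNibNamed:@"PersonalizedPreview"
                                                         owner:nil
                                                       options:nil];
        PersonalizedPreview *preview = (PersonalizedPreview *)[objects objectAtIndex:0];
        [scrollView addSubview:preview];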

    Read the article

  • using the :sass filter with :css

    - by corroded
    We are currently making a widget that requires some default declared styles along with it (the widget HTML is included by JavaScript, along with the default CSS in style tags), but the problem is I can't "chain" Haml filters. What I'm trying to do is add an internal stylesheet along with the widget, like so:

        <style type="text/css">
          p {color: #f00;}
        </style>
        <div id="widget-goes-here">
          <p>etc</p>
        </div>

    We are using Haml, so I tried doing it with the :sass filter:

        :sass
          p
            :color #f00
        #widget-goes-here
          %p etc

    Sadly, it just generated a div with a p, plus the generated CSS code literally on top:

        p {color: #f00;}
        paragraph here

    I then tried using the :css filter of Haml to enclose the thing in style tags (theoretically it should then turn the paragraph text color to red):

        :css
          :sass
            p
              :color #f00
        #widget-goes-here
          %p etc

    But this also failed: it did generate style tags, but it just enclosed the words ":sass p :color #f00" in them (it didn't parse the Sass code). We did change it to

        :css
          p {color: #f00}

    and it worked out fine, but I still plan on doing the styling in Sass (instead of plain old CSS). Is there a way to do this?
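
    One hedged possibility (an untested sketch): nest the :sass filter under an explicit %style tag, so Haml parses the Sass into CSS and the tag supplies the wrapper that :css would normally add:

        %style{:type => "text/css"}
          :sass
            p
              :color #f00
        #widget-goes-here
          %p etc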

    Read the article

  • Using LINQ to map dynamically (or construct projections)

    - by CodeGrue
    I know I can map two object types with LINQ using a projection, like so:

        var destModel = from m in sourceModel
                        select new DestModelType { A = m.A, C = m.C, E = m.E };

    where

        class SourceModelType
        {
            string A { get; set; }
            string B { get; set; }
            string C { get; set; }
            string D { get; set; }
            string E { get; set; }
        }

        class DestModelType
        {
            string A { get; set; }
            string C { get; set; }
            string E { get; set; }
        }

    But what if I want to make something like a generic to do this, where I don't know specifically the two types I am dealing with? It would walk the "Dest" type and match it with the corresponding "Source" properties. Is this possible? Also, to achieve deferred execution, I would want it to return an IQueryable. For example:

        public IQueryable<TDest> ProjectionMap<TSource, TDest>(IQueryable<TSource> sourceModel)
        {
            // dynamically build the LINQ projection based on the properties in TDest
            // return the IQueryable containing the constructed projection
        }

    I know this is challenging, but I hope not impossible, because it will save me a bunch of explicit mapping work between models and viewmodels.
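
    A sketch of how that body might be filled in with expression trees (assuming public, same-named, same-typed properties and a parameterless TDest constructor):

        using System;
        using System.Linq;
        using System.Linq.Expressions;
        using System.Reflection;

        public static class ProjectionMapper
        {
            public static IQueryable<TDest> ProjectionMap<TSource, TDest>(IQueryable<TSource> source)
            {
                // Builds: m => new TDest { X = m.X, ... } for every TDest property
                // that has a same-named property on TSource.
                ParameterExpression param = Expression.Parameter(typeof(TSource), "m");
                var bindings = typeof(TDest).GetProperties()
                    .Where(d => typeof(TSource).GetProperty(d.Name) != null)
                    .Select(d => Expression.Bind(d, Expression.Property(param, d.Name)));
                var body = Expression.MemberInit(Expression.New(typeof(TDest)), bindings);
                var selector = Expression.Lambda<Func<TSource, TDest>>(body, param);
                return source.Select(selector); // deferred: nothing runs until enumeration
            }
        }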

    Read the article

  • SqlCeResultSet Problem

    - by Vlad
    Hello, I have a SmartDevice project (.NET CF 2.0) configured to be tested on the USA Windows Mobile 5.0 Pocket PC R2 emulator. My project uses SQL CE 3.0. Understanding that a SmartDevice project needs to be "more careful" with the device's memory, I am using SqlCeResultSets. The result sets are strongly typed, autogenerated by Visual Studio 2008 using the custom tool MSResultSetGenerator. The problem I am facing is that the result set does not recognize any column names; the autogenerated code for the fields does not work. In the client code I am using:

        InfoResultSet rs = new InfoResultSet();
        rs.Open();
        rs.ReadFirst();
        string myFormattedDate = rs.MyDateColumn.ToString("dd/MM/yyyy");

    When the execution on the emulator reaches rs.MyDateColumn, the application throws a System.IndexOutOfRangeException. Investigating the stack trace:

        at System.Data.SqlServerCe.FieldNameLookup.GetOrdinal()
        at System.Data.SqlServerCe.SqlCeDataReader.GetOrdinal()

    I've tested the GetOrdinal method (in my autogenerated class that inherits SqlCeResultSet):

        this.GetOrdinal("MyDateColumn");      // throws an exception
        this.GetName(1);                      // returns "MyDateColumn"
        this.GetOrdinal(this.GetName(1));     // throws an exception :)

    [edit added] The table exists and is filled with data. Using typed DataSets works like a charm, and regenerating the SqlCeResultSet does not solve the issue; the problem remains. The problem, basically, is that I am not able to access a column by its name. The data can be accessed through this.GetDateTime(1), using the column ordinal, but the application fails executing this.GetOrdinal("MyDateColumn"). Also, I have updated Visual Studio 2008 to Service Pack 1. Additionally, I am developing the project on a virtual machine running Windows XP SP2, but in my opinion whether the environment is virtual or not should have no effect on development. Am I doing something wrong, or am I missing something? Thank you.

    Read the article

  • Memory leaks while using array of double

    - by Gacek
    I have a part of my code that operates on large arrays of double (containing at least 6000 elements each) and executes several hundred times (usually 800). When I use a standard loop, like this:

        double[] singleRow = new double[6000];
        int maxI = 800;
        for (int i = 0; i < maxI; i++)
        {
            singleRow = someObject.producesOutput();
            // ...
            // do something with singleRow
            // ...
        }

    the memory usage rises by about 40MB (from 40MB at the beginning of the loop to 80MB at the end). When I force the garbage collector to execute at every iteration, the memory usage stays at the level of 40MB (the rise is insignificant):

        double[] singleRow = new double[6000];
        int maxI = 800;
        for (int i = 0; i < maxI; i++)
        {
            singleRow = someObject.producesOutput();
            // ...
            // do something with singleRow
            // ...
            GC.Collect();
        }

    But then the execution time is 3 times longer! (And that is crucial.) How can I force C# to use the same area of memory instead of allocating new ones? Note: I have access to the code of someObject's class, so if it were needed, I could change it.
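
    Since the code of someObject can be changed, one standard fix is to reuse a caller-supplied buffer (a sketch; FillOutput is a hypothetical variant of producesOutput that writes into an existing array instead of allocating a new one):

        double[] singleRow = new double[6000]; // allocated once, reused every iteration
        int maxI = 800;
        for (int i = 0; i < maxI; i++)
        {
            // Hypothetical signature: void FillOutput(double[] destination)
            // The method overwrites the caller's buffer rather than returning a
            // new array, so the loop produces no per-iteration garbage.
            someObject.FillOutput(singleRow);
            // ... do something with singleRow ...
        }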

    Read the article

  • Thread-safety of boost RNG

    - by Maciej Piechotka
    I have a loop which should be nicely parallelized by inserting one OpenMP pragma:

        boost::normal_distribution<double> ddist(0, pow(retention, i - 1));
        boost::variate_generator<gen &, BOOST_TYPEOF(ddist)> dgen(rng, ddist);

        // Diamond
        const std::uint_fast32_t dno = 1 << i - 1;
        // #pragma omp parallel for
        for (std::uint_fast32_t x = 0; x < dno; x++)
            for (std::uint_fast32_t y = 0; y < dno; y++) {
                const std::uint_fast32_t diff = size/dno;
                const std::uint_fast32_t x1 = x*diff, x2 = (x + 1)*diff;
                const std::uint_fast32_t y1 = y*diff, y2 = (y + 1)*diff;
                double avg = (arr[x1][y1] + arr[x1][y2] + arr[x2][y1] + arr[x2][y2])/4;
                arr[(x1 + x2)/2][(y1 + y2)/2] = avg + dgen();
            }

    (Unless I've made an error, each iteration does not depend on the others at all. Sorry that not all of the code is included.) However, my question is: are the boost RNGs thread-safe? The docs seem to defer to the gcc implementation, so even if the gcc code is thread-safe, that may not be the case on other platforms.
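
    For what it's worth, the usual pattern is to give each OpenMP thread its own generator rather than sharing one, so thread-safety of the shared object never arises (a sketch with invented seeds and distribution parameters):

        #include <boost/random/mersenne_twister.hpp>
        #include <boost/random/normal_distribution.hpp>
        #include <boost/random/variate_generator.hpp>
        #include <omp.h>

        void fill_samples(double *out, int n)
        {
            #pragma omp parallel
            {
                // One generator per thread: no shared mutable state, no data race.
                boost::mt19937 rng(42 + omp_get_thread_num()); // per-thread seed (arbitrary)
                boost::normal_distribution<double> dist(0.0, 1.0);
                boost::variate_generator<boost::mt19937&, boost::normal_distribution<double> >
                    gen(rng, dist);

                #pragma omp for
                for (int i = 0; i < n; i++)
                    out[i] = gen();
            }
        }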

    Read the article

  • use Ghostscript to convert pcl to postscript

    - by Bryon
    So I want to use Ghostscript to convert files that are created in PCL format to PostScript. That's the gist of my problem. I am simply trying to run it on the command line, but in the final stage it will have to be run from an lp command, like lp -d < gs something something. I will be running this on a Solaris 10 server, but I believe any Unix system should work similarly.

        bash-3.00# /usr/local/bin/gs -sDEVICE=pswrite -dLanguageLevel=1 -dNOPAUSE -dBATCH -dSAFER -sOutputFile=output.ps cms-form.pcl
        GPL Ghostscript 9.00 (2010-09-14)
        Copyright (C) 2010 Artifex Software, Inc.  All rights reserved.
        This software comes with NO WARRANTY: see the file PUBLIC for details.
        Error: /undefined in &k2G-210z100u0l6d0e63fa0V
        Operand stack:
        Execution stack:
           %interp_exit  .runexec2  --nostringval--  --nostringval--  --nostringval--  2  %stopped_push
           --nostringval--  --nostringval--  --nostringval--  false  1  %stopped_push  1910  1  3
           %oparray_pop  1909  1  3  %oparray_pop  1893  1  3  %oparray_pop  1787  1  3  %oparray_pop
           --nostringval--  %errorexec_pop  .runexec2  --nostringval--  --nostringval--  --nostringval--
           2  %stopped_push  --nostringval--
        Dictionary stack:
           --dict:1154/1684(ro)(G)--  --dict:0/20(G)--  --dict:77/200(L)--
        Current allocation mode is local
        Current file position is 30
        GPL Ghostscript 9.00: Unrecoverable error, exit code 1

    Read the article

  • Which web framework or technologies would suit me?

    - by Suraj Chandran
    Hi, I had been working on desktop apps and server-side (non-web) code for some time, and now I am diving into the web for the first time. I plan to write a scalable, enterprise-level app. I have worked with Java, JavaScript, jQuery, etc., but I absolutely hate JSP. So is there any framework that focuses on developing enterprise-level web apps without JSP? I liked Wicket's approach, but I think it is a little lacking in support for dynamic HTML and jQuery (yes, I looked at WiQuery). Also, I feel making Wicket apps scalable would take some sweat. Can Spring MVC, Struts2, etc. help me do this using just Java, JavaScript, and jQuery? Or are there other options for me, like Wicket? Please do forgive me if anything above looks insane; I am still working on my understanding of enterprise web apps.

    NOTE: If you think that I should take a different direction or approach, please do suggest it!

    Read the article

  • Preprocessor "macro function" vs. function pointer - best practice?

    - by Dustin
    I recently started a small personal project (an RGB-to-BGR value conversion program) in C, and I realised that a function that converts from RGB to BGR can perform not only the conversion but also the inversion. Obviously that means I don't really need two functions, rgb2bgr and bgr2rgb. However, does it matter whether I use a function pointer instead of a macro? For example:

        int rgb2bgr (const int rgb);

        /*
         * Should I do this because it allows the compiler to issue
         * appropriate error messages using the proper function name,
         * not to mention possible debugging benefits?
         */
        int (*bgr2rgb) (const int bgr) = rgb2bgr;

        /*
         * Or should I do this since it is merely a convenience
         * and they're really the same function anyway?
         */
        #define bgr2rgb(bgr) (rgb2bgr (bgr))

    I'm not necessarily looking for a change in execution efficiency; it's more of a subjective question out of curiosity. I am well aware that type safety is neither lost nor gained using either method. Would the function pointer merely be a convenience, or are there more practical benefits to be gained of which I am unaware?

    Read the article
