Search Results

Search found 61779 results on 2472 pages for 'data handling'.

Page 61/2472

  • Handling Types for Real and Complex Matrices in a BLAS Wrapper

    - by mga
    I come from a C background and I'm now learning OOP with C++. As an exercise (so please don't just say "this already exists"), I want to implement a wrapper for BLAS that will let the user write matrix algebra in an intuitive way (similar to MATLAB), e.g.:

        A = B*C*D.Inverse() + E.Transpose();

    My problem is how to deal with real (R) and complex (C) matrices, because of C++'s "curse" of letting you do the same thing in N different ways. I do have a clear idea of what it should look like to the user: s/he should be able to define the two separately, but operations would return a type depending on the types of the operands (R*R = R, C*C = C, R*C = C*R = C). Additionally, R can be cast to C and vice versa (just by setting the imaginary parts to 0). I have considered the following options:

    1. As a real number is a special case of a complex number, inherit CMatrix from RMatrix. I quickly dismissed this, as the two would have to return different types from the same getter function.
    2. Inherit RMatrix and CMatrix from Matrix. However, I can't really think of any common code that would go into Matrix (because of the different return types).
    3. Templates. Declare Matrix<T>, declare the getter function as T Get(int i, int j) and the operator functions as Matrix operator*(Matrix RHS), then specialize Matrix<double> and Matrix<complex<double>> and overload the functions (see the sketch below). But then I couldn't really see what I would gain with templates, so why not just define RMatrix and CMatrix separately from each other and overload functions as necessary?

    Although this last option makes sense to me, there's an annoying voice inside my head saying this is not elegant, because the two are clearly related. Perhaps I'm missing an appropriate design pattern? So I guess what I'm looking for is either absolution for doing this, or advice on how to do better.
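
    A minimal sketch of the template route (option 3), with illustrative names like Matrix and Promoted that are assumptions, not an existing API: the mixed-type rule R*R = R, C*C = C, R*C = C*R = C falls out of the result type of the element-wise multiplication, so no per-type specialization of the operators is needed.

        #include <complex>
        #include <utility>
        #include <vector>

        template <typename T>
        class Matrix {
            std::vector<T> data;
            int rows = 0, cols = 0;
        public:
            Matrix(int r, int c) : data(r * c), rows(r), cols(c) {}
            T Get(int i, int j) const { return data[i * cols + j]; }
            void Set(int i, int j, T v) { data[i * cols + j] = v; }
            int Rows() const { return rows; }
            int Cols() const { return cols; }
        };

        // double * std::complex<double> yields std::complex<double>, so this
        // alias encodes the promotion rule directly from operator*'s result type.
        template <typename A, typename B>
        using Promoted = decltype(std::declval<A>() * std::declval<B>());

        template <typename A, typename B>
        Matrix<Promoted<A, B>> operator*(const Matrix<A>& lhs, const Matrix<B>& rhs) {
            Matrix<Promoted<A, B>> result(lhs.Rows(), rhs.Cols());
            for (int i = 0; i < lhs.Rows(); ++i)
                for (int j = 0; j < rhs.Cols(); ++j) {
                    Promoted<A, B> sum{};   // zero-initialized accumulator
                    for (int k = 0; k < lhs.Cols(); ++k)
                        sum += lhs.Get(i, k) * rhs.Get(k, j);
                    result.Set(i, j, sum);
                }
            return result;
        }

    With this arrangement a single template definition serves both element types, and Matrix<double> * Matrix<std::complex<double>> compiles to a Matrix<std::complex<double>> without any extra overloads.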

    Read the article

  • Handling array passed to object at creation

    - by cecilli0n
    When creating my object I pass it an array containing a row from my database (the array holds everything we need; unnecessary elements are discarded at the SQL query level). When I need to access certain array elements from within my class, I do so with $this->row['element']. However, as development continues, I sometimes forget what exactly is in this passed array (which itself doesn't seem good). I am wondering if there is a professional approach to dealing with this, or am I the only one who has these "I wonder what's in the array" thoughts? One approach could be that when we originally pass the array to the constructor, we assign each element of the array to its own variable (see the sketch below), but is this considered professional practice? Additionally, by doing this we could make those variables constants, in an attempt at immutability. Overall I am trying to adhere to good software craftsmanship. Regards.
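
    For what it's worth, one common remedy, sketched here as a hypothetical C++ analog since the idea carries across languages: unpack the row into named, immutable fields in the constructor. The class then documents what the row contains, and a missing column fails loudly instead of silently. The field names below are illustrative.

        #include <map>
        #include <stdexcept>
        #include <string>

        class User {
        public:
            // Named, const fields replace "what was in that array again?"
            const std::string name;
            const std::string email;

            explicit User(const std::map<std::string, std::string>& row)
                : name(field(row, "name")), email(field(row, "email")) {}

        private:
            // Looks up a column and fails loudly if it is missing.
            static const std::string& field(const std::map<std::string, std::string>& row,
                                            const std::string& key) {
                auto it = row.find(key);
                if (it == row.end())
                    throw std::invalid_argument("missing column: " + key);
                return it->second;
            }
        };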

    Read the article

  • Is it possible to write too many asserts?

    - by Lex Fridman
    I am a big fan of writing assert checks in C++ code as a way to catch cases during development that cannot possibly happen, but do happen because of logic bugs in my program. This is good practice in general. However, I've noticed that some functions I write (which are part of a complex class) have 5+ asserts, which feels like it could potentially be bad programming practice in terms of readability and maintainability. I still think it's great, as each one requires me to think about the pre- and post-conditions of my functions, and they really do help catch bugs. However, I just wanted to put this out there and ask whether there is a better paradigm for catching logic errors in cases where a large number of checks is necessary.
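
    One pattern that keeps a large number of checks readable, sketched below under assumed names: move each condition into a small named predicate, so every assert reads as a documented pre- or post-condition rather than a wall of expressions.

        #include <cassert>
        #include <cstddef>
        #include <vector>

        // Named predicate: the assert that uses it reads as documentation.
        static bool is_sorted_ascending(const std::vector<int>& v) {
            for (std::size_t i = 1; i < v.size(); ++i)
                if (v[i - 1] > v[i]) return false;
            return true;
        }

        // Returns how many scores are strictly below value.
        int rank_of(const std::vector<int>& scores, int value) {
            assert(!scores.empty());              // precondition: non-empty input
            assert(is_sorted_ascending(scores));  // precondition: sorted input

            int rank = 0;
            while (rank < static_cast<int>(scores.size()) && scores[rank] < value)
                ++rank;

            assert(rank <= static_cast<int>(scores.size())); // postcondition: in range
            return rank;
        }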

    Read the article

  • Handling bugs, quirks, or annoyances in vendor-supplied headers

    - by supercat
    If the header file supplied by the vendor of something with which one's code must interact is deficient in some way, in what cases is it better to:

    1. Work around the header's deficiencies in the main code
    2. Copy the header file to the local project and fix it
    3. Fix the header file in the spot where it's stored as a vendor-supplied tool
    4. Fix the header file in the central spot, but also make a local copy and try to always keep the two in sync
    5. Do something else

    As an example, the header file supplied by ST Micro for the STM320LF series contains the lines:

        typedef struct
        {
            __IO uint32_t MODER;
            __IO uint16_t OTYPER;
            uint16_t RESERVED0;
            ....
            __IO uint16_t BSRRL;   /* BSRR register is split into 2 x 16-bit fields: BSRRL */
            __IO uint16_t BSRRH;   /* BSRR register is split into 2 x 16-bit fields: BSRRH */
            ....
        } GPIO_TypeDef;

    In the hardware, and in the hardware documentation, BSRR is described as a single 32-bit register. About 98% of the time one wants to write to BSRR, one is only interested in writing the upper half or the lower half; it is thus convenient to be able to use BSRRH and BSRRL as a means of writing half the register. On the other hand, there are occasions when it is necessary that the entire 32-bit register be written as a single atomic operation. The "optimal" way to write it (setting aside white-spacing issues) would be:

        typedef struct
        {
            __IO uint32_t MODER;
            __IO uint16_t OTYPER;
            uint16_t RESERVED0;
            ....
            union   // Allow BSRR access as a 32-bit register or two 16-bit registers
            {
                __IO uint32_t BSRR;                       // 32-bit BSRR register as a whole
                struct { __IO uint16_t BSRRL, BSRRH; };   // Two 16-bit parts
            };
            ....
        } GPIO_TypeDef;

    If the struct were defined that way, code could use BSRR when necessary to write all 32 bits, or BSRRH/BSRRL when writing 16 bits. Given that the header isn't that way, would better practice be to use the header as-is, but apply an icky typecast in the main code, writing what would idiomatically be thePort->BSRR = 0x12345678; as *((volatile uint32_t *)&(thePort->BSRRL)) = 0x12345678;, or would it be better to use a patched header file? If the latter, where should the patched file be stored and how should it be managed?
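
    A further possibility, shown as a hedged sketch rather than anything from the original post: leave the vendor header untouched and confine the cast to one small inline wrapper in a project-owned header, so the rest of the code never sees it. The include name below is hypothetical.

        /* gpio_compat.h -- project-owned companion to the vendor header */
        #include "stm32_vendor.h"   /* hypothetical name for the vendor-supplied header */

        /* Writes the full 32-bit BSRR in one store. BSRRL is the low half of
           BSRR, so its address is the register's address; the volatile cast
           keeps the compiler from splitting or reordering the access. */
        static inline void GPIO_WriteBSRR(GPIO_TypeDef *port, uint32_t value)
        {
            *(volatile uint32_t *)&port->BSRRL = value;
        }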

    Read the article

  • 3D physics engine for accurate collision handling on desktop/laptop computers (non-console)

    - by Georges Oates Larsen
    What are your suggestions for a physics engine that satisfies the following criteria?

    - Capable of calculating collisions between multiple concave mesh-based colliders.
    - Handles many simultaneous collisions (for instance, one mesh being wedged between two others, which themselves may be wedged between two meshes).
    - Does not allow collider pass-through, even at high speeds. For instance, if I am applying force to a programmatically hinged object that makes it spin, I do not want it to pass through another rigidbody that it collides with while spinning. I have this problem using PhysX.
    - As implied before, reacts well to hinged objects; preferably it has its own implementation of a hinge, but I am willing to program my own. The important part is that it has some sort of interface that guarantees accurate collision tracking even when dealing with these things.
    - Platform independent: runs on Mac as well as PC, and is not tied down to specific graphics cards.

    I think that's the best way to explain what I am looking for. Basically, I need SUPER reliable collisions, something that can't be accomplished with a simple ray-casting approach that sends a ray from the last position of the object to the current position (as this object may be potentially large and colliding with small objects via rotation). Bonus points for also including an OPEN SOURCE engine.

    Read the article

  • Handling deleted users - separate or same table?

    - by Alan Beats
    The scenario is that I've got an expanding set of users, and as time goes by users will cancel their accounts, which we currently mark as 'deleted' (with a flag) in the same table. If users with the same email address (that's how users log in) wish to create a new account, they can sign up again, but a NEW account is created (we have unique IDs for every account, so email addresses can be duplicated among live and deleted ones). What I've noticed is that all across our system, in the normal course of things, we constantly query the users table checking that the user is not deleted, whereas what I'm thinking is that we don't need to do that at all! [Clarification 1: by 'constantly querying', I meant that we have queries like '... FROM users WHERE isdeleted="0" AND ...'. For example, we may need to fetch all users registered for all meetings on a particular date, so in THAT query we also have FROM users WHERE isdeleted="0". Does this make my point clearer?] The options are:

    (1) Continue keeping deleted users in the 'main' users table
    (2) Keep deleted users in a separate table (mostly required for historical book-keeping)

    What are the pros and cons of either approach?

    Read the article

  • Efficient way to check for changes to the contents of folders

    - by MrVimes
    I am creating an application that maintains a database of files of a certain type in a given folder (and all subfolders). Initially the program will recurse the folders and add any file it finds of that type to the database. I want the application to be able to re-scan the folder and add any files that were not there the last time the folders were scanned. It can't use the date-created property of the file, because there is a high chance of a file being added to the folders that isn't a new file. I am wondering what the most efficient way of doing this is, and whether there is a way that doesn't involve checking that each file is already in the database (which, if there are 5000 files, would mean 5000 queries against a list 5000 items in size, or 25 million 'checks' for the SQL engine to perform). I suppose a more specific question with the same goal would be: is there a property of a file (in Microsoft Windows) that will reliably tell you when that file arrived in that folder?
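
    One approach that avoids the per-file query, sketched here in C++17 with std::filesystem (an assumption; the question doesn't state the application's language): load the known paths from the database into an in-memory hash set once (a single bulk query), then diff the directory tree against it with O(1) lookups per file.

        #include <filesystem>
        #include <string>
        #include <unordered_set>
        #include <vector>

        namespace fs = std::filesystem;

        // `known` holds the paths already recorded in the database.
        std::vector<fs::path> find_new_files(const fs::path& root,
                                             const std::string& extension,  // e.g. ".mp3"
                                             const std::unordered_set<std::string>& known)
        {
            std::vector<fs::path> added;
            for (const auto& entry : fs::recursive_directory_iterator(root)) {
                if (entry.is_regular_file()
                    && entry.path().extension() == extension
                    && known.count(entry.path().string()) == 0)
                    added.push_back(entry.path());  // present on disk, absent from the DB
            }
            return added;
        }

    The returned list can then be inserted in one batch, so a scan of 5000 files costs one SELECT and one bulk INSERT rather than 5000 round trips.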

    Read the article

  • Architecture - 32-bit handling 64-bit instructions

    - by tkoomzaaskz
    tomasz@tomasz-lenovo-ideapad-Y530:~$ lscpu
    Architecture:          i686
    CPU op-mode(s):        32-bit, 64-bit
    Byte Order:            Little Endian
    CPU(s):                2
    On-line CPU(s) list:   0,1
    Thread(s) per core:    1
    Core(s) per socket:    2
    Socket(s):             1
    Vendor ID:             GenuineIntel
    CPU family:            6
    Model:                 23
    Stepping:              6
    CPU MHz:               2000.000
    BogoMIPS:              4000.12
    Cache L1d:             32K
    Cache L1i:             32K
    Cache L2:              3072K

    I can see that my architecture is 32-bit (i686). But CPU op-mode(s) are 32-bit and 64-bit. The question is: how come? How is it handled that a 32-bit processor performs 64-bit operations? I guess it's a lot slower than native 32-bit operations. Is it built-in processor functionality (to emulate being 64-bit) or is it software dependent? When does it make sense for a 32-bit processor to run 64-bit operations?
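
    The op-mode line means the CPU itself supports both modes; lscpu reports i686 because the installed kernel is 32-bit. As a small sketch (GCC/Clang on x86, using the compiler-provided <cpuid.h>), a 32-bit program can query the hardware capability directly: CPUID leaf 0x80000001, EDX bit 29 is the long-mode flag, the same "lm" flag /proc/cpuinfo shows.

        #include <cpuid.h>
        #include <cstdio>

        int main() {
            unsigned eax, ebx, ecx, edx;
            // __get_cpuid returns 0 if the leaf is unsupported on this CPU.
            if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (edx & (1u << 29)))
                std::printf("CPU supports the 64-bit (long) op-mode\n");
            else
                std::printf("CPU is 32-bit only\n");
            return 0;
        }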

    Read the article

  • InputLayout handling

    - by Kikaimaru
    Where are you supposed to store an InputLayout? Suppose I have some basic structure like:

        class Mesh
        {
            List<MeshPart> MeshParts;
        }

        class MeshPart
        {
            Effect Effect;
            VertexBufferBinding VertexBuffer;
            ...
        }

    Where should I store the input layout? It's a connection between a vertex buffer and a specific pass. I can live with just one pass, but I still have different techniques, so I need at least an array with some connection to the effect techniques, but I would appreciate something less crazy than a dictionary. I could also create wrappers for Effect and EffectTechnique, but there must be some normal solution.
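
    One arrangement that avoids per-MeshPart bookkeeping, sketched here against the C++ D3D11 effects framework (an assumption about the stack; the cache class and key scheme are illustrative, not an existing API): store layouts in one central cache keyed by the pass's input signature plus a caller-chosen vertex-format id, and have meshes look them up at draw time. Error handling is omitted for brevity.

        #include <d3d11.h>
        #include <d3dx11effect.h>
        #include <map>
        #include <utility>
        #include <wrl/client.h>

        using Microsoft::WRL::ComPtr;

        class InputLayoutCache {
            std::map<std::pair<const void*, int>, ComPtr<ID3D11InputLayout>> cache_;
        public:
            ID3D11InputLayout* Get(ID3D11Device* device,
                                   ID3DX11EffectPass* pass,
                                   const D3D11_INPUT_ELEMENT_DESC* elems, UINT count,
                                   int vertexFormatId) {  // caller-chosen id for the vertex declaration
                D3DX11_PASS_DESC pd;
                pass->GetDesc(&pd);  // exposes the pass's input-signature bytecode
                auto key = std::make_pair(
                    static_cast<const void*>(pd.pIAInputSignature), vertexFormatId);

                auto it = cache_.find(key);
                if (it != cache_.end()) return it->second.Get();

                ComPtr<ID3D11InputLayout> layout;
                device->CreateInputLayout(elems, count,
                                          pd.pIAInputSignature, pd.IAInputSignatureSize,
                                          &layout);
                cache_[key] = layout;
                return layout.Get();
            }
        };

    With this shape, a MeshPart holds only its Effect and vertex buffer; the layout is derived on demand, and switching techniques never requires restructuring the mesh classes.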

    Read the article

  • Proper password handling for login

    - by piers
    I have read a lot about PHP login security recently, but many questions on Stack Overflow regarding security are outdated. I understand bcrypt is one of the best ways of hashing passwords today. However, for my site, I believe SHA-512 will do very well, at least to begin with. (I mean, bcrypt is for bigger sites, sites that require high security, right?) I'm also wondering about salting. Is it necessary for every password to have its own unique salt? Should I have one field for the salt and one for the password in my database table? What would be a decent salt today? Should I join the username together with the password and add a random word/letter/special-character combination to it? Thanks for your help!
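
    On the salting questions, a minimal sketch using libsodium (an assumption: the question is about PHP, but the mechanics are the same in any language): a password-hashing function like crypto_pwhash_str generates a unique random salt per password and embeds it, together with the algorithm parameters, in the returned string, so one database column suffices and no separate salt field or username-mixing scheme is needed.

        #include <sodium.h>
        #include <cstdio>
        #include <cstring>

        int main() {
            if (sodium_init() < 0) return 1;   // library must be initialized once

            const char* password = "correct horse battery staple";
            char hash[crypto_pwhash_STRBYTES];

            // Hash: a fresh random salt is generated and stored inside `hash`.
            if (crypto_pwhash_str(hash, password, std::strlen(password),
                                  crypto_pwhash_OPSLIMIT_INTERACTIVE,
                                  crypto_pwhash_MEMLIMIT_INTERACTIVE) != 0)
                return 1;  // out of memory

            // Verify at login time: the salt is read back out of `hash`.
            bool ok = crypto_pwhash_str_verify(hash, password,
                                               std::strlen(password)) == 0;
            std::printf("%s\n", ok ? "match" : "no match");
            return 0;
        }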

    Read the article

  • How can I install .deb files from the terminal? I tried different ways from Ask Ubuntu

    - by krishnamraju
    I am trying to install TeamViewer, and I am getting the following error. The entire process is as follows:

        kittu@kittu-355V4C-356V4C-3445VC-3545VC:~$ sudo dpkg -i teamviewer_linux_x64.deb
        dpkg: error processing archive teamviewer_linux_x64.deb (--install):
         cannot access archive: No such file or directory
        Errors were encountered while processing:
         teamviewer_linux_x64.deb

    The file is in home/downloads/teamviewer_linux_x64.deb.

    Read the article

  • Packing up files on my machine, sending it to a server, and unpacking it

    - by MxyL
    I am implementing a feature in my application that sends all files in a specified folder to a server. I have the basic FTP transaction set up using the Apache Commons FTPClient: it sets up a connection and transfers a file from one place to another, so I can simply loop over the directory and use this connection to transfer all the files. However, this could be better. Rather than transferring each file one by one, it makes more sense to pack them up in a compressed archive and then send the whole thing at once; that saves time and bandwidth, since these are just text files and compress nicely. So I would like to add automatic archive packing and unpacking. This is the workflow I have planned out, using zip compression:

    1. Zip all files in the folder
    2. Send the file over
    3. Unzip the files at the destination

    Steps 1 and 2 are easy, since the files are on the local machine, but I'm not sure how to accomplish the last step, when the files are on a remote server. What are my options? I have control over what I can put and run on the server. Perhaps it is not necessary to do the packing/unpacking myself?

    Read the article

  • Access Control Service: Handling Errors

    - by Your DisplayName here!
    Another common problem with external authentication is how to deal with sign-in errors. In active federation like WS-Trust there are well-defined SOAP faults to communicate problems to a client. But with web applications, the error information is typically generated and displayed on the external sign-in page; the relying party does not know about the error, nor can it help the user in any way. The Access Control Service allows posting sign-in errors to a specified page, which you set up in the relying party registration. That means that whenever an error occurs in ACS, the error information gets packaged up as a JSON string and posted to the page specified. This way you get structured error information back into your application, so you can display a friendlier error message or log the error. I added error page support to my ACS2 sample, which can be downloaded here.

    How to turn the JSON error into CLR types

    The JSON schema is reasonably simple; the following class turns the JSON into an object:

        [DataContract]
        public class AcsErrorResponse
        {
            [DataMember(Name = "context", Order = 1)]
            public string Context { get; set; }

            [DataMember(Name = "httpReturnCode", Order = 2)]
            public string HttpReturnCode { get; set; }

            [DataMember(Name = "identityProvider", Order = 3)]
            public string IdentityProvider { get; set; }

            [DataMember(Name = "timeStamp", Order = 4)]
            public string TimeStamp { get; set; }

            [DataMember(Name = "traceId", Order = 5)]
            public string TraceId { get; set; }

            [DataMember(Name = "errors", Order = 6)]
            public List<AcsError> Errors { get; set; }

            public static AcsErrorResponse Read(string json)
            {
                var serializer = new DataContractJsonSerializer(typeof(AcsErrorResponse));
                var response = serializer.ReadObject(
                    new MemoryStream(Encoding.Default.GetBytes(json))) as AcsErrorResponse;

                if (response != null)
                {
                    return response;
                }
                else
                {
                    throw new ArgumentException("json");
                }
            }
        }

        [DataContract]
        public class AcsError
        {
            [DataMember(Name = "errorCode", Order = 1)]
            public string Code { get; set; }

            [DataMember(Name = "errorMessage", Order = 2)]
            public string Message { get; set; }
        }

    Retrieving the error information

    You then need to provide a page that takes the POST and deserializes the information. My sample simply fills a view that shows all the information, but that's for diagnostic/sample purposes only; you shouldn't show the real errors to your end users.

        public class SignInErrorController : Controller
        {
            [HttpPost]
            public ActionResult Index()
            {
                var errorDetails = Request.Form["ErrorDetails"];
                var response = AcsErrorResponse.Read(errorDetails);

                return View("SignInError", response);
            }
        }

    Also keep in mind that the error page is an anonymous page and that you are taking external input, so all the usual input validation applies.

    Read the article

  • How to deal with checked exceptions that cannot ever be thrown

    - by ammoQ
    Example:

        foobar = new InputStreamReader(p.getInputStream(), "ISO-8859-1");

    Since the encoding is hardcoded and correct, the constructor will never throw the UnsupportedEncodingException declared in the specification (unless the Java implementation is broken, in which case I'm lost anyway). Still, Java forces me to deal with that exception. Currently, it looks like this:

        try {
            foobar = new InputStreamReader(p.getInputStream(), "ISO-8859-1");
        } catch (UnsupportedEncodingException e) {
            /* won't ever happen */
        }

    Any ideas how to make it better?

    Read the article

  • Handling the holding of money on a platform

    - by user1716672
    We are building a platform for a client, developed in Yii, where users can top up their account on the platform with money from PayPal. Users can upload files and buy access to each other's files. Users can also gift other users money. I was thinking that when users top up their account, the money goes from their PayPal account to the merchant account of the website, so all users' money goes into one merchant account. Any transactions on the platform are then simply recorded on the platform, and each user's balance is the maximum amount they can withdraw from the merchant account. Is this the right approach? Legally, are there any problems?

    Read the article

  • Visit our Consolidated List of Mandatory Project Costing Code and Data Fixes

    - by SherryG-Oracle
    Projects Support has published a document with a consolidated listing of mandatory code and data fixes for Project Costing. Generic Data Fix (GDF) patches are created by development to fix data issues caused by bugs/issues in the application code. The GDF patches are released for download via My Oracle Support, where they are referenced in My Oracle Support documents and used by support to provide data fixes for known code-fix issues. Consolidated root-cause code fix and generic data fix patches are superseded whenever a new version is created. These patches fix a number of critical code and data issues identified in the Project Costing flow. The document contains a consolidated list of code and data fixes for Project Costing, and for each fix it lists the following details:

    - Note ID
    - Component
    - Type (code or data)
    - Abstract
    - Patch

    Visit Doc ID 1538822.1 today!

    Read the article

  • Data Structures: What are some common examples of problems where "buffers" come into action?

    - by Dark Templar
    I was just wondering if there were some "standard" examples that everyone uses as a basis for explaining the nature of a problem that requires the use of a buffer. What are some well-known problems in the real world that can see great benefits from using a buffer? Also, a little background or explanation as to why the problem benefits from using a buffer, and how the buffer would be implemented, would be insightful for understanding the concept!
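
    A classic instance is a producer and a consumer running at different speeds (keyboard input, audio playback, network packets): the buffer absorbs short bursts so neither side has to block immediately. Below is a minimal sketch of the usual fixed-size ring-buffer implementation (C++17; single-threaded for simplicity, so no locking is shown).

        #include <array>
        #include <cstddef>
        #include <optional>

        template <typename T, std::size_t N>
        class RingBuffer {
            std::array<T, N> data_{};
            std::size_t head_ = 0, tail_ = 0, count_ = 0;
        public:
            bool push(const T& v) {             // producer side
                if (count_ == N) return false;  // buffer full: producer must wait or drop
                data_[tail_] = v;
                tail_ = (tail_ + 1) % N;        // wrap around: that's the "ring"
                ++count_;
                return true;
            }
            std::optional<T> pop() {            // consumer side
                if (count_ == 0) return std::nullopt;  // buffer empty
                T v = data_[head_];
                head_ = (head_ + 1) % N;
                --count_;
                return v;
            }
        };

    The benefit comes from decoupling: as long as the average rates match, momentary mismatches only change how full the buffer is, instead of stalling either party.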

    Read the article

  • Handling optional GPL dependencies

    - by pmr
    Assume I have a library A which is licensed under a two-clause BSD-style license. Library A optionally depends on a library B (the availability of the dependency is configured at build time), which is licensed under the GPLv3. If I distribute both bundled together, the bundle will need to be licensed under the GPL. But am I still able to distribute library A on its own under the BSD license? How do I indicate that the license changes when the use of library B is enabled? Do I need to distribute two different versions, or can I have just one that contains both licenses and states which applies under which conditions? Is there any example project I can look at to see how it's done?

    Read the article

  • Bash: Handling Command Not Found

    Linux Journal: "After a recent O/S version upgrade (to openSUSE 11.2) I noticed that bash started being a bit more intelligent when I did something stupid: it started giving me a useful error message..."

    Read the article

  • Agile Tools For Handling Multiple Projects

    - by f1dave
    Currently I'm leading our agile team in an iteration-manager role as well as doing my regular dev work. One of the difficulties I'm facing as an IM is tracking burn-down/burn-up: not because I can't produce graphs, but because this team is working on multiple projects at one time. At present I have an Excel workbook with sheets that contain a whole bunch of graphs, both at the overall team level and per project. It's clunky, and I spend more time tweaking formulas and double-checking calculations than I'd really like. As such, I'm interested to know if anyone has used a tool that can effectively produce these sorts of reports, burn-downs, and predictions across multiple projects. I've seen http://www.pivotaltracker.com/ do some nice things, and of course there's JIRA/GreenHopper, but I'm not aware of those being used to track the progress of multiple projects within one team. If anyone's got an idea of some tools, or has faced a similar problem before, I'd love to hear from you.

    Read the article

  • Handling multiple interviews / offers [closed]

    - by farble1670
    What's the best way to handle a situation where you have, or expect to have, multiple offers? The ideal situation is that your several offers come in at about the same time and you make a choice, but this is not how it happens. You may have an offer, and several near-final interviews lined up for the following days or weeks. One way to handle it would be to ask for a longer time to decide on the first offer you receive (two weeks?). This gives you time to rush the rest of the things you have going to a conclusion. I question whether asking for two weeks to decide is reasonable, though; my guess is that an employer would see through that and force your hand. Another way to handle it would be to accept the first offer and ask for a reasonable period before your start date, then simply "quit" the first position before you ever start if something better comes along. On one hand, employment is at-will, and employers exercise this fact regularly. On the other hand, it seems morally wrong and has the potential to burn some bridges. And of course the last option is to simply evaluate each offer in isolation and accept or reject it within the given time frame. Any thoughts?

    Read the article

  • How can I handle this string concatenation in C in a reusable way

    - by hyphen this
    I've been writing a small C application that operates on files, and I've found that I have been copying and pasting this code between my functions:

        char fullpath[PATH_MAX];
        fullpath[0] = '\0';
        strcat(fullpath, BASE_PATH);
        strcat(fullpath, subdir);
        strcat(fullpath, "/");
        strcat(fullpath, filename);
        // do something with fullpath...

    Is there a better way? The first thought that comes to mind is to create a macro, but I'm sure this is a common problem in C, and I'm wondering how others have solved it.
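
    For what it's worth, a common strcat-free alternative, sketched under the question's own BASE_PATH/subdir/filename names (the BASE_PATH value below is made up): snprintf builds the whole path in one call, cannot overflow the buffer, and reports truncation.

        #include <limits.h>
        #include <stdio.h>

        #define BASE_PATH "/var/myapp/"   /* hypothetical value; the question assumes it exists */

        /* Builds BASE_PATH + subdir + "/" + filename into buf.
           Returns 0 on success, -1 if the result would not fit. */
        static int build_fullpath(char *buf, size_t size,
                                  const char *subdir, const char *filename)
        {
            int n = snprintf(buf, size, "%s%s/%s", BASE_PATH, subdir, filename);
            return (n < 0 || (size_t)n >= size) ? -1 : 0;
        }

        /* Usage:
           char fullpath[PATH_MAX];
           if (build_fullpath(fullpath, sizeof fullpath, subdir, filename) == 0) {
               // do something with fullpath...
           }
        */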

    Read the article

  • Handling (many) multiple projects in Git in an enterprise environment

    - by Michael K
    One of the advantages of older version control systems such as CVS and SVN in enterprise development is that anyone can connect to source control and see all the projects that the company has. This can make it easier to get a high-level view of what kind of development is happening outside your sprint, and it also keeps everything in one place, easy to find. However, distributed version control systems (Git, specifically) use the repository as their base unit. They work best with one project (or several closely related projects) per repository. This makes repository management more difficult in most enterprise environments, where it is not unusual to have 25-50 or more projects to support. As far as I have been able to determine, you have to keep a list of all your repos somewhere else. There is software available, like GitHub, that helps, but that is still an extra step beyond a single connection string and a listing of the repository's contents. What is the best way to deal with the complexity of multiple repositories?

    Read the article
