Search Results

Search found 17501 results on 701 pages for 'stored functions'.

Page 594/701

  • OOP/MVC advice on where to place a global helper function

    - by franko75
    Hi, I have a couple of controllers on my site which handle form data. The forms use AJAX, and I have quite a few methods across different controllers which each have to do some specific processing to return errors in a JSON-encoded format - see the code below. Obviously this isn't DRY, and I need to move this code into a single helper function which I can use globally, but I'm wondering where it should actually go. Should I create a static helper class which contains this function (e.g. Validation::build_ajax_errors()), or, since this code produces a format which is application-specific and tied to the jQuery validation plugin I'm using, should it be a static method stored in, for example, my main Website controller which the form-handling controllers extend?

        // if this is an ajax request, output errors
        if (request::is_ajax()) {
            // need to build errors into array form for javascript validation
            // - move this into a helper method accessible globally
            $errors = $post->errors('form_data/form_error_messages');
            $i = 0;
            $new_errors = array();
            foreach ($errors as $key => $value) {
                $new_errors[$i][0] = '#' . $key;
                $new_errors[$i][1] = $value;
                $new_errors[$i][2] = "error";
                $i++;
            }
            echo '{"jsonValidateReturn":' . json_encode($new_errors) . '}';
            return;
        }
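
    One possible shape for the helper the question proposes - a minimal sketch only; the Validation class name comes from the question itself, everything else is illustrative:

        // Sketch of the proposed Validation::build_ajax_errors() helper.
        class Validation {

            // Turn a validation error array into the jQuery-plugin-friendly format.
            public static function build_ajax_errors(array $errors)
            {
                $new_errors = array();
                foreach ($errors as $key => $value) {
                    $new_errors[] = array('#' . $key, $value, 'error');
                }
                return '{"jsonValidateReturn":' . json_encode($new_errors) . '}';
            }
        }

        // Usage inside a controller:
        // echo Validation::build_ajax_errors($post->errors('form_data/form_error_messages'));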

    Read the article

  • How to deal with a Java serialized object whose package changed?

    - by Alex
    I have a Java class that is stored in an HttpSession object, which is serialized and transferred between servers in a cluster environment. For the purpose of this explanation, let's call this class "Person". While improving the code, this class was moved from "com.acme.Person" to "com.acme.entity.Person". Internally, the class remains exactly the same (same fields, same methods, same everything). The problem is that we have two sets of servers running the old code and the new code at the same time. The servers with the old code have serialized HttpSession objects, and when the new code deserializes one, it throws a ClassNotFoundException because it can't find the old reference to com.acme.Person. At this point it's easy to deal with, because we can just recreate the object using the new package. The problem then becomes that the HttpSession on the new servers will serialize the object with the new reference to com.acme.entity.Person, and when this is deserialized on the servers running the old code, another exception is thrown. At that point we can't deal with the exception any more. What's the best strategy for cases like this? Is there a way to tell the new servers to serialize the object with the reference to the old package, and to map deserialized references from the old package to the new one? How would we transition to using the new package and forget about the old one once all servers run the new code?
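
    For the old-to-new direction, a common approach (sketched below, with the class names taken from the question) is a custom ObjectInputStream that maps the old class name onto the relocated class; note that the new class may also need to declare the old class's serialVersionUID for the stream to match. The new-to-old direction can't be fixed this way without touching the old code.

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.ObjectInputStream;
        import java.io.ObjectStreamClass;

        // Sketch: resolve the old class name to the relocated class while reading.
        public class RenamingObjectInputStream extends ObjectInputStream {

            public RenamingObjectInputStream(InputStream in) throws IOException {
                super(in);
            }

            @Override
            protected Class<?> resolveClass(ObjectStreamClass desc)
                    throws IOException, ClassNotFoundException {
                if ("com.acme.Person".equals(desc.getName())) {
                    return Class.forName("com.acme.entity.Person");
                }
                return super.resolveClass(desc);
            }
        }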

    Read the article

  • In Emacs, how can I use imenu more sensibly with C#?

    - by Cheeso
    I've used emacs for a long time, but I haven't been keeping up with a bunch of features. One of these is speedbar, which I just briefly investigated now. Another is imenu. Both of these were mentioned in in-emacs-how-can-i-jump-between-functions-in-the-current-file? Using imenu, I can jump to particular methods in the module I'm working in. But there is a parse hierarchy that I have to negotiate before I get the option to choose (with autocomplete) the method name. It goes like this: I type M-x imenu and then I get to choose Using or Types. The Using choice allows me to jump to any of the using statements at the top level of the C# file (something like import statements in a Java module, for those of you who don't know C#). Not super helpful. I choose Types. Then I have to choose a namespace and a class, even though there is just one of each in the source module. At that point I can choose between variables, types, and methods. If I choose methods, I finally get the list of methods to choose from. The hierarchy I traverse looks like this:

        Using
        Types
          Namespace
            Class
              Types
              Variables
              Methods
                method names

    Only after I get to the 5th level do I get to select the thing I really want to jump to: a particular method. Imenu seems intelligent about the source module, but kind of hard to use. Am I doing it wrong?

    Read the article

  • Why does this simple MySQL procedure take way too long to complete?

    - by Howard Guo
    This is a very simple MySQL stored procedure. Cursor "commission" has only 3000 records, but the procedure call takes more than 30 seconds to run. Why is that?

        DELIMITER //
        DROP PROCEDURE IF EXISTS apply_credit//
        CREATE PROCEDURE apply_credit()
        BEGIN
            DECLARE done tinyint DEFAULT 0;
            DECLARE _pk_id INT;
            DECLARE _eid, _source VARCHAR(255);
            DECLARE _lh_revenue, _acc_revenue, _project_carrier_expense, _carrier_lh,
                    _carrier_acc, _gross_margin, _fsc_revenue, _revenue, _load_count DECIMAL;
            DECLARE commission CURSOR FOR
                SELECT pk_id, eid, source, lh_revenue, acc_revenue, project_carrier_expense,
                       carrier_lh, carrier_acc, gross_margin, fsc_revenue, revenue, load_count
                FROM ct_sales_commission;
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

            DELETE FROM debug;
            OPEN commission;
            REPEAT
                FETCH commission INTO _pk_id, _eid, _source, _lh_revenue, _acc_revenue,
                    _project_carrier_expense, _carrier_lh, _carrier_acc, _gross_margin,
                    _fsc_revenue, _revenue, _load_count;
                INSERT INTO debug VALUES(concat('row ', _pk_id));
            UNTIL done = 1 END REPEAT;
            CLOSE commission;
        END//
        DELIMITER ;

        CALL apply_credit();
        SELECT * FROM debug;
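
    For comparison, a set-based rewrite is usually the first thing to try when a cursor loop is slow - a sketch only, assuming the debug table has the single text column the INSERT above implies:

        -- Sketch: one statement instead of a 3000-iteration fetch/insert loop.
        DELETE FROM debug;

        INSERT INTO debug
        SELECT CONCAT('row ', pk_id)
        FROM ct_sales_commission;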

    Read the article

  • assembling an object graph without an ORM -- in the service layer or data layer?

    - by Hans Gruber
    At my current gig, our persistence layer uses IBatis going against SQL Server stored procedures (puke). IMHO, this approach has many disadvantages over the use of a "true" ORM such as NHibernate or EF, but the one I'm trying to address here revolves around all the boilerplate code needed to map data from a result set into an object graph. Say I have the following DTO object graph I want to return to my presentation layer:

        IEnumerable<CustomerDTO>
            |--> IEnumerable<AddressDTO>
            |--> LatestOrderDTO

    The way I've implemented this is to have a discrete method in my DAO class to return each IEnumerable<*DTO>, and then have my service class be responsible for orchestrating the calls to the DAO. It then returns the fully assembled object graph to the client:

        public class SomeService {
            private readonly IDao _someDao;

            public SomeService(IDao someDao) {
                this._someDao = someDao;
            }

            public IEnumerable<CustomerDTO> ListCustomersForHistory(int brokerId) {
                var customers = _someDao.ListCustomersForBroker(brokerId);
                foreach (var customer in customers) {
                    customer.Addresses = _someDao.ListCustomersAddresses(brokerId);
                    customer.LatestOrder = _someDao.GetCustomerLatestOrder(brokerId);
                }
                return customers;
            }
        }

    My question is: should this logic belong in the service layer, or should I make my DAO return the assembled object graph instead? If I were using NHibernate, I assume this kind of relationship association between objects would come for "free"?

    Read the article

  • Void* array casting to float, int32, int16, etc.

    - by Griffin
    Hey guys, I've got an array of PCM data. It could be 16 bit, 24 bit packed, 32 bit, etc. It could be signed or unsigned, and it could be 32 or 64 bit floating point. It is currently stored as a "void**" matrix, indexed by channel, then by frame. The goal is to allow my library to take in any PCM format and buffer it, without requiring manipulation of the data to fit a designated structure. If the A/D converter spits out 24 bit packed arrays of interleaved PCM, I need to accept it gracefully. I also need to support 16 bit non-interleaved, as well as any permutation of the above formats. I know the bit depth and other information at runtime, and I'm trying to code efficiently while not duplicating code. What I need is an effective way to cast the matrix, put PCM data into the matrix, and then pull it out later. I can cast the matrix to int32_t or int16_t for the 32 and 16 bit signed PCM respectively, and I'll probably have to store the 24 bit PCM in an int32_t on 32 bit, 8-bit-byte systems as well. Can anyone recommend a good way to put data into this array and pull it out later? I'd like to avoid large sections of code which look like:

        switch( mFormat ) {
        case 1: // unsigned 8 bit
            for( int i = 0; i < mChannels; i++ )
                framesArray = (uint8_t*)pcm[i];
            break;
        case 2: // signed 8 bit
            for( int i = 0; i < mChannels; i++ )
                framesArray = (int8_t*)pcm[i];
            break;
        case 3: // unsigned 16 bit
        ...

    Limitations: I'm working in C/C++, no templates, no RTTI, no STL. Think embedded. Things get trickier when I have to port this to a DSP with 16 bit bytes. Does anybody have any useful macros they might be willing to share? Thanks, -Griff
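
    One hedged idea, sketched below and not tested on a 16-bit-byte DSP: address the matrix as raw bytes using the runtime sample size, so a single code path serves every format. The helper names are illustrative.

        #include <string.h>

        /* Sketch: locate a sample by byte offset; sample_size is bytes per
           sample (2, 3, 4 or 8), known at runtime. */
        static void *sample_ptr(void **pcm, unsigned channel, unsigned frame,
                                size_t sample_size)
        {
            return (unsigned char *)pcm[channel] + (size_t)frame * sample_size;
        }

        /* Copy one sample in or out without naming its numeric type. */
        static void put_sample(void **pcm, unsigned channel, unsigned frame,
                               size_t sample_size, const void *src)
        {
            memcpy(sample_ptr(pcm, channel, frame, sample_size), src, sample_size);
        }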

    Read the article

  • Is excessive DataTable usage bad?

    - by Justin R.
    I was recently asked to assist another team in building an ASP.NET website. They already have a significant amount of code written -- I was specifically asked to build a few individual pages for the site. While exploring the code for the rest of the site, the number of DataTables being constructed jumped out at me. Being relatively new to the field, I've never worked on an application that uses a database as heavily as this site does, so I'm not sure how common this is. It seems that whenever data is queried from our database, the results are stored in a DataTable. This DataTable is then usually passed around by itself, or passed to a constructor. Classes that are initialized with a DataTable always assign it to a private/protected field, yet only a few of these classes implement IDisposable. In fact, in the thousands of lines of code that I've browsed so far, I have yet to see the Dispose method called on a DataTable. If anything, this doesn't seem to be good OOP. Is this something that I should worry about? Or am I just paying more attention to detail than I should? Assuming you're more experienced developers than I am, how would you feel or react if someone who had just been assigned to help you with your site approached you about this "problem"?

    Read the article

  • How can I concisely copy multiple SQL rows, with minor modifications?

    - by Steve Jessop
    I'm copying a subset of some data, so that the copy will be independently modifiable in future. One of my SQL statements looks something like this (I've changed table and column names):

        INSERT Product( ProductRangeID, Name, Weight, Price, Color, And, So, On )
        SELECT @newrangeid AS ProductRangeID, Name, Weight, Price, Color, And, So, On
        FROM Product
        WHERE ProductRangeID = @oldrangeid and Color = 'Blue'

    That is, we're launching a new product range which initially just consists of all the blue items in some specified current range, under new SKUs. In future we may change the "blue-range" versions of the products independently of the old ones. I'm pretty new at SQL: is there something clever I should do to avoid listing all those columns, or at least avoid listing them twice? I can live with the current code, but I'd rather not have to come back and modify it if new columns are added to Product. In its current form it would just silently fail to copy the new column if I forget to do that, which should show up in testing but isn't great. I am copying every column except for the ProductRangeID (which I modify), the ProductID (incrementing primary key) and two DateCreated and timestamp columns (which take their auto-generated values for the new row). Btw, I suspect I should probably have a separate join table between ProductID and ProductRangeID. I didn't define the tables. This is in a T-SQL stored procedure on SQL Server 2008, if that makes any difference.
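
    There is no "all columns except ..." syntax in T-SQL, but the column list can be generated from the catalog views and executed with sp_executesql - a sketch only, and the excluded column names and parameter types below are assumptions based on the description above:

        -- Sketch: build the shared column list from sys.columns, excluding the
        -- identity, range and auto-generated columns (names are assumed).
        DECLARE @cols nvarchar(max), @sql nvarchar(max);

        SELECT @cols = STUFF((
            SELECT ', ' + QUOTENAME(c.name)
            FROM sys.columns AS c
            WHERE c.object_id = OBJECT_ID('Product')
              AND c.name NOT IN ('ProductID', 'ProductRangeID', 'DateCreated', 'RowTimestamp')
            ORDER BY c.column_id
            FOR XML PATH('')), 1, 2, '');

        SET @sql = N'INSERT Product (ProductRangeID, ' + @cols + N') '
                 + N'SELECT @newrangeid, ' + @cols + N' '
                 + N'FROM Product WHERE ProductRangeID = @oldrangeid AND Color = ''Blue''';

        EXEC sp_executesql @sql, N'@newrangeid int, @oldrangeid int',
                           @newrangeid = @newrangeid, @oldrangeid = @oldrangeid;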

    Read the article

  • avoiding code duplication in Rails 3 models

    - by Dustin Frazier
    I'm working on a Rails 3.1 application where there are a number of different enum-like models that are stored in the database. There is a lot of identical code in these models, as well as in the associated controllers and views. I've solved the code duplication for the controllers and views via a shared parent controller class and the new view/layout inheritance that's part of Rails 3. Now I'm trying to solve the code duplication in the models, and I'm stuck. An example of one of my enum models is as follows:

        class Format < ActiveRecord::Base
          has_and_belongs_to_many :videos

          attr_accessible :name
          validates :name, presence: true, length: { maximum: 20 }

          before_destroy :verify_no_linked_videos

          def verify_no_linked_videos
            unless self.videos.empty?
              self.errors[:base] << "Couldn't delete format with associated videos."
              raise ActiveRecord::RecordInvalid.new self
            end
          end
        end

    I have four or five other classes with nearly identical code (the association declaration being the only difference). I've tried creating a module with the shared code that they all include (which seems like the Ruby Way), but much of the duplicate code relies on ActiveRecord, so the methods I'm trying to use in the module (validate, attr_accessible, etc.) aren't available. I know about ActiveModel, but that doesn't get me all the way there. I've also tried creating a common, non-persistent parent class that subclasses ActiveRecord::Base, but all of the code I've seen to accomplish this assumes that you won't have subclasses of your non-persistent class that do persist. Any suggestions for how best to avoid duplicating these identical lines of code across many different enum models?
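
    A sketch of the usual answer - wrapping the shared pieces in an ActiveSupport::Concern so the ActiveRecord macros run in each including class's own context. Module, method and message names here are illustrative, and the association-dependent check is left parameterised:

        # Sketch: shared behaviour for the enum-like models.
        module EnumLike
          extend ActiveSupport::Concern

          included do
            attr_accessible :name
            validates :name, presence: true, length: { maximum: 20 }
            before_destroy :verify_no_linked_records
          end

          def verify_no_linked_records
            # Illustrative: each model defines linked_records for its own association.
            unless linked_records.empty?
              errors[:base] << "Couldn't delete record with associated items."
              raise ActiveRecord::RecordInvalid.new(self)
            end
          end
        end

        class Format < ActiveRecord::Base
          include EnumLike
          has_and_belongs_to_many :videos

          def linked_records
            videos
          end
        end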

    Read the article

  • How to prevent multiple definitions in C?

    - by Jordi
    I'm a C newbie and I was just trying to write a console application with Code::Blocks. Here's the (simplified) code:

    main.c:

        #include <stdio.h>
        #include <stdlib.h>
        #include "test.c" // include not necessary for error in Code::Blocks

        int main()
        {
            //t = test(); // calling of method also not necessary
            return 0;
        }

    test.c:

        void test() {}

    When I try to build this program, it gives the following errors:

        *path*\test.c|1|multiple definition of `_test'|
        obj\Debug\main.o:*path*\test.c|1|first defined here|

    There is no way that I'm multiply defining test (although I don't know where the underscore is coming from) and it seems highly unlikely that the definition is somehow included twice. This is all the code there is. I've ruled out that this error is due to some naming conflict with other functions or files being called test or test.c. Note that the multiple and the first definition are on the same line in the same file. Does anyone know what is causing this and what I can do about it? Thanks!
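
    For reference, a sketch of the usual fix: give test.c its own translation unit and include only a header with the declaration, so the definition of test exists exactly once in the program:

        /* test.h */
        #ifndef TEST_H
        #define TEST_H
        void test(void);
        #endif

        /* test.c */
        #include "test.h"
        void test(void) {}

        /* main.c */
        #include <stdio.h>
        #include "test.h"  /* declaration only; the definition stays in test.c */

        int main(void)
        {
            test();
            return 0;
        }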

    Read the article

  • My TreeView data is not changing

    - by Vibin Jith
    Hi, I am trying to display user permissions in a TreeView. For each user, the permissions are stored in the database as an XML document. On the page, a combo box lists the users and a TreeView binds to the permission XML. When a user is selected in the combo box, I take the XML from the database, connect it to an XmlDataSource and bind it to the TreeView. What happens is: the first time, the TreeView fills with the XML nodes, but after that it stops working. The first selection is fine; subsequent selections have no effect on the TreeView. The code runs without errors when debugging. Can you tell me why the TreeView's data source is not updating? I used this code:

        Dim permissionRoot = From permissionNode In MyUser.UserPermissionXml.Root.Elements("menuNode")
        XmlTreeViewSource.Data = permissionRoot(0).ToString
        trvPermission.DataSource = XmlTreeViewSource
        trvPermission.DataBind()
        SetPermission(trvPermission.Nodes(0))

    The markup:

        <asp:TreeView ID="trvPermission" runat="server" ExpandDepth="2" ShowCheckBoxes="All"
            ShowLines="True" ForeColor="#005782">
            <DataBindings>
                <asp:TreeNodeBinding DataMember="menuNode" TextField="title" ValueField="value" />
            </DataBindings>
        </asp:TreeView>

    Read the article

  • Toolbox/framework to construct lightweight public-facing web site

    - by aSteve
    I am aware of full-blown content management systems (CMS) such as SugarCRM and TikiWiki, where content is typically stored in a database and edited through the same interface as it is published. While I like many of the features, such products are clearly aimed at enterprise-wide use rather than at public-facing sites. What I'd like to establish are potential alternatives that fill the space between a full-blown CMS and a hand-coded bespoke site. I like the way I can add modules to my CMS, allowing me to quickly introduce new functionality, and I'd like an analogous feature in a system for public web content. Modules I know I'd like include moderated comments; a web-form-to-email gateway; menus/tabs; and, in future, perhaps mapping or diaries or RSS integration, etc. Where my requirements differ from a CMS: I don't need (or want) most content to be editable through the main site, and, somehow, I do want to be able to preview how updates will be presented to the public rather than make live changes. For these purposes, in contrast to those where a typical CMS would be ideal, presentation is of paramount importance - and trumps any desire to immediately disseminate information. I realise that this is a very high-level question (suggestions of additional tags welcome) - I mentioned PHP only because, ideally, I'm looking for an open source solution and a PHP deployment is an easy option. What are my options?

    Read the article

  • What about parallelism across network using multiple PCs?

    - by MainMa
    Parallel computing is used more and more, and new framework features and shortcuts make it easier to use (for example the Parallel extensions which are directly available in .NET 4). Now what about parallelism across a network? I mean an abstraction of everything related to communications, creation of processes on remote machines, etc. Something like, in C#:

        NetworkParallel.ForEach(myEnumerable, () => {
            // Computing and/or access to web resource or local network database here
        });

    I understand that it is very different from multi-core parallelism. The two most obvious differences would probably be:

    - the fact that such a parallel task will be limited to computing, without being able, for example, to use files stored locally (but why not a database?), or even to use local variables, because it would really be two distinct applications rather than two threads of the same application;
    - the very specific implementation, requiring not just a separate thread (which is quite easy), but spawning a process on different machines, then communicating with them over the local network.

    Despite those differences, such parallelism is quite possible, even without speaking about distributed architecture. Do you think it will be implemented in a few years? Do you agree that it would enable developers to easily develop extremely powerful stuff with much less pain? Example: think about a business application which extracts data from the database, transforms it, and displays statistics. Let's say this application takes ten seconds to load data, twenty seconds to transform data and ten seconds to build charts on a single machine in a company, using all the CPU, whereas ten other machines are used at 5% of CPU most of the time. In such a case, every action may be done in parallel, resulting in probably six to ten seconds for the overall process instead of forty.

    Read the article

  • Best way to sign data in web form with user certificate

    - by salgiza
    We have a C# web app where users will connect using a digital certificate stored in their browsers. From the examples that we have seen, verifying their identity will be easy once we enable SSL, as we can access the fields in the certificate, using Request.ClientCertificate, to check the user's name. We have also been asked, however, to sign the data sent by the user (a few simple fields and a binary file) so that we can prove, without doubt, which user entered each record in our database. Our first thought was creating a small text signature including the fields (and, if possible, the MD5 of the file) and encrypting it with the private key of the certificate, but... As far as I know we can't access the private key of the certificate to sign the data, and I don't know whether there is any way to sign the fields in the browser, or whether we have no option other than using a Java applet. And if it's the latter, how would we do it? (Is there any open source applet we can use? Would it be better to create one ourselves?) Of course, it would be better if there was some way to "sign" the fields received on the server, using the data that we can access from the user's certificate. But if not, any info on the best way to solve the problem would be appreciated.

    Read the article

  • Reporting Services 2005 - Parameter reliant on cascading parameters

    - by sHr0oMaN
    Good day. I have the following: in a SSRS 2005 report I have three report parameters: FinancialPeriodType ("Month" or "Week" in a DropDownList), FinancialPeriod (a cascading DropDownList populated depending on the first selection) and another parameter, OpeningBalance, of type float. The first two parameters are cascading, i.e. the first parameter is used by the query populating the second's available values. This works fine. What I'm attempting to do is default the value of OpeningBalance to a value from a dataset populated by a stored procedure which takes in the first two parameters. However, as soon as I select a value for the first parameter, I get the following error:

        An error occurred during report processing. The value for the report parameter 'OpeningBalance' is not valid for its type.

    I've tried setting the default of the second parameter to a meaningful value (something like 200901), as well as defaulting the second parameter in the SQL stored procedure, with no effect. Using SQL Profiler I've noticed that selecting a value for the first parameter doesn't even execute the SQL used to obtain the available values for the second parameter.

    Read the article

  • RewriteRule to store thousands of files in subdirectories

    - by Brandon
    I have a website that will have millions of pages in a directory. I'd like to store those files on-disk in a bunch of subdirectories based on the first characters of the page name. For example, http://mysite.com/hugedir/somefile.html would be stored in /var/www/html/hugedir/s/o/m/e/f/ile.html. That is fairly trivial to do with a RewriteRule like so:

        RewriteRule ^hugedir/(.)(.)(.)(.)(.)(.*).html /hugedir/{$1}/{$2}/{$3}/{$4}/{$5}/$6.html
        RewriteRule ^hugedir/(.)(.)(.)(.)(.*).html /hugedir/{$1}/{$2}/{$3}/{$4}/{$5}.html
        RewriteRule ^hugedir/(.)(.)(.)(.*).html /hugedir/{$1}/{$2}/{$3}/{$4}.html
        RewriteRule ^hugedir/(.)(.)(.*).html /hugedir/{$1}/{$2}/{$3}.html
        RewriteRule ^hugedir/(.)(.*).html /hugedir/{$1}/{$2}.html
        RewriteRule ^hugedir/(.*).html /hugedir/{$1}.html

    However, the file name may contain hyphens or other non-standard characters and I'd really like to avoid having a directory named with a strange character. Ideally, I'd like to have a list of 'approved' characters and either eliminate or transform the unapproved characters to an underscore. Can anybody think of a way to do that? Or something else equivalent? Part of the requirement is that these be physical files on disk and it not be parsed with a scripting language.

    Read the article

  • System Calls in Windows & Native API?

    - by claws
    Recently I've been using a lot of assembly language on *NIX operating systems, and I was wondering about the Windows domain.

    Calling convention in Linux:

        mov $SYS_Call_NUM, %eax
        mov $param1, %ebx
        mov $param2, %ecx
        int $0x80

    That's it. That is how we make a system call in Linux.

    Reference for all system calls in Linux: regarding which $SYS_Call_NUM and which parameters to use, there is this reference: http://docs.cs.up.ac.za/programming/asm/derick_tut/syscalls.html. OFFICIAL reference: http://kernel.org/doc/man-pages/online/dir_section_2.html

    Calling convention in Windows: ???

    Reference for all system calls in Windows: ??? Unofficial: http://www.metasploit.com/users/opcode/syscalls.html, but how do I use these in assembly unless I know the calling convention? OFFICIAL: ???

    If you say they didn't document it, then how is one going to write libc for Windows without knowing the system calls? How is one going to do Windows assembly programming? At least in driver programming one needs to know these, right? Now, what's up with the so-called Native API? Are "Native API" and "Windows system calls" different terms referring to the same thing? To confirm, I compared these from two UNOFFICIAL sources:

    System calls: http://www.metasploit.com/users/opcode/syscalls.html
    Native API: http://undocumented.ntinternals.net/aindex.html

    My observations:

    - All system calls begin with the letters Nt, whereas the Native API contains a lot of functions which do not begin with Nt.
    - The system calls of Windows are a subset of the Native API; system calls are just part of the Native API.

    Can anyone confirm this and explain?

    Read the article

  • Move to PHP on Windows? Concerns, hints, "please don't do!"?

    - by Daniel
    I am considering moving from Microsoft languages to PHP (just for web dev), which has quite an interesting syntax, a Perl-ish look (but a wider programmer base), and it allows me to reuse the web without reinventing it. I have some concerns too. I would be more than happy to gather some wisdom from the Stack Overflow community (challenges to my opinions warmly welcome). Here are my doubts:

    1. Efficiency. CGI is slow; what am I supposed to use? FastCGI? Or something else?
    2. Efficiency + stability. Is PHP on Windows really stable and a good choice in terms of performance?
    3. Database. I use MSSQL very often (I regret it, but I like it). Could I widely and efficiently interface PHP with MSSQL (using stored procedures smartly, for example)?
    4. XSLT + XML performance. I work quite a lot with XML and XSLT and I really find the MS XML parser a great software component. Are the parsers used in PHP fast, reliable and efficient (I am interested mainly in DOM, not SAX)?
    5. Objects. Is the PHP object programming model valid and efficient?
    6. Regex. How efficient is PHP at processing regexes?

    Many thanks for your advice.

    Read the article

  • Goldbach theory in C

    - by nofe
    I want to write some code which takes any positive, even number (greater than 2) and gives me the smallest pair of primes that sum up to this number. I need this program to handle any integer up to 9 digits long. My aim is to make something that looks like this: Please enter a positive even integer ( greater than 2 ) : 10 The first primes adding : 3+7=10. Please enter a positive even integer ( greater than 2 ) : 160 The first primes adding : 3+157=160. Please enter a positive even integer ( greater than 2 ) : 18456 The first primes adding : 5+18451=18456. I don't want to use any library besides stdio.h. I don't want to use arrays, strings, or anything besides for the most basic toolbox: scanf, printf, for, while, do-while, if, else if, break, continue, and the basic operators (<,, ==, =+, !=, %, *, /, etc...). Please no other functions especially is_prime. I know how to limit the input to my needs so that it loops until given a valid entry. So now I'm trying to figure out the algorithm. I thought of starting a while loop like something like this: #include <stdio.h> long first, second, sum, goldbach, min; long a,b,i,k; //indices int main (){ while (1){ printf("Please enter a positive integer :\n"); scanf("%ld",&goldbach); if ((goldbach>2)&&((goldbach%2)==0)) break; else printf("Wrong input, "); } while (sum!=goldbach){ for (a=3;a<goldbach;a=(a+2)) for (i=2;(goldbach-a)%i;i++) first = a; for (b=5;b<goldbach;b=(b+2)) for (k=2;(goldbach-b)%k;k++) sum = first + second; } }

    Read the article

  • C++ BigInt multiplication conceptual problem

    - by Kapo
    I'm building a small BigInt library in C++ for use in my programming language. The structure is like the following: short digits[ 1000 ]; int len; I have a function that converts a string into a bigint by splitting it up into single chars and putting them into digits. The numbers in digits are all reversed, so the number 123 would look like the following: digits[0]=3 digits[1]=3 digits[2]=1 I have already managed to code the adding function, which works perfectly. It works somewhat like this: overflow = 0 for i ++ until length of both numbers exceeded: add numberA[ i ] to numberB[ i ] add overflow to the result set overflow to 0 if the result is bigger than 10: substract 10 from the result overflow = 1 put the result into numberReturn[ i ] (Overflow is in this case what happens when I add 1 to 9: Substract 10 from 10, add 1 to overflow, overflow gets added to the next digit) So think of how two numbers are stored, like those: 0 | 1 | 2 --------- A 2 - - B 0 0 1 The above represents the digits of the bigints 2 (A) and 100 (B). - means uninitialized digits, they aren't accessed. So adding the above number works fine: start at 0, add 2 + 0, go to 1, add 0, go to 2, add 1 But: When I want to do multiplication with the above structure, my program ends up doing the following: Start at 0, multiply 2 with 0 (eek), go to 1, ... So it is obvious that, for multiplication, I have to get an order like this: 0 | 1 | 2 --------- A - - 2 B 0 0 1 Then, everything would be clear: Start at 0, multiply 0 with 0, go to 1, multiply 0 with 0, go to 2, multiply 1 with 2 How can I manage to get digits into the correct form for multiplication? I don't want to do any array moving/flipping - I need performance!

    Read the article

  • Cast base class object to derived class

    - by Popgalop
    Lets say I have two classes, animal and dog like this. class Animal { }; class Dog : public Animal { }; And I have an animal object named animal, that is actually an instance of dog, how would I cast it back to dog? This may seem like an odd question, but I need it because I am writing a programming language interpreter, and on the stack everything is stored as a BaseObject, and all the other datatypes extend BaseObject. How would I cast the base object from the stack, to a specific data type? I have tried something like this Dog dog = static_cast<Dog>(animal); But it gives me an error 1>------ Build started: Project: StackTests, Configuration: Debug Win32 ------ 1> StackTests.cpp 1>c:\users\owner\documents\visual studio 2012\projects\stacktests\stacktests\stacktests.cpp(173): error C2440: 'static_cast' : cannot convert from 'Animal' to 'Dog' 1> No constructor could take the source type, or constructor overload resolution was ambiguous 1>c:\users\owner\documents\visual studio 2012\projects\stacktests\stacktests\stacktests.cpp(173): error C2512: 'Dog' : no appropriate default constructor available ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

    Read the article

  • Proper API Design for Version Independence?

    - by Justavian
    I've inherited an enormous .NET solution of about 200 projects. There are now some developers who wish to start adding their own components into our application, which will require that we begin exposing functionality via an API. The major problem with that, of course, is that the solution we've got on our hands contains such a spider web of dependencies that we have to be careful to avoid sabotaging the API every time there's a minor change somewhere in the app. We'd also like to be able to incrementally expose new functionality without destroying any previous third-party apps. I have a way to solve this problem, but I'm not sure it's the ideal way - I was looking for other ideas. My plan would be to essentially have three dlls:

    - APIServer_1_0.dll - this would be the dll with all of the dependencies.
    - APIClient_1_0.dll - this would be the dll our developers would actually refer to. No references to any of the mess in our solution.
    - APISupport_1_0.dll - this would contain the interfaces which allow the client piece to dynamically load the "server" component and perform whatever functions are required. Both of the above dlls would depend upon this. It would be the only dll that the "client" piece refers to.

    I initially arrived at this design because the way in which we do inter-process communication between Windows services is somewhat similar (except that the client talks to the server via named pipes, rather than dynamically loading dlls). While I'm fairly certain I can make this work, I'm curious to know if there are better ways to accomplish the same task.

    Read the article

  • Member function overloading/template specialization issue

    - by Ferruccio
    I've been trying to call the overloaded table::scan_index(std::string, ...) member function without success. For the sake of clarity, I have stripped out all non-relevant code. I have a class called table which has an overloaded/templated member function named scan_index() in order to handle strings as a special case:

        class table : boost::noncopyable
        {
        public:
            template <typename T>
            void scan_index(T val, std::function<bool (uint recno, T val)> callback)
            {
                // code
            }

            void scan_index(std::string val, std::function<bool (uint recno, std::string val)> callback)
            {
                // code
            }
        };

    Then there is a hitlist class which has a number of templated member functions which call table::scan_index(T, ...):

        class hitlist
        {
        public:
            template <typename T>
            void eq(uint fieldno, T value)
            {
                table* index_table = db.get_index_table(fieldno);
                // code
                index_table->scan_index<T>(value, [&](uint recno, T n)->bool {
                    // code
                });
            }
        };

    And, finally, the code which kicks it all off:

        hitlist hl;
        // code
        hl.eq<std::string>(*fieldno, p1.to_string());

    The problem is that instead of calling table::scan_index(std::string, ...), it calls the templated version. I have tried using both overloading (as shown above) and a specialized function template (below), but nothing seems to work. After staring at this code for a few hours, I feel like I'm missing something obvious. Any ideas?

        template <>
        void scan_index<std::string>(std::string val, std::function<bool (uint recno, std::string val)> callback)
        {
            // code
        }
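
    One observation that may help, sketched against the class above: with an explicit template argument list, a call such as scan_index<T>(...) only ever considers the function template, so the plain std::string overload can never be selected. One assumed workaround is to explicitly specialize the member template at namespace scope, which the call scan_index<std::string>(...) will then pick up:

        // Sketch: explicit specialization of the member function template,
        // declared outside the class body (assumes the table definition above).
        template <>
        void table::scan_index<std::string>(std::string val,
                                            std::function<bool (uint recno, std::string val)> callback)
        {
            // string-specific code
        }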

    Read the article

  • str_replace() and strpos() for arrays?

    - by Josh
    I'm working with an array of data that I've changed the names of some array keys, but I want the data to stay the same basically... Basically I want to be able to keep the data that's in the array stored in the DB, but I want to update the array key names associated with it. Previously the array would have looked like this: $var_opts['services'] = array('foo-1', 'foo-2', 'foo-3', 'foo-4'); Now the array keys are no longer prefixed with "foo", but rather with "bar" instead. So how can I update the array variable to get rid of the "foos" and replace with "bars" instead? Like so: $var_opts['services'] = array('bar-1', 'bar-2', 'bar-3', 'bar-4'); I'm already using if(isset($var_opts['services']['foo-1'])) { unset($var_opts['services']['foo-1']); } to get rid of the foos... I just need to figure out how to replace each foo with a bar. I thought I would use str_replace on the whole array, but to my dismay it only works on strings (go figure, heh) and not arrays.

    Read the article

  • What's the best way to communicate the purpose of a string parameter in a public API?

    - by Dave
    According to the guidance published in New Recommendations for Using Strings in Microsoft .NET 2.0, the data in a string may exhibit one of the following types of behavior: A non-linguistic identifier, where bytes match exactly. A non-linguistic identifier, where case is irrelevant, especially a piece of data stored in most Microsoft Windows system services. Culturally-agnostic data, which still is linguistically relevant. Data that requires local linguistic customs. Given that, I'd like to know the best way to communicate which behavior is expected of a string parameter in a public API. I wasn't able to find an answer in the Framework Design Guidelines. Consider the following methods: f(string this_is_a_linguistic_string) g(string this_is_a_symbolic_identifier_so_use_ordinal_compares) Is variable naming and XML documentation the best I can do? Could I use attributes in some way to mark the requirements of the string? Now consider the following case: h(Dictionary<string, object> dictionary) Note that the dictionary instance is created by the caller. How do I communicate that the callee expects the IEqualityComparer<string> object held by the dictionary to perform, for example, a case-insensitive ordinal comparison?

    Read the article
