Search Results

Search found 17501 results on 701 pages for 'stored functions'.


  • Member function overloading/template specialization issue

    - by Ferruccio
    I've been trying to call the overloaded table::scan_index(std::string, ...) member function without success. For the sake of clarity, I have stripped out all non-relevant code. I have a class called table which has an overloaded/templated member function named scan_index() in order to handle strings as a special case.

        class table : boost::noncopyable {
        public:
            template <typename T>
            void scan_index(T val, std::function<bool (uint recno, T val)> callback) {
                // code
            }

            void scan_index(std::string val, std::function<bool (uint recno, std::string val)> callback) {
                // code
            }
        };

    Then there is a hitlist class which has a number of templated member functions which call table::scan_index(T, ...):

        class hitlist {
        public:
            template <typename T>
            void eq(uint fieldno, T value) {
                table* index_table = db.get_index_table(fieldno);
                // code
                index_table->scan_index<T>(value, [&](uint recno, T n)->bool {
                    // code
                });
            }
        };

    And, finally, the code which kicks it all off:

        hitlist hl;
        // code
        hl.eq<std::string>(*fieldno, p1.to_string());

    The problem is that instead of calling table::scan_index(std::string, ...), it calls the templated version. I have tried using both overloading (as shown above) and a specialized function template (below), but nothing seems to work. After staring at this code for a few hours, I feel like I'm missing something obvious. Any ideas?

        template <>
        void scan_index<std::string>(std::string val, std::function<bool (uint recno, std::string val)> callback) {
            // code
        }
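
    A minimal sketch (not from the question) of one likely cause: a call that spells out an explicit template argument list, such as scan_index<T>(value, ...), only considers the function template, so the plain std::string overload is never a candidate. Dropping the explicit <T> lets ordinary overload resolution prefer the non-template overload:

        #include <iostream>
        #include <string>

        struct table {
            template <typename T>
            void scan_index(T) { std::cout << "template\n"; }
            void scan_index(std::string) { std::cout << "overload\n"; }
        };

        int main() {
            table t;
            std::string s = "x";
            t.scan_index<std::string>(s); // prints "template": explicit <...> excludes the non-template overload
            t.scan_index(s);              // prints "overload": non-template wins on an exact match
        }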

  • Void* array casting to float, int32, int16, etc.

    - by Griffin
    Hey guys, I've got an array of PCM data; it could be 16 bit, 24 bit packed, 32 bit, etc. It could be signed or unsigned, and it could be 32 or 64 bit floating point. It is currently stored as a "void**" matrix, indexed by channel, then by frame. The goal is to allow my library to take in any PCM format and buffer it, without requiring manipulation of the data to fit a designated structure. If the A/D converter spits out 24 bit packed arrays of interleaved PCM, I need to accept it gracefully. I also need to support 16 bit non-interleaved, as well as any permutation of the above formats. I know the bit depth and other information at runtime, and I'm trying to code efficiently while not duplicating code. What I need is an effective way to cast the matrix, put PCM data into the matrix, and then pull it out later. I can cast the matrix to int32_t or int16_t for the 32 and 16 bit signed PCM respectively; I'll probably have to store the 24 bit PCM in an int32_t for 32 bit, 8 bit byte systems as well. Can anyone recommend a good way to put data into this array, and pull it out later? I'd like to avoid large sections of code which look like:

        switch( mFormat ) {
            case 1: // unsigned 8 bit
                for( int i = 0; i < mChannels; i++ )
                    framesArray = (uint8_t*)pcm[i];
                break;
            case 2: // signed 8 bit
                for( int i = 0; i < mChannels; i++ )
                    framesArray = (int8_t*)pcm[i];
                break;
            case 3: // unsigned 16 bit
                ...

    Limitations: I'm working in C/C++, no templates, no RTTI, no STL. Think embedded. Things get trickier when I have to port this to a DSP with 16 bit bytes. Does anybody have any useful macros they might be willing to share? Thanks, -Griff
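
    A hedged sketch of the macro route the question asks about (the format codes, type mapping and function name below are made up for illustration, not taken from the original): an X-macro table maps each runtime format code to its sample type once, so the per-format access code is written a single time instead of once per case.

        #include <stdint.h>

        /* Hypothetical format-code -> sample-type table; adjust codes and types to the real enum. */
        #define PCM_FORMAT_TABLE(X) \
            X(1, uint8_t)           \
            X(2, int8_t)            \
            X(3, uint16_t)          \
            X(4, int16_t)           \
            X(5, int32_t)           \
            X(6, float)

        /* Read one sample, widened to double, from channel ch, frame fr. */
        static double pcm_read(void **pcm, int fmt, int ch, long fr)
        {
            switch (fmt) {
        #define X(code, type) case code: return (double)((type *)pcm[ch])[fr];
            PCM_FORMAT_TABLE(X)
        #undef X
            default:
                return 0.0; /* unknown format */
            }
        }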

  • System Calls in windows & Native API?

    - by claws
    Recently I've been using a lot of assembly language on *NIX operating systems, and I was wondering about the Windows domain.

    Calling convention in Linux:

        mov $SYS_Call_NUM, %eax
        mov $param1, %ebx
        mov $param2, %ecx
        int $0x80

    That's it. That is how we make a system call in Linux.

    Reference for all system calls in Linux: regarding which $SYS_Call_NUM and which parameters to use, there is this reference: http://docs.cs.up.ac.za/programming/asm/derick_tut/syscalls.html. Official reference: http://kernel.org/doc/man-pages/online/dir_section_2.html

    Calling convention in Windows: ??? Reference for all system calls in Windows: ??? Unofficial: http://www.metasploit.com/users/opcode/syscalls.html, but how do I use these in assembly unless I know the calling convention? Official: ???

    If you say they didn't document it, then how is one going to write libc for Windows without knowing the system calls? How is one going to do Windows assembly programming? At least in driver programming one needs to know these, right?

    Now, what's up with the so-called Native API? Are "Native API" and "Windows system calls" different terms referring to the same thing? To check, I compared these two unofficial sources. System calls: http://www.metasploit.com/users/opcode/syscalls.html. Native API: http://undocumented.ntinternals.net/aindex.html

    My observations: all system calls begin with the letters Nt, whereas the Native API contains many functions which do not begin with Nt. The system calls of Windows are a subset of the Native API; system calls are just part of the Native API. Can anyone confirm this and explain?
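
    Not part of the original question, but a small hedged C++ sketch of how user-mode code typically reaches the Native API on Windows: rather than issuing the system-call instruction directly (the call numbers change between Windows versions, which is one reason they are undocumented), you call the Nt* functions exported by ntdll.dll, which contain the actual system-call stubs.

        #include <windows.h>
        #include <stdio.h>

        typedef LONG (NTAPI *NtClose_t)(HANDLE);   // NTSTATUS is a LONG

        int main() {
            // ntdll.dll is mapped into every process, so GetModuleHandle is enough.
            HMODULE ntdll = GetModuleHandleW(L"ntdll.dll");
            NtClose_t pNtClose = (NtClose_t)GetProcAddress(ntdll, "NtClose");

            HANDLE ev = CreateEventW(NULL, FALSE, FALSE, NULL);
            LONG status = pNtClose(ev);            // 0 == STATUS_SUCCESS
            printf("NtClose returned 0x%08lX\n", status);
            return 0;
        }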

  • Move to PHP on Windows? Concerns, hints, "please don't do!"?

    - by Daniel
    I am considering moving from Microsoft languages to PHP (just for web dev), which has quite an interesting syntax, a Perlish look (but a wider programmer base), and allows me to reuse the web without reinventing it. I have some concerns too, and I would be more than happy to gather some wisdom from the Stack Overflow community (challenges to my opinions warmly welcome). Here are my doubts:

    1. Efficiency. CGI is slow; what am I supposed to use? FastCGI? Or what else?
    2. Efficiency + stability. Is PHP on Windows really stable and a good choice in terms of performance?
    3. Database. I very often use MSSQL (I regret it, but I like it). Could I widely and efficiently interface PHP with MSSQL (using stored procedures smartly, for example)?
    4. XSLT + XML performance. I work quite a lot with XML and XSLT and I really find the MS XML parser a great software component. Are the parsers used in PHP fast, reliable and efficient (I am interested mainly in DOM, not SAX)?
    5. Objects. Is the PHP object programming model valid and efficient?
    6. Regex. How efficient is PHP at processing regexps?

    Many thanks for your advice.

  • Quickly or concisely determine the longest string per column in a row-based data collection

    - by ccornet
    Judging from the failure of my last inquiry, I need to calculate and preset the widths of a set of columns in a table that is being made into an Excel file. Unfortunately, the string data is stored in a row-based format, but the widths must be calculated in a column-based format. The data for the spreadsheets are generated from the following two collections:

        var dictFiles = l.Items.Cast<SPListItem>().GroupBy(foo => foo.GetSafeSPValue("Category")).ToDictionary(bar => bar.Key);
        StringDictionary dictCols = GetColumnsForItem(l.Title);

    Where l is an SPList whose title determines which columns are used. Each SPListItem corresponds to a row of data, and the rows are sorted into separate worksheets based on Category (hence the dictionary). The second line is just a simple StringDictionary that has the column name (A, B, C, etc.) as a key and the corresponding SPListItem field display name as the corresponding value. So for each Category, I enumerate through dictFiles[somekey] to get all the rows in that sheet, and get the particular cell data using SPListItem.Fields[dictCols[colName]].

    What I am asking is: is there a quick or concise method, for any one dictFiles[somekey], to retrieve a readout of the longest string in each column provided by dictCols? If it is impossible to get both quickness and conciseness, I can settle for either (since I always have the O(n*m) route of just enumerating the collection and updating an array whenever strCurrent.Length > strLongest.Length). For example, if the goal table was the following...

        Item#   Field1      Field2      Field3
        1       Oarfish     Atmosphere  Pretty
        2       Raven       Radiation   Adorable
        3       Sunflower   Flowers     Cute

    I'd like a function which could cleanly take the collection of items 1, 2, and 3 and output, in the correct order:

        Sunflower, Atmosphere, Adorable

    Using .NET 3.5 and C# 3.0.

  • Basic Magento layout-related questions

    - by user197992
    Here is what I think about Magento (please correct me if I am wrong):

    1) Each module has its own layout.xml stored in the /interface/theme/layouts/ folder.
    2) Magento loads all these layouts for the current theme and creates one big XML file.

    Now here is what I am confused about:

    a) If Magento loads /default/default/ (interface & theme), then why are all the templates and layouts inside base/default/?
    b) What if I create a module named "page" inside my namespace "Jason", i.e. Jason_Page? What will happen to blocks in layout files which are named
    c) Since all the layouts are loaded and merged into one big XML file, what happens to all those reference blocks which have the same name attribute and are inside the "Default" handle tag? e.g.
    d) What is the Local.xml layout for, and what is its use?
    e) What is the relationship between a module named foo and its layout name foo.xml? What will happen to layout.xml if two modules with the same name exist in different namespaces?

    Thanks in advance.

  • Ruby on Rails login using legacy user database

    - by ricsmania
    Hello, I have a Rails application that connects to a legacy database (Oracle) and displays some information from a particular user. Right now the user is passed as a URL parameter, but this has obvious security issues because users should only be able to see their own data. To solve that, I want to implement a user login, and I did some research and came across 2 components for that, restful_authentication and authlogic. The problem is that I need to use an existing user/password database instead of creating a new one, which is the common way to use those components. The password is encrypted by a custom Oracle package, but let's assume it is stored as plain text to make things simpler. I only need very basic functionality, which is login a user and keep them logged in forever until logout. No changes to the database will be made by this application, so there's no need for sign up, e-mail activation, reset password, etc. Can someone point me in the right direction on how to do that? Is any of those 2 components a good solution? If not, what would be recommended? Thanks!

  • Can TFS workspaces be used without being tied to a specific machine?

    - by GWLlosa
    So I've got a situation where we have a project with 10 developers. Each developer, when they come in for the day, is randomly issued a machine to use for development that day. The machine names are different, say DEV01 - DEV10. At the time that they are issued to the developers, the machines are identical, and no changes the developers make during the day are persisted on the machines (source code changes are stored in TFS, not locally). These are of course actually virtual machines, but that's not really relevant to the point at hand. The problem is that each morning, the developers run into 3 issues:

    1) The machine that they are assigned may not be the same machine they were last assigned to. For example, DevMan A might have used DEV04 yesterday, and received DEV06 today. His workspace definitions are now tied to DEV06; he must create a new workspace, or migrate the old workspace to DEV04.

    2) The machine that they are assigned may have been in use yesterday, and some of the mappings may conflict. For example, DevMan A might have DEV04 today, and wish to create a workspace mapping the project folder to "C:\MyProj\Solution". However, DevMan B had DEV04 yesterday, and he used the same project folder. TFS now complains.

    3) This may be the first time they are on a given machine. They now need to recreate for this machine all of their source-control mappings for the new machine.

    All of these issues can be resolved in a straightforward fashion on a case-by-case basis, but it does sap some productivity from the morning. We'd much prefer if the TFS workspace definitions could be 'relaxed', such that they did not include the machine name in the definition somehow. Barring that, if anyone is aware of a solution to the above problems that can run automatically, or with limited user intervention, that would also be ideal.

  • Proper API Design for Version Independence?

    - by Justavian
    I've inherited an enormous .NET solution of about 200 projects. There are now some developers who wish to start adding their own components into our application, which will require that we begin exposing functionality via an API. The major problem with that, of course, is that the solution we've got on our hands contains such a spider web of dependencies that we have to be careful to avoid sabotaging the API every time there's a minor change somewhere in the app. We'd also like to be able to incrementally expose new functionality without destroying any previous third party apps. I have a way to solve this problem, but I'm not sure it's the ideal way - I was looking for other ideas. My plan would be to essentially have three dlls:

    - APIServer_1_0.dll - this would be the dll with all of the dependencies.
    - APIClient_1_0.dll - this would be the dll our developers would actually refer to. No references to any of the mess in our solution.
    - APISupport_1_0.dll - this would contain the interfaces which would allow the client piece to dynamically load the "server" component and perform whatever functions are required. Both of the above dlls would depend upon this. It would be the only dll that the "client" piece refers to.

    I initially arrived at this design because the way in which we do inter-process communication between Windows services is sort of similar (except that the client talks to the server via named pipes, rather than dynamically loading dlls). While I'm fairly certain I can make this work, I'm curious to know if there are better ways to accomplish the same task.

  • Continuous Flash music player while navigating site

    - by phx-zs
    I have a site that includes a Flash music player integrated into the layout. I want users to be able to navigate around the site without interrupting the music. I've done plenty of research and thinking, and the following are the options I came up with (keeping in mind I want to be as SEO friendly as possible). Anyone have another idea?

    1. AJAX: I set up a version that changes the main content div to whatever nav link they click, thereby not interrupting the Flash player. I set it up in the proper search-engine-friendly manner with direct links and JQuery/Ajax functions. If someone goes to site.com/ and clicks the Contact nav link, it loads what's in the main content div on site.com/contact.php into the main content div and changes the URL bar to site.com/#Contact. The same goes for if they go to site.com/contact.php and click About in the nav: it loads the About content and changes the URL bar to site.com/contact.php#About. Obviously this opens up a whole new can of worms with AJAX and hash navigation/history issues, and I would end up with people possibly linking to things like site.com/contact.php#About (which I think looks terrible and can't be too great for SEO).

    2. Store the Flash player vars somewhere and reload them with the page: I'm not sure how to go about this, but I thought about keeping my regular navigation without AJAX and have it so when a user clicks a nav link, before it changes pages it stores the Flash player vars (current song and song position) somewhere, then loads them into Flash when the new page loads.

    3. Something with an iframe?

    4. A good alternative to a Flash player that will work for this type of application?

    Thanks!

  • Reporting Services 2005 - Parameter reliant on cascading parameters

    - by sHr0oMaN
    Good day. In a SSRS 2005 report I have three report parameters: FinancialPeriodType ("Month" or "Week" in a DropDownList), FinancialPeriod (a cascading DropDownList populated depending on the first selection) and another parameter, OpeningBalance, of type float. The first two parameters are cascading, i.e. the first parameter is used by the query populating the second's available values. This works fine. What I'm attempting to do is default the value of OpeningBalance to a value from a dataset populated by a stored procedure which takes in the first two parameters. However, as soon as I select a value for the first parameter, I get the following error:

        An error occurred during report processing. The value for the report parameter 'OpeningBalance' is not valid for its type.

    I've tried setting the default of the second parameter to be a meaningful default (something like 200901) as well as defaulting the second parameter in the SQL stored procedure, with no effect. Using SQL Profiler I've noticed that selecting a value for the first parameter doesn't even execute the SQL used to obtain available values for the second parameter.

  • Zend XML parsing

    - by Vincent
    I have an xml file called error.xml like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <errorList>
            <error>
                <code>0</code>
                <desc>Fault</desc>
                <ufmessage>Fault</ufmessage>
            </error>
            <error>
                <code>1</code>
                <desc>Unknown</desc>
                <ufmessage>Unknown</ufmessage>
            </error>
            <error>
                <code>2</code>
                <desc>Internal Error</desc>
                <ufmessage>Internal Error</ufmessage>
            </error>
        </errorList>

    I am using Zend and have the above xml file stored in a Zend Registry variable called "apperrors" like this:

        $apperrors = new Zend_Config_Xml('error.xml');
        Zend_Registry::set('apperrors', $apperrors->errorList);

    If my code throws an error code of 2, I want a snippet of code to check the error code with the "code" key in the "apperrors" array and echo the ufmessage value corresponding to it. How can I do this? Ex:

        // Error thrown by the code. Hardcoded for now...
        $errorThrownByCode = 2;

        // Get the Zend Registry value for all errors
        $errorList = Zend_Registry::get('apperrors');

        // Write some code to compare $errorThrownByCode with $errorList->error->code
        // If found echo $errorList->error->ufmessage, else echo "An Unknown error occured";

    Thanks

  • How to Store and Retrieve Images Using SQL Server (Server Management Studio)

    - by Joe Majewski
    I am having difficulties when trying to insert files into a SQL Server database. I'll try to break this down as best as I can:

    1. What data type should I be using to store image files (jpeg/png/gif/etc)? Right now my table is using the image data type, but I am curious if varbinary would be a better option.

    2. How would I go about inserting the image into the database? Does Microsoft SQL Server Management Studio have any built in functions that allow insertions of files into tables? If so, how is that done? Also, how could this be done through the use of an HTML form, with PHP handling the input data and placing it into the table?

    3. How would I fetch the image from the table and display it on the page? I understand how to SELECT the cell's contents, but how would I go about translating that into a picture? Would I have to send a header('Content-Type: image/jpeg')?

    I have no problem doing any of these things with MySQL, but the SQL Server environment is still new to me, and I am working on a project for my job that requires the use of stored procedures to grab various data. Any and all help is appreciated.

  • Can bind successfully to the LDAP server, but need to know how to find the user within AD

    - by Brad
    I created a login form to bind to the LDAP server; if the bind is successful, it creates a session (in which the user's username is stored), and then I go to another page that has session_start(); and it works fine. What I want to do now is add code to test whether that user is a member of a specific group. So in theory, this is what I want to do:

        if(username session is valid) {
            search ldap for user -> get list of groups user is member of
            foreach(group they are member of) {
                switch(group) {
                    case STAFF:
                        print 'they are member of staff group';
                        $access = true;
                        break;
                    default:
                        print 'not a member of STAFF group';
                        $access = false;
                        break;
                }
                if(group == STAFF) {
                    break;
                }
            }
            if($access == TRUE) {
                // you have access to the content on this page
            } else {
                // you do not have access to this page
            }
        }

    How do I do an ldap_search without binding? I don't want to keep asking for their password on each page, and I can't pass their password through a session. Any help is appreciated.

  • Thread mutex behaviour

    - by Alberteddu
    Hi there, I'm learning C. I'm writing an application with multiple threads; I know that when a variable is shared between two or more threads, it is better to lock/unlock using a mutex to avoid deadlock and inconsistency of variables. This is very clear when I want to change or view one variable.

        int i = 0; /** Global */
        static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

        /** Thread 1. */
        pthread_mutex_lock(&mutex);
        i++;
        pthread_mutex_unlock(&mutex);

        /** Thread 2. */
        pthread_mutex_lock(&mutex);
        i++;
        pthread_mutex_unlock(&mutex);

    This is correct, I think. The variable i, at the end of the executions, contains the integer 2. Anyway, there are some situations in which I don't know exactly where to put the two function calls. For example, suppose you have a function obtain(), which returns a global variable. I need to call that function from within the two threads. I also have two other threads that call the function set(), defined with a few arguments; this function will set the same global variable. The two functions are necessary when you need to do something before getting/setting the var.

        /** (0) */
        /** Thread 1, or 2, or 3... */
        if(obtain() == something) {
            if(obtain() == somethingElse) {
                // Do this, sometimes obtain() and sometimes set(random number) (1)
            } else {
                // Do that, just obtain(). (2)
            }
        } else {
            // Do this and do that (3)
            // If # of thread * 3 > 10, then set(3*10) For example. (4)
        }
        /** (5) */

    Where do I have to lock, and where do I have to unlock? The situation can be, I think, even more complex. I will appreciate an exhaustive answer. Thank you in advance. —Alberto
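
    A hedged sketch, not from the question (obtain()/set() below are stand-ins for the poster's functions), of the usual answer: hold the lock across the entire read-then-decide-then-write sequence, so no other thread can change the variable between the check at (0) and the set at (1) or (4).

        #include <pthread.h>

        static int shared_value = 0;                 /* the global that obtain()/set() work on */
        static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

        /* These must only be called with the mutex already held. */
        static int  obtain_locked(void)   { return shared_value; }
        static void set_locked(int v)     { shared_value = v; }

        void worker(int something, int somethingElse) {
            pthread_mutex_lock(&m);                  /* (0) lock before the first obtain() */
            if (obtain_locked() == something) {
                if (obtain_locked() == somethingElse)
                    set_locked(42);                  /* (1) still under the same lock */
            } else {
                set_locked(3 * 10);                  /* (3)-(4) */
            }
            pthread_mutex_unlock(&m);                /* (5) unlock after the last access */
        }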

  • In Emacs, how can I use imenu more sensibly with C#?

    - by Cheeso
    I've used emacs for a long time, but I haven't been keeping up with a bunch of features. One of these is speedbar, which I just briefly investigated now. Another is imenu. Both of these were mentioned in in-emacs-how-can-i-jump-between-functions-in-the-current-file? Using imenu, I can jump to particular methods in the module I'm working in. But there is a parse hierarchy that I have to negotiate before I get the option to choose (with autocomplete) the method name. It goes like this: I type M-x imenu and then I get to choose Using or Types. The Using choice allows me to jump to any of the using statements at the top level of the C# file (something like imports statements in a Java module, for those of you who don't know C#). Not super helpful. I choose Types. Then I have to choose a namespace and a class, even though there is just one of each in the source module. At that point I can choose between variables, types, and methods. If I choose methods I finally get the list of methods to choose from. The hierarchy I traverse looks like this:

        Using
        Types
            Namespace
                Class
                    Types
                    Variables
                    Methods
                        method names

    Only after I get to the 5th level do I get to select the thing I really want to jump to: a particular method. Imenu seems intelligent about the source module, but kind of hard to use. Am I doing it wrong?

  • Cast base class object to derived class

    - by Popgalop
    Let's say I have two classes, animal and dog, like this:

        class Animal {
        };

        class Dog : public Animal {
        };

    And I have an animal object named animal that is actually an instance of dog; how would I cast it back to dog? This may seem like an odd question, but I need it because I am writing a programming language interpreter, and on the stack everything is stored as a BaseObject, and all the other datatypes extend BaseObject. How would I cast the base object from the stack to a specific data type? I have tried something like this:

        Dog dog = static_cast<Dog>(animal);

    But it gives me an error:

        1>------ Build started: Project: StackTests, Configuration: Debug Win32 ------
        1>  StackTests.cpp
        1>c:\users\owner\documents\visual studio 2012\projects\stacktests\stacktests\stacktests.cpp(173): error C2440: 'static_cast' : cannot convert from 'Animal' to 'Dog'
        1>  No constructor could take the source type, or constructor overload resolution was ambiguous
        1>c:\users\owner\documents\visual studio 2012\projects\stacktests\stacktests\stacktests.cpp(173): error C2512: 'Dog' : no appropriate default constructor available
        ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
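
    Not part of the question, but a minimal hedged sketch of the usual approach: keep the stack elements as pointers (or smart pointers) to the base class and downcast the pointer, rather than casting an Animal object by value, which is what the error above complains about since it would have to construct a new Dog from an Animal.

        class Animal {
        public:
            virtual ~Animal() {}          // a virtual member makes the type polymorphic
        };

        class Dog : public Animal {
        public:
            void bark() {}
        };

        int main() {
            Animal* animal = new Dog();   // held through a base-class pointer, as on the interpreter stack

            // Checked downcast: yields nullptr if the object is not really a Dog (needs RTTI).
            if (Dog* dog = dynamic_cast<Dog*>(animal))
                dog->bark();

            // Unchecked downcast: fine only when the dynamic type is known for certain.
            static_cast<Dog*>(animal)->bark();

            delete animal;
            return 0;
        }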

  • Parse files in directory/insert in database

    - by jakesankey
    Hey there, here is my dilemma... I have a directory full of .txt comma delimited files, arranged as shown below. What I want to do is to import each of these into a SQL or SQLite database, appending each one below the last (1 table)... I am open to C# or VB scripting and just not sure how to accomplish this. I want to only extract and import the data starting BELOW the 'Feat. Type,Feat. Name, etc' line. These are stored in a \mynetwork\directory\stats folder on my network drive. Ideally I will be able to add functionality that will make the software/script know not to re-add the file to the database once it has already done so as well. Any guidance or tips is appreciated!

        $$ SAMPLE=
        $$ FIXTURE=-
        $$ OPERATOR=-
        $$ INSPECTION PROCESS=CMM #4
        $$ PROCESS OPERATION=-
        $$ PROCESS SEQUENCE=-
        $$ TRIAL=-
        Feat. Type,Feat. Name,Value,Actual,Nominal,Dev.,Tol-,Tol+,Out of Tol.,Comment
        Point,_FF_PLN_A_1,X,-17.445,-17.445,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_1,Y,-195.502,-195.502,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_1,Z,32.867,33.500,-0.633,-0.800,0.800,,
        Point,_FF_PLN_A_2,X,-73.908,-73.908,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_2,Y,-157.957,-157.957,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_2,Z,32.792,33.500,-0.708,-0.800,0.800,,
        Point,_FF_PLN_A_3,X,-100.180,-100.180,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_3,Y,-142.797,-142.797,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_3,Z,32.768,33.500,-0.732,-0.800,0.800,,
        Point,_FF_PLN_A_4,X,-160.945,-160.945,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_4,Y,-112.705,-112.705,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_4,Z,32.719,33.500,-0.781,-0.800,0.800,,
        Point,_FF_PLN_A_5,X,-158.096,-158.096,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_5,Y,-73.821,-73.821,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_5,Z,32.756,33.500,-0.744,-0.800,0.800,,
        Point,_FF_PLN_A_6,X,-195.670,-195.670,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_6,Y,-17.375,-17.375,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_6,Z,32.767,33.500,-0.733,-0.800,0.800,,
        Point,_FF_PLN_A_7,X,-173.759,-173.759,0.000,-999.000,999.000,,
        Point,_FF_PLN_A_7,Y,14.876,14.876,0.000,-999.000,999.000,,

  • Statistical analysis on large data set to be published on the web

    - by dassouki
    I have a non-computer-related data logger that collects data from the field. This data is stored as text files, and I manually lump the files together and organize them. The current format is one csv file per year per logger. Each file is around 4,000,000 lines x 7 loggers x 5 years = a lot of data. Some of the data is organized in bins (item_type, item_class, item_dimension_class), and other data is more unique, such as item_weight, item_color, date_collected, and so on... Currently, I do statistical analysis on the data using a python/numpy/matplotlib program I wrote. It works fine, but the problem is, I'm the only one who can use it, since it and the data live on my computer. I'd like to publish the data on the web using a postgres db; however, I need to find or implement a statistical tool that'll take a large postgres table and return statistical results within an adequate time frame. I'm not familiar with python for the web; however, I'm proficient with PHP on the web side, and python on the offline side. Users should be allowed to create their own histograms and data analysis. For example, a user can search for all items that are blue and shipped between week x and week y, while another user can sort the weight distribution of all items by hour for the whole year. I was thinking of creating and indexing my own statistical tools, or automating the process somehow to emulate most queries. This seemed inefficient. I'm looking forward to hearing your ideas. Thanks.

  • PHP / Zend custom date input formats

    - by mld
    Hi, I'm creating a library component for other developers on my team using PHP and Zend. This component needs to be able to take as input a date (string) and another string telling it the format of that date. Looking through the Zend documentation and examples, I thought I found the solution:

        $dateObject = Zend_Date('13.04.2006', array('date_format' => 'dd.MM.yyyy'));

    This line, however, throws an error - call to undefined function. So instead I tried this:

        $dt = Zend_Locale_Format::getDate('13.04.2006', array('date_format' => 'dd.MM.yyyy'));

    This gets the job done, but throws an error if a date that is entered isn't valid. The docs make it look like you can use the isDate() function to check validity:

        Zend_Date::isDate('13.04.2006', array('date_format' => 'dd.MM.yyyy'))

    but this line always returns false. So my questions - Am I going about this the right way? If not, is there a better way to handle this via Zend or straight PHP? If I do use Zend_Locale_Format::getDate(), do I need to worry about a locale being set elsewhere and changing the results of the call? I'm locked into PHP 5.2.6 on windows, btw... so 5.3+ functions & strptime() are out.

  • str_replace() and strpos() for arrays?

    - by Josh
    I'm working with an array of data in which I've changed the names of some array keys, but I want the data to stay the same basically... Basically I want to be able to keep the data that's in the array stored in the DB, but I want to update the array key names associated with it. Previously the array would have looked like this:

        $var_opts['services'] = array('foo-1', 'foo-2', 'foo-3', 'foo-4');

    Now the array keys are no longer prefixed with "foo", but rather with "bar" instead. So how can I update the array variable to get rid of the "foos" and replace them with "bars" instead? Like so:

        $var_opts['services'] = array('bar-1', 'bar-2', 'bar-3', 'bar-4');

    I'm already using

        if(isset($var_opts['services']['foo-1'])) { unset($var_opts['services']['foo-1']); }

    to get rid of the foos... I just need to figure out how to replace each foo with a bar. I thought I would use str_replace on the whole array, but to my dismay it only works on strings (go figure, heh) and not arrays.

  • C++ variable to const expression

    - by user1344784
        template <typename Real> class A{ };
        template <typename Real> class B{ };
        //... a few dozen more similar template classes

        class Computer{
        public slots:
            void setFrom(int from){ from_ = from; }
            void setTo(int to){ to_ = to; }
        private:
            template <int F, int T>
            void compute(){
                using boost::fusion::vector;
                using boost::fusion::at_c;
                vector<A<float>, B<float>, ...> v;
                at_c<from_>(v).operator()(at_c<to_>(v)); //error; needs to be const-expression.
            };

    This question isn't about Qt, but there is a line of Qt code in my example. The setFrom() and setTo() functions are called based on user selection via the GUI widget. The root of my problem is that 'from' and 'to' are variables. In my compute member function I need to pick a type (A, B, etc.) based on the values of 'from' and 'to'. The only way I know how to do what I need to do is to use switch statements, but that's extremely tedious in my case and I would like to avoid it. Is there any way to convert the error line to a constant expression?
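
    Not from the question: a hedged sketch of one way to avoid writing the switch statements by hand, using a small recursive dispatcher that maps the runtime ints onto template arguments (every <F, T> combination is still instantiated, it just isn't typed out). compute(), kMax and the helper names here are illustrative assumptions, not the poster's real code.

        #include <iostream>

        // Hypothetical stand-in for the real member function; just prints its constants.
        template <int F, int T>
        void compute() { std::cout << "compute<" << F << "," << T << ">\n"; }

        const int kMax = 3;   // number of elements in the fusion vector (assumed)

        // Turns a runtime integer in [0, kMax) into a template argument by linear recursion.
        template <int N>
        struct ToConst {
            template <typename Fn>
            static void run(int value, Fn fn) {
                if (value == N - 1) fn.template apply<N - 1>();
                else ToConst<N - 1>::run(value, fn);
            }
        };
        template <>
        struct ToConst<0> {
            template <typename Fn>
            static void run(int, Fn) {}   // value out of range: ignore
        };

        template <int F>
        struct WithFrom {                 // second stage: fix T once F is known
            template <int T>
            void apply() const { compute<F, T>(); }
        };

        struct PickFrom {                 // first stage: fix F, then dispatch on 'to'
            int to;
            template <int F>
            void apply() const { ToConst<kMax>::run(to, WithFrom<F>()); }
        };

        void compute_dynamic(int from, int to) {
            PickFrom p = { to };
            ToConst<kMax>::run(from, p);  // e.g. compute_dynamic(1, 2) ends up in compute<1,2>()
        }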

  • Problem with incomplete input when using Attoparsec

    - by Dan Dyer
    I am converting some functioning Haskell code that uses Parsec to instead use Attoparsec in the hope of getting better performance. I have made the changes and everything compiles but my parser does not work correctly. I am parsing a file that consists of various record types, one per line. Each of my individual functions for parsing a record or comment works correctly but when I try to write a function to compile a sequence of records the parser always returns a partial result because it is expecting more input. These are the two main variations that I've tried. Both have the same problem.

        items :: Parser [Item]
        items = sepBy (comment <|> recordType1 <|> recordType2) endOfLine

    For this second one I changed the record/comment parsers to consume the end-of-line characters.

        items :: Parser [Item]
        items = manyTill (comment <|> recordType1 <|> recordType2) endOfInput

    Is there anything wrong with my approach? Is there some other way to achieve what I am attempting?

  • C++ BigInt multiplication conceptual problem

    - by Kapo
    I'm building a small BigInt library in C++ for use in my programming language. The structure is like the following:

        short digits[ 1000 ];
        int len;

    I have a function that converts a string into a bigint by splitting it up into single chars and putting them into digits. The numbers in digits are all reversed, so the number 123 would look like the following:

        digits[0]=3 digits[1]=2 digits[2]=1

    I have already managed to code the adding function, which works perfectly. It works somewhat like this:

        overflow = 0
        for i ++ until length of both numbers exceeded:
            add numberA[ i ] to numberB[ i ]
            add overflow to the result
            set overflow to 0
            if the result is bigger than 10:
                subtract 10 from the result
                overflow = 1
            put the result into numberReturn[ i ]

    (Overflow is in this case what happens when I add 1 to 9: subtract 10 from 10, add 1 to overflow, overflow gets added to the next digit.) So think of how two numbers are stored, like those:

            0 | 1 | 2
            ---------
        A   2 | - | -
        B   0 | 0 | 1

    The above represents the digits of the bigints 2 (A) and 100 (B). - means uninitialized digits; they aren't accessed. So adding the above numbers works fine: start at 0, add 2 + 0, go to 1, add 0, go to 2, add 1. But: when I want to do multiplication with the above structure, my program ends up doing the following: start at 0, multiply 2 with 0 (eek), go to 1, ... So it is obvious that, for multiplication, I have to get an order like this:

            0 | 1 | 2
            ---------
        A   - | - | 2
        B   0 | 0 | 1

    Then everything would be clear: start at 0, multiply 0 with 0, go to 1, multiply 0 with 0, go to 2, multiply 1 with 2. How can I manage to get digits into the correct form for multiplication? I don't want to do any array moving/flipping - I need performance!
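
    Not part of the question, but a hedged sketch of the usual answer: with least-significant-digit-first ("reversed") storage no flipping is needed at all, because schoolbook multiplication simply accumulates a[i]*b[j] into result digit i+j and propagates carries. The digits/len layout is taken from the question; the function name and the trimming step are assumptions.

        #include <cstring>

        struct BigInt {
            short digits[1000];   // least significant digit first, one decimal digit per slot
            int   len;
        };

        BigInt multiply(const BigInt& a, const BigInt& b) {
            BigInt r;
            std::memset(r.digits, 0, sizeof r.digits);
            r.len = a.len + b.len;                      // at most this many digits

            for (int i = 0; i < a.len; ++i) {
                int carry = 0;
                for (int j = 0; j < b.len; ++j) {
                    int cur = r.digits[i + j] + a.digits[i] * b.digits[j] + carry;
                    r.digits[i + j] = (short)(cur % 10);
                    carry = cur / 10;
                }
                r.digits[i + b.len] = (short)(r.digits[i + b.len] + carry);
            }
            while (r.len > 1 && r.digits[r.len - 1] == 0)
                --r.len;                                // trim leading zeros
            return r;
        }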

  • How to duplicate all data in a table except for a single column that should be changed.

    - by twiga
    I have a question regarding a unified insert query against tables with different data structures (Oracle). Let me elaborate with an example:

        tb_customers (
            id NUMBER(3),
            name VARCHAR2(40),
            archive_id NUMBER(3)
        )

        tb_suppliers (
            id NUMBER(3),
            name VARCHAR2(40),
            contact VARCHAR2(40),
            xxx,
            xxx,
            archive_id NUMBER(3)
        )

    The only column that is present in all tables is [archive_id]. The plan is to create a new archive of the dataset by copying (duplicating) all records to a different database partition and incrementing the archive_id for those records accordingly. [archive_id] is always part of the primary key. My problem is with the select statements to do the actual duplication of the data. Because the columns are variable, I am struggling to come up with a unified select statement that will copy the data and update the archive_id. One solution (that works) is to iterate over all the tables in a stored procedure and do a:

        CREATE TABLE temp as (SELECT * from ORIGINAL_TABLE);
        UPDATE temp SET archive_id=something;
        INSERT INTO temp (select * from temp);
        DROP TABLE temp;

    I do not like this solution very much as the DDL commands muck up all restore points. Does anyone else have any solution?
