Search Results

Search found 29820 results on 1193 pages for 'default implementation'.


  • phpUnit - mock php extended exception object

    - by awongh
    I'm testing some legacy code that extends the default PHP exception object. This code prints out a custom HTML error message. I would like to mock this exception object so that when the tested code throws an exception, it just echoes the basic message instead of the whole HTML message, but I can't figure out a way to do this. It seems you can test for explicit exceptions, but you can't change the behavior of an exception in a general way, and you also can't mock an object that extends built-in PHP functionality (I can't think of another example of this beyond exceptions, but it seems to be the case). I guess the problem is: where would you attach the mocked object? It seems you can't interfere with 'throw new', and that is the place where the object's method is called. Or maybe the existing PHPUnit exception functionality could somehow be used to change the exception behavior the way you want, in a general way for all your code, but that seems like it would be hacky and bad.
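    For what it's worth, a minimal sketch of one direction to explore, assuming the HTML comes out of a non-final method added by the subclass (the class and method names below are made up). PHPUnit's mock builder can stub a method the subclass adds, but not the final accessors inherited from Exception (getMessage() and friends):

        <?php
        // Hypothetical stand-in for the legacy class under test.
        class LegacyHtmlException extends Exception
        {
            public function getHtmlMessage()
            {
                return '<div class="error">' . $this->getMessage() . '</div>';
            }
        }

        class LegacyHtmlExceptionTest extends PHPUnit_Framework_TestCase
        {
            public function testPlainMessageInsteadOfHtml()
            {
                // Stub only the subclass's HTML renderer; Exception's own methods are final
                // and cannot be overridden by a mock.
                $mock = $this->getMockBuilder('LegacyHtmlException')
                             ->setMethods(array('getHtmlMessage'))
                             ->getMock();
                $mock->expects($this->any())
                     ->method('getHtmlMessage')
                     ->will($this->returnValue('basic message'));

                // The mock still has to be injected wherever the legacy code would otherwise do
                // 'throw new LegacyHtmlException(...)' itself -- e.g. via a factory or setter
                // seam -- since the 'throw new' expression cannot be intercepted.
                $this->assertEquals('basic message', $mock->getHtmlMessage());
            }
        }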

    Read the article

  • Recommendations for a C++ polymorphic, seekable, binary I/O interface

    - by Trevor Robinson
    I've been using std::istream and ostream as a polymorphic interface for random-access binary I/O in C++, but it seems suboptimal in numerous ways:

    - 64-bit seeks are non-portable and error-prone due to streampos/streamoff limitations; currently using boost/iostreams/positioning.hpp as a workaround, but it requires vigilance
    - Missing operations such as truncating or extending a file (a la POSIX ftruncate)
    - Inconsistency between concrete implementations; e.g. stringstream has independent get/put positions whereas filestream does not
    - Inconsistency between platform implementations; e.g. behavior of seeking past the end of a file, or usage of failbit/badbit on errors
    - Don't need all the formatting facilities of stream, or possibly even the buffering of streambuf
    - streambuf error reporting (i.e. exceptions vs. returning an error indicator) is supposedly implementation-dependent in practice

    I like the simplified interface provided by the Boost.Iostreams Device concept, but it's provided as function templates rather than a polymorphic class. (There is a device class, but it's not polymorphic and is just an implementation helper class not necessarily used by the supplied device implementations.) I'm primarily using large disk files, but I really want polymorphism so I can easily substitute alternate implementations (e.g. use stringstream instead of fstream for unit tests) without all the complexity and compile-time coupling of deep template instantiation.

    Does anyone have any recommendations of a standard approach to this? It seems like a common situation, so I don't want to invent my own interfaces unnecessarily. As an example, something like java.nio.FileChannel seems ideal.

    My best solution so far is to put a thin polymorphic layer on top of Boost.Iostreams devices. For example:

        class my_istream
        {
        public:
            virtual std::streampos seek(stream_offset off, std::ios_base::seekdir way) = 0;
            virtual std::streamsize read(char* s, std::streamsize n) = 0;
            virtual void close() = 0;
        };

        template <class T>
        class boost_istream : public my_istream
        {
        public:
            boost_istream(const T& device) : m_device(device)
            {
            }
            virtual std::streampos seek(stream_offset off, std::ios_base::seekdir way)
            {
                return boost::iostreams::seek(m_device, off, way);
            }
            virtual std::streamsize read(char* s, std::streamsize n)
            {
                return boost::iostreams::read(m_device, s, n);
            }
            virtual void close()
            {
                boost::iostreams::close(m_device);
            }
        private:
            T m_device;
        };
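    As a usage sketch of that wrapper (assuming it compiles as posted, with stream_offset being boost::iostreams::stream_offset, and assuming a file named data.bin exists), a seekable Boost.Iostreams file device can be dropped straight in, and a different device substituted for unit tests:

        #include <boost/iostreams/device/file.hpp>
        #include <iostream>

        int main()
        {
            namespace io = boost::iostreams;

            boost_istream<io::file> in(
                io::file("data.bin", std::ios_base::in | std::ios_base::binary));

            char header[16];
            in.seek(0, std::ios_base::beg);                       // 64-bit-safe via boost::iostreams::seek
            std::streamsize got = in.read(header, sizeof header); // raw read, no formatting layer
            std::cout << "read " << got << " bytes\n";
            in.close();
        }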

    Read the article

  • Text message intent - catch and send

    - by Espen
    Hi! I want to be able to control incoming text messages. My application is still at the "proof of concept" stage and I'm trying to learn Android programming as I go. First, my application needs to catch incoming text messages. If a message is from a known number, it should deal with it; if not, it should pass the message on, as if nothing had happened, to the default text message application. I have no doubt it can be done, but I still have some concerns and I see some pitfalls in how things are done on Android. Getting the incoming text message could be fairly easy - except when other messaging applications are installed and the user wants normal text messages to pop up in one of them - and they will, after my application has had a look at them first. How can I be sure my application gets first pick of incoming text messages? After that, I need to pass most text messages through to whatever text message application the user has chosen, so the user can actually read the messages my application didn't need. Since Android uses intents whose priorities are relative at best, I don't see how I can enforce my application getting a peek at all incoming text messages and then either stopping each one or passing it through to the default text messaging application...
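    For reference, the pattern commonly used on Android versions of that era is an ordered-broadcast receiver registered in the manifest with a high android:priority for android.provider.Telephony.SMS_RECEIVED; calling abortBroadcast() stops the message from reaching lower-priority receivers such as the stock messaging app. A rough sketch (the known number and receiver name are placeholders, and nothing here guarantees winning against another app that registers an even higher priority):

        // In AndroidManifest.xml (inside <application>):
        //   <receiver android:name=".SmsGate">
        //       <intent-filter android:priority="999">
        //           <action android:name="android.provider.Telephony.SMS_RECEIVED" />
        //       </intent-filter>
        //   </receiver>
        // plus the android.permission.RECEIVE_SMS permission.

        import android.content.BroadcastReceiver;
        import android.content.Context;
        import android.content.Intent;
        import android.os.Bundle;
        import android.telephony.SmsMessage;

        public class SmsGate extends BroadcastReceiver {
            private static final String KNOWN_NUMBER = "+15550100"; // placeholder

            @Override
            public void onReceive(Context context, Intent intent) {
                Bundle bundle = intent.getExtras();
                if (bundle == null) return;

                Object[] pdus = (Object[]) bundle.get("pdus");
                for (Object pdu : pdus) {
                    SmsMessage sms = SmsMessage.createFromPdu((byte[]) pdu);
                    if (KNOWN_NUMBER.equals(sms.getOriginatingAddress())) {
                        // Handle the message ourselves and stop the ordered broadcast so the
                        // default messaging app never sees it.
                        abortBroadcast();
                        return;
                    }
                }
                // Otherwise do nothing: lower-priority receivers (the user's SMS app) get it as usual.
            }
        }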

    Read the article

  • how to use MessageParameterAttribute in wcf

    - by Archie
    Hello, I wanted to know what the use of the MessageParameterAttribute in WCF is. In my function:

        [OperationContract]
        public float GetAirfare(
            [MessageParameter(Name = "fromCity")] string originCity,
            [MessageParameter(Name = "toCity")] string destinationCity);

    I don't use fromCity or toCity anywhere in the implementation, or even while using the service. So what's the point of giving the parameters a name?
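    For context, a sketch of where those names typically matter (IFareService is a hypothetical contract name): MessageParameter only affects the wire-level name of the parameter in the message and the generated WSDL, so a client proxy built from the metadata will expose fromCity/toCity even though the service code keeps originCity/destinationCity.

        using System.ServiceModel;

        [ServiceContract]
        public interface IFareService
        {
            // The attribute renames the parameter in the serialized message/WSDL only;
            // the C# parameter names stay whatever the implementation likes.
            [OperationContract]
            float GetAirfare(
                [MessageParameter(Name = "fromCity")] string originCity,
                [MessageParameter(Name = "toCity")] string destinationCity);
        }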

    Read the article

  • What is the purpose of "do!" notation in F#?

    - by Yacoder
    I'm a beginner in F#, so this is a simple question and maybe a duplicate, but I couldn't find the answer anywhere... I'm reading this LOGO DSL implementation and I don't understand what the "do!" notation means here:

        this.Loaded.Add (fun _ ->
            async {
                do! Async.Sleep 200
                for cmd in theDrawing do
                    do! this.Execute(cmd)
            } |> Async.StartImmediate
        )

    Can you help?
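    As a standalone illustration (not tied to the LOGO sample): inside an async { } computation expression, do! runs another Async<unit> and resumes the workflow when it completes, much like let! but for operations that return no value.

        let demo =
            async {
                do! Async.Sleep 200            // Async<unit>: wait 200 ms without blocking a thread
                printfn "slept, carrying on"
                for i in 1 .. 3 do
                    do! Async.Sleep 50         // do! works the same way inside loops
                printfn "done"
            }

        Async.RunSynchronously demo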

    Read the article

  • Long running transactions with Spring and Hibernate?

    - by jimbokun
    The underlying problem I want to solve is running a task that generates several temporary tables in MySQL, which need to stay around long enough to fetch results from Java after they are created. Because of the size of the data involved, the task must be completed in batches. Each batch is a call to a stored procedure invoked through JDBC. The entire process can take half an hour or more for a large data set.

    To ensure access to the temporary tables, I run the entire task, start to finish, in a single Spring transaction with a TransactionCallbackWithoutResult. Otherwise, I could get a different connection that does not have access to the temporary tables (this would happen occasionally before I wrapped everything in a transaction).

    This worked fine in my development environment. However, in production I got the following exception:

        java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction

    This happened when a different task tried to access some of the same tables during the execution of my long-running transaction. What confuses me is that the long-running transaction only inserts or updates temporary tables. All access to non-temporary tables is select-only. From what documentation I can find, the default Spring transaction isolation level should not cause MySQL to block in this case. So my questions are:

    - Is this the right approach? Can I ensure that I repeatedly get the same connection through a Hibernate template without a long-running transaction?
    - If the long-running transaction approach is the correct one, what should I check in terms of isolation levels? Is my understanding correct that the default isolation level in Spring/MySQL transactions should not lock tables that are only accessed through selects?
    - What can I do to debug which tables are causing the conflict, and prevent those tables from being locked by the transaction?
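    For reference, a minimal sketch of the single-transaction pattern described above (the bean wiring, stored-procedure name, and batch loop are placeholders): everything inside doInTransactionWithoutResult runs on one connection/transaction, which is what keeps the MySQL temporary tables visible across batches.

        import org.springframework.jdbc.core.JdbcTemplate;
        import org.springframework.transaction.TransactionStatus;
        import org.springframework.transaction.support.TransactionCallbackWithoutResult;
        import org.springframework.transaction.support.TransactionTemplate;

        public class TempTableBatchJob {

            private final TransactionTemplate txTemplate;
            private final JdbcTemplate jdbc;

            public TempTableBatchJob(TransactionTemplate txTemplate, JdbcTemplate jdbc) {
                this.txTemplate = txTemplate;
                this.jdbc = jdbc;
            }

            public void run(final int batchCount) {
                txTemplate.execute(new TransactionCallbackWithoutResult() {
                    @Override
                    protected void doInTransactionWithoutResult(TransactionStatus status) {
                        for (int batch = 0; batch < batchCount; batch++) {
                            // hypothetical stored procedure that fills the temporary tables
                            jdbc.update("CALL build_temp_tables(?)", batch);
                        }
                        // fetch results from the temporary tables here, still on the same connection
                    }
                });
            }
        }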

    Read the article

  • How do I configure multiple Ubuntu Python installations to avoid App Engine's SSL error?

    - by Linc
    I have Karmic Koala, which has Python 2.6 installed by default. However, I can't run any Python App Engine projects because they require Python 2.5 and python ssl. To install ssl I installed python2.5-dev first, while following some instructions I found elsewhere:

        sudo apt-get install libssl-dev
        sudo apt-get install python-setuptools
        sudo apt-get install python2.5-dev
        sudo easy_install-2.5 pyopenssl

    However, I am afraid this is not good for my Ubuntu installation, since Ubuntu expects to see version 2.6 of Python when you type 'python' on the command line. Instead, it says '2.5.5'. I tried to revert to the original default version of Python by doing this:

        sudo apt-get remove python2.5-dev

    But that didn't seem to do anything either - when I type 'python' on the command line it still says 2.5.5. And App Engine still doesn't work after all this. I continue to get an SSL-related error whenever I try to run my Python app:

        AttributeError: 'module' object has no attribute 'HTTPSHandler'

    UPDATE: I just checked whether SSL actually installed as a result of those commands by typing this:

        $ python2.5
        Python 2.5.5 (r255:77872, Apr 29 2010, 23:59:20)
        [GCC 4.4.1] on linux2
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import ssl
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        ImportError: No module named ssl
        >>>

    As you can see, SSL is still not installed, which explains the continuing App Engine error. If anyone knows how I can dig myself out of this hole, I would appreciate it.
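    A few diagnostic commands may help pin down which interpreter the bare 'python' command is actually resolving to (a sketch; the paths reflect a stock Karmic layout, and the symlink target is an assumption worth verifying):

        $ which python              # where does the bare 'python' command come from?
        $ ls -l /usr/bin/python     # on Karmic this symlink normally points at python2.6
        $ /usr/bin/python -V        # version of the system default interpreter
        $ python2.5 -c "import ssl" # does the 2.5 interpreter have the ssl module yet?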

    Read the article

  • why DataColumn AllowDbNull is true even if oracle db does not allow null

    - by matti
    Hi. I have column SomeId in table SomeLink. When I look with TOra or SQL*Plus Worksheet, both state:

        TOra:
        Column name   Data type   Default   Null       Comment
        SOMEID        INTEGER     {null}    NOT NULL   {null}

        SQL*Plus:
        SOMEID   NOT NULL   NUMBER(38)

    I have authored a method that's intended to give default values to all NOT NULL fields that don't have values:

        public static void GetDefaultValuesForNonNullColumns(DataRow row)
        {
            foreach (DataColumn col in row.Table.Columns)
            {
                if (Convert.IsDBNull(row[col]) && !col.AllowDBNull)
                {
                    if (ColumnIsNumeric(col.DataType))
                        row[col] = 0;
                    else if (col.DataType == typeof(DateTime))
                        row[col] = DateTime.Now;
                    else if (col.DataType == typeof(String))
                        row[col] = string.Empty;
                    else if (col.DataType == typeof(Char))
                        row[col] = ' ';
                    else
                        throw new Exception(string.Format("Unsupported column type: {0}", col.DataType));
                }
            }
        }

    When SOMEID is handled in the loop, AllowDBNull = true. I really can't understand why. The table is created in the DataSet like this:

        _someLinkAdptr = _dbFactory.CreateDataAdapter();
        _someLinkAdptr.SelectCommand = _dbFactory.CreateCommand();
        _someLinkAdptr.SelectCommand.Connection = _cnctn;
        _someLinkAdptr.SelectCommand.CommandText =
            GetSomeLinkSelectTxtAndParams(_someLinkAdptr.SelectCommand,
                                          UndefinedValue.ToString(),
                                          UndefinedValue.ToString());

    The select command returns no rows. The idea is that I can then use a CommandBuilder to get the InsertCommand without building it myself. The row is added to the DataSet's table like this:

        private static void CreateDocLink(int anId, int anotherId)
        {
            DataRow row = _someDataSet.Tables["SomeLink"].NewRow();
            row["AnId"] = anId;
            row["AnotherId"] = anotherId;
            Utility.GetDefaultValuesForNonNullColumns(row);
            _someDataSet.Tables["SomeLink"].Rows.Add(row);
        }

    When the DataAdapter is updated to the Oracle db I get:

        ORA-01400: cannot insert NULL into (SOMESCHEMA.SOMELINK.SOMEID)

    Cheers & BR
    -Matti
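    One thing worth trying (a sketch against the code above; whether the Oracle provider reports nullability for this particular query is an assumption to verify): Fill() on its own only creates columns with their default settings, and AllowDBNull defaults to true, so the NOT NULL constraint has to be pulled in explicitly, e.g. with FillSchema before filling.

        // Pull key/nullability information from the database into the DataTable's schema
        // before any rows are created, so DataColumn.AllowDBNull reflects the NOT NULL constraint.
        _someLinkAdptr.FillSchema(_someDataSet, SchemaType.Source, "SomeLink");
        _someLinkAdptr.Fill(_someDataSet, "SomeLink");

        // Alternative: have Fill() add the extra schema itself.
        // _someLinkAdptr.MissingSchemaAction = MissingSchemaAction.AddWithKey;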

    Read the article

  • Is there a language with native pass-by-reference/pass-by-name semantics, which could be used in mod

    - by Bubba88
    Hi! This is a reopened question. I am looking for a language, and a supporting platform for it, where the language has pass-by-reference or pass-by-name semantics by default. I know a little of the history - there were Algol and Fortran, and there still is C++, which make it possible; but basically what I am looking for is something more modern, where the mentioned passing methodology is preferred and used by default (implicitly assumed). I ask this question because, to my mind, some of the advantages of pass-by-ref/name seem fairly obvious. For example, when it is used in a standalone agent, copying of values is not necessary (to some extent) and performance wouldn't be degraded much in that case. So I could employ it in e.g. a rich client app, or some game-style or standalone service-kind application. The main advantage to me is the clear separation between the identity of a symbol and its current value. I mean, when there is no redundant copying, you know that you're working with exactly the symbol/path you have queried/received, and boxing of values will not interfere with the actual logic of the program. I know that there is the C# ref keyword, but it's not so intrinsic, though acceptable. Equally, I realize that pass-by-reference semantics can be simulated in virtually any language (Java as an instant example) and so on... not sure about pass-by-name :) What would you recommend - create something like a DSL for such needs wherever it is appropriate, or use some languages that I already know? Maybe there is something that I'm missing? Thank you!
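    Just to make the C# comparison above concrete, a tiny illustration of the ref keyword mentioned: the callee operates on the caller's variable itself rather than on a copy, which is the by-default behavior being asked about.

        using System;

        class RefDemo
        {
            static void Bump(ref int x) { x++; }   // works on the caller's variable, not a copy

            static void Main()
            {
                int n = 1;
                Bump(ref n);
                Console.WriteLine(n);              // prints 2
            }
        }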

    Read the article

  • UITouch Events and Table Views

    - by Andy
    I'm working on a navigation-based iPhone-only app that serves two main purposes: one, to present data in a hierarchical view, allowing users to drill down and eventually edit said data; and two, to allow users to perform a default action when a table view cell is tapped. I now need to offer a small set of options tied to the same data; however, both the didSelectRowAtIndexPath: and accessoryButtonTappedForRowAtIndexPath: methods are obviously taken. So my options seem to be to implement a double-tap method, wherein the small list of additional options would be presented after (you guessed it) a double-tap on said table row; or, preferably, a tap-and-hold method. From what I can tell, tap-and-hold seems like the way to go in SDK 4.0 - which does me no good right this red-hot minute. I decided to go with the double-tap option, but I'm having a little trouble. First and foremost, the touchesBegan:withEvent: method does not seem to be getting called at all; a breakpoint placed within the method is never hit while the application runs, and the table view responds exactly as it did before I inserted the method (which is to say, it performs the default action):

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
        {
            UITouch *aTouch = [touches anyObject];
            if (aTouch.tapCount == 2)
            {
                [NSObject cancelPreviousPerformRequestsWithTarget:self];
            }
        }

    Second, I don't really need to handle a single tap - the didSelectRowAtIndexPath: method handles that just fine. The double tap is the funky one I want to handle. I suspect the answer is going to contain the phrase, "You can't have the table view handle the single tap and the touchesBegan: method handle the double tap. The touch handling methods have to handle all of them." I would really appreciate some guidance from some of you who've dealt with this issue. Thanks in advance.
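    One possible sketch, for what it's worth: a double-tap UITapGestureRecognizer attached to the table view (gesture recognizers appeared in iPhone OS 3.2, so this may or may not fit the SDK constraint above; the selector name and option-presentation code are placeholders).

        - (void)viewDidLoad
        {
            [super viewDidLoad];
            UITapGestureRecognizer *doubleTap =
                [[UITapGestureRecognizer alloc] initWithTarget:self
                                                        action:@selector(handleDoubleTap:)];
            doubleTap.numberOfTapsRequired = 2;
            [self.tableView addGestureRecognizer:doubleTap];
            [doubleTap release];
        }

        - (void)handleDoubleTap:(UITapGestureRecognizer *)gesture
        {
            CGPoint point = [gesture locationInView:self.tableView];
            NSIndexPath *indexPath = [self.tableView indexPathForRowAtPoint:point];
            if (indexPath != nil) {
                // present the small list of additional options for the row at indexPath
            }
        }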

    Read the article

  • Unable to ping server from client B but able to ping from client A. Please help

    - by Soundar Rajan
    This is not really a programming question, but I am at my wit's end... I am trying to configure an IIS 6.0/Windows Server 2003 web server with an ASP.NET application. When I try to ping the server from client computer A I get the following:

        PING 74.208.192.xxx     ==> Ping fails
        PING 74.208.192.xxx:80  ==> Ping succeeds!

    From client computer B, BOTH pings fail:

        PING 74.208.192.xxx     ==> Ping fails
        PING 74.208.192.xxx:80  ==> Ping fails with the message "Ping request could not find host 74.208.192.xxx:80"

    Both clients A and B are on the same subnet. The server is outside (a virtual server hosted by an ISP). I have an ASP.NET application in a virtual directory on the server. In IE or Firefox, if I enter http://74.208.192.xxx/subdir/subdir/../Default.aspx, it works from both clients! The server has default firewall settings, but the web server is enabled (port 80 is open).
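    Side note, sketched for completeness: ping only exercises ICMP, so "PING host:80" is not actually testing the web port; checking a TCP port needs something that opens a socket (substitute the real address):

        telnet 74.208.192.xxx 80          # connects only if TCP port 80 is reachable
        curl -I http://74.208.192.xxx/    # or just fetch the response headers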

    Read the article

  • How can I properly handle 404s in ASP.NET MVC?

    - by Brian
    I am just getting started with ASP.NET MVC, so bear with me. I've searched around this site and various others and have seen a few implementations of this. EDIT: I forgot to mention I am using RC2.

    Using URL routing:

        routes.MapRoute(
            "Error",
            "{*url}",
            new { controller = "Errors", action = "NotFound" } // 404s
        );

    The above seems to take care of requests like this (assuming the default route tables set up by the initial MVC project): "/blah/blah/blah/blah"

    Overriding HandleUnknownAction() in the controller itself:

        // 404s - handle here (bad action requested)
        protected override void HandleUnknownAction(string actionName)
        {
            ViewData["actionName"] = actionName;
            View("NotFound").ExecuteResult(this.ControllerContext);
        }

    However, the previous strategies do not handle a request to a bad/unknown controller. For example, I do not have a "/IDoNotExist"; if I request this I get the generic 404 page from the web server, not my 404 from routing + override. So finally, my question is: is there any way to catch this type of request using a route or something else in the MVC framework itself? OR should I just default to using Web.config customErrors as my 404 handler and forget all this? I assume if I go with customErrors I'll have to store the generic 404 page outside of /Views due to the Web.config restrictions on direct access. Anyway, any best practices or guidance is appreciated.
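    If the customErrors route is chosen, a minimal Web.config sketch looks something like this (the paths are illustrative; in RC2-era MVC the redirect target would typically be a controller action or a static page kept outside /Views):

        <system.web>
          <customErrors mode="On" defaultRedirect="~/Error/Problem">
            <error statusCode="404" redirect="~/Error/NotFound" />
          </customErrors>
        </system.web>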

    Read the article

  • Calling assignment operator in copy constructor

    - by stas
    Are there any drawbacks to this implementation of a copy constructor?

        Foo::Foo(const Foo& i_foo)
        {
            *this = i_foo;
        }

    As I remember, some book recommended doing it the other way around - calling the copy constructor from the assignment operator and using the well-known swap trick - but I don't remember why...
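    For reference, the "well-known swap trick" being half-remembered is copy-and-swap: the copy constructor does the real work and the assignment operator is built on top of it. A minimal sketch for a hypothetical resource-owning Foo:

        #include <algorithm> // std::swap, std::copy
        #include <cstddef>   // std::size_t

        class Foo
        {
        public:
            explicit Foo(std::size_t n) : size_(n), data_(new int[n]) {}

            Foo(const Foo& other) : size_(other.size_), data_(new int[other.size_])
            {
                std::copy(other.data_, other.data_ + size_, data_);
            }

            ~Foo() { delete[] data_; }

            friend void swap(Foo& a, Foo& b)
            {
                std::swap(a.size_, b.size_);
                std::swap(a.data_, b.data_);
            }

            // Copy-and-swap: the parameter is taken by value, so the copy constructor has
            // already run; swapping hands the new state to *this and lets the temporary
            // clean up the old one. Self-assignment-safe and strongly exception-safe.
            Foo& operator=(Foo other)
            {
                swap(*this, other);
                return *this;
            }

        private:
            std::size_t size_;
            int* data_;
        };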

    Read the article

  • Problem with document.location.href

    - by novellino
    Hello, I am new to JavaScript and web development and I have a question regarding document.location.href. I am using a cookie to store the language the user prefers, and then load the English or the Swedish version depending on the language. The default language in the beginning is the same as the browser's language, and my index.jsp is the Swedish one. The first time, everything works fine. The problem is when the cookie already exists. The basic code is:

        if (language != null && language != "") {
            if (language == "en-US" || language == "en-us")
                document.location.href = "en/index.jsp";
        }
        else {
            // Explorer
            if (navigator.userLanguage)
                language = navigator.userLanguage;
            // other browsers
            else
                language = (navigator.language) ? navigator.language : navigator.userLanguage;

            if (language != null && language != "") {
                setCookie('language', language, 365, '/', 'onCheck');
                if (language == "en-US" || language == "en-us")
                    document.location.href = "en/index.jsp";
                else if (language == "sv")
                    document.location.href = "index.jsp";
            }
        }

    When the cookie exists we enter the first "if", and there, if the language is Swedish, it opens the default blabla/index.jsp page. When the language is set to English it should open blabla/en/index.jsp, but instead it opens blabla/en/en/index.jsp, which of course is wrong. Does anyone know what I am doing wrong? Thanks

    Read the article

  • C++ forward declaration problem

    - by Thomas
    Hi, I have a header file that has some forward declarations, but when I include the header file in the implementation file it gets included after the includes for the previously forward-declared types, and this results in an error like this:

        error: using typedef-name ‘std::ifstream’ after ‘class’
        /usr/include/c++/4.2.1/iosfwd:145: error: ‘std::ifstream’ has a previous declaration

    What's the norm for working around this? Thanks in advance.
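    For what it's worth, a minimal sketch of the usual convention (the class and function names are made up): hand-written forward declarations of standard library stream types clash with the library's own, so the header includes <iosfwd> instead, and only the implementation file pulls in <fstream>.

        // parser.h -- hypothetical header
        #include <iosfwd>                  // supplies the official forward declaration of std::ifstream

        class Parser
        {
        public:
            void load(std::ifstream& in);  // references/pointers are fine with just <iosfwd>
        };

        // parser.cpp -- hypothetical implementation file
        // #include "parser.h"
        // #include <fstream>              // the full definition is only needed here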

    Read the article

  • Who calls the Destructor of the class when operator delete is used in multiple inheritance.

    - by dicaprio-leonard
    This question may sound too silly; however, I don't find a concrete answer anywhere else. I have little knowledge of how late binding works and of the virtual keyword used in inheritance. As in the code sample below, when a base class pointer pointing to a derived class object created on the heap is deallocated with the delete operator, the destructors of the derived and base classes are called in the proper order only when the base destructor is declared virtual. Now my questions are:

    1) When the destructor of the base is not virtual, why does the problem of not calling the derived dtor occur only when using the "delete" operator, and not in the case given below?

        derived drvd;
        base *bPtr;
        bPtr = &drvd;  // DTORs called in proper order when drvd goes out of scope

    2) When the "delete" operator is used, who is responsible for calling the destructor of the class? Does operator delete have an implementation that calls the DTOR, or does the compiler write some extra stuff? If the operator has the implementation, what does it look like? [I need sample code showing how this would have been implemented.]

    3) If the virtual keyword is used in this example, how does operator delete know which DTOR to call?

    Fundamentally, I want to know who calls the dtor of the class when delete is used. Sample code:

        #include <iostream>
        using namespace std;

        class base
        {
        public:
            base()
            {
                cout << "Base CTOR called" << endl;
            }
            virtual ~base()
            {
                cout << "Base DTOR called" << endl;
            }
        };

        class derived : public base
        {
        public:
            derived()
            {
                cout << "Derived CTOR called" << endl;
            }
            ~derived()
            {
                cout << "Derived DTOR called" << endl;
            }
        };

        int main()
        {
            base *bPtr = new derived();
            delete bPtr; // only when you explicitly try to delete an object
            return 0;
        }

    I'm not sure if this is a duplicate; I couldn't find it in search.

    Read the article

  • C# Select clause returns system exception instead of relevant object

    - by Kashif
    I am trying to use the select clause to pick out an object which matches a specified name field from a database query, as follows:

        objectQuery = from obj in objectList
                      where obj.Equals(objectName)
                      select obj;

    In the results view of my query, I get:

        base {System.SystemException} = {"Boolean Equals(System.Object)"}

    where I should be expecting something like a Car, Make, or Model. Would someone please explain what I am doing wrong here? The method in question can be seen here:

        // this function searches the database's table for a single object that matches the 'Name' property with 'objectName'
        public static T Read<T>(string objectName) where T : IEquatable<T>
        {
            using (ISession session = NHibernateHelper.OpenSession())
            {
                // pull (query) all the objects from the table in the database
                IQueryable<T> objectList = session.Query<T>();

                // return the number of objects in the table
                int count = objectList.Count();
                // alternative: int count = makeList.Count<T>();

                IQueryable<T> objectQuery = null; // create a reference for our queryable list of objects
                T foundObject = default(T);       // create an object reference for our found object

                if (count > 0)
                {
                    // give me all objects that have a name that matches 'objectName' and store them in 'objectQuery'
                    objectQuery = from obj in objectList
                                  where obj.Equals(objectName)
                                  select obj;

                    // make sure that 'objectQuery' has only one object in it
                    try
                    {
                        foundObject = (T)objectQuery.Single();
                    }
                    catch
                    {
                        return default(T);
                    }

                    // output some information to the console (output screen)
                    Console.WriteLine("Read Make: " + foundObject.ToString());
                }

                // pass the reference of the found object on to whoever asked for it
                return foundObject;
            }
        }

    Note that I am using the interface "IEquatable<T>" in my method descriptor. An example of the classes I am trying to pull from the database is:

        public class Make : IEquatable<Make>
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
            public virtual IList<Model> Models { get; set; }

            public Make()
            {
                // this public no-argument constructor is required for NHibernate
            }

            public Make(string makeName)
            {
                this.Name = makeName;
            }

            public override string ToString()
            {
                return Name;
            }

            // Implementation of IEquatable<T> interface
            public virtual bool Equals(Make make)
            {
                if (this.Id == make.Id)
                {
                    return true;
                }
                else
                {
                    return false;
                }
            }

            // Implementation of IEquatable<T> interface
            public virtual bool Equals(String name)
            {
                if (this.Name.Equals(name))
                {
                    return true;
                }
                else
                {
                    return false;
                }
            }
        }

    And the interface is described simply as:

        public interface IEquatable<T>
        {
            bool Equals(T obj);
        }

    Read the article

  • How do I set up routes to enable me to call different actions on the same controller?

    - by Remnant
    I am building my first ASP.NET MVC application for learning and development purposes and have come across an issue that I'd like some guidance with. Suppose I have a controller with two actions as follows:

        public class MyController : Controller
        {
            public ActionResult Index()
            {
                var dbData = GetData("DefaultParameter");
                return View(dbData);
            }

            public ActionResult UpdateView(string dbParameter)
            {
                var dbData = GetData(dbParameter);
                return View(dbData);
            }
        }

    On my web page I have the following:

        <% using (Html.BeginForm("UpdateView", "MyController")) %>
        <% { %>
            <div class="dropdown">
                <%= Html.DropDownList("Selection", Model.List, new { onchange = "this.form.submit();" }) %>
            </div>
        <% } %>

    I have the following route in Global.asax:

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.MapRoute("Default", "{controller}", new { controller = "MyController", action = "Index" });
        }

    The issue I am having is as follows: when I use the dropdown list I get an error saying that /MyController/UpdateView could not be found. It therefore seems that I need to add an additional route, such as:

        routes.MapRoute("Default", "{controller}", new { controller = "MyController", action = "UpdateView" });

    However, this causes two new issues for me:

    1. Due to the hierarchy within the routing list, the similarity of the routes means that the one that appears first in the list is always executed.
    2. I don't see why I need to create another route for UpdateView. All I want to do is retrieve new data from the database and update the view. I don't see what this has to do with the URL schema.

    It feels like I have gone down the wrong track here and have missed something quite fundamental?
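    For comparison, the route table from the default MVC project template looks roughly like this (the controller default of "My" is an assumption based on the controller above): because {action} is part of the URL pattern, this single route serves both /My/Index and /My/UpdateView, so no per-action routes are needed.

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.MapRoute(
                "Default",                                            // route name
                "{controller}/{action}/{id}",                         // URL pattern with an action segment
                new { controller = "My", action = "Index", id = "" }  // defaults (RC-era style)
            );
        }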

    Read the article

  • JBoss deployment throws 'java.util.zip.ZipException: error in opening zip file' on Linux?

    - by Kaushalya
    I thought of posting both the question and the answer for others' knowledge. I deployed a large EAR (containing more than ~1024 jars/wars) on JBoss running with Java 6 on Linux, and the deployment process failed, throwing the following exception:

        java.lang.RuntimeException: java.util.zip.ZipException: error in opening zip file)
            at org.jboss.deployment.DeploymentException.rethrowAsDeploymentException(DeploymentException.java:53)
            at org.jboss.deployment.MainDeployer.init(MainDeployer.java:901)
            at org.jboss.deployment.MainDeployer.init(MainDeployer.java:895)
            at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:809)
            at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:782)
            ....
        Caused by: java.lang.RuntimeException: java.util.zip.ZipException: error in opening zip file
            at org.jboss.util.file.JarArchiveBrowser.<init>(JarArchiveBrowser.java:74)
            at org.jboss.util.file.FileProtocolArchiveBrowserFactory.create(FileProtocolArchiveBrowserFactory.java:48)
            at org.jboss.util.file.ArchiveBrowser.getBrowser(ArchiveBrowser.java:57)
            at org.jboss.ejb3.EJB3Deployer.hasEjbAnnotation(EJB3Deployer.java:213)
            ....

    This was caused by the operating system's limit on the number of open file descriptors; the default on Linux/Unix is 1024. You can check the current value with:

        ulimit -n

    To increase the number of open file descriptors (say, to 2048):

        ulimit -n 2048

    Check the man page of ulimit for more details.
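    A possible follow-up, sketched under the assumption that pam_limits is in play and JBoss runs as a dedicated user (the name 'jboss' below is a placeholder): ulimit only changes the current shell session, so to make the higher limit survive new logins it is typically added to /etc/security/limits.conf.

        # /etc/security/limits.conf
        jboss  soft  nofile  2048
        jboss  hard  nofile  4096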

    Read the article

  • Precision of Interval for PL/SQL Function value

    - by Gary
    Generally, when you specify a function, the scale/precision/size of the return datatype is undefined. For example, you say FUNCTION show_price RETURN NUMBER or FUNCTION show_name RETURN VARCHAR2. You are not allowed to have FUNCTION show_price RETURN NUMBER(10,2) or FUNCTION show_name RETURN VARCHAR2(20), and the function return value is unrestricted. This is documented functionality.

    Now, I get a precision error (ORA-01873) if I push 9999 hours (about 400 days) into the following. The limit exists because the default day precision is 2.

        DECLARE
            v_int INTERVAL DAY (4) TO SECOND(0);
            FUNCTION hhmm_to_interval RETURN INTERVAL DAY TO SECOND IS
                v_hhmm INTERVAL DAY (4) TO SECOND(0);
            BEGIN
                v_hhmm := to_dsinterval('PT9999H');
                RETURN v_hhmm; --
            END hhmm_to_interval;
        BEGIN
            v_int := hhmm_to_interval;
        END;
        /

    And it won't allow the precision to be specified directly as part of the datatype returned by the function:

        DECLARE
            v_int INTERVAL DAY (4) TO SECOND(0);
            FUNCTION hhmm_to_interval RETURN INTERVAL DAY (4) TO SECOND IS
                v_hhmm INTERVAL DAY (4) TO SECOND(0);
            BEGIN
                v_hhmm := to_dsinterval('PT9999H');
                RETURN v_hhmm; --
            END hhmm_to_interval;
        BEGIN
            v_int := hhmm_to_interval;
        END;
        /

    I can use a SUBTYPE:

        DECLARE
            subtype t_int is INTERVAL DAY (4) TO SECOND(0);
            v_int INTERVAL DAY (4) TO SECOND(0);
            FUNCTION hhmm_to_interval RETURN t_int IS
                v_hhmm INTERVAL DAY (4) TO SECOND(0);
            BEGIN
                v_hhmm := to_dsinterval('PT9999H');
                RETURN v_hhmm; --
            END hhmm_to_interval;
        BEGIN
            v_int := hhmm_to_interval;
        END;
        /

    Are there any drawbacks to the subtype approach? Any alternatives (e.g. some place to change the default precision)? Working with 10gR2.

    Read the article
