Search Results

Search found 59278 results on 2372 pages for 'time estimation'.


  • How to improve Visual C++ compilation times?

    - by dtrosset
    I am compiling 2 C++ projects in a buildbot, on each commit. Both are around 1000 files; one is 100 kloc, the other 170 kloc. Compilation times are very different between gcc (4.4) and Visual C++ (2008). Visual C++ compilations for one project take around 20 minutes. They cannot take advantage of the multiple cores because one project depends on the other. In the end, a full compilation of both projects in Debug and Release, in 32 and 64 bits, takes more than 2 1/2 hours. gcc compilations for one project take around 4 minutes. They can be parallelized on the 4 cores and take around 1 min 10 secs. All 8 builds for the 4 versions (Debug/Release, 32/64 bits) of the 2 projects compile in less than 10 minutes. What is happening with Visual C++ compilation times? They are basically 5 times slower. What is the average time that can be expected to compile a C++ kloc? Mine are 7 s/kloc with vc++ and 1.4 s/kloc with gcc. Can anything be done to speed up compilation times on Visual C++?


  • Database for managing large volumes of (system) metrics

    - by symcbean
    Hi, I'm looking at building a system for managing and reporting stats on web page performance. I'll be collecting a lot more stats than are available in the standard log formats (approx 20 metrics), but compared to most types of database applications, the base data structure will be very simple. My problem is that I'll be accumulating a lot of data: in the region of 100,000 records (i.e. sets of metrics) per hour. Of course, resources are very limited! So that it's possible to sensibly interact with the data, I'd need to consolidate each metric into one-minute bins, broken down by URL; then for anything more than 1 day old, consolidate into 10-minute bins; then at 1 week, hourly bins. At the front end, I want to provide a view (preferably as plots) of the last hour of data, with the facility for users to drill up/down through defined hierarchies of URLs (which do not always map directly to the hierarchy expressed in the path of the URL) and to view different time frames. Rather than coding all this myself and using a relational database, I was wondering if there were tools available which would facilitate both the management of the data and the reporting. I had a look at Mondrian, however I can't see from the documentation I've looked at whether it's possible to drop the more granular information while maintaining the consolidated views of the data. RRDTool looks promising in terms of managing the data consolidation, but seems to be rather limited in terms of querying the dataset as a multi-dimensional/relational database. What else should I be looking at?
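
    The consolidation step described above (rolling raw samples into one-minute bins per URL, then coarser bins as the data ages) is simple to sketch. Below is a minimal C# illustration of the binning idea; the Sample record and its fields are hypothetical stand-ins for one set of collected metrics, and a real system would consolidate incrementally rather than over an in-memory list.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical raw sample: one metric value for one URL at one instant.
        record Sample(string Url, DateTime Timestamp, double Value);

        static class Consolidator
        {
            // Roll raw samples into fixed-size bins per URL, averaging the metric.
            // The same routine with a larger bin size yields the 10-minute and
            // hourly tiers once the finer data is old enough to drop.
            public static IEnumerable<(string Url, DateTime Bin, double Avg, int Count)>
                Consolidate(IEnumerable<Sample> samples, TimeSpan binSize)
            {
                return samples
                    .GroupBy(s => (s.Url,
                                   Bin: new DateTime(s.Timestamp.Ticks
                                                     - s.Timestamp.Ticks % binSize.Ticks)))
                    .Select(g => (g.Key.Url, g.Key.Bin,
                                  g.Average(s => s.Value), g.Count()));
            }
        }

    Calling Consolidate(raw, TimeSpan.FromMinutes(1)) gives the finest tier. RRDtool's round-robin archives perform exactly this kind of tiered consolidation natively, which is why it looks promising for the storage side.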


  • Preallocating memory with C++ in realtime environment

    - by Elazar Leibovich
    I have a function which gets an input buffer of n bytes, and needs an auxiliary buffer of n bytes in order to process the given input buffer. (I know vector allocates memory at runtime; let's say that I'm using a vector which uses static preallocated memory. Imagine this is NOT an STL vector.) The usual approach is:

        void processData(vector<T> &vec) {
            vector<T> aux(vec.size()); // dynamically allocates memory
            // process data
        }
        // usage: processData(v);

    Since I'm working in a real-time environment, I wish to preallocate all the memory I'll ever need in advance. The buffer is allocated only once at startup. I want that whenever I'm allocating a vector, I'll automatically allocate an auxiliary buffer for my processData function. I can do something similar with a template function:

        static void _processData(vector<T> &vec, vector<T> &aux) {
            // process data
        }

        template<size_t sz>
        void processData(vector<T> &vec) {
            static T aux_buffer[sz];
            vector<T> aux(vec.size(), aux_buffer); // use aux_buffer as the vector's storage
            _processData(vec, aux);
        }
        // usage: processData<V_MAX_SIZE>(v);

    However, working a lot with templates is not much fun (now let's recompile everything since I changed a comment!), and it forces me to do some bookkeeping whenever I use this function. Are there any nicer designs around this problem?


  • How can I control UISlider Value Changed events' frequency?

    - by Albert
    I'm writing an iPhone app that is using two UISliders to control values that are sent using CoreBluetooth. If I move the sliders quickly, one value freezes at the receiver, presumably because the Value Changed events trigger so often that the write commands stack up and eventually get thrown away. How can I make sure the events don't trigger too often? Edit: Here is a clarification of the problem: the Bluetooth connection sends commands every 105 ms. If the user generates a bunch of events during that time they seem to queue up. I would like to throw away any values generated between the connection events and just send one every 105 ms. This is basically what I'm doing right now:

        -(IBAction) sliderChanged:(UISlider *)sender {
            static int8_t value = 0;
            int8_t new_value = (int8_t)sender.value;
            if ( new_value > value + threshold || new_value < value - threshold ) {
                value = new_value;
                [btDevice writeValue:value];
            }
        }

    What I'm asking is how to implement something like:

        -(IBAction) sliderChanged:(UISlider *)sender {
            static int8_t value = 0;
            if (105msHasPassed) {
                int8_t new_value = (int8_t)sender.value;
                if ( new_value > value + threshold || new_value < value - threshold ) {
                    value = new_value;
                    [btDevice writeValue:value];
                }
            }
        }
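
    The pattern being asked for is a plain time-based throttle: remember when the last value was sent and drop any event that arrives before the interval has elapsed. Here is the idea as a minimal C# sketch, with WriteValue standing in for the asker's Bluetooth write and the Threshold value chosen arbitrarily:

        using System;
        using System.Diagnostics;

        class ThrottledSender
        {
            static readonly TimeSpan Interval = TimeSpan.FromMilliseconds(105);
            readonly Stopwatch _sinceLastSend = Stopwatch.StartNew();
            sbyte _lastValue;
            const sbyte Threshold = 2; // hypothetical dead-band, like the asker's threshold

            // Called on every slider change; forwards at most one value per interval.
            public void OnSliderChanged(double sliderValue)
            {
                if (_sinceLastSend.Elapsed < Interval)
                    return; // drop events that fall between connection intervals

                sbyte newValue = (sbyte)sliderValue;
                if (newValue > _lastValue + Threshold || newValue < _lastValue - Threshold)
                {
                    _lastValue = newValue;
                    _sinceLastSend.Restart();
                    WriteValue(newValue); // stand-in for the CoreBluetooth write
                }
            }

            void WriteValue(sbyte v) { /* send over the wire */ }
        }

    One refinement worth considering: also remember the most recent dropped value and flush it on a 105 ms timer, so the slider's final resting position is never lost.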


  • Wrong logic in If Statement?

    - by Charles
        $repeat_times = mysql_real_escape_string($repeat_times);
        $result = mysql_query("SELECT `code`,`datetime` FROM `fc` ORDER by datetime desc LIMIT 25") or die(mysql_error());
        $output .= "";
        $seconds = time() - strtotime($fetch_array["datetime"]);
        if($seconds < 60)
            $interval = "$seconds seconds";
        else if($seconds < 3600)
            $interval = floor($seconds / 60) . " minutes";
        else if($seconds < 86400)
            $interval = floor($seconds / 3600) . " hours";
        else
            $interval = floor($seconds / 86400) . " days";
        while ($fetch_array = mysql_fetch_array($result)) {
            $fetch_array["code"] = htmlentities($fetch_array["code"]);
            $output .= "<li><a href=\"http://www.***.com/code=" . htmlspecialchars(urlencode($fetch_array["code"])) . "\" target=\"_blank\">" . htmlspecialchars($fetch_array["code"]) . "</a> (" . $interval . ") ago.</li>";
        }
        $output .= "";
        return $output;

    Why is this returning janice (14461 days) instead of janice (15 minutes ago)? The datetime column has the DATETIME type in my table, so it's returning a full string for the date.


  • Page_Load or Page_Init

    - by balexandre
    Let's take a really simple example on using jQuery to ajaxify our page...

        $.get("getOrders.aspx", {limit: 25}, function(data) {
            // info as JSON is available in the data variable
        });

    and in the ASP.NET (HTML part) page (only one line):

        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="getOrders.aspx.cs" Inherits="getOrders" %>

    and in the ASP.NET (code-behind) page:

        public partial class getOrders : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                string lmt = Request["limit"];
                List<Orders> ords = dll.GetOrders(lmt);
                WriteOutput(Newtonsoft.Json.JsonConvert.SerializeObject(ords));
            }

            private void WriteOutput(string s)
            {
                Response.Clear();
                Response.Write(s);
                Response.Flush();
                Response.End();
            }
        }

    My question is: should it be protected void Page_Load(object sender, EventArgs e) or protected void Page_Init(object sender, EventArgs e)? That way we could save some milliseconds, as we don't actually need to process the events for the page; or will something the handler needs not be set up yet by the time Page_Init is called? P.S. It currently works fine with both methods, but I just want to understand the ins and outs of choosing one method over the other.
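
    For comparison, here is a minimal sketch of the Page_Init variant; it reuses the asker's hypothetical dll.GetOrders helper and Orders type, and only moves the work earlier in the page lifecycle:

        public partial class getOrders : System.Web.UI.Page
        {
            // Page_Init runs before view state is loaded and before Page_Load;
            // that is fine here because this endpoint never uses view state.
            protected void Page_Init(object sender, EventArgs e)
            {
                string lmt = Request["limit"];
                List<Orders> ords = dll.GetOrders(lmt);
                Response.Clear();
                Response.ContentType = "application/json"; // worth setting either way
                Response.Write(Newtonsoft.Json.JsonConvert.SerializeObject(ords));
                Response.End(); // short-circuits the remaining page events
            }
        }

    Since Response.End() stops the request in either handler, the saving from choosing Page_Init is just the handful of lifecycle events that would otherwise run before Page_Load fires.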


  • Plan Caching and Query Memory Part II (Hash Match) – When not to use stored procedure - Most common performance mistake SQL Server developers make.

    - by sqlworkshops
    SQL Server estimates the memory requirement at compile time. When a stored procedure or another plan-caching mechanism like sp_executesql or a prepared statement is used, the memory requirement is estimated based on the first set of execution parameters. This is a common reason for spills to tempdb and hence poor performance. Common memory-allocating queries are those that perform Sort and Hash Match operations like Hash Join, Hash Aggregation or Hash Union. This article covers Hash Match operations with examples. It is recommended to read Plan Caching and Query Memory Part I before this article; it covers an introduction and query memory for Sort. In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of a spill to tempdb, unless the memory requirement for a query does not change significantly with its predicates.

    This article covers underestimation and overestimation of memory for the Hash Match operation; Plan Caching and Query Memory Part I covers the same for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spills to tempdb and hence negatively impacts performance, while overestimation of memory affects the memory needs of other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can itself lead to poor performance.

    The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts.

    Let's create a CustomersState table that has 99% of customers in NY and the remaining 1% in WA. The Customers table used in Part I of this article is also used here. To observe Hash Warnings, enable 'Hash Warning' in SQL Profiler under Events > 'Errors and Warnings'.

        --Example provided by www.sqlworkshops.com
        drop table CustomersState
        go
        create table CustomersState (CustomerID int primary key, Address char(200), State char(2))
        go
        insert into CustomersState (CustomerID, Address) select CustomerID, 'Address' from Customers
        update CustomersState set State = 'NY' where CustomerID % 100 != 1
        update CustomersState set State = 'WA' where CustomerID % 100 = 1
        go
        update statistics CustomersState with fullscan
        go

    Let's create a stored procedure that joins Customers with the CustomersState table, with a predicate on State.

        --Example provided by www.sqlworkshops.com
        create proc CustomersByState @State char(2) as
        begin
            declare @CustomerID int
            select @CustomerID = e.CustomerID
            from Customers e
            inner join CustomersState es on (e.CustomerID = es.CustomerID)
            where es.State = @State
            option (maxdop 1)
        end
        go

    Let's execute the stored procedure first with parameter value 'WA', which will select 1% of the data.

        set statistics time on
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go

    The stored procedure took 294 ms to complete. It was granted 6704 KB based on 8000 rows being estimated. The estimated number of rows, 8000, is similar to the actual number of rows, 8000, so the memory estimation is fine. There was no Hash Warning in SQL Profiler.

    Now let's execute the stored procedure with parameter value 'NY', which will select 99% of the data.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'NY'
        go

    The stored procedure took 2922 ms to complete. It was again granted 6704 KB based on 8000 rows being estimated. The estimated number of rows, 8000, is way off from the actual number of rows, 792000, because the estimation is based on the first set of parameter values supplied to the stored procedure, which was 'WA' in our case. This underestimation leads to a spill to tempdb, resulting in poor performance. There was a Hash Warning (Recursion) in SQL Profiler.

    Let's recompile the stored procedure and then execute it first with parameter value 'NY'. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle), due to locking issues involved with sp_recompile. Refer to our webcasts, www.sqlworkshops.com/webcasts, for further details.

        exec sp_recompile CustomersByState
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'NY'
        go

    Now the stored procedure took only 1046 ms instead of 2922 ms. It was granted 146752 KB of memory, and the estimated number of rows, 792000, matches the actual number of rows, 792000. The better performance of this execution is due to the better memory estimate avoiding the spill to tempdb. There was no Hash Warning in SQL Profiler.

    Now let's execute the stored procedure with parameter value 'WA'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go

    The stored procedure took 351 ms to complete, higher than the previous execution time of 294 ms. It was granted more memory (146752 KB) than necessary (6704 KB), because the estimate (792000 rows) was based on parameter value 'NY' rather than 'WA' (8000 rows); again, the estimation is based on the first set of parameter values supplied, which was 'NY' this time. The estimated number of rows, 792000, is much more than the actual number of rows, 8000. This overestimation makes the Hash Match operation itself perform worse, and it may also affect other concurrently executing queries that need memory, so overestimation is not recommended either.

    Intermediate summary: this issue can be avoided by not caching the plan for memory-allocating queries. Another possibility is to use the recompile hint or the optimize for hint to allocate memory for a predefined data range. Let's recreate the stored procedure with the recompile hint.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByState
        go
        create proc CustomersByState @State char(2) as
        begin
            declare @CustomerID int
            select @CustomerID = e.CustomerID
            from Customers e
            inner join CustomersState es on (e.CustomerID = es.CustomerID)
            where es.State = @State
            option (maxdop 1, recompile)
        end
        go

    Let's execute the stored procedure initially with parameter value 'WA' and then with parameter value 'NY'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go
        exec CustomersByState 'NY'
        go

    The stored procedure took 297 ms and 1102 ms, in line with the previous optimal execution times. With parameter value 'WA' the estimation is good as before: 8000 estimated rows against 8000 actual rows. The execution with parameter value 'NY' also has a good estimate and memory grant, because the stored procedure was recompiled with the current set of parameter values: 792000 estimated rows against 792000 actual rows. The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit. There was no Hash Warning in SQL Profiler.

    Let's recreate the stored procedure with an optimize for hint of 'NY'.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByState
        go
        create proc CustomersByState @State char(2) as
        begin
            declare @CustomerID int
            select @CustomerID = e.CustomerID
            from Customers e
            inner join CustomersState es on (e.CustomerID = es.CustomerID)
            where es.State = @State
            option (maxdop 1, optimize for (@State = 'NY'))
        end
        go

    Let's execute the stored procedure initially with parameter value 'WA' and then with parameter value 'NY'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go
        exec CustomersByState 'NY'
        go

    The stored procedure took 353 ms with parameter value 'WA', much slower than the optimal execution time of 294 ms observed previously, because of overestimation of memory: rows were overestimated due to the optimize for hint value of 'NY', and more memory than necessary was granted on that basis. The execution with parameter value 'NY' has the optimal execution time as before, with a good estimate (792000 estimated rows against 792000 actual) and an optimal memory grant, thanks to the matching hint value. There was no Hash Warning in SQL Profiler.

    Summary: A cached plan can lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this with the recompile hint, at the cost of compilation overhead; in most cases, though, it is OK to pay for compilation rather than spill the sort to tempdb, which can be far more expensive. The other possibility is the optimize for hint, but if a query sorts or hashes more data than the hint implies, it will still spill; on the other side, overestimation from the hint can cause unnecessary memory pressure for other concurrently executing queries, and with Hash Match operations it can itself lead to poor performance. When the values used in the optimize for hint are later archived from the database, the estimation will be wrong, leading to worse performance, so exercise caution before using the optimize for hint; the recompile hint is better in that case.

    I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts. Register for the upcoming 3-day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning hands-on workshop in London, United Kingdom, March 15-17, 2011 (Microsoft UK TechNet). These are hands-on workshops with a maximum of 12 participants, not lectures.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

    R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan


  • Getting real-time market/stock quotes in C#/Java

    - by David Menard
    Hey, I would like to make a program that acts like a big filter for stocks. To do so, I need to have real-time (or delayed) quotes from the market. I started out getting stock quotes by requesting pages from Yahoo according to the ticker and parsing the HTML. Is there some way I can request only the stock quotes and their info? I know some applications do this, and I am very curious how they do it, because requesting web pages and parsing them is very time-consuming. Thanks, Dave
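
    For illustration, the usual trick is to hit a lightweight quote endpoint that returns delimited text rather than a full HTML page. A minimal C# sketch of that pattern follows; the URL is hypothetical and stands in for whatever CSV feed you have access to:

        using System;
        using System.Net;

        class QuoteFetcher
        {
            // Hypothetical endpoint returning "symbol,price" CSV lines; real feeds
            // (broker APIs, exchange data vendors) differ in URL and field layout.
            const string QuoteUrl = "http://quotes.example.com/csv?symbols={0}";

            static void Main()
            {
                using (var client = new WebClient())
                {
                    string csv = client.DownloadString(string.Format(QuoteUrl, "MSFT,GOOG"));
                    foreach (string line in csv.Split(new[] { '\n' }, StringSplitOptions.RemoveEmptyEntries))
                    {
                        string[] fields = line.Split(',');
                        Console.WriteLine("{0}: {1}", fields[0], fields[1]);
                    }
                }
            }
        }

    Parsing a few bytes of CSV per symbol is far cheaper than scraping a rendered page, which is largely why the applications you mention feel fast.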


  • Changing PerformancePoint Time Intelligence Post Formula Default Value

    - by Gabriel Guimarães
    Is there a way to change the default value of the Time Intelligence Post Formula? I have a dashboard where I use From Date and To Date filters to calculate the values between the user-selected period; for those filters I use the Time Intelligence Post Formula. However, the default when the user enters the page is always today's date on both filters (and a from-today-to-today analysis doesn't make sense here). What I would like to do is have From Date default to something like 30 days before today's date, but not forced on the report receiving the filters; I just want the filter to have a default value and then let the user choose whatever he wants. Does anybody know of something that can be done, or can this just not be done?


  • Google App Engine - Figure out the time the current request reached the app engine

    - by Spines
    Is there a way I can figure out the time the current request reached App Engine? For example, a user might make a request to my app, but due to App Engine latencies my code might not start handling the request until one second later. Is there a way I can figure out that the user has already had to wait that second? The reason I want to know is that I want to do different things based on whether the user has already waited. If the user has already waited a significant amount of time, I will just serve them a page out of the cache; if the user hasn't had to wait yet, then I will serve them a page which takes a while to render.


  • C# Audio - How to time stretch (different tempo, same pitch)

    - by heath
    I'm trying to make a WinForms app in C# (VS2008) that can load an mp3 (other formats would be nice, but mp3 at a minimum) and adjust the playback speed (tempo) without affecting pitch. I really don't need any other audio effects. I tried using DirectShow, but that doesn't seem to offer time-stretch capabilities. I was able to incorporate irrKlang, but that does not seem to have the time-stretch capability either. So now I've moved on to SoundTouch. That certainly has the capabilities, but I'm very unclear on how to implement it in C#. After a few days of this, about all I've accomplished is using DllImport on the SoundTouch DLL and successfully retrieving a version number. At this point, I'm not even sure if I can do what I'm trying to do with SoundTouch. Can anyone offer some guidance either on how to implement SoundTouch or a different library with the capabilities that I'm looking for? Thank you.
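
    Since the DllImport of the version call already works, the remaining glue is to P/Invoke the rest of the C wrapper and feed it decoded PCM. A rough sketch of what that might look like; the export names below are taken from SoundTouchDLL.h, so verify them (and the DLL file name) against the header in your build before relying on this:

        using System;
        using System.Runtime.InteropServices;

        class SoundTouchStretcher
        {
            [DllImport("SoundTouch.dll")] static extern IntPtr soundtouch_createInstance();
            [DllImport("SoundTouch.dll")] static extern void soundtouch_destroyInstance(IntPtr h);
            [DllImport("SoundTouch.dll")] static extern void soundtouch_setSampleRate(IntPtr h, uint rate);
            [DllImport("SoundTouch.dll")] static extern void soundtouch_setChannels(IntPtr h, uint channels);
            [DllImport("SoundTouch.dll")] static extern void soundtouch_setTempo(IntPtr h, float tempo);
            [DllImport("SoundTouch.dll")] static extern void soundtouch_putSamples(IntPtr h, float[] samples, uint numFrames);
            [DllImport("SoundTouch.dll")] static extern uint soundtouch_receiveSamples(IntPtr h, float[] outBuffer, uint maxFrames);

            // Stretch a block of decoded PCM to 80% tempo at unchanged pitch.
            // Frame counts are samples per channel, following SoundTouch's convention.
            public static float[] Stretch(float[] pcm, uint sampleRate, uint channels)
            {
                IntPtr st = soundtouch_createInstance();
                soundtouch_setSampleRate(st, sampleRate);
                soundtouch_setChannels(st, channels);
                soundtouch_setTempo(st, 0.8f); // tempo only; pitch is untouched

                soundtouch_putSamples(st, pcm, (uint)(pcm.Length / channels));
                var outBuf = new float[pcm.Length * 2]; // slower tempo means more samples out
                uint got = soundtouch_receiveSamples(st, outBuf, (uint)(outBuf.Length / channels));
                soundtouch_destroyInstance(st);

                Array.Resize(ref outBuf, (int)(got * channels));
                return outBuf;
            }
        }

    Note that SoundTouch only processes raw PCM, so the mp3 still has to be decoded to samples first, and the stretched samples rendered back out through whatever audio pipeline you end up using.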


  • MSSQL EXPRESS 2008 Stored Procedure execution time spikes periodically

    - by user156241
    I have a big stored procedure on an MSSQL 2008 Express SP2 database that gets run about every 200 ms. Normal execution time is about 50 ms. What I am seeing is large inconsistencies in this run time. It will execute for a while, say 50-100 times at 40-60 ms, which is expected; then, seemingly at random, the same stored procedure will take way longer, say 900 ms or 1.5 seconds, to run. Sometimes more than one call of the same procedure in a row will take longer too. It appears that something is causing SQL Server to slow down dramatically every minute or so, but I can't figure out what. There is no timing pattern between the occurrences. I have the same setup on two different computers, one of which is a clean XP Pro load with no virus checking and nothing installed except SQL Server. Also, the recovery options for all the databases are set to "Simple".


  • Log response-time in restlet-based webservice

    - by amarillion
    What is the simplest way to log the response-time for a restlet-based webservice? I want to make sure that our webservice has a reasonable response time. So I want to be able to keep an eye on response times, and do something about the requests that take too long. The closest thing I could find is this recipe: http://www.naviquan.com/blog/restlet-cookbook-log, it explains how to change the log format. But there doesn't seem to be a parameter for response times, so probably a completely different approach is needed.


  • Traveling Salesman in polynomial time

    - by Andres
    Problem Description: Write a program in any language with the shortest number of characters that solves the following problem: Given a collection of cities and the cost of travel between each pair of them, find the cheapest way of visiting all of the cities and returning to your starting point. All solutions must take at most polynomial time. Input will be in the form of a text file named simply "i". The text file will contain data in the following format:

        city1# city2# cost$
        cityA# cityB# cost$
        ...

    Each element in the file is separated by a space, and there is a newline at the end of every line. Code count does not include whitespace. Solutions that take longer than polynomial time will not be accepted.
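
    Whether a polynomial-time exact solver exists is of course the catch (none is known; finding one would settle P vs NP), but the input format itself is easy to deal with. A minimal C# sketch of reading the edge list into a cost table, assuming the fields are exactly as described, with city names suffixed by '#' and costs by '$':

        using System;
        using System.Collections.Generic;
        using System.IO;

        class TspInput
        {
            static void Main()
            {
                // cost[(a, b)] = travel cost between city a and city b
                var cost = new Dictionary<(string, string), double>();
                foreach (string line in File.ReadLines("i"))
                {
                    string[] parts = line.Split(' ');   // "city1# city2# cost$"
                    string a = parts[0].TrimEnd('#');
                    string b = parts[1].TrimEnd('#');
                    double c = double.Parse(parts[2].TrimEnd('$'));
                    cost[(a, b)] = c;
                    cost[(b, a)] = c;                   // assume costs are symmetric
                }
                Console.WriteLine("{0} edges read", cost.Count / 2);
            }
        }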


  • Salesforce/PHP - Bulk Outbound message (SOAP), Time out issue

    - by Phill Pafford
    Salesforce can send up to 100 requests inside 1 SOAP message. While sending this type of bulk Outbound Message request, my PHP script finishes executing but SF fails to accept the ACK used to clear the message queue on the Salesforce side of things. Looking at the Outbound Message log (monitoring), I see all the messages in a pending state with the Delivery Failure Reason "java.net.SocketTimeoutException: Read timed out". If my script has finished execution, why do I get this error? I have tried these methods to increase the execution time on my server, as I have no access on the Salesforce side:

        set_time_limit(0); // in the script
        max_execution_time = 360 ; Maximum execution time of each script, in seconds
        max_input_time = 360 ; Maximum amount of time each script may spend parsing request data
        memory_limit = 32M ; Maximum amount of memory a script may consume

    I used the high settings just for testing. Any thoughts as to why this is failing the ACK delivery back to Salesforce? Here is some of the code. This is how I accept and send the ACK file for the incoming SOAP request:

        $data = 'php://input';
        $content = file_get_contents($data);
        if ($content) {
            respond('true');
        } else {
            respond('false');
        }

    The respond function:

        function respond($tf) {
            $ACK = <<<ACK
        <?xml version = "1.0" encoding = "utf-8"?>
        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
            <soapenv:Body>
                <notifications xmlns="http://soap.sforce.com/2005/09/outbound">
                    <Ack>$tf</Ack>
                </notifications>
            </soapenv:Body>
        </soapenv:Envelope>
        ACK;
            print trim($ACK);
        }

    These are in a generic script that I include into the script that uses the data for a specific workflow. I can process about 25 requests (that are in 1 SOAP response), but once I go over that I get the timeout error in the Salesforce queue. For 50 requests it usually takes my PHP script 86.77 seconds. Could it be Apache? PHP? I have also tested just accepting the 100-request SOAP response and sending the ACK, and the queue clears out, so I know it's on my side of things. I show no errors in the Apache log; the script runs fine. Thanks for any insight into this, --Phill


  • Time and date dimension in data warehouse

    - by peperg
    I'm building a data warehouse. Each fact has its timestamp. I need to create reports by day, month, quarter, but by hours too. Looking at the examples, I see that dates tend to be saved in dimension tables. But I think that makes no sense for time: the dimension table would grow and grow. On the other hand, a JOIN with a date dimension table is more efficient than using date/time functions in SQL. What are your opinions/solutions? (I'm using Infobright.)


  • Servlet stops working on Tomcat server after some hits or time

    - by nekin
    I have a very strange issue with some of my servlets. Below is my configuration: Folder A has X number of servlets deployed in the Tomcat directory; Folder B has Y number of servlets deployed in the Tomcat directory. After a certain amount of time, or some number of hits to any of the servlets in Folder B, they stop working properly, whereas at the same time all servlets of Folder A work fine. I am not able to trace where I am making a mistake. All the code for both folders' servlets is the same; the only difference is that they interact with different DBs, though each does only very simple read-only operations against its DB. Any ideas?


  • WPF - Have a list source defined at runtime but still have sample data for design time

    - by Vaccano
    I have some ListBoxes in my WPF app. I would like to be able to view how the design looks without having to run the app, but I still want to be able to bind the ItemsSource to my view model. I know I saw a blog post on how to do this, but I cannot seem to find it now. To reiterate: I want dummy data at design time, but real data at run time, without breaking the MVVM pattern. Any ideas?


  • Java type for date/time when using Oracle Date with Hibernate

    - by Marcus
    We have an Oracle Date column. At first, in our Java/Hibernate class we were using java.sql.Date. This worked, but it didn't seem to store any time information in the database when we saved, so I changed the Java data type to Timestamp. Now we get this error:

        org.springframework.beans.factory.BeanCreationException: Error creating bean with name
        'org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor#0' defined in class path
        resource [margin-service-domain-config.xml]: Initialization of bean failed; nested exception is
        org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'sessionFactory'
        defined in class path resource [margin-service-domain-config.xml]: Invocation of init method failed;
        nested exception is org.hibernate.HibernateException: Wrong column type: CREATE_TS, expected: timestamp

    Any ideas on how to map an Oracle Date while retaining the time portion? Update: I can get it to work if I use the Oracle Timestamp data type, but ideally I don't want that level of precision. I just want the basic Oracle Date.


  • Best Data Structure For Time Series Data

    - by TriParkinson
    Hi all, I wonder if someone could take a minute out of their day to give their two cents on my problem. I would like some suggestions on what would be the best data structure for representing, on disk, a large data set of time series data. The main priority is speed of insertion, with the other priorities in decreasing order: speed of retrieval, size on disk, size in memory, speed of removal. I have seen that B+ trees are often used in databases because of their fast search times, but how about their insertion times? Is a linked list really the way to go? Thanks in advance for your time, Tri


  • DateTime: Require the user to enter a time component

    - by Heinzi
    Checking if a user input is a valid date or a valid "date + time" is easy: .NET provides DateTime.TryParse (and, in addition, VB.NET provides IsDate). Now, I want to check if the user entered a date including a time component. So, when using a German locale, 31.12.2010 00:00 should be OK, but 31.12.2010 shouldn't. I know I could use DateTime.TryParseExact like this:

        Dim formats() As String = {"d.M.yyyy H:mm:ss", "dd.M.yyyy H:mm:ss", _
                                   "d.MM.yyyy H:mm:ss", "dd.MM.yyyy H:mm:ss", _
                                   "d.M.yyyy H:mm", ...}
        Dim result = DateTime.TryParseExact(userInput, formats, _
                                            Globalization.CultureInfo.CurrentCulture, ..., result)

    but then I would hard-code the German format of specifying dates (day dot month dot year), which is considered bad practice and will make trouble should we ever want to localize our application. In addition, formats would be quite a large list of all possible combinations (one digit, two digits, ...). Is there a more elegant solution?
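
    One way to avoid hardcoding the patterns is to ask the current culture for its own combined date-and-time patterns via DateTimeFormatInfo.GetAllDateTimePatterns and accept the input only if one of those matches. A minimal C# sketch of the idea; which format specifiers to include ('G', 'g', possibly 'F'/'f') is a judgment call:

        using System;
        using System.Globalization;
        using System.Linq;

        static class DateTimeInput
        {
            // True only when the input parses against one of the culture's
            // combined date-and-time patterns, not a date-only pattern.
            public static bool TryParseDateWithTime(string input, out DateTime result)
            {
                DateTimeFormatInfo dtfi = CultureInfo.CurrentCulture.DateTimeFormat;
                string[] dateTimePatterns = dtfi.GetAllDateTimePatterns('G')
                    .Concat(dtfi.GetAllDateTimePatterns('g'))
                    .ToArray();
                return DateTime.TryParseExact(input, dateTimePatterns,
                    CultureInfo.CurrentCulture, DateTimeStyles.None, out result);
            }
        }

    Under a German locale this accepts "31.12.2010 00:00" but rejects a bare "31.12.2010", and it localizes for free because the patterns come from the culture itself. One caveat: TryParseExact is strict about digit counts, so single-digit inputs like "1.12.2010 0:00" would still need additional patterns.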


  • RockBand-like voice app for PC/OSX / Real time pitch display software

    - by Sai Emrys
    I played Rock Band 2 for the first time a little while ago (at Notacon). One thing I enjoyed about it was getting real-time feedback about my singing. I think it'd be neat to have something like that to run alongside my usual music, so that I can sing to random stuff in my music collection and know when I'm hitting the notes. Is there something like this for PC - ideally for OSX, and ideally that can just operate on arbitrary songs? I don't really care if it's game-like (though that's neat too); I just want it for the singing feedback. And I have no need for pitch correction - ideally what I'd see is just the pitches of the notes in the music and (on the same scale, differently displayed) of the live microphone. I tried to STFW but got no salient hits. :-/ Thanks!


  • Not getting correct time from UniversalDateTime

    - by Nakul Chaudhary
    I send a vCal through mail in a web application, converting the datetime to universal time. If I run the web application locally (local server in India) I get the correct time in my vCal. But when the application runs live (server in the US) I do not get the correct time; it is off by one and a half hours. Please suggest. Code:

        Dim result As StringBuilder = New StringBuilder()
        result.AppendFormat("BEGIN:VCALENDAR{0}", System.Environment.NewLine)
        result.AppendFormat("BEGIN:VEVENT{0}", System.Environment.NewLine)
        result.AppendFormat("SUMMARY:{0}{1}", subject, System.Environment.NewLine)
        result.AppendFormat("LOCATION:{0}{1}", location, System.Environment.NewLine)
        result.AppendFormat("DTSTART:{0}{1}", startDate.ToUniversalTime().ToString("yyyyMMdd\THHmmss\Z"), System.Environment.NewLine)
        result.AppendFormat("DTEND:{0}{1}", endDate.ToUniversalTime().ToString("yyyyMMdd\THHmmss\Z"), System.Environment.NewLine)
        result.AppendFormat("DTSTAMP:{0}{1}", DateTime.Now.ToUniversalTime().ToString("yyyyMMdd\THHmmss\Z"), System.Environment.NewLine)
        result.AppendFormat("DESCRIPTION:{0}{1}", description, System.Environment.NewLine)
        result.AppendFormat("END:VEVENT{0}", System.Environment.NewLine)
        result.AppendFormat("END:VCALENDAR{0}", System.Environment.NewLine)
        Return result.ToString()


  • wpf: design time error while writing Nested type in xaml

    - by viky
    I have created a UserControl which accepts a type of enum and assigns the values of that enum to a ComboBox control in that UserControl. Very simple. I am using this UserControl in DataTemplates. The problem comes when there is a nested type. I assign it using this notation:

        EnumType="{x:Type myNamespace:ParentType+NestedType}"

    It works fine at runtime, but at design time it throws an error saying: Could not create an instance of type 'TypeExtension'. Why? Because of this I am not able to see my window at design time. Any help?

