Search Results

Search found 17686 results on 708 pages for 'high level'.


  • Flex profiling - what is [enterFrameEvent] doing?

    - by Herms
    I've been tasked with finding (and potentially fixing) some serious performance problems with a Flex application that was delivered to us. The application will consistently take up 50 to 100% of the CPU at times when it is simply idling and shouldn't be doing anything. My first step was to run the profiler that comes with FlexBuilder. I expected to find some method that was taking up most of the time, showing me where the bottleneck was. However, I got something unexpected. The top 4 methods were:
    [enterFrameEvent] - 84% cumulative, 32% self time
    [reap] - 20% cumulative and self time
    [tincan] - 8% cumulative and self time
    global.isNaN - 4% cumulative and self time
    All other methods had less than 1% for both cumulative and self time. From what I've found online, the bracketed methods are what the profiler lists when it doesn't have an actual Flex method to show. I saw someone claim that [tincan] is the processing of RTMP requests, and I assume [reap] is the garbage collector. Does anyone know what [enterFrameEvent] is actually doing? I assume it's essentially the "main" function for the event loop, so the high cumulative time is expected. But why is the self time so high? What's actually going on? I didn't expect the player internals to be taking up so much time, especially since nothing is actually happening in the app (and there are no UI updates going on). Is there any good way to dig into what's happening? I know something is going on that shouldn't be (it looks like there must be some kind of busy wait or other runaway loop), but the profiler isn't giving me the results I was expecting. My next step is to start adding debug trace statements in various places to try to track down what's actually happening, but I feel like there has to be a better way.

    Read the article

  • OpenSocial create activity from submit click

    - by russp
    Hi, I'm playing with OpenSocial and think I understand a lot of it (thanks to Google's docs), but one question if I may, about creating an activity. Let's say I have a simple form like this:
    <form> <input type="text" name="" id="testinput" value=""/> <input type="submit" name="" id="" value=""/> </form>
    I want to post the value of the text field (and/or a message, e.g. "just posted") to the user's activity stream. Do I use a function like this?
    function createActivity() { if (viewer) { var activity = opensocial.newActivity({ title: viewer.getDisplayName() + ' VALUE FROM FORM '}); opensocial.requestCreateActivity(activity, "HIGH", function() { setTimeout(initAllData,1000); }); } };
    If so, how do I pass the text field value to it - is it something like this?
    var testinput = document.getElementById("testinput");
    so the function may look like:
    function createActivity() { if (viewer) { var activity = opensocial.newActivity({ title: viewer.getDisplayName() + testinput }); opensocial.requestCreateActivity(activity, "HIGH", function() { setTimeout(initAllData,1000); }); } };
    And how do I trigger the function from the submit button? In basic jQuery I would use $('#submitID').submit(function(){ 'bits in here '});. Is it as simple as that, i.e. use the createActivity function and it will use the OpenSocial framework to "post" to the activity.xml?
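    A minimal sketch of how the pieces could fit together, assuming the gadget already has the viewer object and the initAllData() refresh used above; the form/field ids and the message text are illustrative, not part of any fixed API:

        function createActivity(message) {
          if (!viewer) return;
          var activity = opensocial.newActivity({
            title: viewer.getDisplayName() + ' posted: ' + message
          });
          opensocial.requestCreateActivity(
              activity,
              opensocial.CreateActivityPriority.HIGH,   // the enum form of "HIGH"
              function() { setTimeout(initAllData, 1000); });
        }

        // Attach the handler to the form (submit fires on the form, not the button)
        // and read the field's value at submit time.
        $('#myForm').submit(function(event) {
          event.preventDefault();
          createActivity($('#testinput').val());
        });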

    Read the article

  • Multidimensional array (parent and childs)

    - by Juan
    I have a category system in a MySQL database with parents and children. The database only stores the id of its immediate parent (or 0 if at root level). Since the system allows multiple subcategories there are cases of multiple children. For example:
    [98] Storage
      [1] External
        [3] Pendrives
        [4] Portable hhdds
      [2] Internal
        [5] Sata hhdd
        [6] IDE hhdd
    [...]
    [99] Clothing
    The database would be:
    id  parent_id  name
    1   98         External
    2   98         Internal
    3   1          Pendrives
    4   1          Portable
    5   2          Sata
    6   2          IDE
    98  0          Storage
    99  0          Clothing
    I also have a products table with a category id and I need to get a list of all the products grouped by the first level of categories. For example:
    Product  Category
    A        3
    B        4
    C        5
    D        6
    E        74
    should return 98: A, B, C, D and 99: X, Y, Z... I'm stuck and I can't work out the logic to retrieve it that way. I started by getting the IDs of all the categories that aren't in the first level:
    while ($row = mysql_fetch_assoc($result)) { if ($row['parent_id'] != 0) { $level1[$i]['name'] = utf8_encode($row['categories_name']); $level1[$i]['id'] = $row['categories_id']; } $i++; }
    but I'm having a burnout and can't think of a way to nest them. I thought of some kind of while loop but it turns out infinite :P Any ideas please?
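    One possible shape for the missing loop, sketched in PHP: build an id-to-parent map once, then walk each product's category up to its root. The $categories and $products arrays and their keys are assumptions about what the queries above would return.

        <?php
        // Assumes $categories came from "SELECT id, parent_id FROM categories"
        // and $products from "SELECT name, category_id FROM products".
        $parent = array();
        foreach ($categories as $cat) {
            $parent[$cat['id']] = (int) $cat['parent_id'];
        }

        // Walk up the tree until we reach a category whose parent is 0 (a root).
        function rootCategory($id, $parent) {
            while (isset($parent[$id]) && $parent[$id] != 0) {
                $id = $parent[$id];
            }
            return $id;
        }

        $grouped = array();
        foreach ($products as $p) {
            $root = rootCategory($p['category_id'], $parent);
            $grouped[$root][] = $p['name'];
        }
        // $grouped now looks like: array(98 => array('A','B','C','D'), 99 => array(...))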

    Read the article

  • select all values from a dimension for which there are facts in all other dimensions

    - by ideasculptor
    I've tried to simplify for the purposes of asking this question. Hopefully, this will be comprehensible. Basically, I have a fact table with a time dimension, another dimension, and a hierarchical dimension. For the purposes of the question, let's assume the hierarchical dimension is zip code and state. The other dimension is just descriptive. Let's call it 'customer'. Let's assume there are 50 customers. I need to find the set of states for which there is at least one zip code in which EVERY customer has at least one fact row for each day in the time dimension. If a zip code has only 49 customers, I don't care about it. If even one of the 50 customers doesn't have a value for even 1 day in a zip code, I don't care about it. Finally, I also need to know which zip codes qualified the state for selection. Note, there is no requirement that every zip code have a full data set - only that at least one zip code does. I don't mind making multiple queries and doing some processing on the client side. This is a dataset that only needs to be generated once per day and can be cached. I don't even see a particularly clean way to do it with multiple queries short of simply brute-force iteration, and there are a heck of a lot of 'zip codes' in the data set (not actually zip codes, but there are approximately 100,000 entries in the lower level of the hierarchy and several hundred in the top level, so zipcode-state is a reasonable analogy).
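    Since the requirement boils down to "zip codes whose distinct (customer, day) pairs cover the whole customer-by-day grid", one grouped query can express it. Every table and column name below (fact_table, dim_zip, dim_customer, dim_date, zip_id, customer_id, day_id, state) is an assumption about the schema described above, and at 100,000 zip codes it would need suitable indexes to be practical:

        SELECT  z.state, d.zip_id
        FROM (
                SELECT DISTINCT zip_id, customer_id, day_id
                FROM   fact_table
             ) d
        JOIN    dim_zip z ON z.zip_id = d.zip_id
        GROUP BY z.state, d.zip_id
        HAVING  COUNT(*) = (SELECT COUNT(*) FROM dim_customer)
                         * (SELECT COUNT(*) FROM dim_date);

    The result lists every qualifying zip code together with its state, so collapsing it to the set of states (while remembering which zip codes qualified each one) can happen on the client side.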

    Read the article

  • How to make write operation idempotent?

    - by Morgan Cheng
    I'm reading an article about the recently released Gizzard sharding framework by Twitter (http://engineering.twitter.com/2010/04/introducing-gizzard-framework-for.html). It mentions that all write operations must be idempotent to ensure high reliability. According to Wikipedia, "Idempotent operations are operations that can be applied multiple times without changing the result." But, IMHO, in Gizzard's case, idempotent write operations should also be operations whose sequence doesn't matter. Now, my question is: how do you make a write operation idempotent? The only thing I can imagine is to have a version number attached to each write. For example, in a blog system, each blog must have a $blog_id and $content. At the application level, we always write blog content like this: write($blog_id, $content, $version). The $version is determined to be unique at the application level. So, if the application first tries to set one blog to "Hello world" and then wants it to be "Goodbye", the writes are idempotent. We have these two write operations:
    write($blog_id, "Hello world", 1);
    write($blog_id, "Goodbye", 2);
    These two operations are supposed to change two different records in the DB. So, no matter how many times and in what sequence these two operations are executed, the results are the same. This is just my understanding. Please correct me if I'm wrong.
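    One way to realise the versioned-write idea at the storage level is to key each write on (blog_id, version) and let the database ignore re-deliveries. A sketch in MySQL syntax; the table and column names are assumptions, not anything Gizzard prescribes:

        CREATE TABLE blog_revisions (
            blog_id  BIGINT NOT NULL,
            version  BIGINT NOT NULL,
            content  TEXT   NOT NULL,
            PRIMARY KEY (blog_id, version)
        );

        -- Replaying either write any number of times, in any order, leaves the
        -- table in the same final state: the key makes the write idempotent.
        INSERT IGNORE INTO blog_revisions (blog_id, version, content)
        VALUES (42, 1, 'Hello world');

        INSERT IGNORE INTO blog_revisions (blog_id, version, content)
        VALUES (42, 2, 'Goodbye');

        -- The current content is simply the row with the highest version per blog.
        SELECT content FROM blog_revisions
        WHERE blog_id = 42 ORDER BY version DESC LIMIT 1;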

    Read the article

  • Mule ESB 3.2 Splitter destroys Enricher results

    - by Eddie
    Here is the snippet of my flow:
    <logger message="PRODUCT_ID = #[header:productID]" level="INFO" doc:name="Logger"/>
    <splitter evaluator="jxpath" expression="//*/BisacHeaderCodes" doc:name="Splitter"/>
    <logger message="PRODUCT_ID_POST_SPLITTER = #[header:productID]" level="INFO" doc:name="Logger"/>
    #[header:productID] was set up prior to the Logger call. I tried #[variable:productID] and got the same result. When I run it, this is the output I get:
    INFO 2012-04-05 23:12:47,865 [[bookinista_order_management].connector.http.mule.default.receiver.02] org.mule.api.processor.LoggerMessageProcessor: PRODUCT_ID = 72
    ERROR 2012-04-05 23:12:47,871 [[bookinista_order_management].connector.http.mule.default.receiver.02] org.mule.exception.DefaultSystemExceptionStrategy: Caught exception in Exception Strategy: Expression Evaluator "header" with expression "outbound:productID" returned null but a value was required. org.mule.api.expression.RequiredValueException: Expression Evaluator "header" with expression "outbound:productID" returned null but a value was required.
    So, right before the Splitter I have a perfect value in my header, and right after the Splitter that value disappears! I understand that the Splitter propagates only part of the payload, but shouldn't it leave headers and variables alone? Any ideas for a workaround?

    Read the article

  • What's a good way to provide additional decoration/metadata for Python function parameters?

    - by Will Dean
    We're considering using Python (IronPython, but I don't think that's relevant) to provide a sort of 'macro' support for another application, which controls a piece of equipment. We'd like to write fairly simple functions in Python, which take a few arguments - these would be things like times and temperatures and positions. Different functions would take different arguments, and the main application would contain a user interface (something like a property grid) which allows the users to provide values for the Python function arguments. So, for example, function1 might take a time and a temperature, and function2 might take a position and a couple of times. We'd like to be able to dynamically build the user interface from the Python code. Things which are easy to do are to find a list of functions in a module, and (using inspect.getargspec) to get a list of arguments to each function. However, just a list of argument names is not really enough - ideally we'd like to be able to include some more information about each argument - for instance, its 'type' (high-level type - time, temperature, etc, not language-level type), and perhaps a 'friendly name' or description. So, the question is, what are good 'pythonic' ways of adding this sort of information to a function? The two possibilities I have thought of are:
    1. Use a strict naming convention for arguments, and then infer stuff about them from their names (fetched using getargspec).
    2. Invent our own docstring meta-language (could be little more than CSV) and use the docstring for our metadata.
    Because Python seems pretty popular for building scripting into large apps, I imagine this is a solved problem with some common conventions, but I haven't been able to find them.
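    A third possibility, sketched below, is to attach the metadata with a decorator so it lives next to the function but stays out of the docstring. The param helper, the attribute name and the type strings ('time', 'temperature') are all illustrative, not an established convention:

        import inspect

        def param(name, kind, description=""):
            """Attach UI metadata for one argument to the function object."""
            def wrap(func):
                meta = getattr(func, "_param_meta", {})
                meta[name] = {"kind": kind, "description": description}
                func._param_meta = meta
                return func
            return wrap

        @param("soak_time", "time", "How long to hold the temperature")
        @param("target_temp", "temperature", "Setpoint in degrees C")
        def function1(target_temp, soak_time):
            pass

        def describe(func):
            """What the host application would read to build its property grid."""
            args = inspect.getargspec(func).args
            return [(a, func._param_meta.get(a, {})) for a in args]

        print(describe(function1))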

    Read the article

  • Custom UIProgressView drawing weirdness

    - by Werner
    I am trying to create my own custom UIProgressView by subclassing it and then overriding the drawRect method. Everything works as expected except the progress filling bar. I can't get the height and image right. The images are both in Retina resolution and the Simulator is in Retina mode. The images are called "[email protected]" (28px high) and "[email protected]" (32px high).
    CustomProgressView.h
    #import <UIKit/UIKit.h>
    @interface CustomProgressView : UIProgressView
    @end
    CustomProgressView.m
    #import "CustomProgressView.h"
    @implementation CustomProgressView
    - (id)initWithFrame:(CGRect)frame {
        self = [super initWithFrame:frame];
        if (self) {
            // Initialization code
        }
        return self;
    }
    // Only override drawRect: if you perform custom drawing.
    // An empty implementation adversely affects performance during animation.
    - (void)drawRect:(CGRect)rect {
        // Drawing code
        self.frame = CGRectMake(self.frame.origin.x, self.frame.origin.y, self.frame.size.width, 16);
        UIImage *progressBarTrack = [[UIImage imageNamed:@"progressBarTrack"] resizableImageWithCapInsets:UIEdgeInsetsZero];
        UIImage *progressBar = [[UIImage imageNamed:@"progressBar"] resizableImageWithCapInsets:UIEdgeInsetsMake(4, 4, 5, 4)];
        [progressBarTrack drawInRect:rect];
        NSInteger maximumWidth = rect.size.width - 2;
        NSInteger currentWidth = floor([self progress] * maximumWidth);
        CGRect fillRect = CGRectMake(rect.origin.x + 1, rect.origin.y + 1, currentWidth, 14);
        [progressBar drawInRect:fillRect];
    }
    @end
    The resulting ProgressView has the right height and width. It also fills at the right percentage (currently set at 80%). But the progress fill image isn't drawn correctly. Does anyone see where I go wrong?

    Read the article

  • Which web framework or technologies would suit me?

    - by Suraj Chandran
    Hi, I had been working on desktop apps and server-side (non-web) development for some time, and now I am diving into the web for the first time. I plan to write a scalable, enterprise-level app. I have worked with Java, JavaScript, jQuery etc., but I absolutely hate JSP. So is there any framework that focuses on developing enterprise-level web apps without JSP? I liked Wicket's approach, but I think it is a little lacking in support for dynamic HTML and jQuery (yes, I looked at wiQuery). Also, I feel making Wicket apps scalable would take some sweat. Can Spring MVC, Struts2 etc. help me with this using just, say, Java, JavaScript, and jQuery? Or are there any other options for me like Wicket? Please forgive me if anything above looks insane, I am still working on my understanding of enterprise web apps. NOTE: If you think that I should take a different direction or approach, please do suggest!

    Read the article

  • How to update a Widget dynamically (Not waiting 30 min for onUpdate to be called)?

    - by Donal Rafferty
    I am currently learning about widgets in Android. I want to create a WiFi widget that will display the SSID and the RSSI (signal) level. But I also want to be able to send it data from a service I am running that calculates the quality of sound over WiFi. Here is what I have after some reading and a quick tutorial:
    public class WlanWidget extends AppWidgetProvider {
        RemoteViews remoteViews;
        AppWidgetManager appWidgetManager;
        ComponentName thisWidget;
        WifiManager wifiManager;
        public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) {
            Timer timer = new Timer();
            timer.scheduleAtFixedRate(new WlanTimer(context, appWidgetManager), 1, 10000);
        }
        private class WlanTimer extends TimerTask {
            RemoteViews remoteViews;
            AppWidgetManager appWidgetManager;
            ComponentName thisWidget;
            public WlanTimer(Context context, AppWidgetManager appWidgetManager) {
                this.appWidgetManager = appWidgetManager;
                remoteViews = new RemoteViews(context.getPackageName(), R.layout.widget);
                thisWidget = new ComponentName(context, WlanWidget.class);
                wifiManager = (WifiManager) context.getSystemService(Context.WIFI_SERVICE);
            }
            @Override
            public void run() {
                remoteViews.setTextViewText(R.id.widget_textview, wifiManager.getConnectionInfo().getSSID());
                appWidgetManager.updateAppWidget(thisWidget, remoteViews);
            }
        }
    }
    The above seems to work OK; it updates the SSID on the widget every 10 seconds. However, what is the most efficient way to get the information from my already-running service so it updates periodically on my widget? Also, is there a better approach to updating the widget than using a Timer and TimerTask? (Avoid polling.)
    UPDATE: As per Karan's suggestion I have added the following code in my Service:
    RemoteViews remoteViews = new RemoteViews(context.getPackageName(), R.layout.widget);
    ComponentName thisWidget = new ComponentName(context, WlanWidget.class);
    remoteViews.setTextViewText(R.id.widget_QCLevel, " " + qcPercentage);
    AppWidgetManager.getInstance(context).updateAppWidget(thisWidget, remoteViews);
    This gets run every time the RSSI level changes, but it still never updates the TextView on my widget. Any ideas why?
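    For comparison, a sketch of a push-style update done entirely from the service, using the layout and view ids from the question. One thing worth checking is that the RemoteViews pushed from the service sets every view the widget shows, because updateAppWidget() replaces the previous views rather than merging with them; the method name and parameters below are illustrative:

        // Inside the running Service; qcPercentage comes from the sound-quality calculation.
        private void pushWidgetUpdate(String ssid, int qcPercentage) {
            RemoteViews views = new RemoteViews(getPackageName(), R.layout.widget);
            // Rebuild the complete view state, not just the field that changed.
            views.setTextViewText(R.id.widget_textview, ssid);
            views.setTextViewText(R.id.widget_QCLevel, qcPercentage + " %");

            AppWidgetManager manager = AppWidgetManager.getInstance(this);
            ComponentName widget = new ComponentName(this, WlanWidget.class);
            manager.updateAppWidget(manager.getAppWidgetIds(widget), views);
        }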

    Read the article

  • Planning and coping with deadlines in SCRUM

    - by John
    From Wikipedia: During each “sprint”, typically a two to four week period (with the length being decided by the team), the team creates a potentially shippable product increment (for example, working and tested software). The set of features that go into a sprint come from the product “backlog,” which is a prioritized set of high level requirements of work to be done. Which backlog items go into the sprint is determined during the sprint planning meeting. During this meeting, the Product Owner informs the team of the items in the product backlog that he or she wants completed. The team then determines how much of this they can commit to complete during the next sprint. During a sprint, no one is allowed to change the sprint backlog, which means that the requirements are frozen for that sprint. After a sprint is completed, the team demonstrates the use of the software.
    I was reading this and two questions immediately popped into my head:
    1) If a sprint is only a couple of weeks, decided in a single meeting, how can you accurately plan what can be achieved? High-level tasks can't be estimated accurately in my experience, and can easily double what seems reasonable. As a developer, I hate being pushed into committing to what I can deliver in the next month based on a set of customer requirements; this goes against everything I know about generating reliable estimates rather than having to roughly estimate and then double it!
    2) Since the requirements are supposed to be locked and a deliverable product available at the end, what happens when something does take twice as long? What if this feature is only 1/2 done at the end of the sprint?
    The wiki article goes on to talk about sprint planning, where things are broken down into much smaller tasks for estimation (<1 day), but this is after the sprint features are already planned and the release agreed, isn't it? Kind of like a salesman promising something without consulting the developers.

    Read the article

  • design an extendable database model

    - by wishi_
    Hi! Currently I'm doing a project whose specifications are unclear - well, who doesn't? I wonder what's the best development strategy to design a DB that's going to be extended sooner or later with additional tables and relations. I want to include "changeability". My main concern is that I want to apply design patterns (it's a university project) and I want to separate the constant factors from those that change by choosing appropriate design patterns - in my case MVC and a set of sub-patterns at the model level. When it comes to the DB, however, I may have to redesign my model in my MVC approach, because my domain model at a later stage may require a different set of classes representing the DB tables. I use Hibernate as an abstraction layer between DB and application. Would you start with a very minimal DB, just a few tables and relations? And what if I want an efficient DB, too? I wonder what strategies are applied in the real world. Stakeholder analysis, for example, isn't a sufficient planning solution when it comes to changing requirements. I think - at the DB level - my design pattern ends. So there's a breach whose impact I'd like to minimize with a smart strategy.

    Read the article

  • Managing project configurations in VS 2010

    - by Toby
    I'm working on a solution with multiple projects (class libraries, interop, web application, etc) in VS2010. For the web application, I would like to take advantage of the config transformations in VS2010, so at one point I added configurations for each of our environments: Development, Test, Production, and so on. Some time later, after having rearranged the project layout, I noticed that some projects show all of the configurations in the properties page dropdown. Some projects (added since I did that setup) show only the standard Debug & Release configurations. Once I realized that this was going to make build configurations worse, not better, I decided to remove all of the extra configurations I had added. I've removed all of the various configuration options from the solution, but the projects that had the alternate configuration options still have them, and I can't figure out how to get rid of them in individual projects. Also, now that I see that not all projects have to have the same configurations, I would like to create my environmental configurations at the solution level, and in the web application project (for the config transforms), but leave all of the class libraries with the basic Debug/Release configurations. I've been unable to find any tool in the UI, or any information on the 'Net, concerning how to set up such a thing. So, in short, what's the best/easiest way to manage configurations at the project level in VS2010?

    Read the article

  • Threshold of blurry image - part 2

    - by 1''
    How can I threshold this blurry image to make the digits as clear as possible? In a previous post, I tried adaptively thresholding a blurry image (left), which resulted in distorted and disconnected digits (right). Since then, I've tried using a morphological closing operation as described in this post to make the brightness of the image uniform. If I adaptively threshold this image, I don't get significantly better results. However, because the brightness is approximately uniform, I can now use an ordinary threshold. This is a lot better than before, but I have two problems:
    1. I had to manually choose the threshold value. Although the closing operation results in uniform brightness, the level of brightness might be different for other images.
    2. Different parts of the image would do better with slight variations in the threshold level. For instance, the 9 and 7 in the top left come out partially faded and should have a lower threshold, while some of the 6s have fused into 8s and should have a higher threshold.
    I thought that going back to an adaptive threshold, but with a very large block size (1/9th of the image), would solve both problems. Instead, I end up with a weird "halo effect" where the centre of the image is a lot brighter, but the edges are about the same as the normally-thresholded image.
    Edit: remi suggested morphologically opening the thresholded image at the top right of this post. This doesn't work too well. Using elliptical kernels, only a 3x3 is small enough to avoid obliterating the image entirely, and even then there are significant breakages in the digits.
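    For reference, a minimal OpenCV (Python binding) sketch of one variant of the closing-then-thresholding pipeline described above: estimate the background with a large closing, divide it out to flatten the illumination, then let Otsu's method pick the global threshold so no value has to be chosen by hand. The file name, kernel size and structuring-element shape are guesses that would need tuning per image:

        import cv2
        import numpy as np

        img = cv2.imread("digits.png", cv2.IMREAD_GRAYSCALE)

        # A large morphological closing removes the (dark) digits and leaves an
        # estimate of the uneven background illumination.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (31, 31))
        background = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)

        # Divide the image by its background so the brightness is flat everywhere.
        flat = cv2.divide(img, background, scale=255)

        # With flat illumination, Otsu's method chooses the threshold automatically.
        _, binary = cv2.threshold(flat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        cv2.imwrite("binary.png", binary)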

    Read the article

  • ASP.NET MVC twitter/myspace style routing

    - by Astrofaes
    Hi guys, this is my first post after being a long-time lurker - so please be gentle :-) I have a website similar to Twitter, in that people can sign up and choose a 'friendly url', so on my site they would have something like mydomain.com/benjones. I also have root-level static pages such as mydomain.com/about, and of course my homepage: mydomain.com/. I'm new to ASP.NET MVC 2 (in fact I just started today) and I've set up the following routes to try and achieve the above:
    public static void RegisterRoutes(RouteCollection routes) {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
        routes.IgnoreRoute("content/{*pathInfo}");
        routes.IgnoreRoute("images/{*pathInfo}");
        routes.MapRoute("About", "about", new { controller = "Common", action = "About" });
        // User profile sits at root level so check for this before displaying the homepage
        routes.MapRoute("UserProfile", "{url}", new { controller = "User", action = "Profile", url = "" });
        routes.MapRoute("Home", "", new { controller = "Home", action = "Index", id = "" });
    }
    For the most part this works fine; however, my homepage is not being triggered! Essentially, when you browse to mydomain.com, it seems to trigger the UserProfile route with an empty {url} parameter, so the homepage is never reached! Any ideas on how I can show the homepage?
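    Routes are matched in the order they are registered, so one hedged guess at a fix is to map the empty URL to Home before the catch-all profile route, and drop the url = "" default so an empty path can no longer satisfy it. A sketch based on the code above (controller names unchanged):

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
            routes.IgnoreRoute("content/{*pathInfo}");
            routes.IgnoreRoute("images/{*pathInfo}");

            routes.MapRoute("About", "about",
                new { controller = "Common", action = "About" });

            // Register the root URL first so it never falls through to the profile route.
            routes.MapRoute("Home", "",
                new { controller = "Home", action = "Index" });

            // No default for {url}: an empty path can no longer match this route.
            routes.MapRoute("UserProfile", "{url}",
                new { controller = "User", action = "Profile" });
        }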

    Read the article

  • NHibernate - Retrieving Lots of Data Becomes Exponentially Slow

    - by nfplee
    Hi, I have an issue when I retrieve lots of data in NHibernate (such as when producing a report) the page becomes exponentially slower the more data it has to retrieve. I found the following article: http://nhforge.org/blogs/nhibernate/archive/2008/10/30/bulk-data-operations-with-nhibernate-s-stateless-sessions.aspx It explains how doing bulk data operations in NHibernate is slow since the first level cache grows too large and how you should use the IStatelessSession instead. The trouble I have is that I don't wish to tie my application to NHibernate so I've added a wrapper around ISession. I then use Linq as my query mechanism but IStatelessSession does not support Linq (it may do in NHibernate 3 but the Linq provider is not stable as it stands at the moment). I then read that you could do a clear after so many iterations to clear out the first level cache. The problem now is that you can't use lazy loading. The linq provider doesn't allow you to override the mapping defined (or eagerly fetch the additional data) so whenever I grab data which is lazy loaded after I have cleared the session an exception is thrown. I'm completely lost on what do now. I like the ease of producing reports with linq but the limitations of the inbuilt linq provider in NHibernate seem to be holding me back. I'd really appreciate it if someone could show me an alternative approach. Thanks
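    Not an answer to the Linq-provider limitation itself, but for reference, a sketch of the "clear the first-level cache every N rows" pattern from the linked article, with the association the report needs fetched eagerly so nothing lazy is touched after a Clear(). The Order/Customer entities, the batch size and report.AddLine are purely illustrative:

        using (var session = sessionFactory.OpenSession())
        {
            var orders = session.CreateCriteria<Order>()
                .SetFetchMode("Customer", FetchMode.Eager)   // pull what the report needs up front
                .List<Order>();

            int i = 0;
            foreach (var order in orders)
            {
                report.AddLine(order.Number, order.Customer.Name);

                if (++i % 500 == 0)
                    session.Clear();   // keep the first-level cache from growing unbounded
            }
        }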

    Read the article

  • Silverlight Binding - Binds when item is added but doesn't get updates.

    - by dw
    Hello, I'm sort of at a loss as to why this doesn't work, considering I got it from working code and just added a new level of indirection. Here's what I have. Basically, when I bind the ViewModel to a list, the binding picks up when items are added to a collection. However, if an update occurs to the item that is bound, it doesn't get updated. Basically, I have an ObservableCollection that contains a custom class with a string value. When that string value gets updated I need it to update the list. Right now, when I debug, the list item does get updated correctly, but the UI doesn't reflect the change. If I set the bound item to a member variable, null it out and then reset it to the right collection, it will work, but that's not the desired behavior. Here is a mock-up of the code; hopefully someone can tell me where I am wrong. Also, I've tried implementing INotifyPropertyChanged at every level in the code below.
    public class Class1 {
        public string ItemName;
    }
    public class Class2 {
        private ObservableCollection<Class1> _items;
        private Class2() // Singleton
        {
            _items = new ObservableCollection<Class1>();
        }
        public ObservableCollection<Class1> Items {
            get { return _items; }
            internal set { _items = value; }
        }
    }
    public class Class3 {
        private Class2 _Class2Instnace;
        private Class3() {
            _Class2Instnace = Class2.Instance;
        }
        public ObservableCollection<Class1> Items2 {
            get { return _Class2Instnace.Items; }
        }
    }
    public class MyViewModel : INotifyPropertyChanged {
        private Class3 _myClass3;
        private MyViewModel() {
            _myClass3 = new Class3();
        }
        private ObservableCollection<Class1> BindingItems {
            get { return _myClass3.Items2; } // Binds when adding items but not when a Class1.ItemName gets updated.
        }
    }
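    A hedged guess at the missing piece: the list updates on additions because ObservableCollection raises collection-change events itself, but a change inside an item is only picked up if that item raises PropertyChanged for a bound property, and a public field like ItemName cannot do that. A sketch of Class1 rewritten accordingly (requires using System.ComponentModel):

        public class Class1 : INotifyPropertyChanged
        {
            private string _itemName;

            public string ItemName            // a property, not a field, so the binding can attach
            {
                get { return _itemName; }
                set
                {
                    if (_itemName == value) return;
                    _itemName = value;
                    OnPropertyChanged("ItemName");   // tells the UI this one item changed
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            private void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(name));
            }
        }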

    Read the article

  • Web Applications Development: Security practices for Application design

    - by Shyam
    Hi, as I am creating more web applications that are targeted at multiple users, I've realised that I have to start thinking about user management and security. At a glance, and in my ideal world, all users belong to a group. Permissions and access are thus defined per group (and inherited by the users of that group). Logically, I have my group of administrators, which is identified with a level "7" (integer) clearance. A group of web users has, for example, level "1". This generally all works great for me, but I need some kind of checklist to keep in mind for how I secure my system, and some general practices. I am not looking for a specific environment; I want to learn the whys and hows. An example is privilege escalation. If someone were able to "push" themselves into a group with higher privileges, for example the administrators, how can I prevent this, or what measures should I take as a precaution? I'd rather not walk into a pitfall there. My question is basically: where can I find a good resource, list, policy or book that explains the security of web applications - the whys and the hows - and is readable if you don't have any experience in the realm of advanced security? I prefer a free resource, as I believe I couldn't be the first one who thought about this. Thank you for your answers, comments and feedback.

    Read the article

  • Linq to SQL duplicating entry when referencing FK

    - by Oscar
    Hi! I am still facing some problems when using LINQ to SQL. I am also looking for answers by myself, but this problem is so awkward that I am having trouble finding the right keywords to search for it. I have this code here:
    public CustomTask SaveTask(string token, CustomTask task) {
        TrackingDataContext dataConext = new TrackingDataContext();
        //Check the token for security
        if (SessionTokenBase.Instance.ExistsToken(Convert.ToInt32(token)) == null) return null;
        //Populates the Task - the "real" Linq to SQL object
        Task t = new Task();
        t.Title = task.Title;
        t.Description = task.Description;
        //****The next 4 lines are important****
        if (task.Severity != null) t.Severity = task.Severity;
        else t.SeverityID = task.SeverityID;
        t.StateID = task.StateID;
        if (task.TeamMember != null) t.TeamMember = task.TeamMember;
        else t.ReporterID = task.ReporterID;
        if (task.ReporterTeam != null) t.Team = task.ReporterTeam;
        else t.ReporterTeamID = task.ReporterTeamID;
        //Saves/Updates the task
        dataConext.Tasks.InsertOnSubmit(t);
        dataConext.SubmitChanges();
        task.ID = t.ID;
        return task;
    }
    The problem is that I am sending the ID of the severity, and then I get this situation. DB state before calling the method:
    ID  Name
    1   high
    2   medium
    3   low
    I call the method selecting "medium" as the severity. DB state after calling the method:
    ID  Name
    1   high
    2   medium
    3   low
    4   medium
    The point is: it identified that the ID was related to the "medium" entry (and for this reason it could populate the "Name" column correctly), but it duplicated this entry. The problem is: why?!
    Some explanation about the code: CustomTask is almost the same as Task, but I was having problems regarding serialization, as can be seen here. I don't want to send the Severity property populated because I want my message to be as small as possible.
    Could anyone clarify why it recognizes the entry but creates a new entry in the DB?
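    A hedged guess, based on a common LINQ to SQL pitfall rather than anything confirmed by the code above: if task.Severity was materialized by a different (or serialized, detached) DataContext, the new context treats it as a brand-new entity and inserts a copy. Two ways around that, sketched with assumed names (Severities, ID):

        // Option 1: only ever set the foreign key, never the detached entity.
        t.SeverityID = task.SeverityID;

        // Option 2: resolve the association inside the same DataContext that will insert.
        t.Severity = dataConext.Severities.Single(s => s.ID == task.SeverityID);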

    Read the article

  • Data historian queries

    - by Scott Dennis
    Hi, I have a table that contains data for electric motors. The format is:
    DATE (DateTime) | TagName (VarChar(50)) | Val (Float)
    2009-11-03 17:44:13.000 | Motor_1 | 123.45
    2009-11-04 17:44:13.000 | Motor_1 | 124.45
    2009-11-05 17:44:13.000 | Motor_1 | 125.45
    2009-11-03 17:44:13.000 | Motor_2 | 223.45
    2009-11-04 17:44:13.000 | Motor_2 | 224.45
    Data for each motor is inserted daily, so there would be 31 Motor_1s and 31 Motor_2s etc. We do this so we can trend it on our control system displays. I am using views to extract last month's max val and last month's min val, and the same for this month's data. Then I join the two and calculate the difference to get the actual run hours for that month. The "Val" is a non-resettable accumulation from a PLC (controller). This is my query for last month's max value:
    SELECT TagName, Val AS Hours FROM dbo.All_Data_From_Last_Mon AS cur WHERE (NOT EXISTS (SELECT TagName, Val FROM dbo.All_Data_From_Last_Mon AS high WHERE (TagName = cur.TagName) AND (Val > cur.Val)))
    This is my query for last month's min value:
    SELECT TagName, Val AS Hours FROM dbo.All_Data_From_Last_Mon AS cur WHERE (NOT EXISTS (SELECT TagName, Val FROM dbo.All_Data_From_Last_Mon AS high WHERE (TagName = cur.TagName) AND (Val < cur.Val)))
    This is the query that calculates the difference, and it runs a bit slow:
    SELECT dbo.Motors_Last_Mon_Max.TagName, STR(dbo.Motors_Last_Mon_Max.Hours - dbo.Motors_Last_Mon_Min.Hours, 12, 2) AS Hours FROM dbo.Motors_Last_Mon_Min RIGHT OUTER JOIN dbo.Motors_Last_Mon_Max ON dbo.Motors_Last_Mon_Min.TagName = dbo.Motors_Last_Mon_Max.TagName
    I know there is a better way. Ultimately I just need last month's total and this month's total. Any help would be appreciated. Thanks in advance.
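    For what it's worth, a sketch of doing the whole thing in one grouped query instead of min/max views plus a join, assuming the run hours for a month are simply MAX(Val) - MIN(Val) within that month; the table name dbo.MotorData is a placeholder for the real history table, and the syntax matches SQL Server (to go with the STR() call above):

        SELECT  TagName,
                YEAR([DATE])  AS Yr,
                MONTH([DATE]) AS Mo,
                STR(MAX(Val) - MIN(Val), 12, 2) AS Hours
        FROM    dbo.MotorData               -- table name assumed
        GROUP BY TagName, YEAR([DATE]), MONTH([DATE])
        ORDER BY TagName, Yr, Mo;

    Filtering the WHERE clause to last month and this month gives the two totals directly, without needing the intermediate views.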

    Read the article

  • Reduce durability in MySQL for performance

    - by Paul Prescod
    My site occasionally has fairly predictable bursts of traffic that increase the throughput by 100 times more than normal. For example, we are going to be featured on a television show, and I expect in the hour after the show I'll get more than 100 times more traffic than normal. My understanding is that MySQL (InnoDB) generally keeps my data in a bunch of different places:
    RAM buffers
    commitlog
    binary log
    actual tables
    all of the above places on my DB slave
    This is too much "durability" given that I'm on an EC2 node and most of the stuff goes across the same network pipe (file systems are network attached). Plus the drives are just slow. The data is not high value and I'd rather take a small chance of a few minutes of data loss than have a high probability of an outage when the crowd arrives. During these traffic bursts I would like to do all of that I/O only if I can afford it. I'd like to just keep as much in RAM as possible (I have a fair chunk of RAM compared to the data size that would be touched over an hour). If buffers get scarce, or the I/O channel is not too overloaded, then sure, I'd like things to go to the commitlog or binary log to be sent to the slave. If, and only if, the I/O channel is not overloaded, I'd like to write back to the actual tables. In other words, I'd like MySQL/InnoDB to use a "write back" cache algorithm rather than a "write through" cache algorithm. Can I convince it to do that? If this is not possible, I am interested in general MySQL write-performance optimization tips. Most of the docs are about optimizing read performance, but when I get a crowd of users, I am creating accounts for all of them, so that's a write-heavy workload.
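    A sketch of the usual knobs for trading durability for write throughput during a burst; these are real MySQL/InnoDB settings, but the specific values, and the idea of flipping them temporarily around the event, are only a suggestion to test rather than a recommendation:

        -- Flush the InnoDB log to disk about once per second instead of at every
        -- commit (a crash can lose roughly the last second of transactions).
        SET GLOBAL innodb_flush_log_at_trx_commit = 2;

        -- Let the OS decide when to sync the binary log instead of syncing each write.
        SET GLOBAL sync_binlog = 0;

    Keeping as much as possible in RAM is mostly governed by innodb_buffer_pool_size, which in older MySQL versions has to be set in my.cnf and needs a restart, so it is worth sizing before the burst rather than during it.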

    Read the article

  • SPSS - sum of squares change radically with slight model changes in ANOVA??

    - by Pat
    I have noticed that the sum of squares in my models can change fairly radically with even the slightest adjustment to my models. Is this normal? I'm using SPSS 16, and both models presented below used the same data and variables with only one small change - categorizing one of the variables as either a 2-level or a 3-level variable. Details: using a 2 x 2 x 6 mixed-model ANOVA, with the 6 being the repeated measure, I get the following in the between-group analysis:
    Source    | Type III SS | df | MS      | F      | Sig
    intercept | 4086.46     | 1  | 4086.46 | 104.93 | .000
    X         | 224.61      | 1  | 224.61  | 5.77   | .019
    Y         | 2.60        | 1  | 2.60    | .07    | .80
    X by Y    | 19.25       | 1  | 19.25   | .49    | .49
    Error     | 2570.40     | 66 | 38.95   |        |
    Then, when I use the exact same data but a slightly different model in which variable Y has 3 levels instead of 2 levels, I get the following:
    Source    | Type III SS | df | MS      | F     | Sig
    intercept | 3603.88     | 1  | 3603.88 | 90.89 | .000
    X         | 171.89      | 1  | 171.89  | 4.34  | .041
    Y         | 19.23       | 2  | 9.62    | .24   | .79
    X by Y    | 17.90       | 2  | 17.90   | .80   | .80
    Error     | 2537.76     | 64 | 39.65   |       |
    I don't understand why variable X would have a different sum of squares simply because variable Y gets divided up into 3 levels instead of 2. This is also the case in the within-groups analysis. Please help me understand :D Thank you in advance, Pat

    Read the article

  • What are these stray zero-byte files extracted from tarball? (OSX)

    - by Scott M
    I'm extracting a folder from a tarball, and I see these zero-byte files showing up in the result (where they are not in the source.) Setup (all on OS X): On machine one, I have a directory /My/Stuff/Goes/Here/ containing several hundred files. I build it like this tar -cZf mystuff.tgz /My/Stuff/Goes/Here/ On machine two, I scp the tgz file to my local directory, then unpack it. tar -xZf mystuff.tgz It creates ~scott/My/Stuff/Goes/, but then under Goes, I see two files: Here/ - a directory, Here.bGd - a zero byte file. The "Here.bGd" zero-byte file has a random 3-character suffix, mixed upper and lower-case characters. It has the same name as the lowest-level directory mentioned in the tar-creation command. It only appears at the lowest level directory named. Anybody know where these come from, and how I can adjust my tar creation to get rid of them? Update: I checked the table of contents on the files using tar tZvf: toc does not list the zero-byte files, so I'm leaning toward the suggestion that the uncompress machine is at fault. OS X is version 10.5.5 on the unzip machine (not sure how to check the filesystem type). Tar is GNU tar 1.15.1, and it came with the machine.

    Read the article

  • Using CGContextDrawTiledImage at different zooms causes massive memory growth

    - by Jacques
    I'm working on an app where there's a view in a zoomable UIScrollView. When the user zooms in or out, I redraw the view that's in the UIScrollView to be nice and sharp. That view has a background image that I draw with CGContextDrawTiledImage. I noticed that memory usage grows every time I switch to a new zoom level. It looks like CGContextDrawTiledImage keeps a cache somewhere of the image scaled to different sizes. So, if I go from 1.0 to 1.1x zoom, memory use grows. Going back to 1.0 doesn't cause it to grow, but then going to 1.05 and then 1.2 causes it to grow twice. Back to 1.1 and no growth. Of course, the zoom level is under user control so I don't have control over how many zoom levels happen. Right now my background image is kind of massive (512x512), so this causes memory usage to grow very quickly. It doesn't show up as a memory leak in Instruments, just additional allocations that never get freed. I've tried to find a way to free the cache that appears to be being created, but no luck. It doesn't seem to respond to low memory warnings, for example. I also tried setting the view's backgroundColor to a UIColor created with colorWithPatternImage, but that doesn't work because I'm doing the scaling by changing the graphics context's CTM, not by setting the view's transform. Any ideas on how to keep memory usage from blowing up?

    Read the article

  • How to get around LazyInitializationException in scheduled jobs?

    - by Shreerang
    I am working on a J2EE server application which is deployed on Tomcat. I use Spring as the MVC framework and Hibernate as the ORM provider. My object model has a lot of lazy relationships (dependent objects are fetched on request). The high-level design is: service-level methods call a few DAO methods to perform database operations. The service method is called either from the Flex UI or as a scheduled job. When it is called from the Flex UI, the service method works fine, i.e. it fetches some objects using DAO methods and even lazy loading works. This is possible thanks to the OpenSessionInViewFilter configured on the UI servlet. But when the same service method is called as a scheduled job, it throws LazyInitializationException. I cannot configure OpenSessionInViewFilter because there is no servlet or UI request associated with that. I tried configuring a transaction around the scheduled job method so that the service method starts a transaction and all the DAO methods participate in that same transaction, hoping that the transaction will remain active and the Hibernate session will be available. But it does not work. Please suggest if anyone has ever been able to get such a configuration working. If needed, I can post the Hibernate configuration and log messages. Thanks a lot for help! Shreerang
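    One approach that mirrors what OpenSessionInViewFilter does, sketched for a scheduled job; it assumes Spring's Hibernate support classes (org.springframework.orm.hibernate3.SessionFactoryUtils, SessionHolder, and TransactionSynchronizationManager) are available, that sessionFactory is injected, and the service call name is a placeholder, so this is an outline rather than the project's actual wiring:

        // Bind a Hibernate Session to the scheduler thread for the duration of the job,
        // the same way OpenSessionInViewFilter does for a web request.
        public void runScheduledJob() {
            Session session = SessionFactoryUtils.getSession(sessionFactory, true);
            TransactionSynchronizationManager.bindResource(sessionFactory,
                    new SessionHolder(session));
            try {
                reportService.generateReport();   // lazy associations can now be initialized
            } finally {
                SessionHolder holder = (SessionHolder)
                        TransactionSynchronizationManager.unbindResource(sessionFactory);
                SessionFactoryUtils.releaseSession(holder.getSession(), sessionFactory);
            }
        }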

    Read the article
