Search Results

Search found 78653 results on 3147 pages for 'performance object name s'.


  • PHP Value Object auto create

    - by JonoB
    Assume that I have a value object class defined in PHP, where each variable in the class is declared. Something like:

        class UserVO {
            public $id;
            public $name;
        }

    I now have a function in another class, which expects an array ($data):

        function save_user($data) {
            //run code to save the user
        }

    How do I tell PHP that the $data parameter should be typed as a UserVO? I could then have code completion to do something like:

        $something = $data->id;   //typed as UserVO.id
        $else = $data->name;      //typed as UserVO.name

    I'm guessing something like the following, but this obviously doesn't work:

        $my_var = $data as new userVO();
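
    A hedged sketch of the usual answer: PHP supports class type hints on parameters, which gives both a runtime check and the IDE completion the question asks for (UserVO is the class from the question; the docblock variant is for spots where the signature can't change):

        /**
         * The type hint makes PHP reject anything that is not a UserVO
         * and lets the IDE complete $data->id / $data->name.
         */
        function save_user(UserVO $data) {
            $id = $data->id;
            $name = $data->name;
        }

        /** @var UserVO $my_var */
        $my_var = $data;   // docblock hint: completion only, no runtime check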

    Read the article

  • Parent-Child relation while using object data source

    - by Saba
    Hello guys, I am experimenting with a class generator I've written, which generates a class for each table in the database, with each table field as a property and so on. Before that, I used to add a typed dataset to the project and add some tables to it. It automatically detected the relationships between tables, and when I added a parent table as the data source of a datagrid, I could add another datagrid and use the foreign-key data member of its BindingSource to fill it; when someone moved the focus on the parent datagrid, the data in the child datagrid would change accordingly. Now that I have my classes, I add an object as the data source for my two datagrids, but obviously it doesn't detect a parent-child relation. It would really help if I could have that foreign-key relation in my object data sources. Is there any way to have that relation in an object data source?
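
    A sketch of the common workaround, assuming hypothetical generated classes Customer and Order: expose the children as a collection property on the parent, then chain two BindingSources so the child grid follows the parent grid's current row:

        public class Customer
        {
            public Customer() { Orders = new List<Order>(); }
            public int Id { get; set; }
            public string Name { get; set; }
            // the relation the typed dataset used to infer, made explicit:
            public List<Order> Orders { get; private set; }
        }

        // parent grid binds to the customers; the child grid binds *through* it
        var parentSource = new BindingSource { DataSource = customers };
        var childSource = new BindingSource { DataSource = parentSource, DataMember = "Orders" };
        parentGrid.DataSource = parentSource;
        childGrid.DataSource = childSource;

    Setting DataMember to the name of the child collection property is what replaces the dataset's auto-detected relation.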

    Read the article

  • js object problem

    - by haltman
    Hi everybody! I use an object to check that a group of radio buttons has the precise values set on a "rule" object. Here is an example:

        arr = {a:"1", b:"1", c:"1", c:"2"}; //that's my rule object

        var arr2 = {}; //here I build my second object with the checked values
        $("#form_cont input:checked").each(function() {
            arr2[$(this).attr("name")] = $(this).val();
        });

        //here I make the check
        for (k in arr2) {
            if (typeof arr[k] !== 'undefined' && arr[k] === arr2[k]) {
                $("#form_cont li[name$='"+k+"']").css('background-color', '');
            } else {
                $("#form_cont li[name$='"+k+"']").css('background-color', 'pink');
            }
        }

    The problem is that when I check the "c" key I only ever get the last value (2), never the intended behaviour where either 1 or 2 should be accepted. Thanks in advance. Ciao, h.
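
    The literal {c:"1", c:"2"} keeps only the last duplicate key, so "c" can never match two values. A sketch of one way around it, storing the allowed values in arrays (names reused from the question):

        var arr = { a: ["1"], b: ["1"], c: ["1", "2"] }; // multiple allowed values per key

        for (var k in arr2) {
            // $.inArray returns -1 when the checked value is not in the allowed list
            var ok = arr[k] !== undefined && $.inArray(arr2[k], arr[k]) !== -1;
            $("#form_cont li[name$='" + k + "']")
                .css('background-color', ok ? '' : 'pink');
        }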

    Read the article

  • Angular JS pagination after data loaded

    - by Federico Bucchi
    Do you have any example of AngularJS pagination where the elements are loaded from a file? I found this example: http://jsfiddle.net/SAWsA/11/ Now, instead of having this:

        $scope.items = [
            {"id":"1","name":"name 1","description":"description 1","field3":"field3 1","field4":"field4 1","field5 ":"field5 1"},
            {"id":"2","name":"name 2","description":"description 1","field3":"field3 2","field4":"field4 2","field5 ":"field5 2"},
            {"id":"3","name":"name 3","description":"description 1","field3":"field3 3","field4":"field4 3","field5 ":"field5 3"},
            {"id":"4","name":"name 4","description":"description 1","field3":"field3 4","field4":"field4 4","field5 ":"field5 4"},
            {"id":"5","name":"name 5","description":"description 1","field3":"field3 5","field4":"field4 5","field5 ":"field5 5"},
            {"id":"6","name":"name 6","description":"description 1","field3":"field3 6","field4":"field4 6","field5 ":"field5 6"},
            {"id":"7","name":"name 7","description":"description 1","field3":"field3 7","field4":"field4 7","field5 ":"field5 7"},
            {"id":"8","name":"name 8","description":"description 1","field3":"field3 8","field4":"field4 8","field5 ":"field5 8"},
            {"id":"9","name":"name 9","description":"description 1","field3":"field3 9","field4":"field4 9","field5 ":"field5 9"},
            {"id":"10","name":"name 10","description":"description 1","field3":"field3 10","field4":"field4 10","field5 ":"field5 10"},
            {"id":"11","name":"name 11","description":"description 1","field3":"field3 11","field4":"field4 11","field5 ":"field5 11"},
            {"id":"12","name":"name 12","description":"description 1","field3":"field3 12","field4":"field4 12","field5 ":"field5 12"},
            {"id":"13","name":"name 13","description":"description 1","field3":"field3 13","field4":"field4 13","field5 ":"field5 13"},
            {"id":"14","name":"name 14","description":"description 1","field3":"field3 14","field4":"field4 14","field5 ":"field5 14"},
            {"id":"15","name":"name 15","description":"description 1","field3":"field3 15","field4":"field4 15","field5 ":"field5 15"},
            {"id":"16","name":"name 16","description":"description 1","field3":"field3 16","field4":"field4 16","field5 ":"field5 16"},
            {"id":"17","name":"name 17","description":"description 1","field3":"field3 17","field4":"field4 17","field5 ":"field5 17"},
            {"id":"18","name":"name 18","description":"description 1","field3":"field3 18","field4":"field4 18","field5 ":"field5 18"},
            {"id":"19","name":"name 19","description":"description 1","field3":"field3 19","field4":"field4 19","field5 ":"field5 19"},
            {"id":"20","name":"name 20","description":"description 1","field3":"field3 20","field4":"field4 20","field5 ":"field5 20"}
        ];

    I have to use something generated by:

        $http.get('/json/mocks/apps/applications.json')
            .then(function (result) {
                $scope.items = result.data.applications;
            });

    How would you create the pagination while waiting for the data to load from $http.get?
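
    A sketch, under the assumption that, as in the linked fiddle, a $scope.search() (or similar grouping function) rebuilds the page list from $scope.items: assign the data and re-run that function inside the success callback, since pagination computed at controller start-up only ever saw an empty array:

        $http.get('/json/mocks/apps/applications.json')
            .then(function (result) {
                $scope.items = result.data.applications;
                $scope.search();   // recompute filtered/grouped items for the pager
            });

    Until the response arrives the pager simply renders zero pages, so no special loading state is strictly required.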

    Read the article

  • How can I find out how much memory an object (rather, the instance of an object) of a C++ class consumes?

    - by Shadow
    Hi, I am developing a Graph class based on the boost graph library. A Graph object contains a boost graph, that is, an adjacency_list, and a map. When monitoring the total memory usage of my program, it consumes quite a lot (checked with pmap). Now I would like to know: how much of the memory is consumed, exactly, by a filled object of this Graph class? By "filled" I mean when the adjacency_list is full of vertices and edges. I found out that using sizeof() doesn't get me far. Using valgrind is not an alternative either, as quite a lot of memory allocation is done beforehand, which makes valgrind impractical for this purpose. I'm also not interested in what other parts of the program cost in memory; I want to focus on one single object. Thank you.
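
    A rough sketch of a lower-bound estimate: sizeof() only measures the object shell, so multiply the element counts by the payload sizes yourself (real usage will be higher, because list nodes and map entries carry allocator bookkeeping this cannot see):

        #include <cstddef>
        #include <boost/graph/adjacency_list.hpp>

        // per_vertex / per_edge: sizeof() of your bundled property types
        template <class Graph>
        std::size_t rough_graph_bytes(const Graph& g,
                                      std::size_t per_vertex,
                                      std::size_t per_edge) {
            return sizeof(Graph)
                 + boost::num_vertices(g) * per_vertex
                 + boost::num_edges(g) * per_edge;
        }

    For an exact per-object figure, the usual trick is a counting allocator plugged into both the adjacency_list and the map, so every allocation attributable to this one object is tallied as it happens.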

    Read the article

  • Calling an object method from an object property definition

    - by Ian
    I am trying to call an object method from a property definition of the same object, to no avail:

        var objectName = {
            method : function() {
                return "boop";
            },
            property : this.method()
        };

    In this example I want to assign the return value of objectName.method ("boop") to objectName.property. I have tried objectName.method(), method(), window.objectName.method(), along with the bracket-notation variants of all of those, e.g. this["method"], with no luck.
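
    The reason it fails: inside an object literal, this is not bound to the literal being built. A sketch of two common workarounds:

        // 1) assign the property after the literal exists
        var objectName = {
            method: function () { return "boop"; }
        };
        objectName.property = objectName.method();

        // 2) make the property a method too, so `this` is bound at call time
        var objectName2 = {
            method: function () { return "boop"; },
            property: function () { return this.method(); }
        };
        objectName2.property();   // "boop"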

    Read the article

  • Getting information from an XML object in PHP

    - by errata
    Hi! I am using an XML parser to get some information from an API, blah blah... :) In one place in my script, I need to convert a string to an int, but I'm not sure how... Here is my object:

        object(parserXMLElement)#45 (4) {
          ["name:private"]=> string(7) "balance"
          ["data:private"]=> object(SimpleXMLElement)#46 (1) {
            [0]=> string(12) "11426.46"
          }
          ["children:private"]=> NULL
          ["rows:private"]=> NULL
        }

    I need to have this string "11426.46" stored in some variable as a number. When I echo $parsed->result->balance I get that string, but if I cast it to int, the result is: 1. Please help! Thanks a lot!
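
    A sketch of the usual fix: cast the element to string first, then to a numeric type. Note the value has a fractional part, so (int) truncates it:

        $raw = (string) $parsed->result->balance; // "11426.46"
        $balance = (float) $raw;                  // 11426.46
        $whole = (int) $raw;                      // 11426

    Casting the wrapper object itself straight to (int) yields 1, because PHP converts the object rather than its text content.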

    Read the article

  • JDBC batch insert performance

    - by wo_shi_ni_ba_ba
    I need to insert a couple hundred million records into a MySQL database. I'm batch-inserting them 1 million at a time. Please see my code below. It seems to be slow. Is there any way to optimize it?

        try {
            // Disable auto-commit
            connection.setAutoCommit(false);

            // Create a prepared statement
            String sql = "INSERT INTO mytable (xxx) VALUES(?)";
            PreparedStatement pstmt = connection.prepareStatement(sql);

            Object[] vals = set.toArray();
            for (int i=0; i
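
    The loop body was cut off above; a sketch of the standard batching pattern it presumably follows, plus the MySQL-side levers that usually matter most:

        final int BATCH_SIZE = 10000;
        for (int i = 0; i < vals.length; i++) {
            pstmt.setObject(1, vals[i]);
            pstmt.addBatch();                 // queue, don't execute yet
            if ((i + 1) % BATCH_SIZE == 0) {
                pstmt.executeBatch();         // flush a bounded chunk
            }
        }
        pstmt.executeBatch();                 // flush the remainder
        connection.commit();

    With MySQL's Connector/J, adding rewriteBatchedStatements=true to the JDBC URL collapses each batch into multi-row INSERT statements, which typically speeds bulk inserts up by an order of magnitude; for loads this size, LOAD DATA INFILE is faster still.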

    Read the article

  • PHP array of object(stdClass) fusion/intersect?

    - by Gremo
    $arr1 is an associative array of anonymous objects:

        array
          15898 =>
            object(stdClass)[8]
              public 'date' => int

    $arr2 is another associative array with two (or more, it's not fixed) properties:

        array
          15898 =>
            object(stdClass)[10]
              public 'fruits'
              public 'drinks'

    I can't find any function for intersection and content fusion when dealing with objects. Basically I'd like to obtain:

        array
          15898 =>
            object(stdClass)[8]
              public 'date' => int
              public 'fruits'
              public 'drinks'

    Question is: is this even possible?
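
    Not with a single built-in, but a short sketch gets there: intersect the arrays by key, then fuse each pair of objects property-by-property via casts:

        $merged = array();
        foreach (array_intersect_key($arr1, $arr2) as $key => $obj) {
            // get_object_vars() flattens each stdClass to an array,
            // array_merge() fuses them, (object) casts the result back
            $merged[$key] = (object) array_merge(
                get_object_vars($obj),
                get_object_vars($arr2[$key])
            );
        }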

    Read the article

  • Is Linq Faster, Slower or the same?

    - by Vaccano
    Is this:

        Box boxToFind = AllBoxes.Where(box => box.BoxNumber == boxToMatchTo.BagNumber);

    faster or slower than this:

        Box boxToFind;
        foreach (Box box in AllBoxes)
        {
            if (box.BoxNumber == boxToMatchTo.BoxNumber)
            {
                boxToFind = box;
            }
        }

    Both give me the result I am looking for (boxToFind). This is going to run on a mobile device, so I need to be performance-conscious.
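
    A note and a sketch: both versions walk the whole list (the foreach keeps scanning after a match, and Where(...) as written returns a sequence rather than a single Box). FirstOrDefault expresses the intent and stops at the first hit:

        // stops iterating as soon as a match is found
        Box boxToFind = AllBoxes.FirstOrDefault(
            box => box.BoxNumber == boxToMatchTo.BoxNumber);

    The LINQ form adds only a small constant delegate-invocation cost per element; on a device, exiting early usually matters far more than LINQ versus loop.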

    Read the article

  • High accuracy cpu timers

    - by John Robertson
    An expert in highly optimized code once told me that an important part of his strategy was the availability of extremely high-performance timers on the CPU. Does anyone know what those are, and how one can access them to test various code optimizations? While I am interested regardless, I also wanted to ask whether it is possible to access them from something higher-level than assembly (or with only a little assembly) via Visual Studio C++?
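
    The timer in question is almost certainly the CPU's time-stamp counter (TSC). A sketch of reading it from MSVC without inline assembly, alongside the OS-calibrated alternative:

        #include <intrin.h>     // __rdtsc intrinsic
        #include <windows.h>

        unsigned __int64 t0 = __rdtsc();
        // ... code under test ...
        unsigned __int64 cycles = __rdtsc() - t0;   // raw CPU cycles

        // QueryPerformanceCounter is safer across power states and cores:
        LARGE_INTEGER freq, a, b;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&a);
        // ... code under test ...
        QueryPerformanceCounter(&b);
        double seconds = (double)(b.QuadPart - a.QuadPart) / (double)freq.QuadPart;

    Raw TSC readings need care: frequency scaling and migration between cores can skew them, which is why the QueryPerformanceCounter wrapper is the usual recommendation.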

    Read the article

  • ruby hash to object - Parsing data from JSON object

    - by Leddo
    Hi all, I'm just starting to dabble in consuming a JSON web service, and I am having a little trouble working out the best way to get at the actual data elements. I am receiving a response which has been converted into a Ruby hash using the JSON.parse method. The hash looks like this:

        {"response"=>{"code"=>2002,
          "payload"=>{"topic"=>[{"name"=>"Topic Name",
            "url"=>"http://www.something.com/topic",
            "hero_image"=>{"image_id"=>"05rfbwV0Nggp8",
              "hero_image_id"=>"0d600BZ7MZgLJ",
              "hero_image_url"=>"http://img.something.com/imageserve/0d600BZ7MZgLJ/60x60.jpg"},
            "type"=>"PERSON", "search_score"=>10.0,
            "topic_id"=>"0eG10W4e3Aapo"}]},
          "message"=>"Success"}}

    What I would like to know is the easiest way to get to the "topic" data, so I can do something like:

        topic.name = json_resp.name
        topic.img = json_resp.hero_image_url

    Many thanks for any help you can offer. Regards, Chris
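
    A sketch: JSON.parse gives nested hashes and arrays, so index by key ("topic" holds an array, hence .first here); OpenStruct is a cheap way to get dot syntax afterwards (parsed stands for the result of JSON.parse):

        topic_hash = parsed["response"]["payload"]["topic"].first

        require 'ostruct'
        topic = OpenStruct.new(topic_hash)
        topic.name                          # => "Topic Name"
        topic.hero_image["hero_image_url"]  # nested hashes stay hashes

    Only the top-level keys get wrapped, so deeper levels still need hash access unless wrapped recursively.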

    Read the article

  • How to add an object into an object

    - by FlyingCat
    I am trying to add an object into another object. My previous post helped me with the array issue, but I also need to add an object into an object. I have something like:

        var temp = {};
        for (var i = 0; i < test.length; i++) {
            console.log(test[i]);
            console.log(product[i]);
            temp.test[i] = product[i];
        }

    Both console.log calls show values. However, I am getting "Uncaught TypeError: Cannot set property '0' of undefined" for temp.test[i] = product[i]. Can someone help me out on this? Thanks a lot.
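
    The error says temp.test is undefined: the empty object {} has no test property to index into. A sketch of the fix, creating the inner container first:

        var temp = { test: [] };   // or test: {} if the keys aren't 0..n
        for (var i = 0; i < test.length; i++) {
            temp.test[i] = product[i];
        }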

    Read the article

  • object consisting of jQuery element

    - by Adam Kiss
    Hello.

    Current code: I've built a function to do something over a collection of jQuery elements:

        var collection = $([]); //empty collection

    I add them with:

        collection = collection.add(e);

    and remove with:

        collection = collection.not(e);

    It's a pretty straightforward solution and works nicely.

    Problem: now I would like to attach an object of various settings to any jQuery element, i.e.:

        function addObject(e){
            var o = {
                alpha: .6,                       //float
                base: {r: 255, g: 255, b: 255}   //color object
            };
            e.data('settings', o);
        }

    But when I pass a jQuery object/element to the function (i.e. as e), calling e.data doesn't work, although it would be the simplest and really nicest solution.

    Question: if I have a "collection" of jQuery elements, what is the simplest way of storing some data for each element of the set?
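
    A sketch of the likely fix: .data() only exists on jQuery objects, so if e arrives as a raw DOM element it must be wrapped; $(e) is harmless when e is already wrapped:

        function addObject(e) {
            var o = {
                alpha: 0.6,
                base: { r: 255, g: 255, b: 255 }
            };
            $(e).data('settings', o);   // works for a DOM node or a jQuery object
        }

        // reading it back, per element of the collection:
        collection.each(function () {
            var settings = $(this).data('settings');
        });

    jQuery's .data() keeps the values in an internal per-element cache, so this is exactly "data for each element of the set".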

    Read the article

  • SQL SERVER – Get Schema Name from Object ID using OBJECT_SCHEMA_NAME

    - by pinaldave
    Sometimes a simple solution has an even simpler alternative, but we often do not practice it because we do not see the value in it. Well, today's blog post is also about something which I have not seen practiced much in code. We are so comfortable with the usual approach that we do not feel like switching how we query the data. I was going over the forums and noticed that in one place a user had used the following code to get the schema name from an object ID.

        USE AdventureWorks2012
        GO
        SELECT s.name AS SchemaName,
               t.name AS TableName,
               s.schema_id,
               t.OBJECT_ID
        FROM sys.tables t
        INNER JOIN sys.schemas s ON s.schema_id = t.schema_id
        WHERE t.name = OBJECT_NAME(46623209)
        GO

    Before I continue, let me say I do not see anything wrong with this script. It is just fine and one of the ways to get the schema name from an object ID. However, I have been using the function OBJECT_SCHEMA_NAME to get the schema name. If I had to write the same code from the beginning, I would have written it as follows.

        SELECT OBJECT_SCHEMA_NAME(46623209) AS SchemaName,
               t.name AS TableName,
               t.schema_id,
               t.OBJECT_ID
        FROM sys.tables t
        WHERE t.name = OBJECT_NAME(46623209)
        GO

    Both of the above scripts give you exactly the same result. If you remove the WHERE condition, they will give you information about all the tables in the database. Now the question is which one is better: honestly, it is not about one being better than the other. Use the one you prefer. I prefer the second one, as it requires less typing. Let me ask the same question of you: which method do you use to get the schema name, and why?

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL System Table, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Can I use standard tools to get the full name of a process, when its name has embedded spaces?

    - by fred.bear
    I understand that it may be a rare situation for an executable to have spaces in its name, but it could happen. An example may be the best explanation. Using standard tools, I want to determine the location (on the file system) of the executable which owns(?) the current window:

        get the current window ID     (xdotool getactivewindow)
        use the ID to get the PID     (wmctrl -p -l | sed ... ID ...)
        use the PID to get the name   (ps -A ...)

    ...and here is where I run into problems! With ps, when listing only the executable's name (-o ucmd), it truncates the name to 15 characters, which rules out this option for any name that is longer. Widening the column (-o ucmd:99) makes no difference. If pgrep is anything to go by, its matching is limited to 15 characters because of stat (see: info pgrep). Listings in variants of "full" mode (e.g. -A w w) are not useful when the name has spaces in it, because the name is separated from its args by another space! Also, in "full" mode, if the process was started via a link, the name of the link is shown rather than the executable's name. Is there some way to do this (using standard tools), or are spaces a show-stopper here?
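
    A sketch that sidesteps ps entirely: once the PID is known, /proc gives the true binary path regardless of spaces, truncation, or symlinked launchers (the window-ID/PID plumbing below follows the question's own steps and assumes a local process):

        WIN=$(xdotool getactivewindow)
        # wmctrl prints window IDs in hex; normalise before matching
        HEX=$(printf '0x%08x' "$WIN")
        PID=$(wmctrl -l -p | awk -v id="$HEX" '$1 == id { print $3 }')
        readlink -f "/proc/$PID/exe"   # full path, spaces and all

    /proc/PID/exe is a symlink maintained by the kernel to the actual executable, so it is immune to the 15-character comm limit that bites ps and pgrep.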

    Read the article

  • SQL SERVER – Faster SQL Server Databases and Applications – Power and Control with SafePeak Caching Options

    - by Pinal Dave
    Update: This blog post is based on SafePeak, which is available for free download.

    Today, I'd like to examine more closely one of my preferred technologies for accelerating SQL Server databases, SafePeak. SafePeak's software provides a variety of advanced data caching options, techniques and tools to accelerate the performance and scalability of SQL Server databases and applications. I'd like to look more closely at some of these options, as some of these capabilities could help you address lagging database performance on your systems. To better understand the available options, it is best to start with the difference between the usual "Basic Caching" and SafePeak's "Dynamic Caching".

    Basic Caching

    Basic Caching (the stale, static cache) is the ability to put the results of a query into a cache for a certain period of time. It is based on TTL (time-to-live) and is designed to stay in cache no matter what happens to the data. For example, although the actual data can be modified by DML commands (update/insert/delete), the cache will still hold the same obsolete query data; Basic Caching is really a static, stale cache. As you can tell, this approach has its limitations.

    Dynamic Caching

    Dynamic Caching (the non-stale cache) is the ability to put the results of a query into a cache while keeping the cache transaction-aware, watching for possible data modifications. The modifications can come as a result of DML commands (update/insert/delete), indirect modifications due to triggers on other tables, executions of stored procedures with internal DML commands, or complex cases of stored procedures with multiple levels of internal stored procedure logic. When data modification commands arrive, the caching system identifies the related cache items and evicts them from cache immediately. In the dynamic caching option the TTL setting still exists, although its importance is reduced, since the main trigger for cache invalidation (cache eviction) becomes the actual data update commands. Now that we have a basic understanding of the differences between "basic" and "dynamic" caching, let's dive in deeper.

    SafePeak: A comprehensive and versatile caching platform

    SafePeak comes with a wide range of caching options. Some of SafePeak's caching options are automated, while others require manual configuration. Together they provide a complete solution for IT and data managers to reach excellent performance acceleration and application scalability for a wide range of business cases and applications.

    - Automated caching of SQL queries: fully or semi-automated caching of all "read" SQL queries, containing any type of data, including BLOBs, XML and text as well as all other standard data types. SafePeak automatically analyzes the incoming queries and categorizes them into SQL patterns, identifying directly and indirectly accessed tables, views, functions and stored procedures.

    - Automated caching of stored procedures: fully or semi-automated caching of all "read" stored procedures, including procedures with complex sub-procedure logic as well as procedures with complex dynamic SQL code. All procedures are analyzed in advance by SafePeak's metadata-learning process and their SQL schemas are parsed, resulting in a full understanding of the underlying code and object dependencies (tables, views, functions, sub-procedures). This enables automated or semi-automated (manually review and activate with a mouse click) cache activation, with full understanding of the transaction logic for real-time cache invalidation.

    - Transaction-aware cache: automated cache awareness for SQL transactions (SQL and in-procs).

    - Dynamic SQL caching: procedures with dynamic SQL are pre-parsed, enabling easy cache configuration, eliminating SQL Server parsing load and delivering high response times even in the most complicated use cases.

    - Fully automated caching: SQL patterns (including SQL queries and stored procedures) that SafePeak categorizes as "read and deterministic" are automatically activated for caching.

    - Semi-automated caching: SQL patterns categorized as "read and non-deterministic" are patterns of SQL queries and stored procedures that contain references to non-deterministic functions, like getdate(). Such SQL patterns are reviewed by the SafePeak administrator, and usually most of them are activated manually for caching (point-and-click activation).

    - Fully dynamic caching: automated detection of all dependent tables in each SQL pattern, with automated real-time eviction of the relevant cache items in the event of "write" commands (a DML or a stored procedure) to one of the relevant tables. This is the default setting.

    - Semi-dynamic caching: a manual cache configuration option for reducing the sensitivity of specific SQL patterns to "write" commands to certain tables/views. An optimization technique relevant when the query data is either known to be static (like archived order details), or when the application's sensitivity to fresh data is not critical and the data can be stale for a short period of time (gaining better performance and reduced load).

    - Scheduled cache eviction: a manual cache configuration option for scheduling SQL pattern cache eviction at certain times during the day. A very useful optimization technique when, for example, certain SQL patterns can be cached but are time-sensitive. Example: "select customers whose birthday is today", an SQL with the getdate() function, which can and should be cached, but whose data stays relevant only until 00:00 (midnight).

    - Parsing exceptions management: stored procedures that were not fully parsed by SafePeak (due to overly complex dynamic SQL or unfamiliar syntax) are flagged as "dynamic objects" with the highest transaction-safety settings (such as full global cache eviction, DDL check = lock cache and check for schema changes, and more). The SafePeak solution points the user to the dynamic objects that matter for cache effectiveness and provides an easy configuration interface, allowing you to improve cache hits and reduce global cache evictions. Usually this is the first configuration step in a deployment.

    - Overriding settings of stored procedures: override the settings of stored procedures (or other object types) for cache optimization. For example, if a stored procedure SP1 has an "insert" into table T1, it will not be allowed to be cached. However, it is possible that T1 is just a "logging or instrumentation" table left by developers. By overriding the settings, a user can allow caching of the problematic stored procedure.

    - Advanced cache warm-up: create an XML-based list of queries and stored procedures (with lists of parameters) for periodic automated pre-fetching and caching. An advanced tool for handling rarer but very performance-sensitive queries, pre-fetching them into cache to give users high-performance data access.

    - Configuration driven by deep SQL analytics: all SQL queries are continuously logged and analyzed, providing users with deep SQL analytics and performance monitoring. Reduce troubleshooting from days to minutes with a heat map of database objects and SQL patterns. The performance-driven configuration helps you focus on the settings that bring the highest performance gains, and SafePeak SQL Analytics allows continuous performance monitoring and analysis, with easy identification of bottlenecks in both real-time and historical data.

    - Cloud-ready: available for instant deployment on Amazon Web Services (AWS).

    As you can see, there are many ways to configure SafePeak's SQL Server database and application acceleration caching technology to best fit a lot of situations. If you're not familiar with their technology, they offer free-trial software you can download, which comes with a free "help session" to get you started. You can access the free trial here. Also, SafePeak is available for use on the Amazon cloud.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Loading PNGs into OpenGL performance issues - Java & JOGL much slower than C# & Tao.OpenGL

    - by Edward Cresswell
    I am noticing a large performance difference between Java & JOGL and C# & Tao.OpenGL, both when loading PNGs from storage into memory and when loading that BufferedImage (Java) or Bitmap (C#; both are PNGs on the hard drive) "into" OpenGL. The difference is large enough that I assumed I was doing something wrong; however, after quite a lot of searching and trying different loading techniques, I've been unable to reduce it.

    With Java I get an image loaded in 248 ms and loaded into OpenGL in 728 ms. The same on C# takes 54 ms to load the image and 34 ms to load/create the texture. The image in question is a PNG containing transparency, sized 7200x255, used for a 2D animated sprite. I realise the size is quite ridiculous and am considering cutting up the sprite, but the large difference is still there (and confusing).

    On the Java side the code looks like this:

        BufferedImage image = ImageIO.read(new File(fileName));
        texture = TextureIO.newTexture(image, false);
        texture.setTexParameteri(GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);
        texture.setTexParameteri(GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);

    The C# code uses:

        Bitmap t = new Bitmap(fileName);
        t.RotateFlip(RotateFlipType.RotateNoneFlipY);
        Rectangle r = new Rectangle(0, 0, t.Width, t.Height);
        BitmapData bd = t.LockBits(r, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
        Gl.glBindTexture(Gl.GL_TEXTURE_2D, tID);
        Gl.glTexImage2D(Gl.GL_TEXTURE_2D, 0, Gl.GL_RGBA, t.Width, t.Height, 0, Gl.GL_BGRA, Gl.GL_UNSIGNED_BYTE, bd.Scan0);
        Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MIN_FILTER, Gl.GL_LINEAR);
        Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MAG_FILTER, Gl.GL_LINEAR);
        t.UnlockBits(bd);
        t.Dispose();

    After quite a lot of testing I can only conclude that Java/JOGL is just slower here: PNG reading might not be as quick, or I'm still doing something wrong. Thanks.

    Edit 2: I have found that creating a new BufferedImage with format TYPE_INT_ARGB_PRE cuts the OpenGL texture load time almost in half, and that includes having to create the new BufferedImage, get the Graphics2D from it, and render the previously loaded image to it.

    Edit 3: Benchmark results for 5 variations. I wrote a small benchmarking tool; the following results come from loading a set of 33 PNGs, most very wide, 5 times each.

        testStart: ImageIO.read(file) -> TextureIO.newTexture(image)
        result: avg = 10250ms, total = 51251

        testStart: ImageIO.read(bis) -> TextureIO.newTexture(image)
        result: avg = 10029ms, total = 50147

        testStart: ImageIO.read(file) -> TextureIO.newTexture(argbImage)
        result: avg = 5343ms, total = 26717

        testStart: ImageIO.read(bis) -> TextureIO.newTexture(argbImage)
        result: avg = 5534ms, total = 27673

        testStart: TextureIO.newTexture(file)
        result: avg = 10395ms, total = 51979

    ImageIO.read(bis) refers to the technique described in James Branigan's answer below. argbImage refers to the technique described in my previous edit:

        img = ImageIO.read(file);
        argbImg = new BufferedImage(img.getWidth(), img.getHeight(), TYPE_INT_ARGB_PRE);
        g = argbImg.createGraphics();
        g.drawImage(img, 0, 0, null);
        texture = TextureIO.newTexture(argbImg, false);

    Any more methods of loading (either images from file, or images to OpenGL) would be appreciated; I will update these benchmarks.

    Read the article

  • Performance issues with repeatable loops as control part

    - by djerry
    Hey guys, in my application I need to show made calls to the user. The user can arrange some filters according to what they want to see. The problem is that I find it quite hard to filter the calls without losing performance. This is what I am using now:

        private void ProcessFilterChoice()
        {
            _filteredCalls = ServiceConnector.ServiceConnector.SingletonServiceConnector.Proxy.GetAllCalls().ToList();
            if (cboOutgoingIncoming.SelectedIndex > -1)
                GetFilterPartOutgoingIncoming();
            if (cboInternExtern.SelectedIndex > -1)
                GetFilterPartInternExtern();
            if (cboDateFilter.SelectedIndex > -1)
                GetFilteredCallsByDate();
            wbPdf.Source = null;
            btnPrint.Content = "Pdf preview";
        }

        private void GetFilterPartOutgoingIncoming()
        {
            if (cboOutgoingIncoming.SelectedItem.ToString().Equals("Outgoing"))
                for (int i = _filteredCalls.Count - 1; i > -1; i--)
                {
                    if (_filteredCalls[i].Caller.E164.Length > 4 || _filteredCalls[i].Caller.E164.Equals("0"))
                        _filteredCalls.RemoveAt(i);
                }
            else if (cboOutgoingIncoming.SelectedItem.ToString().Equals("Incoming"))
                for (int i = _filteredCalls.Count - 1; i > -1; i--)
                {
                    if (_filteredCalls[i].Called.E164.Length > 4 || _filteredCalls[i].Called.E164.Equals("0"))
                        _filteredCalls.RemoveAt(i);
                }
        }

        private void GetFilterPartInternExtern()
        {
            if (cboInternExtern.SelectedItem.ToString().Equals("Intern"))
                for (int i = _filteredCalls.Count - 1; i > -1; i--)
                {
                    if (_filteredCalls[i].Called.E164.Length > 4 || _filteredCalls[i].Caller.E164.Length > 4
                        || _filteredCalls[i].Caller.E164.Equals("0"))
                        _filteredCalls.RemoveAt(i);
                }
            else if (cboInternExtern.SelectedItem.ToString().Equals("Extern"))
                for (int i = _filteredCalls.Count - 1; i > -1; i--)
                {
                    if ((_filteredCalls[i].Called.E164.Length < 5 && _filteredCalls[i].Caller.E164.Length < 5)
                        || _filteredCalls[i].Called.E164.Equals("0"))
                        _filteredCalls.RemoveAt(i);
                }
        }

        private void GetFilteredCallsByDate()
        {
            DateTime period = DateTime.Now;
            switch (cboDateFilter.SelectedItem.ToString())
            {
                case "Today": period = DateTime.Today; break;
                case "Last week": period = DateTime.Today.Subtract(new TimeSpan(7, 0, 0, 0)); break;
                case "Last month": period = DateTime.Today.AddMonths(-1); break;
                case "Last year": period = DateTime.Today.AddYears(-1); break;
                default: return;
            }
            for (int i = _filteredCalls.Count - 1; i > -1; i--)
            {
                if (_filteredCalls[i].Start < period)
                    _filteredCalls.RemoveAt(i);
            }
        }

    _filteredCalls is a list of Call objects. Call is a class that looks like this:

        [DataContract]
        public class Call
        {
            private User caller, called;
            private DateTime start, end;
            private string conferenceId;
            private int id;
            private bool isNew = false;

            [DataMember]
            public bool IsNew { get { return isNew; } set { isNew = value; } }
            [DataMember]
            public int Id { get { return id; } set { id = value; } }
            [DataMember]
            public string ConferenceId { get { return conferenceId; } set { conferenceId = value; } }
            [DataMember]
            public DateTime End { get { return end; } set { end = value; } }
            [DataMember]
            public DateTime Start { get { return start; } set { start = value; } }
            [DataMember]
            public User Called { get { return called; } set { called = value; } }
            [DataMember]
            public User Caller { get { return caller; } set { caller = value; } }
        }

    Can anyone direct me to a better solution or make some suggestions?
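
    A sketch of a single-pass alternative: each RemoveAt shifts the tail of the list, making the filters quadratic in the worst case, while composing one predicate chain and materialising once keeps it linear (outgoing, intern and dateFilterActive stand in for the combo-box state read up front; the keep-conditions are the inverse of the removal conditions above):

        IEnumerable<Call> calls = ServiceConnector.ServiceConnector
            .SingletonServiceConnector.Proxy.GetAllCalls();

        if (outgoing)
            calls = calls.Where(c => c.Caller.E164.Length <= 4 && !c.Caller.E164.Equals("0"));
        if (intern)
            calls = calls.Where(c => c.Called.E164.Length <= 4 && c.Caller.E164.Length <= 4
                                     && !c.Caller.E164.Equals("0"));
        if (dateFilterActive)
            calls = calls.Where(c => c.Start >= period);

        _filteredCalls = calls.ToList();   // one pass over the data, no RemoveAt shifting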

    Read the article

  • Sharing large objects between ruby processes without a performance hit

    - by Gdeglin
    I have a Ruby hash that reaches approximately 10 megabytes if written to a file using Marshal.dump. After gzip compression it is approximately 500 kilobytes. Iterating through and altering this hash is very fast in Ruby (fractions of a millisecond). Even copying it is extremely fast. The problem is that I need to share the data in this hash between Ruby on Rails processes. In order to do this using the Rails cache (file_store or memcached) I need to Marshal.dump the hash first; however, this incurs a 1000 millisecond delay when serializing it and a 400 millisecond delay when deserializing it.

    Ideally I would want to be able to save and load this hash from each process in under 100 milliseconds. One idea is to spawn a new Ruby process to hold this hash and provide an API to the other processes to modify or process the data within it, but I want to avoid doing that unless I'm certain there is no other way to share this object quickly. Is there a way I can more directly share this hash between processes without needing to serialize or deserialize it?

    Here is the code I'm using to generate a hash similar to the one I'm working with:

        @a = []
        0.upto(500) do |r|
          @a[r] = []
          0.upto(10_000) do |c|
            if rand(10) == 0
              @a[r][c] = 1 # 10% chance of being 1
            else
              @a[r][c] = 0
            end
          end
        end

        @c = Marshal.dump(@a) # 1000 milliseconds
        Marshal.load(@c)      # 400 milliseconds

    Read the article

  • SQL query performance optimization (TimesTen)

    - by Sergey Mikhanov
    Hi community, I need some help with TimesTen DB query optimization. I took some measurements with a Java profiler and found the code section that takes most of the time (it executes an SQL query). What is strange is that this query becomes expensive only for some specific input data. Here's the example. We have two tables that we are querying: one represents the objects we want to fetch (T_PROFILEGROUP), the other represents the many-to-many link from some other table (T_PROFILECONTEXT_PROFILEGROUPS). We are not querying the linked table. These are the queries that I executed with the DB profiler running (they are the same except for the ID):

        Command> select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G
                 where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
        < 1169655247309537280 >
        < 1169655249792565248 >
        < 1464837997699399681 >
        3 rows found.

        Command> select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G
                 where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;
        < 1169655247309537280 >
        1 row found.

    This is what I have in the profiler (the query text is the same as above, abbreviated here to "select ..."):

        12:14:31.147  1 SQL 2L 6C 10825P Preparing: select ... CG.M_ID_OID = 1464837998949302272
        12:14:31.147  2 SQL 4L 6C 10825P sbSqlCmdCompile()(E): (Found already compiled version: refCount:01, bucket:47) cmdType:100, cmdNum:1146695.
        12:14:31.147  3 SQL 4L 6C 10825P Opening:  select ... CG.M_ID_OID = 1464837998949302272;
        12:14:31.147  4 SQL 4L 6C 10825P Fetching: select ... CG.M_ID_OID = 1464837998949302272;
        12:14:31.148  5 SQL 4L 6C 10825P Fetching: select ... CG.M_ID_OID = 1464837998949302272;
        12:14:31.148  6 SQL 4L 6C 10825P Fetching: select ... CG.M_ID_OID = 1464837998949302272;
        12:14:31.228  7 SQL 4L 6C 10825P Fetching: select ... CG.M_ID_OID = 1464837998949302272;
        12:14:31.228  8 SQL 4L 6C 10825P Closing:  select ... CG.M_ID_OID = 1464837998949302272;
        12:14:35.243  9 SQL 2L 6C 10825P Preparing: select ... CG.M_ID_OID = 1466585677823868928
        12:14:35.243 10 SQL 4L 6C 10825P sbSqlCmdCompile()(E): (Found already compiled version: refCount:01, bucket:44) cmdType:100, cmdNum:1146697.
        12:14:35.243 11 SQL 4L 6C 10825P Opening:  select ... CG.M_ID_OID = 1466585677823868928;
        12:14:35.243 12 SQL 4L 6C 10825P Fetching: select ... CG.M_ID_OID = 1466585677823868928;
        12:14:35.243 13 SQL 4L 6C 10825P Fetching: select ... CG.M_ID_OID = 1466585677823868928;
        12:14:35.243 14 SQL 4L 6C 10825P Closing:  select ... CG.M_ID_OID = 1466585677823868928;

    It's clear that the first query took almost 100 ms (between the fetches at lines 6 and 7), while the second executed instantly. It's not about query precompilation; the first one is precompiled too, as the same queries happened earlier. We have DB indexes for all columns used here: T_PROFILEGROUP.M_ID, T_PROFILECONTEXT_PROFILEGROUPS.M_ID_OID and T_PROFILECONTEXT_PROFILEGROUPS.M_ID_EID. My questions are: Why does querying the same set of tables yield such different performance for different parameters? Which indexes are involved here? Is there any way to improve this simple query and/or the DB to make it faster?

    UPDATE: to give a feeling of the sizes involved:

        Command> select count(*) from T_PROFILEGROUP;
        < 183840 >
        1 row found.
        Command> select count(*) from T_PROFILECONTEXT_PROFILEGROUPS;
        < 2279104 >
        1 row found.

    Read the article

  • Poor DB4O performance - What am I doing wrong?

    - by Jon
    I read Rob Conery's post about object databases and thought I'd give it a go. I have a class, let's say Book, with 15 properties: mostly strings, some ints, one DateTime and two decimals. I used Rob's code to open the database file etc., testing on a WinForms project, not a web project. So I did the following:

        for (int i = 0; i < 1000000; i++)
        {
            var Book = new Book();
            //Assign variables
            s.Save(Book);
        }
        s.CommitChanges();

    This took at least a couple of minutes. I then tried to retrieve all the data to test speed:

        var spp = s.All<Passports>();

    This one line was quick; however, then calling sppp.Count(), as well as a separate test doing a foreach loop and incrementing a counter, took a couple of minutes. This seems quite slow to me. Am I doing something wrong, or am I expecting too much?

    Read the article

  • Objective C iPhone performance issue

    - by Asad Khan
    OK guys, I am developing an iPhone app, and I have a Model class which follows the singleton design pattern. It has an NSArray which is initialized with around 1000 NSStrings in the init method. Now I need to use this data in some view controller, so I import Model.h, create an array of NSString objects in the view controller and set the data to it. But now the problem is that I have 2000 NSStrings currently allocated, which I believe is not a good thing on the iPhone due to memory considerations. Releasing the model object won't help, because I've overridden the release method to do nothing, following the pattern, and I cannot change the design now because a lot of code works on the assumption of the model being a singleton. In the future the initial NSStrings may grow to 2000 or even more, and then I'll have 4000 NSStrings allocated at one time. I am a little confused on how to go about this; any suggestions?

    Read the article

  • Performance experiences for running Windows 7 on a Thin-Client?

    - by Peter Bernier
    Has anyone else tried installing Windows 7 on thin-client hardware? I'd be very interested to hear about other people's experiences and what sort of hardware tweaks they had to do to get it to work. (Yes, I realize this is completely unsupported; half the fun of playing with machines and beta/RC versions is trying out unsupported scenarios.)

    I managed to get Windows 7 installed on a modified Wyse 9450 thin client, and while the performance isn't great, it is usable, particularly as an RDP workstation. Before installing 7, I added another 256 MB of RAM (512 MB total), a 60 GB laptop hard drive and a PCI video card to the 9450 (the latter in order to increase the supported screen resolution). I basically did this to see whether it was possible to get 7 installed on such minimal hardware, and what the performance would be. For a 550 MHz processor, I was reasonably impressed. I've been using the machine for RDP for the last couple of days and it actually seems slightly snappier than the default Windows XP Embedded install (although this is more likely the result of the extra hardware). I'll be running some more tests later on, as I'm particularly curious to see whether the streaming video performance will improve.

    I'd love to hear about anyone's experiences getting 7 to work on extremely low-powered hardware, particularly any tweaks you've discovered that increase performance.

    Read the article
