Search Results

Search found 7672 results on 307 pages for 'compiler optimization'.

  • JavaScript + Firebug: "cannot access optimized closure" - what does it mean?

    - by interstar
    I just got the following error in a piece of JavaScript (in Firefox 3.5, with Firebug running):

        cannot access optimized closure

    I know, superficially, what caused the error: I had a line options.length() instead of options.length. Fixing this bug made the message go away. But I'm curious: what does this mean? What is an optimized closure? Is optimizing a closure something that the JavaScript interpreter does automatically? What does it do?
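
    For illustration, a minimal reproduction of the underlying bug class (a property accessed as if it were a method); the name options is taken from the question, the rest is a made-up example:

        // length is a property of arrays, not a function.
        var options = ["a", "b", "c"];

        console.log(options.length);    // 3 - correct property access
        try {
            options.length();           // TypeError: options.length is not a function
        } catch (e) {
            console.log(e.message);
        }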

  • Overlapping template partial specialization when wanting an "override" case: how to avoid the error?

    - by user173342
    I'm dealing with a pretty simple template struct that has an enum value set by whether its 2 template parameters are the same type or not:

        template<typename T, typename U> struct is_same { enum { value = 0 }; };
        template<typename T>             struct is_same<T, T> { enum { value = 1 }; };

    This is part of a library (Eigen), so I can't alter this design without breaking it. When value == 0, a static assert aborts compilation. Now, I have a special numerical templated class SpecialCase that can do ops with different specializations of itself, so I set up an override like this:

        template<typename T> struct SpecialCase { ... };

        template<typename LT, typename RT>
        struct is_same<SpecialCase<LT>, SpecialCase<RT>> { enum { value = 1 }; };

    However, this throws the error:

        more than one partial specialization matches the template argument list

    Now, I understand why: it's the case where LT == RT, which steps on the toes of is_same<T, T>. What I don't know is how to keep my SpecialCase override and get rid of the error. Is there a trick to get around this?

    Edit: to clarify, I need all cases where LT != RT to also be considered the same (have value 1), not just LT == RT.
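
    A sketch of one common fix, assuming C++11 (untested against Eigen itself): since the ambiguity only arises for the LT == RT case, add a third partial specialization that is strictly more specialized than both conflicting candidates, giving that case a unique best match. Because it only adds a specialization, it should coexist with the library's own definition of is_same:

        template<typename T, typename U> struct is_same { enum { value = 0 }; };
        template<typename T>             struct is_same<T, T> { enum { value = 1 }; };

        template<typename T> struct SpecialCase {};

        // Different SpecialCase payloads still count as "same".
        template<typename LT, typename RT>
        struct is_same<SpecialCase<LT>, SpecialCase<RT>> { enum { value = 1 }; };

        // Tie-breaker: more specialized than both is_same<T, T> and the
        // <LT, RT> specialization above, so SpecialCase<X> vs SpecialCase<X>
        // is no longer ambiguous.
        template<typename T>
        struct is_same<SpecialCase<T>, SpecialCase<T>> { enum { value = 1 }; };

        static_assert(is_same<SpecialCase<int>, SpecialCase<float>>::value == 1, "LT != RT");
        static_assert(is_same<SpecialCase<int>, SpecialCase<int>>::value == 1, "LT == RT");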

  • Database indexes - what should they be?

    - by WebweaverD
    Most of my database tables have a clear unique index through which lookups are done 90% of the time, but I am a bit unsure on this one. I have a table which keeps track of user rating totals for items in my database. I now want to add another table to track individual ratings, with an IP address column to make sure no one can rate something twice. Since I can see this becoming a big, high-use table, it is important to optimize it correctly. (MySQL table.)

    This table will have the following fields:

        rating_id    (always - unique)
        item_id      (always - not unique)
        user_id      (optional - not unique)
        ip_address   (always - not unique)
        rating_value (always - not unique)
        has_review   (bool)

    Now I envision 90% of the queries going something like this:

        When a user rates something:  select where item_id = x and ip_address = y, (if rows = 0) insert rating
        When in user account pages:   select where ip_address = x or username = y

    Now, none of the fields searched on are unique. Can I still use them as indexes (for example item_id and ip_address)? Can I have two indexes, and will this still improve performance over a non-indexed table?
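
    For what it's worth, a sketch of how those two access paths might be indexed in MySQL - the table name and the UNIQUE constraint on (item_id, ip_address) are assumptions (the latter would also enforce the no-double-rating rule at the database level):

        -- Non-unique columns can still be indexed. A composite index serves
        -- the "item_id = x AND ip_address = y" lookup; a second index serves
        -- the account-page lookup by ip_address.
        CREATE TABLE ratings (
            rating_id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            item_id      INT UNSIGNED NOT NULL,
            user_id      INT UNSIGNED NULL,
            ip_address   VARBINARY(16) NOT NULL,  -- fits IPv4 or IPv6
            rating_value TINYINT NOT NULL,
            has_review   BOOL NOT NULL DEFAULT 0,
            UNIQUE KEY uq_item_ip (item_id, ip_address),
            KEY ix_ip (ip_address)
        );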

  • Checking for a variable in the executable

    - by rboorgapally
    Is there a way to know whether a variable is defined by looking at the executable? Let's say I declare int i; in the main function. By compiling and linking, I get an executable my_program.exe. Now, by looking inside my_program.exe, can I tell whether it has an integer variable i?
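
    As context for any answer: on most toolchains a local variable only survives into the binary as debug information, not as a symbol. A sketch of how one might check on Linux with gcc (the commands are standard, but output details vary by toolchain; on Windows the equivalent information would live in PDB files):

        /* my_program.c - minimal program with a local variable */
        int main(void) {
            int i = 42;  /* locals live on the stack or in registers; they do
                            not appear in the symbol table, only in debug info */
            return i;
        }

        /* Build and inspect (shell commands shown as comments):
         *   gcc -g -o my_program my_program.c
         *   nm my_program          # no "i" listed: locals are not symbols
         *   objdump -g my_program  # the DWARF debug info does record local "i"
         * Compile without -g (or strip the binary) and no trace of "i" remains.
         */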

  • Question about Cost in Oracle Explain Plan

    - by Will
    When Oracle is estimating the 'Cost' for certain queries, does it actually look at the amount of data (rows) in a table? For example, if I'm doing a full table scan of employees for name = 'Bob', does it estimate the cost from the number of existing rows, or is it always a set cost?
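
    For context, Oracle's cost-based optimizer estimates from stored statistics (row counts, block counts, histograms) gathered by DBMS_STATS, not by counting rows at parse time. A sketch of how one might watch the estimate move with statistics, using the question's example table (standard syntax; output formatting varies by version):

        EXPLAIN PLAN FOR
          SELECT * FROM employees WHERE name = 'Bob';

        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

        -- After a large data change, refreshing statistics can change the
        -- estimated cost and cardinality reported for the same statement:
        EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'EMPLOYEES');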

  • How can I track the last location of a shipment efficiently using the latest date of reporting?

    - by hash
    I need to find the latest location of each cargo item in a consignment. We mostly do this by looking at the route selected for a consignment and then finding the latest (max) time entered against the nodes of this route. For example, if a route has 5 nodes and we have entered timings against the first 3 nodes, then the latest timing (max time) tells us the location among those 3 nodes.

    I am really stuck on this query because of performance issues: even on a few hundred rows, it takes more than 2 minutes. Please suggest how I can improve this query, or an alternative approach I should take. (Note: ATA = Actual Time of Arrival and ATD = Actual Time of Departure.)

        SELECT DISTINCT(c.id) AS cid, c.ref AS cons_ref, c.Name, c.CustRef
        FROM consignments c
        INNER JOIN routes r         ON c.Route = r.ID
        INNER JOIN routes_nodes rn  ON rn.Route = r.ID
        INNER JOIN cargo_timing ct  ON c.ID = ct.ConsignmentID
        INNER JOIN (SELECT t.ConsignmentID, MAX(t.firstata) AS MaxDate
                    FROM cargo_timing t
                    GROUP BY t.ConsignmentID) AS TMax
                ON TMax.MaxDate = ct.firstata
               AND TMax.ConsignmentID = c.ID
        INNER JOIN nodes an         ON ct.routenodeid = an.ID
        INNER JOIN contract cor     ON cor.ID = c.Contract
        WHERE c.Type = 'Road'
          AND (c.ATD = 0 AND c.ATA != 0)
          AND cor.contract_reference IN ('Generic', 'BP001', '020-543-912')
        ORDER BY c.ref ASC
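
    A common first step for a query shaped like this is to make the greatest-per-group subquery and its join cheap with a covering index; a sketch under the assumption that no such indexes exist yet (names taken from the query above):

        -- Lets the GROUP BY / MAX(firstata) subquery be answered from the
        -- index alone, and turns the TMax join back to cargo_timing into a seek.
        CREATE INDEX ix_cargo_timing_cons_ata ON cargo_timing (ConsignmentID, firstata);

        -- Supports the WHERE filter on consignments.
        CREATE INDEX ix_consignments_type ON consignments (Type, ATD, ATA);

    It may also be worth checking whether the routes_nodes join is needed at all: nothing in the SELECT or WHERE uses rn, and joining it multiplies rows per consignment, which the DISTINCT then has to collapse again.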

  • Garbage collection when compiling to C

    - by Jules
    What are the techniques of garbage collection when compiling a garbage-collected language to C? I know of two:

        - maintain a shadow stack that saves all roots explicitly in a data structure
        - use a conservative garbage collector like Boehm's

    The first technique is slow because you have to maintain the shadow stack: potentially every time a function is called, you need to save the local variables in a data structure. The second technique is also slow, and inherently does not reclaim all garbage, because it uses a conservative collector.

    My question is: what is the state of the art of garbage collection when compiling to C? Note that I do not mean a convenient way to do garbage collection when programming in C (that is the goal of Boehm's garbage collector), just a way to do garbage collection when compiling to C.
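
    For readers who haven't seen the first technique, a minimal sketch of the bookkeeping a compiler would emit around each function (the frame layout and names are illustrative, not from any particular compiler; the allocator is stubbed with malloc so the example runs):

        #include <stddef.h>
        #include <stdlib.h>

        /* Each function pushes a frame describing its GC roots, so a collector
           can walk every live reference without scanning the C stack. */
        typedef struct ShadowFrame {
            struct ShadowFrame *prev; /* caller's frame */
            size_t num_roots;
            void **roots[4];          /* addresses of local reference variables */
        } ShadowFrame;

        static ShadowFrame *shadow_top = NULL;

        /* Stub: a real collector would trace from shadow_top before allocating. */
        static void *gc_alloc(size_t n) { return malloc(n); }

        void example(void) {
            void *a = NULL, *b = NULL;
            /* prologue the compiler emits: register the locals as roots */
            ShadowFrame frame = { shadow_top, 2, { &a, &b } };
            shadow_top = &frame;

            a = gc_alloc(16); /* any allocation may trigger a collection, */
            b = gc_alloc(32); /* which walks shadow_top -> prev -> ...    */

            shadow_top = frame.prev; /* epilogue: pop the frame */
        }

    This per-call push/pop is exactly the overhead the question complains about.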

  • Single Large v/s Multiple Small MySQL tables for storing Options

    - by Prasad
    Hi there, I'm aware of several questions on this forum relating to this, but I'm not talking about splitting tables for the same entity (like user, for example). Suppose I have a huge options table that stores list options like Gender, Marital Status, and many more domain-specific groups with the same structure, which I plan to capture in an OPTIONS table. Another simple option is to have the field set as ENUM, but there are disadvantages to that as well: http://www.brandonsavage.net/why-you-should-replace-enum-with-something-else/

    OPTIONS table:

        option_id  <will be referred to instead of the name>
        name
        value
        group

    Query:

        select .. from options where group = '15'

    Since this table is expected to be multi-tenant, the number of rows could grow drastically. I believe splitting the tables instead of filtering by the group would be easier to write and faster to execute - or perhaps partitioning by the group or tenant? Please suggest. Thanks
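
    For comparison, a sketch of the single-table variant with an index that serves that lookup (tenant_id is an assumed column for the multi-tenant case; note that group is a reserved word in MySQL, hence the backticks):

        CREATE TABLE options (
            option_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            tenant_id INT UNSIGNED NOT NULL,   -- assumed multi-tenant key
            `group`   INT UNSIGNED NOT NULL,
            name      VARCHAR(64)  NOT NULL,
            value     VARCHAR(255) NOT NULL,
            KEY ix_tenant_group (tenant_id, `group`)
        );

        -- With this composite index, the common lookup stays a cheap range
        -- scan even as the total row count grows across tenants:
        SELECT option_id, name, value
        FROM options
        WHERE tenant_id = 7 AND `group` = 15;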

  • Project management: Implementing custom errors in VS compilation process

    - by David Lively
    Like many architects, I've developed coding standards through years of experience to which I expect my developers to adhere. This is especially a problem with the crowd that believes three or four years of experience makes you a senior-level developer. Approaching this as a training and code review issue has generated limited success. So, I was thinking that it would be great to be able to add custom compile-time errors to the build process to more strictly enforce this and other guidelines.

    For instance, we use stored procedures for ALL database access, which provides procedure-level security, db encapsulation (table structure is hidden from the app), and other benefits. (Note: I am not interested in starting a debate about this.) Some developers prefer inline SQL or parametrized queries, and that's fine - on their own time and own projects. I'd like a way to add a compilation check that finds, say, anything that looks like

        string sql = "insert into some_table (col1,col2) values (@col1, @col2);"

    and generates an error or, in certain circumstances, a warning, with a message like

        Inline SQL and parametrized queries are not permitted.

    Or, if they use the var keyword

        var x = new MyClass();

        Variable definitions must be explicitly typed.

    Do Visual Studio and MSBuild provide a way to add this functionality? I'm thinking that I could use a regular expression to find unacceptable code and generate the correct error, but I'm not sure what, from a performance standpoint, is the best way to integrate this into the build process. We could add a pre- or post-build step to run a custom EXE, but how can I return line- and file-specific errors? Also, I'd like this to run after compilation of each file, rather than post-link. Is a regex the best way to perform this type of pattern matching, or should I go crazy and run the code through a C# parser, which would allow node-level validation via the parse tree? I'd appreciate suggestions and tales of prior experience.
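
    One detail that addresses the "line- and file-specific errors" part: both Visual Studio and MSBuild parse tool output in the canonical "file(line,col): error CODE: message" format, so a custom EXE run as a pre-build step can produce clickable, correctly attributed errors. A minimal sketch of such a checker (the rules, message texts, and SC-prefixed codes are illustrative; a parser-based analyzer would be more robust than these regexes):

        // StandardsChecker.cs - run as a build step: StandardsChecker.exe <files>
        using System;
        using System.IO;
        using System.Text.RegularExpressions;

        class StandardsChecker
        {
            static int Main(string[] args)
            {
                int errors = 0;
                var inlineSql = new Regex(@"""\s*(insert|select|update|delete)\s",
                                          RegexOptions.IgnoreCase);
                var varDecl = new Regex(@"^\s*var\s+\w+\s*=");

                foreach (string file in args)
                {
                    string[] lines = File.ReadAllLines(file);
                    for (int i = 0; i < lines.Length; i++)
                    {
                        // The "path(line,col): error CODE: message" shape below is
                        // what makes VS/MSBuild surface these in the Error List.
                        if (inlineSql.IsMatch(lines[i]))
                        {
                            Console.WriteLine("{0}({1},1): error SC0001: Inline SQL and parametrized queries are not permitted.", file, i + 1);
                            errors++;
                        }
                        if (varDecl.IsMatch(lines[i]))
                        {
                            Console.WriteLine("{0}({1},1): warning SC0002: Variable definitions must be explicitly typed.", file, i + 1);
                        }
                    }
                }
                return errors == 0 ? 0 : 1; // nonzero exit code fails the build
            }
        }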

  • Speeding up inner joins between a large table and a small table

    - by Zaid
    This may be a silly question, but it may shed some light on how joins work internally. Let's say I have a large table L and a small table S (100K rows vs. 100 rows). Would there be any difference in terms of speed between the following two options?

        -- Option 1:
        SELECT *
        FROM L INNER JOIN S
          ON L.id = S.id;

        -- Option 2:
        SELECT *
        FROM S INNER JOIN L
          ON L.id = S.id;

    Notice that the only difference is the order in which the tables are joined. I realize performance may vary between different SQL implementations. If so, how would MySQL compare to Access?
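
    A quick way to check this empirically in MySQL (standard syntax; for inner joins the optimizer is free to pick the join order itself, so both forms usually end up with the same plan):

        -- If both plans list the same table first, the optimizer has already
        -- normalized the join order and the two queries will perform alike.
        EXPLAIN SELECT * FROM L INNER JOIN S ON L.id = S.id;
        EXPLAIN SELECT * FROM S INNER JOIN L ON L.id = S.id;

        -- STRAIGHT_JOIN forces the written order, which lets you measure
        -- whether the order actually matters on your data:
        SELECT STRAIGHT_JOIN * FROM L INNER JOIN S ON L.id = S.id;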

  • Why is index_merge not used here in MySQL?

    - by user198729
    Setup:

        mysql> create table t(a integer unsigned, b integer unsigned);
        mysql> insert into t(a,b) values (1,2),(1,3),(2,4);
        mysql> create index i_t_a on t(a);
        mysql> create index i_t_b on t(b);
        mysql> explain select * from t where a=1 or b=4;
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | t     | ALL  | i_t_a,i_t_b   | NULL | NULL    | NULL |    3 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+

    Is there something I'm missing?

    Update (same query, now with more rows in the table):

        mysql> explain select * from t where a=1 or b=4;
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | t     | ALL  | i_t_a,i_t_b   | NULL | NULL    | NULL | 1863 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+

    Version:

        mysql> select version();
        +----------------------+
        | version()            |
        +----------------------+
        | 5.1.36-community-log |
        +----------------------+
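
    Two notes that may help interpret this. With only 3 rows, a full scan is simply the cheapest plan, so type ALL is expected; and even at 1863 rows, 5.1's cost model often judges an index_merge union of two secondary indexes more expensive than one scan. The classic workaround when you want each side of an OR to use its own index is a UNION rewrite (a sketch using the question's table):

        -- Each branch is a point lookup on one index; UNION removes
        -- duplicate rows the way "a=1 OR b=4" would.
        SELECT * FROM t WHERE a = 1
        UNION
        SELECT * FROM t WHERE b = 4;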

  • Shouldn't a tail-recursive function also be faster?

    - by Balint Erdi
    I have the following Clojure code to calculate a number with a certain "factorable" property (what exactly the code does is secondary):

        (defn factor-9
          ([]
            (let [digits (take 9 (iterate #(inc %) 1))
                  nums (map (fn [x] (Integer. (apply str x))) (permutations digits))]
              (some (fn [x] (and (factor-9 x) x)) nums)))
          ([n]
            (or (= 1 (count (str n)))
                (and (divisible-by-length n) (factor-9 (quot n 10))))))

    Now, I'm into TCO and realize that Clojure can only provide tail recursion if explicitly told so using the recur keyword. So I've rewritten the code to do that (replacing factor-9 with recur being the only difference):

        (defn factor-9
          ([]
            (let [digits (take 9 (iterate #(inc %) 1))
                  nums (map (fn [x] (Integer. (apply str x))) (permutations digits))]
              (some (fn [x] (and (factor-9 x) x)) nums)))
          ([n]
            (or (= 1 (count (str n)))
                (and (divisible-by-length n) (recur (quot n 10))))))

    To my knowledge, TCO has a double benefit. The first is that it does not use the stack as heavily as a non-tail-recursive call, and thus does not blow it on larger recursions. The second, I think, is that it is consequently faster, since the call can be converted to a loop. Now, I've made a very rough benchmark and have not seen any difference between the two implementations. Am I wrong in my second assumption, or does this have something to do with running on the JVM (which does not have automatic TCO) and recur using a trick to achieve it? Thank you.

  • Using generics in F# to create an EnumArray type

    - by Matthew
    I've created an F# class to represent an array that allocates one element for each value of a specific enum. I'm using an explicit constructor that creates a dictionary from enum values to array indices, and an Item property so that you can write expressions like:

        let my_array = new EnumArray<EnumType, int>()
        my_array.[EnumType.enum_value] <- 5

    However, I'm getting the following obscure compilation error at the line marked with // FS0670 below:

        error FS0670: This code is not sufficiently generic. The type variable
        ^e when ^e : enum<int> and ^e : equality and
        ^e : (static member op_Explicit : ^e -> int)
        could not be generalized because it would escape its scope.

    I'm at a loss - can anyone explain this error?

        type EnumArray< 'e, 'v when 'e : enum<int> and 'e : equality and ^e : (static member op_Explicit : ^e -> int) > =
            val enum_to_int : Dictionary<'e, int>
            val a : 'v array

            new() as this =
                { enum_to_int = new Dictionary<'e, int>()
                  a = Array.zeroCreate (Enum.GetValues(typeof<'e>).Length) }
                then
                    for (e : obj) in Enum.GetValues(typeof<'e>) do
                        this.enum_to_int.Add(e :?> 'e, int (e :?> 'e))

            member this.Item
                with get (idx : 'e) : 'v = this.a.[this.enum_to_int.[idx]]   // FS0670
                and set (idx : 'e) (c : 'v) = this.a.[this.enum_to_int.[idx]] <- c

  • Better way to summarize data about stop times?

    - by Vimvq1987
    This question is close to this one: http://stackoverflow.com/questions/2947963/find-the-period-of-over-speed

    Here's my table:

        Longtitude  Latitude  Velocity  Time
        102         401       40        2010-06-01 10:22:34.000
        103         403       50        2010-06-01 10:40:00.000
        104         405       0         2010-06-01 11:00:03.000
        104         405       0         2010-06-01 11:10:05.000
        105         406       35        2010-06-01 11:15:30.000
        106         403       60        2010-06-01 11:20:00.000
        108         404       70        2010-06-01 11:30:05.000
        109         405       0         2010-06-01 11:35:00.000
        109         405       0         2010-06-01 11:40:00.000
        105         407       40        2010-06-01 11:50:00.000
        104         406       30        2010-06-01 12:00:00.000
        101         409       50        2010-06-01 12:05:30.000
        104         405       0         2010-06-01 11:05:30.000

    I want to summarize the times when the vehicle was stopped (velocity = 0), including: from when to when it was stopped and for how many minutes, how many times it stopped, and how much time it spent stopped in total. I wrote this query to do it:

        select longtitude, latitude, MIN(time), MAX(time),
               DATEDIFF(minute, MIN(Time), MAX(time)) as Timespan
        from table_1
        where velocity = 0
        group by longtitude, latitude

        select DATEDIFF(minute, MIN(Time), MAX(time)) as minute
        into #temp3
        from table_1
        where velocity = 0
        group by longtitude, latitude

        select COUNT(*) as [number] from #temp3
        select SUM(minute) as [totaltime] from #temp3

        drop table #temp3

    This query returns:

        longtitude  latitude  (No column name)         (No column name)         Timespan
        104         405       2010-06-01 11:00:03.000  2010-06-01 11:10:05.000  10
        109         405       2010-06-01 11:35:00.000  2010-06-01 11:40:00.000  5

        number: 2
        totaltime: 15

    You can see it works fine, but I really don't like the #temp table. Is there any way to write this query without using a temp table? Thank you.
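
    A sketch of one temp-table-free version (T-SQL, reusing the question's table and column names): wrap the grouped rows in a derived table and aggregate over it in a single statement. This keeps the original query's assumption that each stop happens at a distinct coordinate pair.

        SELECT COUNT(*)        AS [number],
               SUM(s.Timespan) AS [totaltime]
        FROM (SELECT longtitude, latitude,
                     DATEDIFF(minute, MIN([Time]), MAX([Time])) AS Timespan
              FROM table_1
              WHERE velocity = 0
              GROUP BY longtitude, latitude) AS s;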

  • How can I optimize retrieving the lowest edit distance from a large table in SQL?

    - by Matt
    Hey, I'm having trouble optimizing this Levenshtein distance calculation I'm doing. I need to do the following:

        - get the distance for each record against both the source string and a trimmed version of the source string
        - pick the record with the minimum distance
        - if the min distances are equal (original vs. trimmed), choose the trimmed one with the lowest distance
        - if there are still multiple records that fall under the above two categories, pick the one with the highest frequency

    Here's my working version:

        DECLARE @Results TABLE
        (
            ID int,
            [Name] nvarchar(200),
            Distance int,
            Frequency int,
            Trimmed bit
        )

        INSERT INTO @Results
        SELECT ID, [Name], (dbo.Levenshtein(@Source, [Name])) AS Distance,
               Frequency, 'False' AS Trimmed
        FROM MyTable

        INSERT INTO @Results
        SELECT ID, [Name], (dbo.Levenshtein(@SourceTrimmed, [Name])) AS Distance,
               Frequency, 'True' AS Trimmed
        FROM MyTable

        SET @ResultID      = (SELECT TOP 1 ID       FROM @Results ORDER BY Distance, Trimmed, Frequency)
        SET @Result        = (SELECT TOP 1 [Name]   FROM @Results ORDER BY Distance, Trimmed, Frequency)
        SET @ResultDist    = (SELECT TOP 1 Distance FROM @Results ORDER BY Distance, Trimmed, Frequency)
        SET @ResultTrimmed = (SELECT TOP 1 Trimmed  FROM @Results ORDER BY Distance, Trimmed, Frequency)

    I believe what I need to do here is:

        - not dump the results to a temporary table
        - do only one select from MyTable
        - set the results right in the initial select statement (since SELECT can set variables, and you can set multiple variables in one SELECT statement)

    I know there has to be a good implementation of this, but I can't figure it out. This is as far as I got:

        SELECT TOP 1
            @ResultID = ID,
            @Result = [Name],
            (dbo.Levenshtein(@Source, [Name])) AS distOrig,
            (dbo.Levenshtein(@SourceTrimmed, [Name])) AS distTrimmed,
            Frequency
        FROM MyTable
        WHERE /* ... yeah I'm lost */
        ORDER BY distOrig, distTrimmed, Frequency

    Any ideas?
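
    A sketch of a single-pass version (T-SQL; variable names reused from the question, and one reading of the tie-break rules): CROSS APPLY computes each distance once so the ORDER BY can use it, and a SELECT that only assigns variables is allowed to combine TOP 1 with ORDER BY. Note the stated rules ask for the highest frequency, hence Frequency DESC where the original draft ordered ascending:

        SELECT TOP 1
            @ResultID      = m.ID,
            @Result        = m.[Name],
            @ResultDist    = CASE WHEN d.distTrimmed <= d.distOrig
                                  THEN d.distTrimmed ELSE d.distOrig END,
            @ResultTrimmed = CASE WHEN d.distTrimmed <= d.distOrig
                                  THEN 1 ELSE 0 END
        FROM MyTable AS m
        CROSS APPLY (SELECT dbo.Levenshtein(@Source,        m.[Name]) AS distOrig,
                            dbo.Levenshtein(@SourceTrimmed, m.[Name]) AS distTrimmed) AS d
        ORDER BY
            CASE WHEN d.distTrimmed <= d.distOrig
                 THEN d.distTrimmed ELSE d.distOrig END,                  -- min distance
            CASE WHEN d.distTrimmed <= d.distOrig THEN 0 ELSE 1 END,      -- prefer trimmed on ties
            m.Frequency DESC;                                             -- then highest frequency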

  • Most flexible minimizer/compressor for ASP.NET MVC 2?

    - by AlexanderN
    From your experience, what's the most flexible minimizer/compressor (JS + CSS) for ASP.NET MVC you've dealt with? So far:

        - mbcompress doesn't seem to be too MVC friendly
        - weboptimizer.codeplex.com lacks documentation
        - clientdependency.codeplex.com is still in beta
        - compress2 seems like a good candidate, but I haven't tried it yet
        - mvcscriptmanager combines and compresses JavaScript only, not CSS

    By flexible I mean:

        - choose what should be compressed, minified, and combined
        - add exceptions, e.g. if debug, don't compress XYZ.JS or don't minify ABC.CSS
        - caching

    In the end, it should help achieve the best YSlow score. If you know of any other assemblies out there, please list them as well.

  • Should I use HttpResponse.End() for a fast webapp?

    - by acidzombie24
    HttpResponse.End() seems to throw an exception, according to MSDN. Right now I have the choice of returning a value that means "end the thread" (the call only goes 2 functions deep), or I can call End(). I know that throwing exceptions is significantly slower (read the comment for a C#/.NET test), so if I want a fast webapp, should I consider not calling it when it is trivially easy not to?

    -edit- I do have a function call in certain functions, and in the constructor in classes, to ensure the user is logged in, so I call HttpResponse.End() in enough places - although hopefully in regular site usage it doesn't happen too often.
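
    For reference, the exception MSDN alludes to is a ThreadAbortException: Response.End() aborts the worker thread to stop the request. When the goal is just "stop emitting output", a commonly recommended alternative is HttpApplication.CompleteRequest, which skips the rest of the pipeline without throwing. A sketch (the helper name is made up):

        using System.Web;

        static class ResponseHelper
        {
            // Unlike Response.End(), this does not throw ThreadAbortException:
            // it flushes output and skips the remaining pipeline steps. Code
            // after the call still runs, so callers must return immediately
            // (the question's return-a-value approach still applies).
            public static void EndResponseWithoutThrowing(HttpContext context)
            {
                context.Response.Flush();
                context.ApplicationInstance.CompleteRequest();
            }
        }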

  • Algorithm for optimally choosing actions to perform a task

    - by Jules
    There are two data types: tasks and actions. An action costs a certain time to complete and has a set of tasks it depends on. A task has a set of actions, and our job is to choose one of them. So:

        class Task {
            Set<Action> choices;
        }

        class Action {
            float time;
            Set<Task> dependencies;
        }

    For example, the primary task could be "Get a house". The possible actions for this task: "Buy a house" or "Build a house". The action "Build a house" costs 10 hours and has the dependencies "Get bricks" and "Get cement", etcetera. The total time is the sum of the times of all the actions required to perform. We want to choose actions such that the total time is minimal.

    Note that the dependencies can be diamond-shaped. For example, "Get bricks" could require "Get a car" (to transport the bricks) and "Get cement" would also require a car. Even if you do both "Get bricks" and "Get cement", you only have to count the time it takes to get a car once.

    Note also that the dependencies can be circular, for example "Money" - "Job" - "Car" - "Money". This is no problem for us: we simply select all of "Money", "Job" and "Car", and the total time is simply the sum of the times of these 3 things.

    Mathematical description:

        Let actions be the chosen set of actions.

        valid(task) = ∃ action ∈ task.choices.
                        (action ∈ actions  ∧  ∀ task' ∈ action.dependencies. valid(task'))

        time = Σ { action.time | action ∈ actions }

        minimize time subject to valid(primaryTask)

  • SQL Server indexed view matching of views with joins not working

    - by usr
    Does anyone have experience of when SQL Server 2008 R2 is able to automatically match a query to an indexed view (also known as a materialized view) that contains joins? For example, the view

        select dbo.Orders.Date, dbo.OrderDetails.ProductID
        from dbo.OrderDetails
        join dbo.Orders on dbo.OrderDetails.OrderID = dbo.Orders.ID

    cannot be automatically matched to the same exact query. When I select directly from this view with (noexpand), I actually get a much faster query plan that does a scan on the clustered index of the indexed view. Can I get SQL Server to do this matching automatically? I have quite a few queries and views... I am on Enterprise Edition of SQL Server 2008 R2.

  • Nginx, Apache, MySQL, Memcache on a server with 4G RAM - how to optimize to have enough memory?

    - by TomSawyer
    I have a dedicated server running Nginx as a proxy in front of Apache, plus Memcache and MySQL, with 4G of RAM. These days my site's visitor numbers haven't increased, but the server always gets overloaded at certain times of day (9AM - 3PM). RAM in use climbs second by second until it is full; at that moment the server becomes overloaded, and I have to kill all the Apache and MySQL services and reboot to get free memory. Then it fills up again - a terrible circle.

    Here is my RAM in use at the moment:

        160 (nginx)   220 (apache)   512 (memcache)   924 (mysql)

    and here are the process counts:

        4 (nginx)   14 (apache)   5 (memcache)   20 (mysql)

    And here's my my.cnf config. Can someone help me optimize it?

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        skip-locking
        skip-networking
        skip-name-resolve
        # enable log-slow-queries
        log-slow-queries = /var/log/mysql-slow-queries.log
        long_query_time=3
        max_connections=200
        wait_timeout=64
        connect_timeout = 10
        interactive_timeout = 25
        thread_stack = 512K
        max_allowed_packet=16M
        table_cache=1500
        read_buffer_size=4M
        join_buffer_size=4M
        sort_buffer_size=4M
        read_rnd_buffer_size = 4M
        max_heap_table_size=256M
        tmp_table_size=256M
        thread_cache=256
        query_cache_type=1
        query_cache_limit=4M
        query_cache_size=16M
        thread_concurrency=8
        myisam_sort_buffer_size=128M
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0

        [mysqldump]
        quick
        max_allowed_packet=16M

        [mysql]
        no-auto-rehash

        [isamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [myisamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [mysqlhotcopy]
        interactive-timeout

        [mysql.server]
        user=mysql
        basedir=/var/lib

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid
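
    A rough worst-case sizing sketch against this config, using the standard per-connection estimate (actual usage is lower because these buffers are allocated on demand, but it shows why memory climbs under load):

        per-connection ≈ read_buffer_size + join_buffer_size + sort_buffer_size
                         + read_rnd_buffer_size + thread_stack
                       ≈ 4M + 4M + 4M + 4M + 0.5M = 16.5M

        worst case     ≈ max_connections × 16.5M = 200 × 16.5M ≈ 3.2G
                         (plus query cache, tmp tables, and engine caches)

    With memcache pinned at 512M on top of that, MySQL alone can theoretically outgrow 4G, so lowering the four 4M per-thread buffers (e.g. back toward 256K-1M) and reducing max_connections are common first steps.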
