Search Results

Search found 23661 results on 947 pages for 'worse is better'.


  • SQL 2005 indexed queries slower than unindexed queries

    - by uos??
    Adding a seemingly perfect index is having an unexpectedly adverse effect on query performance... [Data] has a predictable structure and a simple clustered index on the primary key:

        ALTER TABLE [dbo].[Data] ADD PRIMARY KEY CLUSTERED ( [ID] )

    My query joins the table on itself, looking for a certain kind of "overlapping" records:

        SELECT DISTINCT [Data].ID AS [ID]
        FROM dbo.[Data] AS [Data]
        JOIN dbo.[Data] AS [Compared]
          ON  [Data].[A] = [Compared].[A]
          AND [Data].[B] = [Compared].[B]
          AND [Data].[C] = [Compared].[C]
          AND ([Data].[D] = [Compared].[D] OR [Data].[E] = [Compared].[E])
          AND [Data].[F] <> [Compared].[F]
        WHERE 1=1
          AND [Data].[A] = @A
          AND @CS <= [Data].[C] AND [Data].[C] < @CE  -- between a range

    [Data] has about a quarter-million records so far; 10% to 50% of the data satisfies the WHERE clause depending on @A, @CS, and @CE. As is, the query takes 1 second to return about 300 rows when querying 10%, and 30 seconds to return 3000 rows when querying 50% of the data. Curiously, the estimated/actual execution plan indicates two parallel Clustered Index Scans, even though the clustered index covers only the ID, which isn't part of the conditions of the query, only the output.

    If I add this hand-crafted [IDX_A_B_C_D_E_F] index, which I fully expected to improve performance, the query slows down by a factor of 8 (8 seconds for 10% and 4 minutes for 50%). The estimated/actual execution plans show an Index Seek, which seems like the right thing to be doing, but why so slow?

        CREATE UNIQUE INDEX [IDX_A_B_C_D_E_F] ON [dbo].[Data]
            ([A], [B], [C], [D], [E], [F])
            INCLUDE ([ID], [X], [Y], [Z]);

    The Database Engine Tuning Advisor suggests a similar index, with no noticeable difference in performance from this one. Moving AND [Data].[F] <> [Compared].[F] from the join condition to the WHERE clause makes no difference in performance. I need these and other indexes for other queries. I'm sure I could hint that the query should use the clustered index, since that's currently winning - but we all know it is not as optimized as it could be, and without a proper index I can expect performance to get much worse with additional data. What gives?
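
    Not from the original post, but one line of attack worth sketching: the WHERE clause seeks on [A] equality plus a [C] range, so a narrower index keyed on just those two columns, with the join columns carried as INCLUDE (leaf-only) columns, keeps the key compact while still covering the query. A hypothetical version:

        -- Hypothetical narrower index: key on the seek columns, INCLUDE the rest
        CREATE INDEX [IDX_A_C_covering] ON [dbo].[Data] ([A], [C])
            INCLUDE ([B], [D], [E], [F]);

    (The clustered key [ID] rides along in every nonclustered index automatically, so it needs no INCLUDE of its own.)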


  • python / sqlite - database locked despite large timeouts

    - by Chris Phillips
    Hi, I'm sure I'm missing something pretty obvious, but I can't for the life of me stop my pysqlite scripts crashing out with a "database is locked" error. I have two scripts, one to load data into the database, and one to read data out, but both will frequently, and instantly, crash depending on what the other is doing with the database at any given time. I've got the timeout on both scripts set to 30 seconds:

        cx = sqlite.connect("database.sql", timeout=30.0)

    I think I can see some evidence of the timeouts in that I get what appears to be a timing stamp (e.g. 0.12343827e-06 0.1 - and how do I stop that being printed?) dumped occasionally in the middle of my curses-formatted output screen, but never a delay that gets remotely near the 30-second timeout; still, one or the other script keeps crashing again and again from this. I'm running RHEL 5.4 on a 64-bit 4-CPU HS21 IBM blade, and have heard some mention of issues with multi-threading, and am not sure if this might be relevant. Packages in use are sqlite-3.3.6-5 and python-sqlite-1.1.7-1.2.1, and upgrading to newer versions outside of Red Hat's official provisions is not a great option for me. Possible, but not desirable due to the environment in general.

    I had autocommit=1 on previously in both scripts, but have since disabled it in both, and am now cx.commit()ing in the inserting script and not committing in the select script. Ultimately, as I only ever have one script actually making any modifications, I don't really see why this locking should ever happen. I have noticed that this gets significantly worse over time as the database grows larger. It was recently at 13MB with 3 equal-sized tables, which was about 1 day's worth of data. Creating a new file has significantly improved this, which seems understandable, but the timeout ultimately just doesn't seem to be being obeyed. Any pointers very much appreciated. Thanks Chris
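
    A common workaround, not from the original question: wrap writes in a short retry loop so a "database is locked" error backs off and tries again instead of crashing the script. Sketched here against the modern sqlite3 module; the older python-sqlite used in the post raises an equivalent OperationalError:

        import sqlite3
        import time

        def execute_with_retry(cx, sql, params=(), retries=10, delay=0.5):
            """Retry a write while SQLite reports the database as locked."""
            for attempt in range(retries):
                try:
                    cur = cx.cursor()
                    cur.execute(sql, params)
                    cx.commit()
                    return cur
                except sqlite3.OperationalError:  # "database is locked" lands here
                    if attempt == retries - 1:
                        raise
                    time.sleep(delay)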


  • Modified jQuery innerfade sluggish

    - by Jay Hankins
    Hi. I am using the jQuery innerfade plugin to scroll through some images on my site. Innerfade is sluggish on Firefox 3.6.3 and IE 8; IE 8 is much worse than Firefox, while Chrome runs smoothly. Can you analyze my code to see what the problem is? I've used the modified innerfade from here: http://www.stylephp.com/2009/01/17/customizing-jquery-innerfade-plug-in-adding-controls-navigation-and-caption/ My sites are here: Without bg image: http://dl.dropbox.com/u/145908/doozie2/index.html With bg image: http://dl.dropbox.com/u/145908/doozie2/index2.html Removing my background image fixes the situation, and it's not that big of a file, so I don't understand why it is an issue. I am using a technique to resize the image to fit the browser window, as you can see in the CSS. Thanks so much. P.S. Sorry, I'm a new user so I can't post more than one link. Please copy and paste the addresses.


  • Are there any applications written in the Io programming language? (Or, distributing Io applications)

    - by Rayne
    I've recently become interested in prototype-based OOP, and I've been playing with Io and Ioke. Distributing an application with Ioke is simple. It's on the JVM. Need I say more? However, I'm absolutely stumped as to how one would distribute an Io application, especially on Windows. It's not like you can have end-users compile Io to run your application. I was actually shocked that Io has gone for 8 years without forming some sort of standard for things like distribution. Ruby has gems, Java has jars, and so on. The worst thing about it is, I can't find a single application written in Io to maybe steal distribution ideas from. Maybe I suck at Google searching (Io is a horrible search term, by the way ;P). Is there any sort of canonical way to distribute Io applications? Are there even any Io applications in existence, or am I just missing the point? I'm not sure if this should be community wiki or not. If you think it should, comment and let me know.


  • Sync Vs. Async Sockets Performance in .NET

    - by Michael Covelli
    Everything that I read about sockets in .NET says that the asynchronous pattern gives better performance (especially with the new SocketAsyncEventArgs, which saves on allocation). I think this makes sense if we're talking about a server with many client connections, where it's not possible to allocate one thread per connection. Then I can see the advantage of using the ThreadPool threads and getting async callbacks on them.

    But in my app, I'm the client and I just need to listen to one server sending market tick data over one TCP connection. Right now, I create a single thread, set the priority to Highest, and call Socket.Receive() with it. My thread blocks on this call and wakes up once new data arrives. If I were to switch this to an async pattern, so that I get a callback when there's new data, I see two issues:

    1. The threadpool threads will have default priority, so it seems they will be strictly worse than my own thread, which has Highest priority.

    2. I'll still have to funnel everything through a single thread at some point. Say that I get N callbacks at almost the same time on N different threadpool threads notifying me that there's new data. The N byte arrays that they deliver can't be processed on the threadpool threads, because there's no guarantee that they represent N unique market data messages, because TCP is stream-based. I'll have to lock and put the bytes into an array anyway and signal some other thread that can process what's in the array. So I'm not sure what having N threadpool threads is buying me.

    Am I thinking about this wrong? Is there a reason to use the async pattern in my specific case of one client connected to one server?
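
    For reference, a minimal sketch of the blocking single-thread receive loop described above. Here socket is assumed to be a connected Socket, and feed.TryExtractMessage stands in for whatever reassembles complete messages out of the byte stream; none of these names are from the original post:

        // Blocking receive on one dedicated high-priority thread.
        Thread receiver = new Thread(() =>
        {
            byte[] buffer = new byte[8192];
            while (true)
            {
                int read = socket.Receive(buffer);   // blocks until bytes arrive
                if (read == 0) break;                // server closed the connection
                feed.Append(buffer, 0, read);        // accumulate the raw TCP stream
                byte[] message;
                while (feed.TryExtractMessage(out message))
                    Process(message);                // handle one complete tick at a time
            }
        });
        receiver.Priority = ThreadPriority.Highest;
        receiver.IsBackground = true;
        receiver.Start();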


  • Where are tables in Mnesia located?

    - by Sanoj
    I am trying to compare Mnesia with more traditional databases. As I understand it, tables in Mnesia can be located in:

    ram_copies - tables are stored in RAM only, so no durability as in ACID.
    disc_copies - tables are located on disc and a copy is kept in RAM, so a table cannot be bigger than the available memory?
    disc_only_copies - tables are located on disc only, so no caching in memory and worse performance? And the size of the table is limited to the size of dets, or the table has to be fragmented.

    So if I want the performance of reads from RAM and the durability of writes to disc, then the size of the tables is very limited compared to a traditional RDBMS like MySQL or PostgreSQL. I know that Mnesia isn't meant to replace traditional RDBMSs, but can it be used as a big RDBMS or do I have to look for another database? The server I will use is a VPS with a limited amount of memory, around 512MB, but I want good database performance. Are disc_copies and the other types of tables in Mnesia as limited as I have understood?
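
    For concreteness, a minimal sketch (my example, not from the question) of how the three storage types are declared at table creation; it assumes -record definitions for session, account, and archive exist:

        %% One table per storage type; [node()] lists the replicas.
        mnesia:create_table(session, [{ram_copies,       [node()]},
                                      {attributes, record_info(fields, session)}]),
        mnesia:create_table(account, [{disc_copies,      [node()]},
                                      {attributes, record_info(fields, account)}]),
        mnesia:create_table(archive, [{disc_only_copies, [node()]},
                                      {attributes, record_info(fields, archive)}]).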


  • Running Magento for multiple clients - single installation vs. multiple installations

    - by Chris Hopkins
    Hi there. I am looking to set up a Magento (Community Edition) installation for multiple clients and have researched the matter for a few days now. I can see that the Enterprise Edition has what I need in it, but unsurprisingly I am not willing to shell out the $12,000-odd yearly subscription. It seems there are a few options available to me, but I am worried about the performance I will get out of each.

    Option 1) Single install using the AITOC advanced permissions module. This is really what I am after: one installation, so that I can update my core files all at the same time and also manage all my store users from one place. The problems here are that I don't know anything about the reliability of this extra product and that I have to pay a bit extra. I am also worried that if I have 10 stores running off this one installation it might slow down so much it keels over, as I have heard a lot about Magento's slowness. Module link: http://www.aitoc.com/en/magentomods_advanced_permissions.html

    Option 2) Multiple installations of Magento on one server, one per shop. Here I have 10 Magento installations on one server, all running happily away and not costing any extra money, but I now have 10 separate stores to update and maintain, which could be annoying. Also, I haven't been able to find many other people using this method, and when I have, they are usually asking how to stop their servers from dying. So this route seems like it could be even harder on my server, as there will be more going on, but if my server could take it, each Magento installation would be simpler and less likely to slow down than one installation having to run 10 shops on its own?

    Option 3) Use lots of servers and lots of Magento installations. I just do not want to do this.

    Option 4) Buy Magento Enterprise. I do not have the money to do this.

    So which route is less likely to blow up my server? And does anyone have experience with this holy grail of a module? Thanks for reading and thanks in advance for any help - Chris Hopkins


  • SQLDeveloper using over 100MB of PGA

    - by Leigh Riffel
    Perhaps this is normal, but in my Oracle 11g database I am seeing programmers using Oracle's SQL Developer regularly consume more than 100MB of combined UGA and PGA memory. I'd like to know if this is normal and what can be done about it. Our database is on the 32-bit version of Windows 2008, so memory limitations are becoming an increasing concern. I am using the following query to show the memory usage:

        SELECT e.SID, e.username, e.status, b.PGA_MEMORY
        FROM v$session e
        LEFT JOIN (
            SELECT y.SID, y.value pga,
                   TO_CHAR(ROUND(y.value/1024/1024),99999999) || ' MB' PGA_MEMORY
            FROM v$sesstat y, v$statname z
            WHERE y.STATISTIC# = z.STATISTIC#
              AND NAME = 'session pga memory'
        ) b ON e.sid = b.sid
        WHERE (PGA)/1024/1024 > 20
        ORDER BY 4 DESC;

    It seems that the resource usage goes up any time a table is opened in SQL Developer, but even when it is closed the memory does not go away. The problem is worse if the table was sorted while it was open, as that seems to use even more memory. I understand how this would use memory while it is sorting, and perhaps even while the table is still open, but using memory after it is closed seems wrong to me. Can anyone confirm this?

    Update: I discovered that my numbers were off due to not understanding that the UGA is stored in the PGA under dedicated server mode. This makes the numbers lower than they were, but the problem still remains that SQL Developer seems to use excessive PGA.
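
    An aside that may help isolate the culprit sessions (my addition, and the exact program string can vary by SQL Developer version): v$session carries a program column, so the usage query can be narrowed to SQL Developer connections specifically:

        -- Hypothetical filter: only sessions opened by SQL Developer
        SELECT sid, username, status, program
        FROM v$session
        WHERE program LIKE '%SQL Developer%';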


  • Connection details & timeouts in a java web service client

    - by f1sh
    Hello fellow coders, I have to implement a web service client against a given WSDL file. I used the JDK's wsimport tool to create Java classes from the WSDL, as well as a class that wraps the web service's only method (enhanceAddress(auth, param, address)) into a simple Java method. So far, so good. The web service is functional and returns results correctly. The code looks like this:

        try {
            EnhancedAddressList uniservResponse =
                getWebservicePort().enhanceAddress(m_auth, m_param, uniservAddress);
            // where the port^ is the HTTP SOAP 1.2 endpoint
        } catch (Throwable e) {
            throw new AddressValidationException("Error during uniserv webservice request.", e);
        }

    The problem now: I need to get information about the connection and any error that might occur, in order to populate various JMX values (such as COUNT_READ_TIMEOUT, COUNT_CONNECT_TIMEOUT, ...). Unfortunately, the method does not officially throw any exceptions, so in order to get details about a ConnectException, I need to call getCause() on the ClientTransportException that will be thrown.

    Even worse: I tried to test the read timeout value, but there is none. I changed the service's location in the WSDL file to post the request to a PHP script that simply waits forever and does not return. Guess what: the web service client does not time out, but waits forever as well (I killed the app after 30+ minutes of waiting). That is not an option for my application, as I will eventually run out of TCP connections if some of them get 'stuck'. The enhanceAddress(auth, param, address) method is not implemented but annotated with javax.jws.* annotations, meaning that I cannot see/change/inspect the code that is actually executed.

    Do I have any option but to throw the whole wsimport/javax.jws stuff away and implement my own SOAP client?
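
    One avenue worth noting (my sketch, not part of the original question): the JAX-WS reference implementation reads connect and request timeouts from the port's request context. The property names below are the Metro ones; the JDK-bundled runtime has historically used the same names under com.sun.xml.internal.ws.*, so treat them as assumptions to verify against your stack:

        import java.util.Map;
        import javax.xml.ws.BindingProvider;

        // ... wherever the port is obtained:
        Map<String, Object> ctx = ((BindingProvider) port).getRequestContext();
        ctx.put("com.sun.xml.ws.connect.timeout", 10000);  // connect timeout, ms
        ctx.put("com.sun.xml.ws.request.timeout", 30000);  // read/response timeout, ms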


  • Silverlight Data Access - how to keep the gruntwork on the server

    - by akaphenom
    What technologies are used / recommended for HTTP RPC calls from Silverlight? My server-side stack is JBoss (servlets / json_rpc [jabsorb]), and we have a ton of business logic (object creation, validation, persistence, server-side events) in place that I still want to take advantage of. This is our first attempt at bringing an applet-style RIA to our product, and ideally we keep both HTML and Silverlight versions. For better or worse, the powers that be have pushed us down the Silverlight path, and while flex / JavaFX / Silverlight is an interesting debate, that question is removed from the equation. We just have to find a way to get Silverlight to behave with our classes.

    Should I be defining .NET class representations of our JSON objects and the machinery to serialize / deserialize them? I.e. "blah.com/dispenseRpc?servlet=xxxx&p1=blah&p2=blahblah", creating functions that invoke the web request and convert the incoming response string to objects? Another way would be to reverse-engineer the .NET WCF (or whatever) communications and implement a handler on the Java side that invokes the correct server-side code and returns what .NET expects back. But that sounds much trickier.


  • PHP (A few questions) OO, refactoring, eclipse

    - by jax
    I am using PHP in Eclipse. It works OK: I can connect to my remote site, there is colour coding of code elements, and some code hints. I realise this may be too long to answer all of; if you have a good answer for one part, answering just that is fine.

    Firstly, general coding: I have found that it is easy to lose track of included files and their variables. For example, if there was a database $cursor, it is difficult to remember, or even know, that it was declared in an included file (this becomes much worse the more files you include). How are people dealing with this? How are people documenting their code - in particular the required GET and POST data?

    Secondly, OO development: Should I be going fully OO in my development? Currently I have a functions library which I can include, and I have separated each "task" into a separate file. It is a bit nasty but it works. If I go OO, how do I structure the directories in PHP? Java uses packages - what about PHP? How should I name my files - all lower case with _ for spaces, "hello_world.php"? Should I name classes with uppercase like Java, "HelloWorld.php"? Is there a different naming convention for classes and regular function files? (A sketch of one common convention follows below.)

    Thirdly, refactoring: I must say this is a real pain. If I change the name of a variable in one place, I have to go through the whole document, and every file that included this file, and change the name there too. Of course, errors everywhere is what results. How are people dealing with this problem? In Java, if you change the name in one place it changes everywhere. Are there any plugins to improve PHP refactoring? I am using the official PHP version of Eclipse from their website. Thanks
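
    On the directory-structure question, a minimal sketch under assumed conventions (PSR-0-style: one class per file, underscores in class names map to directories; requires PHP 5.3 for the anonymous function) - none of this is prescribed by PHP itself:

        <?php
        // Register an autoloader so classes load on first use instead of
        // via a tangle of manual includes.
        spl_autoload_register(function ($class) {
            $path = dirname(__FILE__) . '/lib/' . str_replace('_', '/', $class) . '.php';
            if (is_file($path)) {
                require $path;
            }
        });

        $cursor = new Db_Cursor();  // transparently loads lib/Db/Cursor.php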


  • Blocking on DBCP connection pool (open and close connection). Is database connection pooling in OpenEJB pluggable?

    - by topchef
    We use OpenEJB on Tomcat (we used to run on JBoss, WebLogic, etc.). While running load tests, we experience significant performance problems with handling JMS messages (queues). The problem was localized to blocking on the database connection pool when getting or releasing a connection. The blocking prevented concurrent MDB instances (threads) from running, hence performance suffered 10-fold and worse. The same code used to run on application servers (with their respective connection pool implementations) with no blocking at all. Example of a blocked thread:

        Name: JMS Resource Adapter-worker-23
        State: BLOCKED on org.apache.commons.pool.impl.GenericObjectPool@1ea6b4a
               owned by: JMS Resource Adapter-worker-19
        Total blocked: 18,426  Total waited: 0

        Stack trace:
        org.apache.commons.pool.impl.GenericObjectPool.returnObject(GenericObjectPool.java:916)
        org.apache.commons.dbcp.PoolableConnection.close(PoolableConnection.java:91)
        - locked org.apache.commons.dbcp.PoolableConnection@1bcba8
        org.apache.commons.dbcp.managed.ManagedConnection.close(ManagedConnection.java:147)
        com.xxxxx.persistence.DbHelper.closeConnection(DbHelper.java:290)
        ....

    A couple of questions. I am almost certain that some transactional attributes and properties contribute to this blocking, but the MDBs are defined as non-transactional (we use both annotations and ejb-jar.xml). Some EJBs do use container-managed transactions, though (and we can observe blocking there as well).

    1. Are there any DBCP configurations that may fix the blocking?
    2. Is the DBCP connection pool implementation replaceable in OpenEJB? How easy (or difficult) is it to replace it with another library?

    Just in case, this is how we define the data source in OpenEJB (openejb.xml):

        <Resource id="MyDataSource" type="DataSource">
            JdbcDriver oracle.jdbc.driver.OracleDriver
            JdbcUrl ${oracle.jdbc}
            UserName ${oracle.user}
            Password ${oracle.password}
            JtaManaged true
            InitialSize 5
            MaxActive 30
            ValidationQuery SELECT 1 FROM DUAL
            TestOnBorrow true
        </Resource>


  • Optimize GROUP BY & ORDER BY query

    - by Jan Hancic
    I have a web page where users upload & watch videos. Last week I asked what the best way is to track video views, so that I could display the most viewed videos this week (videos from all dates). Now I need some help optimizing the query with which I get the videos from the database. The relevant tables are these:

        video (~239,371 rows):
            VID(int), UID(int), title(varchar), status(enum), type(varchar),
            is_duplicate(enum), is_adult(enum), channel_id(tinyint)
        signup (~115,440 rows):
            UID(int), username(varchar)
        videos_views (~359,202 rows after 6 days of collecting data, so it will grow rapidly):
            videos_id(int), views_date(date), num_of_views(int)

    The table video holds the videos, signup holds users, and videos_views holds data about video views (each video can have one row per day in that table). I have this query that does the trick, but takes ~10s to execute, and I imagine this will only get worse over time as the videos_views table grows in size:

        SELECT v.VID, v.title, v.vkey, v.duration, v.addtime, v.UID, v.viewnumber,
               v.com_num, v.rate, v.THB, s.username, SUM(vvt.num_of_views) AS tmp_num
        FROM video v
        LEFT JOIN videos_views vvt ON v.VID = vvt.videos_id
        LEFT JOIN signup s ON v.UID = s.UID
        WHERE v.status = 'Converted' AND v.type = 'public'
          AND v.is_duplicate = '0' AND v.is_adult = '0'
          AND v.channel_id <> 10
          AND vvt.views_date >= '2001-05-11'
        GROUP BY vvt.videos_id
        ORDER BY tmp_num DESC
        LIMIT 8

    And here is a screenshot of the EXPLAIN result. So, how can I optimize this?
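
    Not from the original post, but two things stand out: the WHERE filter on vvt.views_date turns the LEFT JOIN into an effective INNER JOIN, and nothing obviously indexes the date-range filter plus the per-video aggregation. A hypothetical composite index covering both, assuming MySQL:

        -- Hypothetical covering index: filter by date, group by video, read the counter
        ALTER TABLE videos_views
            ADD INDEX idx_date_video_views (views_date, videos_id, num_of_views);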


  • Configure Windows firewall to prevent an application from listening on a specific port [closed]

    - by U-D13
    The issue: there are many applications struggling to listen on port 80 (Skype, TeamViewer et al.), and for many of them that isn't even essential (in the sense that you can have an httpd running and blocking the HTTP port, and the other application won't even squeak about being unable to open it). What makes things worse, some of these apps offer no way to configure which ports they use (that's what you get for using proprietary software) - you can either add the app to the Windows Firewall exception list (and succumb to the undesired port-opening behaviour) or not (and risk losing most - if not all - of its functionality).

    Technically, it is not impossible for the firewall to deny an application an incoming port even if the application is in the exception list. And if this functionality is built into the Windows firewall somewhere, there should be a way to activate it. So, what I want to know is: does such an option exist, and if it does, how do I activate it?
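
    For what it's worth, a hedged pointer (not from the original question): in Windows Firewall with Advanced Security (Vista and later), block rules take precedence over allow rules, so a per-program block on a specific port can coexist with the program's general exception. A sketch from an elevated prompt, with the program path an assumption to adjust:

        netsh advfirewall firewall add rule name="Skype - no port 80" dir=in action=block protocol=TCP localport=80 program="C:\Program Files\Skype\Phone\Skype.exe"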


  • Mocking an object that uses JNI using EasyMock

    - by Visage
    So my class under test has code that looks broadly like this:

        public void doSomething(int param) {
            Report report = new Report();
            // ...do some calculations
            report.someMethod(someData);
        }

    My intention was to extract the construction of the report into a protected method and override it to return a mock object, which I could then test to ensure that someMethod had been called with the right data. So far so good. But Report isn't under my control, and to make things worse it uses JNI to load a library at runtime. If I do

        Report report = EasyMock.createMock(Report.class);

    then EasyMock attempts to use reflection to find the class members, but this triggers an attempt to load the JNI library, which fails (the JNI libraries are only available on UNIX). I'm considering two things:

    a) Introduce a ReportWrapper interface with two implementations, one of which will delegate calls to a real Report (so basically a proxy), and a second which will basically use a mock object.

    b) Instead of calling someMethod, call a protected method which will in turn call someMethod, and override that in a testing subclass.

    Either way it seems nasty. Any better ways?
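
    Option (a) sketched out for reference; ReportWrapper and the method signature are illustrative stand-ins, since Report's real API isn't shown in the post:

        // A thin seam around the JNI-backed Report.
        public interface ReportWrapper {
            void someMethod(Object someData);
        }

        public class RealReportWrapper implements ReportWrapper {
            private final Report report = new Report();  // touches JNI; production only

            public void someMethod(Object someData) {
                report.someMethod(someData);
            }
        }

        // In a test, the interface is mocked, so the JNI class is never loaded:
        ReportWrapper mock = EasyMock.createMock(ReportWrapper.class);
        mock.someMethod(expectedData);
        EasyMock.replay(mock);
        classUnderTest.doSomething(42);  // with the mock injected via the factory seam
        EasyMock.verify(mock);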


  • Objects in JavaScript defined and undefined at the same time (in a Firefox extension)

    - by Alexey Romanov
    I am chasing down a bug in a Firefox extension. I've finally managed to see it for myself (I'd only had reports before) and I can't understand how what I saw is possible. One error message from my extension in the Error Console is "gBrowser is not defined". This by itself would be surprising enough, since the overlay is over browser.xul and navigator.xul, and I expect gBrowser to be available from both. Even worse is the actual place where it happens: line 101 of nextplease.js - that is, inside the function isTopLevelDocument, which is only called from onContentLoaded, which is only called from onLoad here:

        gBrowser.addEventListener(this.loadType, function (event) {
            nextplease.loadListener.onContentLoaded(event);
        }, true);

    So gBrowser is defined in onLoad, but somehow undefined in isTopLevelDocument. When I tried to actually use the extension, I got another error: "nextplease is not defined". The interesting thing is that it happened on lines 853 and 857 - that is, inside the functions

        nextplease.getNextLink = function () {
            nextplease.getLink(window.content, nextplease.NextPhrasesMap,
                nextplease.NextImagesMap, nextplease.isNextRegExp, nextplease.NEXT_SEARCH_TYPE);
        }

        nextplease.getPrevLink = function () {
            nextplease.getLink(window.content, nextplease.PrevPhrasesMap,
                nextplease.PrevImagesMap, nextplease.isPrevRegExp, nextplease.PREV_SEARCH_TYPE);
        }

    So nextplease is somehow defined enough to call these functions, but isn't defined inside them. Finally, executing typeof(nextplease) in Execute JS returns "object". Same for gBrowser. How can this happen? Any ideas?


  • Would it be possible to have a UTF-8-like encoding limited to 3 bytes per character?

    - by dan04
    UTF-8 requires 4 bytes to represent characters outside the BMP. That's not bad; it's no worse than UTF-16 or UTF-32. But it's not optimal (in terms of storage space). There are 13 bytes (C0-C1 and F5-FF) that are never used. And there are multi-byte sequences that are not used, such as the ones corresponding to "overlong" encodings. If these had been available to encode characters, then more of them could have been represented by 2-byte or 3-byte sequences (of course, at the expense of making the implementation more complex).

    Would it be possible to represent all 1,114,112 Unicode code points by a UTF-8-like encoding with at most 3 bytes per character? If not, what is the maximum number of characters such an encoding could represent? By "UTF-8-like", I mean, at minimum:

    - The bytes 0x00-0x7F are reserved for ASCII characters.
    - Byte-oriented find / index functions work correctly. You can't find a false positive by starting in the middle of a character, like you can in Shift-JIS.
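
    A back-of-the-envelope answer, mine rather than the poster's: the self-synchronization requirement means the 128 non-ASCII byte values must be partitioned into disjoint sets of continuation bytes (c), 2-byte lead bytes (l2), and 3-byte lead bytes (l3), giving 128 + l2*c + l3*c^2 representable characters. A few lines of Python maximize that:

        # Partition the 128 non-ASCII byte values into c continuation bytes,
        # l2 two-byte lead bytes, and l3 = 128 - c - l2 three-byte lead bytes.
        # Keeping the sets disjoint keeps byte-oriented search self-synchronizing.
        best = max(
            (128 + l2 * c + (128 - c - l2) * c * c, c, l2)
            for c in range(1, 128)
            for l2 in range(0, 128 - c)
        )
        print(best)  # -> (310803, 85, 0)

    So under these assumptions the ceiling is 310,803 code points (85 continuation bytes, 43 three-byte lead bytes, no 2-byte lead bytes) - far short of 1,114,112, so no such 3-byte encoding can cover all of Unicode.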


  • Linking error while using Qt static built libraries

    - by Kamran Amini
    I hope this is not a duplicate. Recently I've been developing a native C++ application using Qt 4.8.3 and VS2008. Since clients run the application on their naked machines, they need to install the VC++ 2008 redistributable package, so I decided to link statically. I changed my project settings (C/C++ > Code Generation > Runtime Library) to /MTd. I also compiled Qt again, this time as a static build, using the following steps, originally found on a blog post titled "Static Qt with static CRT (VS 2008)":

    1. Replaced -MD with -MT in the QMAKE_CFLAGS_RELEASE and QMAKE_CFLAGS_DEBUG lines in %QDIR%\mkspecs\win32-msvc2008\qmake.conf
    2. nmake confclean
    3. configure -static -platform win32-msvc2008 -no-webkit
    4. nmake sub-src

    I compiled Qt successfully. But when I tried again to compile my application, it gave me some strange errors:

        1>Linking...
        1>qtmaind.lib(qtmain_win.obj) : error LNK2005: "public: bool __thiscall QBasicAtomicInt::deref(void)" (?deref@QBasicAtomicInt@@QAE_NXZ) already defined in QtCored4.lib(QtCored4.dll)
        1>qtmaind.lib(qtmain_win.obj) : error LNK2005: "public: bool __thiscall QBasicAtomicInt::operator!=(int)const " (??9QBasicAtomicInt@@QBE_NH@Z) already defined in QtCored4.lib(QtCored4.dll)
        1>qtmaind.lib(qtmain_win.obj) : error LNK2005: "public: __thiscall QString::~QString(void)" (??1QString@@QAE@XZ) already defined in QtCored4.lib(QtCored4.dll)

    I changed some lib files, but with each change the situation got worse; for example, I tried to use QtCored.lib instead of QtCored4.lib, because it is newly created after the static build. I think I've missed something in building the static Qt libs. Thanks.


  • Use of var keyword in C#

    - by kronoz
    After discussion with colleagues regarding the use of the 'var' keyword in C# 3, I wondered what people's opinions were on the appropriate uses of type inference via var. For example, I rather lazily used var in questionable circumstances, e.g.:

        foreach (var item in someList) { /* ... */ }  // Type of 'item' not clear.
        var something = someObject.SomeProperty;      // Type of 'something' not clear.
        var something = someMethod();                 // Type of 'something' not clear.

    More legitimate uses of var are as follows:

        var l = new List<string>();  // Obvious what l will be.
        var s = new SomeClass();     // Obvious what s will be.

    Interestingly, LINQ seems to be a bit of a grey area, e.g.:

        var results = from r in dataContext.SomeTable
                      select r;  // Not *entirely clear* what results will be here.

    It's clear what results will be in the sense that it will be a type which implements IEnumerable; however, it isn't entirely obvious in the same way a var declaring a new object is. It's even worse when it comes to LINQ to objects, e.g.:

        var results = from item in someList
                      where item != 3
                      select item;

    This is no better than the equivalent foreach (var item in someList) { /* ... */ } loop. There is a real concern about type safety here - for example, if we were to place the results of that query into an overloaded method that accepted IEnumerable<int> and IEnumerable<double>, the caller might inadvertently pass in the wrong type.

    Edit - var does maintain strong typing, but the question is really whether it's dangerous for the type to not be immediately apparent on definition, something which is magnified when overloads mean compiler errors might not be issued when you unintentionally pass the wrong type to a method.

    Related question: http://stackoverflow.com/questions/633474/c-do-you-use-var


  • TFS Solution build cascading to several other builds even when common components were not modified

    - by Bob Palmer
    Hey all, here is the issue I am currently trying to work through. We are using Team Foundation Server 2008, and utilizing the automated build support out of the box. We have one very large project that encompasses a number of interrelated components and web sites, each of which is set up as a Visual Studio solution file. Many of these solutions are highly interrelated, since they may contain applications, or contain common libraries or shared components. We have roughly 20 or so applications, three large web sites, and about 20 components.

    Each solution may include projects from other solutions. For example, a solution for a console app would also include the project files for all of the components it utilizes, since we need to ensure that when someone changes a component and rebuilds it, the change is reflected in all of the projects that consume that component, and we can make sure nothing was broken. We have build projects for each solution, whether that's an application, component, or web site. For this example, call them solutions 01, 02, and 03. These reference multiple projects (both their own core and test projects, plus the projects relating to various components):

    Solution 01 has projects A, B, and C.
    Solution 02 has projects C, D, and E.
    Solution 03 has projects E, F, and G.

    Now, for the problem. If I modify project A, the system will end up rebuilding all three solutions. Worse, all thirty solutions reference a common project used for data access (let's call it project H). Because they all share one project in common, if I modify any solution in my stack, even if it does not touch project H, I still end up kicking off every single build script. Any thoughts on how to address this? Ideally I would only want to kick off builds whose constituent projects were directly modified - i.e. in the example above, if I modified project C, I would only rebuild solutions 01 and 02. Thanks!


  • jQuery .attr() crashing Internet Explorer

    - by Sunyatasattva
    Hello everyone, this is my first time posting a question here, as I usually try to find solutions myself. This one, though, being an IE issue, just drives me crazy. I use the jQuery Cycle plugin on a website I made and, to populate a caption div, I use a little function that is called after each image is loaded and that uses the "alt" attribute of the image. This seems to exasperate Internet Explorer, which apparently doesn't have the time to fulfill this so-complicated task: as the slideshow cycles, it enters an infinite loop and eventually crashes - the newer the version, the worse the crash. The older IEs just display an error message saying "The webpage cannot be displayed", while the newer ones (7 and 8) completely crash the system. I have no idea how to solve or work around this. Here is the problematic code:

        function changeCaption() {
            var caption = $("img", this).attr("alt");
            $('#caption').fadeIn("slow").html(caption);
        }

    Thanks in advance for any pointer. I am amazed at how something so simple and globally recognized can cause a problem so big (I didn't encounter any other browser that had a problem with this). I also read somewhere that being able to crash a browser remotely is a serious issue :) Lucio


  • Problems when handling orientation changes

    - by nixau
    Hi all, I need to handle orientation changes in my Android application. For this purpose I decided to use the OrientationEventListener convenience class, but its callback method is showing somewhat strange behavior. My application starts in portrait mode and then eventually switches to landscape. I have some custom code executing in the onOrientationChanged callback that provides some additional UI handling logic - it makes a few calls to findViewById. What is strange is that when switching back from landscape to portrait, the onOrientationChanged callback is called twice, and what's even worse, the second call is dealing with a bad Context - findViewById starts returning null. These calls are made right from the main thread:

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            listener = new OrientationListener();
        }

        @Override
        protected void onResume() {
            super.onResume();
            // enable listening
            listener.enable();
        }

        @Override
        protected void onPause() {
            super.onPause();
            // disable listening
            listener.disable();
        }

    I've replicated the same behavior with a dummy Activity without any logic except for the orientation handling. I initiate the orientation switch from the Android 2.2 emulator by pressing Ctrl+F11. What could be wrong?

    Update: the inner class that implements OrientationEventListener:

        private class OrientationListener extends OrientationEventListener {
            public OrientationListener() {
                super(getBaseContext());
            }

            @Override
            public void onOrientationChanged(int orientation) {
                toString();
            }
        }


  • How to make a JTable column contain checkboxes?

    - by theraven
    Preface: I am horrible with Java, and worse with Java UI components. I have found several different tutorials on how to add buttons to tables, but I am struggling with adding checkboxes. I need a column that draws a checkbox, ticked by default (the cell renderer, I think, handles that); then, on click, unticks the box, redraws it, and fires off an event somewhere I can track. Currently I have a custom cell renderer:

        public class GraphButtonCellRenderer extends JCheckBox implements TableCellRenderer {

            public GraphButtonCellRenderer() {
            }

            public Component getTableCellRendererComponent(JTable table, Object value,
                    boolean isSelected, boolean hasFocus, int row, int column) {
                if (isSelected)
                    setSelected(true);
                else
                    setSelected(false);
                setMargin(new Insets(0, 16, 0, 0));
                setIconTextGap(0);
                setBackground(new Color(255, 255, 255, 0));
                return this;
            }
        }

    which currently draws the tickbox, but only ticks and unticks it when that row is selected, and I don't know how to handle the events. Really what I am asking for is a link to a good tutorial on how to add checkboxes cleanly to a JTable. Any assist is greatly appreciated :)
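
    For reference, the stock Swing approach (my sketch, not from the post): if the model reports Boolean.class for the column, JTable installs its built-in checkbox renderer and editor, and every tick or untick lands in setValueAt, which is where the event can be tracked:

        import javax.swing.JTable;
        import javax.swing.table.AbstractTableModel;

        AbstractTableModel model = new AbstractTableModel() {
            private final boolean[] checked = { true, true, true };  // ticked by default

            public int getRowCount() { return checked.length; }
            public int getColumnCount() { return 1; }
            public Object getValueAt(int row, int col) { return checked[row]; }

            @Override public Class<?> getColumnClass(int col) { return Boolean.class; }
            @Override public boolean isCellEditable(int row, int col) { return true; }

            @Override
            public void setValueAt(Object value, int row, int col) {
                checked[row] = (Boolean) value;
                fireTableCellUpdated(row, col);  // the "tick changed" event to track
            }
        };
        JTable table = new JTable(model);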


  • Is MVC 2 client-side validation broken in Visual Studio 2010 RC?

    - by Will
    I can't seem to get client-side validation working with the version of MVC released with Visual Studio 2010 RC. I've tried it with two projects - one upgraded from 1.0, and one using the template that came with VS. I'd think the template version would work, but it doesn't. I added the following scripts:

        <script type="text/javascript" src="<%= Url.Content("~/Scripts/MicrosoftMvcValidation.js") %>"></script>
        <script type="text/javascript" src="<%= Url.Content("~/Scripts/jquery.validate.js") %>"></script>

    which are downloaded to the client correctly. I added the following to my form page:

        <% Html.EnableClientValidation(); %>
        <%-- yes, I am aware of the EndForm() bug! --%>
        <% using (Html.BeginForm()) { %>
        <%-- snip --%>

    and I can see the client validation scripts have been added to the bottom of the form. But client validation never happens. What is worse is that in my upgraded project, the client validation scripts are never output in the page!

    PLEASE NOTE: I am specifically asking about the version of MVC2 that came with VS2010 RC. I do know how to google; please don't waste anybody's time searching and answering if you aren't familiar with this issue in the release candidate of Visual Studio. Thanks.
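
    One thing worth double-checking (my observation, not from the post): in MVC 2, MicrosoftMvcValidation.js is built on MicrosoftAjax.js, while jquery.validate.js belongs to a separate validation stack with its own adapter, so the includes above mix two half-wired stacks. The Microsoft pair on its own would look like:

        <script type="text/javascript" src="<%= Url.Content("~/Scripts/MicrosoftAjax.js") %>"></script>
        <script type="text/javascript" src="<%= Url.Content("~/Scripts/MicrosoftMvcValidation.js") %>"></script>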


  • Haskell math performance

    - by Travis Brown
    I'm in the middle of porting David Blei's original C implementation of Latent Dirichlet Allocation to Haskell, and I'm trying to decide whether to leave some of the low-level stuff in C. The following function is one example - it's an approximation of the second derivative of lgamma:

        double trigamma(double x)
        {
            double p;
            int i;

            x = x + 6;
            p = 1 / (x * x);
            p = (((((0.075757575757576 * p - 0.033333333333333) * p + 0.0238095238095238)
                    * p - 0.033333333333333) * p + 0.166666666666667) * p + 1) / x + 0.5 * p;
            for (i = 0; i < 6; i++) {
                x = x - 1;
                p = 1 / (x * x) + p;
            }
            return p;
        }

    I've translated this into more or less idiomatic Haskell as follows:

        trigamma :: Double -> Double
        trigamma x = snd $ last $ take 7 $ iterate next (x' - 1, p')
          where
            x' = x + 6
            p  = 1 / x' ^ 2
            p' = p / 2 + c / x'
            c  = foldr1 (\a b -> (a + b * p)) [1, 1/6, -1/30, 1/42, -1/30, 5/66]
            next (x, p) = (x - 1, 1 / x ^ 2 + p)

    The problem is that when I run both through Criterion, my Haskell version is six or seven times slower (I'm compiling with -O2 on GHC 6.12.1). Some similar functions are even worse. I know practically nothing about Haskell performance, and I'm not terribly interested in digging through Core or anything like that, since I can always just call the handful of math-intensive C functions through the FFI. But I'm curious whether there's low-hanging fruit that I'm missing - some kind of extension or library or annotation that I could use to speed up this numeric stuff without making it too ugly.
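
    One standard direction (my sketch, not anything from the post): replace the lazy iterate/take/last pipeline with a strict tail-recursive loop, and replace the generic (^) with plain multiplication, both of which GHC tends to optimize more readily:

        {-# LANGUAGE BangPatterns #-}

        -- Strict-loop rewrite of the same computation, mirroring the C version.
        trigamma' :: Double -> Double
        trigamma' x0 = go (6 :: Int) x' p'
          where
            x' = x0 + 6
            p  = 1 / (x' * x')
            p' = (((((0.075757575757576*p - 0.033333333333333)*p + 0.0238095238095238)*p
                    - 0.033333333333333)*p + 0.166666666666667)*p + 1) / x' + 0.5*p
            go !n !x !acc
              | n == 0    = acc
              | otherwise = let x1 = x - 1 in go (n - 1) x1 (1 / (x1 * x1) + acc)

    Whether this closes the whole gap against the C version is something only Criterion can say, but it removes the intermediate tuple-and-list structure that the idiomatic translation builds on every call.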

