Search Results

Search found 24220 results on 969 pages for 'performance tools'.

Page 87/969

  • SQL 2014 does data the way developers want

    - by Rob Farley
    A post I’ve been meaning to write for a while; good that it fits with this month’s T-SQL Tuesday, hosted by Joey D’Antoni (@jdanton).

    Ever since I got into databases, I’ve been a fan. I studied Pure Maths at university (as well as Computer Science), and am very comfortable with Set Theory, which undergirds relational database concepts. But I’ve also spent a long time as a developer, and appreciate that databases don’t exactly fit within the stuff I learned in my first year of uni, particularly the “Algorithms and Data Structures” subject, in which we studied concepts like linked lists. Writing in languages like C, we used pointers to move around data quickly, without a database in sight. Of course, if we had a power failure all this data was lost, as it was only persisted in RAM. Perhaps that’s why I’m a fan of database internals, of indexes, latches, execution plans, and so on – the developer in me wants to be reassured that we’re getting to the data as efficiently as possible.

    Back when SQL Server 2005 was approaching, one of the big stories was around CLR. Many were saying that T-SQL stored procedures would be a thing of the past because we now had CLR, which was obviously going to be much faster than the abstracted T-SQL. Around the same time, we were seeing technologies like Linq-to-SQL produce poor T-SQL equivalents, and developers had had a gutful. They wanted to move away from T-SQL, having lost trust in it. I was never one of those developers, because I’d looked under the covers and knew that despite being abstracted, T-SQL was still a good way of getting to data. It worked for me, appealing to both my Set Theory side and my Developer side. CLR hasn’t exactly become the default option for stored procedures, although there are plenty of situations where it can be useful for getting faster performance.

    SQL Server 2014 is different though, through Hekaton – its In-Memory OLTP environment. When you create a table using Hekaton (that is, a memory-optimized one), the table you create is the kind of thing you’d’ve made as a developer. It creates code in C leveraging structs, pointers and arrays, which it compiles into fast code. When you insert data into it, it creates a new instance of a struct in memory and adds it to an array. When the insert is committed, a small write is made to the transaction log to make sure it’s durable, but none of the locking and latching behaviour that typifies transactional systems is needed. Indexes are done using hashes and Bw-trees (which avoid locking through the use of pointers), and each update is handled as a delete-and-insert. This is data the way that developers do it when they’re coding for performance – the way I was taught at university before I learned about databases. Being done in C, it compiles to very quick code, and although these tables don’t support every feature that regular SQL tables do, this is still an excellent direction to have taken. @rob_farley
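
    To make the Hekaton description concrete, here is a minimal T-SQL sketch of creating a memory-optimized table in SQL Server 2014. The database, table, and bucket count are illustrative placeholders; a MEMORY_OPTIMIZED_DATA filegroup must exist before the table can be created.

        -- One-off setup: the database needs a memory-optimized filegroup (names and path are illustrative)
        ALTER DATABASE SalesDb ADD FILEGROUP SalesDb_mod CONTAINS MEMORY_OPTIMIZED_DATA;
        ALTER DATABASE SalesDb ADD FILE (NAME = 'SalesDb_mod', FILENAME = 'C:\Data\SalesDb_mod') TO FILEGROUP SalesDb_mod;

        -- A memory-optimized table: rows live in memory as C structs,
        -- and the hash index replaces B-tree latching
        CREATE TABLE dbo.ShoppingCart (
            CartId INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            UserId INT NOT NULL,
            CreatedDate DATETIME2 NOT NULL
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    DURABILITY = SCHEMA_AND_DATA corresponds to the small transaction-log write described above; SCHEMA_ONLY would skip even that.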

    Read the article

  • What do impressions and CTR mean in Google Webmaster Tools?

    - by KoolKabin
    I am checking Google Webmaster Tools. I went into the Search Queries section, where I found a lot of keywords with their impressions, CTR, etc. I clicked on one of the query keywords and it showed the keyword and its position in the search results, but when I go to google.com and type that keyword myself, I don't see my site in the results. How do I measure/find my site's impressions on google.com? My site: http://www.trekkingandtoursnepal.com Keyword: trekking nepal

    Read the article

  • What are the various development tools beneficial for Java/J2EE developers?

    - by Saurabh
    I am a J2EE developer and have used tools like Eclipse, Ant, SVN, etc., while developing various projects using Java, Spring, Struts, etc. Can you please tell me what other tools would help me while building projects? Are there any tools for designing databases, building architecture, etc.? It would be great if you could suggest some tools that can help with various software development activities.

    Read the article

  • Keyword research tool [closed]

    - by sam
    Possible Duplicate: What are some great tools to use for keyword research? In the past I've always chosen which keywords/phrases to go after by checking the traffic in the keyword tool, checking the competition for those keywords, their backlinks and PR, and then reading through all that and going on gut instinct. I know SEOmoz has some functionality for this, but are there any alternatives you could recommend?

    Read the article

  • Is there a generally accepted maximum number of http requests for a web page load?

    - by MorganTiley
    I'm looking into optimizing my web application's client-side performance but cannot figure out a good goal for the number of HTTP requests per page load. How does YSlow calculate the grade? This doesn't seem to be documented. Also, it seems many sites like linkedin.com and amazon.com get an F grade but the page still loads quite fast. How do they fail but still perform well? Gmail gets an A grade, and it has 43 unprimed/10 primed requests.

    Read the article

  • How to make the most of GWT's "Search queries"?

    - by DisgruntledGoat
    I've been looking at the "Search queries" section in Google Webmaster Tools recently, and it seems like there is a lot of potential there in finding which pages on a site need improvement. I'm trying to figure out exactly what to sort or filter on. Do I look at pages with a low average position? Low impressions but high clicks? Pages that are rising up/falling down the rankings? What is the low-hanging fruit here?

    Read the article

  • Move an old website with bad SEO to another domain: redirect or start from scratch?

    - by fox_mulder
    I have an old website with very bad SEO and PR, but it still gets some visitors per day. Lately I've rebuilt the app and its concept, and I'm going to deploy everything on a new domain. I would like not to lose those old visitors, but I'm wondering if redirecting the old website to the new one will damage the new website's SEO. I also don't know whether it's worth making the move in Google Webmaster Tools and Bing, or better to start from scratch.
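
    For reference, the usual mechanism for such a move is a permanent (301) redirect; a minimal sketch for an Apache .htaccess on the old domain (the domain is a placeholder, and this assumes mod_rewrite is available on the old host):

        # Send every URL on the old domain to the same path on the new one
        RewriteEngine On
        RewriteRule ^(.*)$ http://www.newdomain.example/$1 [R=301,L]

    Google Webmaster Tools' change-of-address feature complements, rather than replaces, the 301s.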

    Read the article

  • Database Management for SharePoint 2010

    With each revision, SharePoint becomes more of a SQL Server database application, with everything that implies for planning and deployment. There are advantages to this: SharePoint can make use of mirroring, data compression and remote BLOB storage. It can employ advanced tools such as data file compression and object-level restore. DBAs can employ familiar techniques to speed up SharePoint applications. Bert explains the way that SharePoint and SQL Server interact.
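
    As a purely generic illustration of the SQL Server side of one of those features (the object name is illustrative, and whether such settings may be applied to SharePoint's own content databases is governed by Microsoft's support rules, so treat this as SQL Server syntax rather than SharePoint guidance):

        -- Rebuild a table with page-level data compression (an Enterprise Edition feature)
        ALTER TABLE dbo.AuditHistory REBUILD WITH (DATA_COMPRESSION = PAGE);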

    Read the article

  • Very slow performance deserializing using DataContractSerializer in a Silverlight application

    - by caryden
    Here is the situation: a Silverlight 3 application hits an ASP.NET-hosted WCF service to get a list of items to display in a grid. Once the list is brought down to the client, it is cached in IsolatedStorage. This is done by using the DataContractSerializer to serialize all of these objects to a stream, which is then zipped and then encrypted. When the application is relaunched, it first loads from the cache (reversing the process above) and then deserializes the objects using the DataContractSerializer.ReadObject() method.

    All of this was working wonderfully under all scenarios until recently, with the entire "load from cache" path (decrypt/unzip/deserialize) taking hundreds of milliseconds at most. On some development machines but not all (all machines Windows 7), the deserialize step – that is, the call to ReadObject(stream) – takes several minutes and seems to lock up the entire machine, BUT ONLY WHEN RUNNING IN THE DEBUGGER in VS2008. Running the Debug configuration code outside the debugger has no problem.

    One thing that looks suspicious is that when you turn on stopping on exceptions, you can see that ReadObject() throws many, many System.FormatExceptions indicating that a number was not in the correct format. When I turn off "Just My Code", thousands of these get dumped to the screen. None go unhandled. These occur both on the read back from the cache AND on deserialization at the conclusion of a web service call to get the data from the WCF service. HOWEVER, these same exceptions occur on my laptop development machine, which does not experience the slowness at all. And FWIW, my laptop is really old while my desktop is a 4-core, 6GB RAM beast. Again, no problems unless running under the debugger in VS2008. Anyone else seen this? Any thoughts? Here is the bug report link: https://connect.microsoft.com/VisualStudio/feedback/details/539609/very-slow-performance-deserializing-using-datacontractserializer-in-a-silverlight-application-only-in-debugger
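
    For context, the cache round-trip described above reduces to roughly this C# sketch (the Item type and service call are hypothetical, and the real code wraps the stream in zip and encryption layers):

        using System.Collections.Generic;
        using System.IO;
        using System.Runtime.Serialization;

        List<Item> items = GetItemsFromService();   // hypothetical WCF call result
        var serializer = new DataContractSerializer(typeof(List<Item>));
        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream, items);   // cache-write path (before zip/encrypt)
            stream.Position = 0;
            // the call that crawls under the debugger
            var restored = (List<Item>)serializer.ReadObject(stream);
        }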

    Read the article

  • How can I improve the performance of LinqToSql queries that use EntitySet properties?

    - by DanM
    I'm using LinqToSql to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually, convert them to List<Customer> and List<Order>, then join them manually with my own query, but this throws out a lot of what makes LinqToSql so appealing. So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually? Note: my database in its initial state is about 250K, and I don't expect it to grow to more than 1-2 MB, so loading the data into RAM certainly wouldn't be a problem from a memory point of view.

    Update: here are the table definitions for the example I used in my question:

        create table Order (
            Id int identity(1, 1) primary key,
            ProductName ntext null
        )

        create table Customer (
            Id int identity(1, 1) primary key,
            OrderId int null references Order (Id)
        )
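
    One common mitigation worth a look (a sketch assuming the usual generated LinqToSql DataContext; the context name and Orders property are illustrative, and whether SQL Server CE benefits would need measuring) is to tell the context to eager-load the related rows in one query instead of one query per parent:

        using System.Data.Linq;

        var db = new MyDataContext(connectionString);   // hypothetical generated context
        var options = new DataLoadOptions();
        options.LoadWith<Customer>(c => c.Orders);      // fetch Orders in the same query as Customers
        db.LoadOptions = options;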

    Read the article

  • Using Windows PowerShell 1.0 or 2.0 to evaluate performance of executable files

    - by Andry
    Hello! I am writing a simple script in Windows PowerShell to evaluate the performance of executable files. The important hypothesis is the following: I have an executable file; it can be an application written in any language (.NET or not: Visual Prolog, C++, C, anything that can be compiled as an .exe file). I want to profile it by getting execution times. I did this:

        Function Time-It {
            Param ([string]$ProgramPath, [string]$Arguments)
            $Watch = New-Object System.Diagnostics.Stopwatch
            $NsecPerTick = (1000 * 1000 * 1000) / [System.Diagnostics.Stopwatch]::Frequency
            Write-Output "Stopwatch created! NSecPerTick = $NsecPerTick"
            $Watch.Start() # Starts the timer
            [System.Diagnostics.Process]::Start($ProgramPath, $Arguments)
            $Watch.Stop() # Stops the timer
            # Collecting timings
            $Ticks = $Watch.ElapsedTicks
            $NSecs = $Watch.ElapsedTicks * $NsecPerTick
            Write-Output "Program executed: time is: $NSecs ns ($Ticks ticks)"
        }

    This function uses a stopwatch. The function accepts a program path; the stopwatch is started, the program is run, and the stopwatch is then stopped. Problem: System.Diagnostics.Process.Start is asynchronous, so the next instruction (stopping the watch) does not wait for the application to finish; a new process is created and the script moves on. I need to stop the timer once the program ends. I looked at the Process class, thinking it held some info regarding execution times... no luck. How do I solve this?
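
    A minimal fix, as a sketch: Process.Start returns the Process object, so the script can block on WaitForExit before stopping the watch:

        $Watch.Start()
        $Process = [System.Diagnostics.Process]::Start($ProgramPath, $Arguments)
        $Process.WaitForExit()   # block until the launched program exits
        $Watch.Stop()

    Alternatively, PowerShell's built-in Measure-Command { ... } returns a TimeSpan for an arbitrary script block, which covers the same need with less ceremony.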

    Read the article

  • When should we use Views, Temporary Tables and Direct Queries? What are the Performance issues in a

    - by Shantanu Gupta
    I want to know the performance of using views, temp tables, and direct queries in a stored procedure. I have a table that gets created whenever a certain trigger fires. I know this trigger will fire very rarely, and only once at setup time. Now I have to use that trigger-created table in many places for fetching data, and I can confirm that no one makes any changes to it, i.e. it is a read-only table. I have to join its data with multiple tables and fetch results for further queries, say:

        select * from triggertable

    By using a temp table:

        select ... into #tx from triggertable join t2 join t3 -- and so on
        select a, b, c from #tx   -- do something
        select d, e, f from #tx   -- do something
        -- and so on: around 6-7 queries in a row in the stored procedure

    By using a view:

        create view viewname as
            select ... from triggertable join t2 join t3 -- and so on
        select a, b, c from viewname   -- do something
        select d, e, f from viewname   -- do something
        -- and so on: around 6-7 queries in a row in the stored procedure

    This view can be used in other places as well, so I would be creating it at the database level rather than inside the stored procedure. By using a direct query (repeating the joins inline each time):

        select a, b, c from (select ... from triggertable join t2 join t3 join ...) x   -- do something
        select d, e, f from (select ... from triggertable join t2 join t3 join ...) x   -- do something
        -- and so on: around 6-7 queries in a row in the stored procedure

    Now, for all the upcoming queries, which would be best to use: a view, a temporary table, or direct queries?

    Read the article

  • C# performance: methods of receiving data from a socket?

    - by Daniel
    Let's assume we have a simple internet socket, and it's going to send 10 megabytes (because I want to ignore memory issues) of random data through. Is there any performance difference, or a best-practice method, that one should use for receiving data? The final output should be represented by a byte[]. Yes, I know writing an arbitrary amount of data to memory is bad, and if I were downloading a large file I wouldn't be doing it like this. But for argument's sake let's ignore that and assume it's a smallish amount of data. I also realise that the bottleneck here is probably not the memory management but rather the socket receiving. I just want to know what would be the most efficient method of receiving data. A few dodgy ways I can think of:

    1. Have a List and a buffer; after the buffer is full, add it to the list, and at the end call list.ToArray() to get the byte[].
    2. Write the buffer to a memory stream; after it's complete, construct a byte[] of stream.Length and read it all back in order to get the byte[] output.

    Is there a more efficient/better way of doing this?
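
    For reference, the memory-stream variant sketched above looks roughly like this (socket setup omitted; the 8 KB buffer size is an arbitrary choice):

        using System.IO;
        using System.Net.Sockets;

        static byte[] ReceiveAll(Socket socket)
        {
            var buffer = new byte[8192];
            using (var ms = new MemoryStream())
            {
                int read;
                // Receive returns 0 when the remote side closes the connection
                while ((read = socket.Receive(buffer)) > 0)
                    ms.Write(buffer, 0, read);
                return ms.ToArray();   // one final copy into the result byte[]
            }
        }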

    Read the article

  • Performance Comparison of Shell Scripts vs high level interpreted langs (C#/Java/etc.)

    - by dferraro
    Hi all, First - this is not meant to be an ignorant 'which is better' war thread... Rather, I generally need help in making an architecture decision/argument to put forward to my boss. Skipping the details - I would simply love to know and find the results of anyone who has done performance comparisons of shell vs. [insert general-purpose programming language (interpreted) here], such as C# or Java... Surprisingly, I have spent some time on Google and searching here without finding any of this data. Has anyone ever done these comparisons, in different use cases: hitting a database in an XYZ # of loops doing different types of SQL queries (Oracle preferred, but MSSQL would do), such as any of the CRUD ops - and also not hitting a database, just a regular 50k-loop type comparison doing different types of calculations, and things of that nature? In particular - for right now - I need a comparison of hitting an Oracle DB from a shell script vs., let's say, C# (again, any interpreted GPPL would be fine, even higher-level ones like Python). But I also need to know about standard programming calculations/instructions/etc... Before you ask 'why not just write a quick test yourself?': the answer is that I've been a Windows developer my whole life/career and have very limited knowledge of shell scripting - not to mention *nix as a whole... So asking the question here of more experienced folks would be greatly beneficial, not to mention time-saving, as we are in a near-perpetual deadline crunch as it is ;). Thanks so much in advance,
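
    For what it's worth, a first-order number on the shell side usually comes from the shell's own time builtin over a loop; a sketch (the credentials and query are placeholders):

        # time 1000 trivial Oracle round-trips from the shell
        time for i in $(seq 1 1000); do
            echo "select 1 from dual;" | sqlplus -s user/pass@db > /dev/null
        done

    A matching C# harness would wrap the equivalent loop in a System.Diagnostics.Stopwatch, so the two numbers measure comparable work.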

    Read the article

  • Does breaking chained Select()s in LINQ to objects hurt performance?

    - by Justin
    Take the following pseudo C# code:

        using System;
        using System.Data;
        using System.Linq;
        using System.Collections.Generic;

        public IEnumerable<IDataRecord> GetRecords(string sql)
        {
            // DB logic goes here
        }

        public IEnumerable<IEmployer> Employers()
        {
            string sql = "select EmployerID from employer";
            var ids = GetRecords(sql).Select(record => (record["EmployerID"] as int?) ?? 0);
            return ids.Select(employerID => new Employer(employerID) as IEmployer);
        }

    Would it be faster if the two Select() calls were combined? Is there an extra iteration in the code above? Is the following code faster?

        public IEnumerable<IEmployer> Employers()
        {
            string sql = "select EmployerID from employer";
            return Query.Records(sql).Select(record => new Employer((record["EmployerID"] as int?) ?? 0) as IEmployer);
        }

    I think the first example is more readable if there is no difference in performance.
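
    A quick way to check the extra-iteration question empirically (a sketch; counters in side-effecting lambdas show how many times each projection runs):

        using System;
        using System.Linq;

        int first = 0, second = 0;
        var result = Enumerable.Range(1, 3)
            .Select(x => { first++; return x * 2; })
            .Select(x => { second++; return x + 1; })
            .ToList();
        // Select is deferred: the chain is evaluated in a single pass, so each
        // lambda runs exactly 3 times and no intermediate list is built.
        Console.WriteLine("first=" + first + ", second=" + second);   // first=3, second=3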

    Read the article

  • C++ map performance - Linux (30 sec) vs Windows (30 mins) !!!

    - by sonofdelphi
    I need to process a list of files. The processing action should not be repeated for the same file. The code I am using for this is:

        using namespace std;

        vector<File*> gInputFileList;            // can contain duplicates; File has member sFilename
        map<string, File*> gProcessedFileList;   // using map to avoid linear search costs

        void processFile(File* pFile)
        {
            File* pProcessedFile = gProcessedFileList[pFile->sFilename];
            if(pProcessedFile != NULL)
                return;                          // already processed
            foo(pFile);                          // foo() is the action to do for each file
            gProcessedFileList[pFile->sFilename] = pFile;
        }

        void main()
        {
            size_t n = gInputFileList.size();
            // Using array syntax (iterator syntax also gives identical performance)
            for(size_t i = 0; i < n; i++) {
                processFile(gInputFileList[i]);
            }
        }

    The code works correctly, but... my problem is that when the input size is 1000, it takes 30 minutes - HALF AN HOUR - on Windows/Visual Studio 2008 Express (both Debug and Release builds). For the same input, it takes only 40 seconds to run on Linux/gcc! What could be the problem? The action foo() takes only a very short time to execute when used separately. Should I be using something like vector::reserve for the map?
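
    One incidental note on the lookup itself (a sketch; it does not by itself explain the Windows/Linux gap, which would need profiling): map::operator[] default-constructs and inserts an entry on every miss, so the lookup above inserts a NULL entry for each new filename before the real insert. find() avoids that:

        void processFile(File* pFile)
        {
            // find() does not insert on a miss, unlike operator[]
            if (gProcessedFileList.find(pFile->sFilename) != gProcessedFileList.end())
                return;   // already processed
            foo(pFile);
            gProcessedFileList[pFile->sFilename] = pFile;
        }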

    Read the article

  • How costly performance-wise are these actions in iPhone objective-C?

    - by Alex Gosselin
    This is really a few questions in one; I'm wondering what the performance cost is for these things, as I haven't really been following any best practice for them. The answers may also be useful to other readers, if somebody knows these.

    (1) If I need the Core Data managed object context, is it bad to use

        #import "myAppDelegate.h"
        // farther down in the code:
        NSManagedObjectContext *context = [(myAppDelegate *)[[UIApplication sharedApplication] delegate] managedObjectContext];

    as opposed to leaving the warning you get if you don't cast the delegate?

    (2) What is the cheapest way to hard-code a string? I have been using return @"myString"; on occasion in functions where I need to pass it to a variety of places. Is it better to do it this way:

        static NSString *str = @"myString";
        return str;

    (3) How costly is it to subclass an object I wrote vs. making a new one, in general?

    (4) When I am using Core Data and navigating through a hierarchy of some sort, is it necessary to turn objects back into faults somehow after I read some info from them, or is this done automatically? Thanks for any help.

    Read the article

  • Why does the order of the loops affect performance when iterating over a 2D array? [closed]

    - by Mark
    Possible Duplicate: Which of these two for loops is more efficient in terms of time and cache performance

    Below are two programs that are almost identical except that I switched the i and j variables around. They both run in different amounts of time. Could someone explain why this happens?

    Version 1:

        #include <stdio.h>
        #include <stdlib.h>

        main () {
            int i, j;
            static int x[4000][4000];
            for (i = 0; i < 4000; i++) {
                for (j = 0; j < 4000; j++) {
                    x[j][i] = i + j;
                }
            }
        }

    Version 2:

        #include <stdio.h>
        #include <stdlib.h>

        main () {
            int i, j;
            static int x[4000][4000];
            for (j = 0; j < 4000; j++) {
                for (i = 0; i < 4000; i++) {
                    x[j][i] = i + j;
                }
            }
        }

    Read the article

  • ASP.NET performance tip: combine multiple script files into one request with ScriptManager

    - by Jalpesh P. Vadgama
    We all need JavaScript for our web applications, and we store our JavaScript code in .js files. If we have more than one .js file, the browser will create a new request for each one, which is a little overhead in terms of performance; in a very big enterprise application this overhead adds up. The ASP.NET ScriptManager provides a feature to combine multiple JavaScript files into one request, but remember that this feature is only available with .NET Framework 3.5 SP1 or higher. Let's take a simple example. I have two JavaScript files, JScript1.js and JScript2.js, each containing a separate function.

        // JScript1.js
        function Task1() {
            alert('task1');
        }

    Here is the other one:

        // JScript2.js
        function Task2() {
            alert('task2');
        }

    Now I add the script references to the ScriptManager and use the functions in my code like this:

        <form id="form1" runat="server">
            <asp:ScriptManager ID="myScriptManager" runat="server">
                <Scripts>
                    <asp:ScriptReference Path="~/JScript1.js" />
                    <asp:ScriptReference Path="~/JScript2.js" />
                </Scripts>
            </asp:ScriptManager>
            <script language="javascript" type="text/javascript">
                Task1();
                Task2();
            </script>
        </form>

    Now let's test in Firefox with the Lori plug-in, which shows how many requests are made for this. You can see 5 requests. Now let's do the same thing with the ScriptManager's combined-script feature, like the following:

        <form id="form1" runat="server">
            <asp:ScriptManager ID="myScriptManager" runat="server">
                <CompositeScript>
                    <Scripts>
                        <asp:ScriptReference Path="~/JScript1.js" />
                        <asp:ScriptReference Path="~/JScript2.js" />
                    </Scripts>
                </CompositeScript>
            </asp:ScriptManager>
            <script language="javascript" type="text/javascript">
                Task1();
                Task2();
            </script>
        </form>

    Now let's run it and see how many requests there are. As you can see, we now have only 4 requests compared to 5 earlier: the ScriptManager combined multiple scripts into one request. So if you have lots of JavaScript files, you can cut load time by combining them into one request. Hope you liked it. Stay tuned for more! Happy programming.

    Technorati Tags: ASP.NET, ScriptManager, Microsoft Ajax

    Read the article

  • New Release of Oracle EPM (Enterprise Performance Management)

    - by Theresa Hickman
    I'm a huge fan of Hyperion products and consider Hyperion to be one of the best acquisitions Oracle has made in terms of applications. So I am really excited to talk about their latest release, Release 11.1.2 of the Oracle EPM System. This is EPM's largest release in 2 years, and it's jam-packed with new modules and features. In terms of brand new products, there are three: 1. Public Sector Planning and Budgeting meets the needs of public sector agencies, higher education, governments, etc. that have complex budget requirements. It supports position or employee-based budgeting and integrates with MS Office and your ERP ledgers to perform commitment control. 2. Hyperion Financial Close Management is a complete financial close solution that orchestrates the entire close process from subledgers and general ledger to financial reporting and disclosure submissions. And of course, it is integrated with GL systems and consolidation systems. I saw a demo of this and it looked pretty slick. They have this unified close calendar that looks like a regular calendar that gives each person participating in the close process a task list. It comes with a Gantt chart that shows the relationships and dependencies among closing tasks. There are dashboards to allow you to track the close progress and completion of tasks as well as perform trend analysis and see how much time is being spent on different activities in the close process. This gives you visibility that you never had before to understand where the bottlenecks are and where improvements could be made. I think what I liked best about this product was that it provides a central place for all participants to communicate their progress. When I worked as an Accountant, we used ad hoc tools, such as spreadsheets, Word documents, emails, and phone calls during the close process. I like the idea of having a central system to track the overall progress as well as automate the entire financial close process. Who knows, maybe Accountants won't have to revolve their lives around the month end close anymore with a tool like this. Those periodic fire drills can become predictable, well managed processes. 3. Disclosure Management is an out-of-the-box, pre-packaged XBRL solution to meet statutory reporting requirements. This product is really going to help companies improve the timeliness of producing financial reports. Reports can be authored using MS Word and Excel and then XBRL instance documents can be produced with its embedded XBRL tags. It even supports footnotes and disclosures of non-financial information. With a product like this, companies no longer have to outsource their XBRL filing; they can bring it back in house to save costs and time. In terms of other enhancements, they have ERP Integrator that provides integration and drill downs from Hyperion products to source systems, such as Oracle E-Business Suite, PeopleSoft, and SAP. No other vendor offers this level of integration. There's also a new product that links Oracle Essbase directly to Hyperion Financial Management for internal financial reporting, and new integrations between Hyperion Financial Management and Oracle's GRC products. They also improved the usability of Oracle Hyperion Planning. They made it much easier for end users to use the system via the web or via MS Excel when submitting plans and budgets. It is also integrated with intelligent approval workflows that are data-driven, user-configurable, and scenario-specific to efficiently streamline the budgeting process. 
Here's the press release from April 7, 2010. Here's the pre-recorded web cast where you can see the demos. Just register and watch the hour long presentation. And finally, here's the newsletter

    Read the article

  • IE8 Developer Tools won't open

    - by Dave
    Just installed IE8 on WinXP. I'm trying to use the Dev Tools, but clicking on the menu item or hitting F12 doesn't do anything. I can see the option in the Tools menu; I just can't use it. I've checked to make sure it wasn't opening minimized or off-screen, and I've checked the registry settings used to disable the tools. Those registry keys don't even exist. Suggestions? I was going to reinstall, but thought I'd check here first.

    Read the article

  • ImportError: No module named _sqlite3

    - by Chris R.
    I'm writing for the Google App Engine and my local tests are getting the following error (the traceback passes through the SDK's import hooks in dev_appserver.py many times; those repeated frames are condensed here):

        Traceback (most recent call last):
          File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 3185, in _HandleRequest
            self._Dispatch(dispatcher, self.rfile, outfile, env_dict)
          ... (dispatch and import-hook frames in dev_appserver.py) ...
          File "C:\Users\Chris Reade\Documents\SI 182\Final\geneticsalesman\Final.py", line 7, in <module>
            from pyevolve import DBAdapters
          File "C:\Users\Chris Reade\Documents\SI 182\Final\geneticsalesman\pyevolve\DBAdapters.py", line 21, in <module>
            import sqlite3
          File "C:\Python26\lib\sqlite3\__init__.py", line 24, in <module>
            from dbapi2 import *
          File "C:\Python26\lib\sqlite3\dbapi2.py", line 27, in <module>
            from _sqlite3 import *
        ImportError: No module named _sqlite3

    My Python directory has a lib folder for sqlite3, but I can't tell why it can't be found. Any help would be greatly appreciated.

    Read the article

  • JQuery tools.scrollable conflict with navbar Ajax in CakePHP

    - by user269098
    In our views/layout/default.ctp file we have an element that implements a cascading combo select box, which uses jQuery Ajax to update the fields. This selector is part of our navbar and is loaded on every page of the site. default.ctp has these includes in the <head>:

        <?php echo $html->charset(); ?>
        <title><?php echo $title_for_layout;
            //if(Configure::read('debug'))
            //echo "cake: " . Configure::version();
        ?></title>
        <?php
            echo $html->css('style');
            echo $javascript->link('jquery', true);
            echo $javascript->link('jquery.form', true);
            echo $javascript->link('jquery.url', true);
            echo $javascript->link('navbar', true);
            echo $html->script('tools.expose-1.0.3.js', array('inline' => false));
            //echo $scripts_for_layout;
        ?>

    On the views/cars/index.ctp page we have:

        echo $javascript->link('tools.tabs-1.0.1', true);
        echo $javascript->link('tools.overlay-1.0.4', true);
        echo $javascript->link('jquery.form', true);
        echo $javascript->link('tools.scrollable-1.0.5', true);

    The issue is that the navbar combo box selector works on every other page, but not on cars/index.ctp. It only works there when we comment out the 'tools.scrollable' include. Any ideas why there is this conflict? We have been stuck on this problem for months...

    Read the article

  • Any significant performance improvement by using bitwise operators instead of plain int sums in C#?

    - by tunnuz
    Hello, I started working with C# a few weeks ago and I'm now in a situation where I need to build up a "bit set" flag to handle different cases in an algorithm. I thus have two options:

        enum RelativePositioning { LEFT = 0, RIGHT = 1, BOTTOM = 2, TOP = 3, FRONT = 4, BACK = 5 }

        pos = ((eye.X < minCorner.X ? 1 : 0) << RelativePositioning.LEFT)
            + ((eye.X > maxCorner.X ? 1 : 0) << RelativePositioning.RIGHT)
            + ((eye.Y < minCorner.Y ? 1 : 0) << RelativePositioning.BOTTOM)
            + ((eye.Y > maxCorner.Y ? 1 : 0) << RelativePositioning.TOP)
            + ((eye.Z < minCorner.Z ? 1 : 0) << RelativePositioning.FRONT)
            + ((eye.Z > maxCorner.Z ? 1 : 0) << RelativePositioning.BACK);

    Or:

        enum RelativePositioning { LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8, FRONT = 16, BACK = 32 }

        if (eye.X < minCorner.X) { pos += RelativePositioning.LEFT; }
        if (eye.X > maxCorner.X) { pos += RelativePositioning.RIGHT; }
        if (eye.Y < minCorner.Y) { pos += RelativePositioning.BOTTOM; }
        if (eye.Y > maxCorner.Y) { pos += RelativePositioning.TOP; }
        if (eye.Z > maxCorner.Z) { pos += RelativePositioning.FRONT; }
        if (eye.Z < minCorner.Z) { pos += RelativePositioning.BACK; }

    I could have used something like ((eye.X > maxCorner.X) << 1), but C# does not allow implicit casting from bool to int, and the ternary operator was similar enough. My question now is: is there any performance improvement in using the first version over the second? Thank you, Tommaso
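
    For what it's worth, the idiomatic C# shape for this kind of bit set (a sketch, independent of which version is faster) is a [Flags] enum combined with bitwise OR, which keeps the value strongly typed and avoids the int arithmetic entirely:

        [Flags]
        enum RelativePositioning
        {
            None = 0, Left = 1, Right = 2, Bottom = 4, Top = 8, Front = 16, Back = 32
        }

        RelativePositioning pos = RelativePositioning.None;
        if (eye.X < minCorner.X) pos |= RelativePositioning.Left;    // set a bit
        if (eye.X > maxCorner.X) pos |= RelativePositioning.Right;
        // ... and likewise for the Y and Z tests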

    Read the article
