Search Results

Search found 9034 results on 362 pages for 'plan guide'.


  • DataCash @ Hackathon

    - by John Breakwell
    Originally posted on: http://geekswithblogs.net/Plumbersmate/archive/2013/06/28/datacash--hackathon.aspx

    Back in May, DataCash was a sponsor for one of the biggest networking events for payments developers – Trans-hacktion. The 3-day hackathon, organised by Birdback, was focused on the latest innovations in payments and financial technology and was held at the London Google Campus. The event included demos from DataCash and other payments companies, followed by hacking sessions. Teams had to hack together a product that used partner APIs and present the hack in 3 minutes on the final day. The prizes up for grabs were:

    - KingHacker: 3D Printer & Champagne
    - 1st: Pebble Watch & 1 year of GitHub Silver plan
    - 2nd: AIAIAI Headphones & 1 year of GitHub Bronze plan
    - 3rd: Raspberry Pi & 6 months of GitHub Bronze plan
    - API: Up Bracelet, Nintendo NES + Super Mario Game, AND Berg Cloud Little Printer & $100 AWS credit & more...

    Read the article

  • Problems creating a debdiff

    - by Chris Wilson
    I'm following this guide to create a debdiff for a package I'm patching. Everything goes fine until step 8, when I attempt to create the debdiff after committing the changes. The package in question is Zim, pulled from Launchpad using bzr branch lp:zim, and according to the guide I should execute the following command to create the debdiff:

        debdiff zim_0.49.dsc zim_0.49ubuntu1.dsc > zim_0.49ubuntu1.debdiff

    However, when I actually try to execute this command, I get the following error:

        debdiff: fatal error at line 314: Can't read file: zim_0.49.dsc

    Upon inspecting the directory in which the files created by debuild -S (step 6) are deposited, I find

        zim_0.49ubuntu1_source.changes
        zim_0.49ubuntu1.dsc
        zim_0.49ubuntu1.tar.gz
        zim_0.49ubuntu1_source.build

    but no sign of zim_0.49.dsc. I could probably create one by debuilding the package as soon as I check out the code, before starting work, but that would add an extraneous entry to the changelog. Is there a step missing from the guide that creates zim_0.49.dsc, or is the file itself missing from the source?

    Read the article

  • The 10 Best How-To Geek Guides for Perfect Christmas Photos

    - by Eric Z Goodnight
    Taking a lot of pictures this Christmas? Here's a roundup of some of our favorite How-Tos to help you get the best possible photo prints this year. You might use Photoshop, free software, or even Microsoft Word; How-To Geek has something for every user in this collection of How-Tos to help you get the best prints this holiday season.

    Read the article

  • HTG Explains: Why Does Photo Paper Improve Print Quality?

    - by Eric Z Goodnight
    So you've shelled out the money for a fancy inkjet photo printer, only you're not impressed with the images you're getting out of your standard office paper. Have you ever wondered why photo paper works so much better? Surely, paper is paper, right? What can be so special about it? In this article, we'll explore the differences between regular typing paper and photo paper, why those differences are good for printing, and how to take advantage of them for superior photographic printing.

    Read the article

  • Hyperion Smart View Assistance

    - by p.anda
    (in via Akhter) Seeking more information or assistance with Hyperion Smart View? The Oracle Technology Network (OTN) is a great first place to stop by. The site provides access to the latest installer and general product documentation, as well as whitepapers, "What's New" notes and "Oracle by Example" tutorials:

        OTN: Overview | Downloads | Documentation | Tutorials

    For the latest documentation, including the Readme, New Features, User's Guide, Developer's Guide and Accessibility Guide, visit: Oracle Hyperion Smart View for Office Documentation Release 11.1.2.5

    Several "My Oracle Support" Knowledge Management (KM) articles are available, including:

    - OBIEE 11.1.1.7 - New Features And Recommendations For Working With Microsoft Office (Doc ID 1558070.1)
    - How to Integrate OBIEE with Microsoft Office Using Oracle Hyperion Smart View For Office (Doc ID 1576336.1)
    - How To Create New Report From Scratch Within Excel Using Smart View (Doc ID 1576596.1)

    These, along with additional KM articles, are being indexed in a Master Note. To keep up to date with new articles, bookmark the index page: Master Note For Oracle Hyperion Smart View For Office Issues in OBIEE (Doc ID 1589028.1)

    Read the article

  • Improve Your Photo Prints By Properly Preparing Your Printer

    - by Eric Z Goodnight
    Whether your photo printer is new or has been collecting dust between the holidays, you've likely spent a few frustrating moments setting up the machine. But did you know proper setup can improve the quality of your prints? Spend a few moments looking over the basics, and see why it can be a good idea to keep your drivers updated, learn about some basic printer maintenance, and see some advanced options for setting up great prints.

    Read the article

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time.

    When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would take upwards of 30 minutes on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required.

    As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g:

        | Id  | Operation             | Name                   | Bytes | Cost |
        |   0 | SELECT STATEMENT      |                        |  108K |  939 |
        |   1 | SORT ORDER BY         |                        |  108K |  939 |
        |   2 | NESTED LOOPS OUTER    |                        |  108K |  938 |
        |*  3 | HASH JOIN RIGHT OUTER |                        |  103K |  762 |
        |   4 | VIEW                  | ALL_EXTERNAL_LOCATIONS |  2058 |    3 |
        |* 20 | HASH JOIN RIGHT OUTER |                        | 73472 |  759 |
        |  21 | VIEW                  | ALL_EXTERNAL_TABLES    |  2097 |    3 |
        |* 34 | HASH JOIN RIGHT OUTER |                        | 39920 |  755 |
        |  35 | VIEW                  | ALL_MVIEWS             |    51 |    7 |
        |  58 | NESTED LOOPS OUTER    |                        | 39104 |  748 |
        |  59 | VIEW                  | ALL_TABLES             |  6704 |  668 |
        |  89 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2025 |    5 |
        | 106 | VIEW                  | ALL_PART_TABLES        |   277 |   11 |

    And the same query on 9i:

        | Id  | Operation             | Name                   | Bytes | Cost |
        |   0 | SELECT STATEMENT      |                        |   16P |  55G |
        |   1 | SORT ORDER BY         |                        |   16P |  55G |
        |   2 | NESTED LOOPS OUTER    |                        |   16P | 862M |
        |   3 | NESTED LOOPS OUTER    |                        | 5251G | 992K |
        |   4 | NESTED LOOPS OUTER    |                        | 4243M | 2578 |
        |   5 | NESTED LOOPS OUTER    |                        | 2669K | 1440 |
        |*  6 | HASH JOIN OUTER       |                        |  398K |  302 |
        |   7 | VIEW                  | ALL_TABLES             |  342K |  276 |
        |  29 | VIEW                  | ALL_MVIEWS             |    51 |   20 |
        |* 50 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2043 |      |
        |* 66 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_TABLES    | 1777K |      |
        |* 80 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_LOCATIONS | 1744K |      |
        |* 96 | VIEW                  | ALL_PART_TABLES        |  852K |      |

    Have a look at the cost column. 10g's overall query cost is 939, and 9i's is 55,000,000,000 (or, more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan.
    On 9i, no statistics are present on the system tables, so Oracle has to use the rule-based optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan:

        | Id  | Operation        | Name                   | Bytes | Cost |
        |   0 | SELECT STATEMENT |                        | 7587K | 3704 |
        |   1 | SORT ORDER BY    |                        | 7587K | 3704 |
        |*  2 | HASH JOIN OUTER  |                        | 7587K |  822 |
        |*  3 | HASH JOIN OUTER  |                        | 5262K |  616 |
        |*  4 | HASH JOIN OUTER  |                        | 2980K |  465 |
        |*  5 | HASH JOIN OUTER  |                        |  710K |  432 |
        |*  6 | HASH JOIN OUTER  |                        |  398K |  302 |
        |   7 | VIEW             | ALL_TABLES             |  342K |  276 |
        |  29 | VIEW             | ALL_MVIEWS             |    51 |   20 |
        |  50 | VIEW             | ALL_PART_TABLES        |  852K |  104 |
        |  78 | VIEW             | ALL_TAB_COMMENTS       |  2043 |   14 |
        |  93 | VIEW             | ALL_EXTERNAL_LOCATIONS | 1744K |   31 |
        | 106 | VIEW             | ALL_EXTERNAL_TABLES    | 1777K |   28 |

    That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that it will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input.

    To try and get some ideas, we asked some Oracle performance specialists whether they had any ideas or tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as requiring a lot of new infrastructure and a rewrite of the population code, it would have meant that any users of 9i would have to spend some time optimizing the queries to get the product working on their system before they could use it. Another approach was needed.

    All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner & name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements that could be done for the specific query pattern we were using, we read all the tables individually and do a hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing row data we're not using in the hash key on disk, we use very specific memory-efficient data structures to store all the information we need.
    This allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, with a memory overhead of roughly 150 bytes per row of data in the result set (for a schema with 10,000 tables, that means an extra 1.4MB of memory used during population). Next: fun with the 9i dictionary views.
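
    For illustration, one way to force hash joins on a 9i dictionary query is with optimizer hints. The sketch below assumes a simplified two-view join; it is not the actual SCfO population query:

        SELECT /*+ USE_HASH(t pt) */
               t.owner, t.table_name, pt.partitioning_type
        FROM all_tables t, all_part_tables pt
        -- (+) marks the optional side, i.e. a left outer join preserving ALL_TABLES rows
        WHERE t.owner = pt.owner (+)
          AND t.table_name = pt.table_name (+)
        ORDER BY t.owner, t.table_name;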

    Read the article

  • In what cases should I install (and configure) Postfix as a desktop user?

    - by Gonzalo
    Possible cases:

    1) I plan to do Debian packaging (this case is the motivation, since Postfix gets installed as a dependency of some development packages, which suggests it might be necessary in that situation).
    2) I plan to use Evolution with an Internet provider's mail account.
    3) I plan to use Gmail.

    Surely if I read the Postfix documentation I may find the answer, but it's huge and I couldn't find it. In any case, how (or where) should I find the answer to a question like this by myself? (I really tried.)

    Read the article

  • Speaking at SharePoint Saturday Cincinnati

    - by Enrique Lima
    I will not only be attending SPS Cincinnati, but also speaking. And I am very happy and excited about it! The topic: The difference between learning and training: Creating a SharePoint based Learning Management System. The description: Training and learning have been defined as synonyms by many organizations. The difference is that training has focused on a classic and traditional delivery model. Learning, on the other hand, refers to achieving something on the receiving side of the story, not just delivery. In focusing on driving adoption, it is important to have a strategy where learning is also part of the plan. This session focuses on how to create a SharePoint Learning Plan, and how to deliver that plan through the implementation of a Learning Management System. Come join us! More information about SharePoint Saturday Cincinnati can be found here: http://sharepointsaturday.org/cincinnati/default.aspx

    Read the article

  • P90X or How I Stopped Worrying and Love Exercise

    - by Matt Christian
    Last Wednesday, after many UPS delivery failures, I received P90X in the mail. P90X is a series of DVDs and a nutrition guide you use to shed pounds and gain muscle. Odds are you've seen the infomercial on TV at some point if you watch a little tube now and again. I started last Thursday and am still standing to tell this tale.

    At its core, P90X is a 12-DVD set of exercise videos. Each video is a different workout routine that typically lasts around an hour (some up to 1 1/2 hours). Every day you are supposed to do one of the workouts, which are different every day (sometimes you may repeat a shorter 6-minute workout dedicated to abs twice a week). There are different 'programs' focused on different areas: for weight loss you do the Lean Program, for standard weight loss and muscle gain the Regular Program, and for those hardcore health-nuts, the Insane Program (which consists of two 1-hour-long exercises per day). Each program has a different set of workouts per week which you repeat for 3 weeks, followed by a 'Relaxation Week' which is essentially a slightly different order. After the month of workouts is over, you've finished 1 phase out of 3. P90X takes 90 days, split into 3 phases (1 phase per month). Every phase has a different workout order which is also focused on different areas (weight loss, muscle gain, etc.). With the DVDs you also get a small glossy book of about 100 pages detailing the different workouts and the different programs, as well as a sample workout to see if you're even ready to start P90X.

    The second part of P90X, which can also be considered the 'core' (actually the other half of the core), is the included nutrition guide. The nutrition guide is a book similar to the one that describes the exercises (about 100 glossy pages), though it details foods you should eat, the amounts, and a number of healthy (and tasty!) recipes. The guide is split into 3 phases as well, promoting high protein and low carb/dairy during Phase 1, and levelling off through to Phase 3, where you have a relatively balanced amount of every food group.

    So after 1 week, where am I? I've stuck quite close to the nutrition guide (there isn't 'diet food' in here, people; it's ACTUALLY food) and done my exercise every day. I think a lot of the first week is getting into the whole idea and learning the moves performed on the DVD. Have I lost weight? No. Do I feel some definition already starting to poke out? Absolutely (no pun intended).

    Tony Horton (the 51-year-old hulk who runs the whole thing) is very fun to listen to and work along with, and the 'diet' really isn't too hard to follow unless all you eat is carbs. I've tried the gym thing and could not get motivated enough to continue going. P90X is the first time I've ached from a workout BEFORE starting my next workout. For anyone interested, Google 'P90X' or 'BeachBody' to find out more information about this awesome program!

    Read the article

  • InfoPath 2010 Form Design and Web Part Deployment

    - by JKenderdine
    In January I had the pleasure of speaking at SharePoint Saturday Virginia Beach. I presented a session on InfoPath 2010 form design which included some of the basics of form design, a description of some of the new options with InfoPath 2010 and SharePoint 2010, and other integration possibilities. Included below is the information presented as well as the solution used to create the demo.

    The first thing you need to understand is the difference between an InfoPath list form and a form library form:

    SharePoint List Forms: Store data directly in a SharePoint list. Each control (e.g. text box) in the form is bound to a column in the list. SharePoint list forms are directly connected to the list, which means that you don't have to worry about setting up the publish and submit locations. You also do not have the option for back-end code.

    Form Library Forms: Store data in XML files in a SharePoint form library. This means they are more flexible and you can do more with them. For example, they can be configured to save drafts and submit to different locations. However, they are more complex to work with and require more decisions to be made during configuration. You do have the option of back-end code with these types of forms.

    Next, you need to create your File Architecture Plan:

    - Plan the location for the saved template - both Test and Production (this is pretty much a given, but just in case: always make sure to have a test environment)
    - Plan the location of the published template

    Then you need to document your Form Template Design Plan. Some questions to ask to gather your requirements:

    - What will the form be designed to do?
    - Will it gather user information?
    - Will it display data from a data source?
    - Do we need to show different views to different users? What do we base this on?
    - How will it be implemented for the users? Browser or client based form?
    - Site collection content type - published through Central Admin?
    - Form library - published directly to the form library?

    So what are the requirements for this template? The Business Card Request Form Template Design Plan:

    - Gather user information and requirements for the card
    - Pull in as much user information as possible, using the user profile web services as a data source
    - Show and hide fields as necessary for requirements
    - Create multiple views - one for those submitting the form, and another view for the executive assistants placing the orders
    - Browser based form integrated into a SharePoint team site
    - Published directly to a form library

    The form was published through Central Administration and incorporated into the site as a content type. Utilizing the new InfoPath Web Part, the form is integrated into the page and users can complete the form directly from within that page.

    For now, if you are interested in the final form XSN, contact me using the Contact link above. I will post soon with the details on how the form was created and how it integrated the requirements detailed above.

    Read the article

  • When row estimation goes wrong

    - by Dave Ballantyne
    Whilst working at a client site, I hit upon one of those issues where you are not sure whether you've found something entirely new, a bug, or a gap in your knowledge. The client had a large query that needed optimizing. The query itself looked pretty good: no UDFs, UNION ALL was used rather than UNION, and most of the predicates were sargable, other than one or two minor ones. There were a few extra joins that could be eradicated, and having fixed up the query I then started to dive into the plan.

    I could see all manner of spills in the hash joins and the sort operations; these are caused when SQL Server has not reserved enough memory and has to write to tempdb - a VERY expensive operation that is generally avoidable. These, however, are a symptom of a bad row estimation somewhere else, and when that bad estimation is combined with other estimation errors, chaos can ensue. Working my way back down the plan, I found the cause, and the more I thought about it the more I became convinced that the optimizer could be making a much more intelligent choice.

    First step is to reproduce, and I was able to simplify the query down to a single join between two tables, Product and ProductStatus - from a business point of view, quite fundamental: find the status of particular products to show if 'active', 'inactive' or whatever. The query itself couldn't be any simpler. In the estimated plan (ignore the "!" warning, which is a missing index), Products has 27,984 rows and the join outputs 14,000. The actual plan shows how bad that estimation of 14,000 is: every row in Products has a corresponding row in ProductStatus. This is unsurprising - in fact it is guaranteed, as there is a trusted FK relationship between the two columns. There is no way that the actual output of the join can be different from the input.

    The optimizer is already partly aware of the foreign key metadata, and that can be seen in the simplification stage. If we drop the Description column from the query, the join to ProductStatus is optimized out. It serves no purpose to the query: there is no data required from the table, and the optimizer knows that the FK will guarantee that a matching row will exist, so it has been removed. Surely the same should be applied to the row estimations in the initial example, right? If you think so, please upvote this Connect item.

    So what are our options for fixing this error? Simply changing the join to a left join will cause the optimizer to accept that the rows might not exist, and a subselect would also work. However, this is a client site - I'm not able to change each and every query where this join takes place - but there is a more global switch that will fix this error: trace flag 2301. This is described, perhaps loosely, as "Enable advanced decision support optimizations". We can test it on the original query in isolation by using the "QUERYTRACEON" option, and lo and behold our estimated plan now has the 'correct' estimation.

    Many thanks go to Paul White (b|t) for his help and for keeping me sane through this.
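
    As a sketch of the two per-query fixes described above - the table and column names below are assumptions for illustration, not the client's actual schema:

        -- Rewriting the inner join as a left join lets the optimizer accept
        -- that a matching ProductStatus row might not exist:
        SELECT p.ProductName, ps.Description
        FROM dbo.Product p
        LEFT JOIN dbo.ProductStatus ps
            ON ps.ProductStatusID = p.ProductStatusID;

        -- Or keep the inner join and test trace flag 2301 on this query alone:
        SELECT p.ProductName, ps.Description
        FROM dbo.Product p
        JOIN dbo.ProductStatus ps
            ON ps.ProductStatusID = p.ProductStatusID
        OPTION (QUERYTRACEON 2301);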

    Read the article

  • List of freely available programming books

    - by Karan Bhangui
    I'm trying to amass a list of programming books with open-source licenses, like Creative Commons, GPL, etc. The books can be about a particular programming language or about computers in general. Hoping you guys could help:

    Languages

    BASH
    - Advanced Bash-Scripting Guide (An in-depth exploration of the art of shell scripting)

    C
    - The C Book

    C++
    - Thinking in C++
    - C++ Annotations
    - How to Think Like a Computer Scientist

    C#
    - .NET Book Zero: What the C or C++ Programmer Needs to Know About C# and the .NET Framework
    - Illustrated C# 2008 (Dead Link)
    - Data Structures and Algorithms with Object-Oriented Design Patterns in C#
    - Threading in C#

    Common Lisp
    - Practical Common Lisp
    - On Lisp

    Java
    - Thinking in Java
    - How to Think Like a Computer Scientist
    - Java Thin-Client Programming

    JavaScript
    - Eloquent JavaScript

    Haskell
    - Real World Haskell
    - Learn You a Haskell for Great Good!

    Objective-C
    - The Objective-C Programming Language

    Perl
    - Extreme Perl (license not specified - home page says "freely available")
    - The Mason Book (Open Publication License)
    - Practical mod_perl (Creative Commons Attribution Share-Alike License)
    - Higher-Order Perl
    - Learning Perl the Hard Way

    PHP
    - Practical PHP Programming
    - Zend Framework: Survive the Deep End

    PowerShell
    - Mastering PowerShell

    Prolog
    - Building Expert Systems in Prolog
    - Adventure in Prolog
    - Prolog Programming: A First Course
    - Logic, Programming and Prolog (2ed)
    - Introduction to Prolog for Mathematicians
    - Learn Prolog Now!
    - Natural Language Processing Techniques in Prolog

    Python
    - Dive Into Python
    - Dive Into Python 3
    - How to Think Like a Computer Scientist
    - A Byte of Python
    - Python for Fun
    - Invent Your Own Computer Games With Python

    Ruby
    - Why's (Poignant) Guide to Ruby
    - Programming Ruby - The Pragmatic Programmer's Guide
    - Mr. Neighborly's Humble Little Ruby Book

    SQL
    - Practical PostgreSQL

    x86 assembly
    - Paul Carter's tutorial

    Lua
    - Programming in Lua (for v5 but still largely relevant)

    Algorithms and Data Structures
    - Algorithms
    - Data Structures and Algorithms with Object-Oriented Design Patterns in Java
    - Planning Algorithms

    Frameworks/Projects
    - The Django Book
    - The Pylons Book
    - Introduction to Design Patterns in C++ with Qt 4 (Open Publication License)

    Version control
    - The SVN Book
    - Mercurial: The Definitive Guide
    - Pro Git

    UNIX / Linux
    - The Art of Unix Programming
    - Linux Device Drivers, Third Edition

    Others
    - Structure and Interpretation of Computer Programs
    - The Little Book of Semaphores
    - Mathematical Logic - an Introduction
    - An Introduction to the Theory of Computation
    - Developers Developers Developers Developers
    - Linkers and Loaders
    - Beej's Guide to Network Programming
    - Maven: The Definitive Guide

    I will expand on this list as I get comments or when I think of more :D

    Related:
    - Programming texts and reference material for my Kindle
    - What are some good free programming books?
    - Can anyone recommend a free software engineering book?

    Edit: Oh, I didn't notice the community wiki feature. Feel free to edit your suggestions right in!

    Read the article

  • QPainter paints garbage

    - by DSblizzard
    Fragment of program code:

        def add_link(Item0Num, Item1Num):
            global Mw, View  # Mw - MainWindow
            if Item0Num != Item1Num and not link_exists(Item0Num, Item1Num):
                append( links_to(Item1Num), Item0Num )
                append( links_from(Item0Num), Item1Num )
                LinkGi = TLinkGi()
                Mw.Scene.addItem(LinkGi)
                LinkGi.setZValue(200)
                LinkGi.scale(1 / View.Scale, 1 / View.Scale)
                LinkGi.Item0Num = Item0Num
                LinkGi.Item1Num = Item1Num

        class TLinkGi(QGraphicsItem):
            def paint(self, Painter, StyleOptionGraphicsItem, Widget):
                global Mw, View
                Pen = QPen(Qt.black, 1)
                Painter.setPen(Pen)
                X0, Y0 = task_center(self.Item0Num)
                self.setPos(X0, Y0)
                X1, Y1 = task_center(self.Item1Num)
                X, Y = int( (X1 - X0) * View.Scale ), int( (Y1 - Y0) * View.Scale )
                Painter.drawLine(0, 0, X, Y)
                #Mw.Scene.update(0, 0, Plan.Size, Plan.Size)  # (1)
                #Mw.gvMain.repaint()  # (2)

            def boundingRect(self):
                global View
                Rect = QRectF(0, 0, Plan.Size, Plan.Size)
                return Rect

    This paints garbage like this: http://img697.imageshack.us/content_round.php?page=done&l=img697/5395/qpaintergarbage1.jpg

    When lines (1) and (2) are uncommented, things don't get much better: http://img63.imageshack.us/content_round.php?page=done&l=img63/9693/qpaintergarbage0.jpg

    Please help me solve this problem.

    Read the article

  • Which non-clustered index should I use?

    - by Junior Mayhé
    Here I am studying nonclustered indexes in SQL Server Management Studio. I've created a table with more than 1 million records. This table has a primary key:

        CREATE TABLE [dbo].[Customers](
            [CustomerId] [int] IDENTITY(1,1) NOT NULL,
            [CustomerName] [varchar](100) NOT NULL,
            [Deleted] [bit] NOT NULL,
            [Active] [bit] NOT NULL,
            CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED
            (
                [CustomerId] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    This is the query I'll be using to see what the execution plan shows:

        SELECT CustomerName FROM Customers

    Executing this command with no additional non-clustered index, the execution plan shows me:

        I/O cost = 3.45646
        Operator cost = 4.57715

    Now I'm trying to see if it's possible to improve performance, so I've created a non-clustered index for this table.

    1) First non-clustered index:

        CREATE NONCLUSTERED INDEX [IX_CustomerID_CustomerName] ON [dbo].[Customers]
        (
            [CustomerId] ASC,
            [CustomerName] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
                IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    Executing the SELECT against the Customers table again, the execution plan shows me:

        I/O cost = 2.79942
        Operator cost = 3.92001

    That seems better. Now I've deleted the non-clustered index I just created, in order to create a new one.

    2) Second non-clustered index:

        CREATE NONCLUSTERED INDEX [IX_CustomerIDIncludeCustomerName] ON [dbo].[Customers]
        (
            [CustomerId] ASC
        )
        INCLUDE ( [CustomerName] )
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
              IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    With this new non-clustered index, I've executed the SELECT statement again and the execution plan shows me the same result:

        I/O cost = 2.79942
        Operator cost = 3.92001

    So, which non-clustered index should I use? Why are the I/O and Operator costs the same in both execution plans? Am I doing something wrong, or is this expected? Thank you.
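
    To see what each candidate actually does beyond the plan's estimated costs, one option is to measure the page reads for the scan - a small sketch (run it once with each index in place):

        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        SELECT CustomerName FROM dbo.Customers;
        -- The 'logical reads' figure in the Messages tab shows how many
        -- pages each index actually touches to satisfy this scan.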

    Read the article

  • Subsonic linq using activerecord very slow compared to simplerepository

    - by skiik
    Does anyone know why LINQ queries are about 6 times slower when querying using ActiveRecord vs. SimpleRepository? The code below runs 6 times slower than when I query the data using a SimpleRepository. This code is executed 1000 times in a loop. Thanks in advance.

        string ret = "";
        // if (plan == null)
        {
            plan = VOUCHER_PLAN.SingleOrDefault(x => x.TENDER_TYPE == tenderType);
        }
        if (plan == null)
            throw new InvalidOperationException("voucher type does not exist." + tenderType);

        seq = plan.VOUCHER_SEQUENCES.First();
        int i = seq.CURRENT_NUMBER;
        seq.CURRENT_NUMBER += seq.STEP;
        seq.Save();

    Read the article

  • Devise routes /:param not working

    - by Jacob Schatz
    Using Devise 2.1.0, I am trying to send the new registration page a PricingPlan model. So in my routes I have:

        devise_scope :user do
          delete "/logout" => "devise/sessions#destroy"
          get "/login" => "devise/sessions#new"
          get "/signup/:plan" => "devise/registrations#new"
        end

    And I override the Devise registration controller, with this in my routes.rb to make it work:

        devise_for :users, :controllers => { :registrations => "registrations" }

    In my actual RegistrationsController, which overrides Devise's controller, I have:

        class RegistrationsController < Devise::RegistrationsController
          view_paths = "app/views/devise"

          def new
            super
            @plan = PricingPlan.find_by_name(params[:plan])
          end
        end

    so that the default views still go to Devise. In my new view for the registrations controller I call this:

        <h3>You've chosen the <%= @plan.name %> plan.</h3>

    And I get this error:

        undefined method `name' for nil:NilClass

    Also, in my PricingPlan model:

        class PricingPlan < ActiveRecord::Base
          has_many :users

    And in my User model:

        class User < ActiveRecord::Base
          belongs_to :pricing_plan

    I'm rather new at Rails.

    Read the article

  • Conditional Join - join 1 table 2 ways

    - by Jon H
    I have a set of (not very well normalised or relational) tables named PLAN, GROUP, PRODUCT and CLIENT. Most have linkage, e.g. PLAN to CLIENT on CLNO, and GROUP to PRODUCT on PRODCD. However, the linkage between PLAN and GROUP is tricky. A plan has 2 fields of interest: GRPNO and PRODCD. What I want to do is: if GRPNO != 0, then join GROUP on GRPNO; however, if GRPNO = 0, then I want to join GROUP on PRODCD. The frustrating thing is that the fields I want to return in my queries are the same across the board - I just need to be able to vary the join, or join the same table twice. The best I can come up with is 2 queries merged using datasets, or possibly a union. Is there a nifty way to do this in one select? I should point out I am accessing FoxPro over ODBC to do this. Thank you!
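
    One possible shape for that single select is sketched below: join GROUP twice under mutually exclusive conditions and coalesce the results. The GRPNAME column is an assumption for illustration, and whether the FoxPro ODBC driver accepts compound join conditions (and COALESCE rather than its NVL()) would need testing:

        SELECT p.CLNO,
               COALESCE(g1.GRPNAME, g2.GRPNAME) AS GRPNAME
        FROM PLAN p
        LEFT JOIN GROUP g1
            ON p.GRPNO <> 0 AND g1.GRPNO = p.GRPNO
        LEFT JOIN GROUP g2
            ON p.GRPNO = 0 AND g2.PRODCD = p.PRODCD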

    Read the article

  • Facebook API: How can the PHP code know whether the user is a fan of the application or not?

    - by Unreality
    Facebook API: how can the PHP code know whether the user is a fan of the application or not?

    Plan 1: failed. It is rendered at the HTML level only, so it won't let the PHP level know whether the user is a fan or not.
    Plan 2: failed. It has the same problem.
    Plan 3: failed. The FQL permission table simply lacks an is_fan field: http://wiki.developers.facebook.com/index.php/Permissions_%28FQL%29
    Plan 4: server lag. Calling the RESTful API http://wiki.developers.facebook.com/index.php/Pages.isFan will bring a lot of lag to the server... and I wonder if it works on an application page too.

    Any suggestion to solve this problem?

    Read the article

  • Can you recommend me a good Magento book?

    - by bgy
    I'm almost ready to use Magento (which is built upon the Zend Framework, which I know) and I'm looking for a good book covering setup, configuration, best practices, creating templates, development, etc. Do you have any to recommend? I found some which look interesting:

    - The Definitive Guide to Magento
    - Pro Magento Developer's Guide
    - php|architect's Guide to E-Commerce Programming with Magento

    Any feedback on those?

    Read the article

  • iPhone - UIView Animation not looping correctly

    - by Robert
    Hey all, got a little problem here and can't figure out what I am doing wrong. I am trying to animate a UIView up and down repeatedly. When I start it, it goes down correctly and then back up, but then immediately shoots to the "final" position of the animation. Code is as follows:

        UIImageView *guide = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"image.png"]];
        guide.frame = CGRectMake(250, 80, 30, 30);
        [self.view addSubview:guide];

        [UIView beginAnimations:nil context:nil];
        [UIView setAnimationCurve:UIViewAnimationCurveEaseInOut];
        [UIView setAnimationDuration:2];
        [UIView setAnimationRepeatAutoreverses:YES];
        [UIView setAnimationBeginsFromCurrentState:YES];
        guide.frame = CGRectMake(250, 300, 30, 30);
        [UIView setAnimationRepeatCount:10];
        [UIView commitAnimations];

    Thanks in advance!

    Read the article

  • How should I secure my webapp written using Wicket, Spring, and JPA?

    - by Martin
    So, I have a web-based application that uses the Wicket 1.4 framework, Spring beans, the Java Persistence API (JPA), and the OpenSessionInView pattern. I'm hoping to find a security model that is declarative, but doesn't require gobs of XML configuration - I'd prefer annotations. Here are the options so far:

    - Spring Security (guide): looks complete, but every guide I find that combines it with Wicket still calls it Acegi Security, which makes me think it must be old.
    - Wicket-Auth-Roles (guide 1 and guide 2): most guides recommend mixing this with Spring Security, and I love the declarative style of @Authorize("ROLE1","ROLE2",etc). I'm concerned about having to extend AuthenticatedWebApplication, since I'm already extending org.apache.wicket.protocol.http.WebApplication, and Spring is already proxying that behind org.apache.wicket.spring.SpringWebApplicationFactory.
    - SWARM / WASP (guide): this looks the newest (though the main contributor passed away years ago), but I hate all of the JAAS-styled text files that declare permissions for principals. I also don't like the idea of making an Action class for every single thing a user might want to do. Secure models also aren't immediately obvious to me. Plus, there isn't an authentication example.

    Additionally, it looks like lots of folks recommend mixing the first and second options. I can't tell what the best practice is at all, though.

    Read the article

  • How to expose a function that takes two input files as a REST resource?

    - by dafmetal
    I need to expose a function, let's say compute, that takes two input files: a plan file and a system file. The compute function uses the system file to see whether the plan in the plan file can be executed or not. It produces an output file containing the result of this check, including recommendations for the plan. I need to expose this functionality in a REST architecture, and I have no influence over the compute function itself (it is being developed by another organization); I can only control the interface through which it is accessed. What would be a recommended way to expose this functionality in a REST architecture?

    Read the article
