Search Results

Search found 34532 results on 1382 pages for 'all different'.


  • Is It Time To Specialize?

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/18/is-it-time-to-specialize.aspx Over my career I have made a living as a generalist.  I have been a jack of all trades and a master of none.  It has served me well in that I am able to move from one technology to another quickly and make myself productive.  Where it becomes a problem is deep knowledge.  I am constantly digging for the things that aren’t basic knowledge.  How do you make a product like WCF or Windows RT do more than just “Hello World”? As an architect I need to be a jack of all trades.  This is what helps me bring the big picture of a project into focus for developers with different skills so they can accomplish the goals of the project.  It is key when the mix of technologies crosses Windows, Unix and Mainframe with different languages and databases.  The larger the company the project is for, the more likely this scenario will arise.  As a consultant and a developer I need specialized skills in order to get the job done efficiently.  If I have a SharePoint or Windows Phone project, knowing the object model details and possible roadblocks of the technology allows me to stay within budget as well as better advise the client on technology decisions.  What is the solution?  Constant learning and associating with developers who specialize in a variety of technologies is the best thing you can do.  You may have thought you were done with classes when you left college, but in this industry you need to constantly be learning new products and languages.  The ultimate answer is that you must generally specialize: learn as many subject areas as possible, but go deep whenever you can.  Sleep is overrated.  Good luck. del.icio.us Tags: software development, software architecture, specialization, generalist

    Read the article

  • SQL SERVER – ORDER BY ColumnName vs ORDER BY ColumnNumber

    - by pinaldave
    I strongly favor ORDER BY ColumnName. I read a blog post in which the blogger compared the performance of the two SELECT statements and came to the conclusion that there is no harm in using ColumnNumber. Let us first look at the claim that there is no performance difference. Run the following two scripts together:
        USE AdventureWorks
        GO
        -- ColumnName (Recommended)
        SELECT * FROM HumanResources.Department
        ORDER BY GroupName, Name
        GO
        -- ColumnNumber (Strongly Not Recommended)
        SELECT * FROM HumanResources.Department
        ORDER BY 3,2
        GO
    If you look at the results and the execution plans you will see that both queries take the same amount of time. However, that was not the point of this blog post, and it is not good enough to stop there. We need to understand the advantages and disadvantages of both methods.
    Case 1: When not using * and the columns are re-ordered
        USE AdventureWorks
        GO
        -- ColumnName (Recommended)
        SELECT GroupName, Name, ModifiedDate, DepartmentID
        FROM HumanResources.Department
        ORDER BY GroupName, Name
        GO
        -- ColumnNumber (Strongly Not Recommended)
        SELECT GroupName, Name, ModifiedDate, DepartmentID
        FROM HumanResources.Department
        ORDER BY 3,2
        GO
    Because the column list has changed, ORDER BY 3,2 now sorts by ModifiedDate and Name instead of GroupName and Name, so the two queries no longer return the same ordering.
    Case 2: When someone changes the schema of the table, affecting column order. I will let you recreate the example for this one. If the schema on your development server differs from the production server and you use ColumnNumber, you will get different results on the production server.
    Summary: When you first write the query it may not be an issue, but as time passes, new columns are added to the SELECT statement or the original table is re-ordered; if you have used ColumnNumber, it is quite possible that your query will start giving you unexpected results and an incorrect ORDER BY. Note that the choice between ORDER BY ColumnName and ORDER BY ColumnNumber should not be made based on performance but on usability and scalability. It is always recommended to write the ORDER BY clause with ColumnName to avoid any confusion. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Moving from Windows to Linux

    - by rincewind
    I need to reconcile these two facts: I don't feel comfortable working on Linux; I need to develop software for Linux. Some background: I have 10+ years of programming experience on Windows (almost exclusively C/C++, but some .NET as well), I was a user of FreeBSD at home for about 3 years or so (then had to go back to Windows), and I've never had much luck with Linux. And now I have to develop software for Linux. I need a plan. On Windows, you can get away with just knowing a programming language, the API you're coding against, your IDE (Visual Studio) and some very basic tools for troubleshooting (Depends, Process Explorer, DebugView, WinDbg). Everything else comes naturally. On Linux, it's a very different story. How the hell would I know what DLL (sorry, shared object) would be loaded if I link to it from a Firefox plugin? What's the Linux equivalent of inserting __asm int 3 / DebugBreak() in the source and running the program, and then letting the OS call a debugger? Why the hell do release builds use something called appLoader, while debug builds work somehow differently? Worst of all: how do I provision a Linux development environment? So, taking into account that hatred is usually associated with not knowing enough, what would you recommend? I'm OK with Emacs and GCC. I need to educate myself as a Linux admin/user, and I need to learn proper troubleshooting tools (strace is cool, btw), equivalents to the ones I mentioned above. Do I need to do Linux From Scratch? Or do I just need to read some books (I've read "The UNIX Programming Environment" by Kernighan and "Advanced Programming..." by Stevens, but I need to learn something more practical)? Or do I need to have some Linux distro on my home computer?

    Read the article

  • WCF Versioning, Naming and Endpoint URL

    - by Vinothkumar VJ
    I have a WCF service and a main class library (Lib1). Say I have a Save Profile service. WCF gets data (with a predefined data contract) from the client, passes it to the main class library, generates a response and sends it back to the client.
    WCF method: SaveProfile(ProfileDTO profile)
    Current version 1.0 – ProfileDTO has the following: UserName, Password, FirstName, DOB (in string yyyy-mm-dd), CreatedDate (in string yyyy-mm-dd)
    Next version (2.0) – ProfileDTO has the following: UserName, Password, FirstName, DOB (in Unix timestamp), CreatedDate (in Unix timestamp)
    Version 3.0 – ProfileDTO has the following (with a change in UserName and Password length validation): UserName, Password, FirstName, DOB (in Unix timestamp), CreatedDate (in Unix timestamp)
    In short, both the data contract and the workflow change between versions. 1. How do I name the methods in the WCF service and the main class library? 2. Is there a specific pattern I should follow for ease of development and maintenance? 3. Do I have to have different endpoints for different versions? In the above example I have a method named “SaveProfile”. Do I have to name the methods like “SaveProfile1.0”, “SaveProfile2.0”, etc.? If that is the case, then when there is no change between version 3.0 and 4.0 maintenance will become difficult. I’m looking for an approach that will ease maintenance.

    Read the article

  • How to set acpi=off for installation?

    - by Johan Helgø
    I'm trying to install Ubuntu 12.04 LTS 64-bit from a USB stick created in Windows using the recommended software from pendrivelinux. The USB stick has been tested on two laptops and works fine. The install does not work on my desktop PC: it either hangs with lots of text on the screen, or it just gives me a screen with garbled colors. Right now it hangs on "[3.925311] Initializing USB Mass Storage driver...", but it hangs in a different spot every time I try. I know that acpi=off will fix this. On older versions there used to be a pink screen with the icon of a keyboard; at that screen I could press ESC, get a long list of languages, choose English, and then get options for setting acpi=off, after which the install went through without any problems. This new install menu does not have those options. The advanced options entry of the (now black and white) "Installer boot menu" is blank; there simply are no advanced options. If I go to help and then press F6, I get a help screen describing the different boot parameters. I see examples of writing "Install acpi=off". However, writing "install acpi=off" gives an error saying "could not find kernel image: install". How can I install Ubuntu 12.04 with acpi=off? (Oh, btw, nothing happens if I press F6 in the "Installer boot menu".)

    Read the article

  • Java issues on OpenVZ Ubuntu 11.04 (.jar/.sh files)

    - by IWillNotChange
    I've had a whole line of messes with Java and .jar files. I've tried both OpenJDK (from the software installer) and about three repositories for Sun.
        /Desktop# java -jar -Xmx1024m ss.jar
        Exception in thread "main" java.awt.HeadlessException
        at java.awt.GraphicsEnvironment.checkHeadless(GraphicsEnvironment.java:173)
        at java.awt.Window.<init>(Window.java:476)
        at java.awt.Frame.<init>(Frame.java:419)
        at java.awt.Frame.<init>(Frame.java:384)
        at javax.swing.JFrame.<init>(JFrame.java:174)
        at org.powerbot.bd.<init>(Unknown Source)
        at org.powerbot.Boot.main(Unknown Source)
    Two separate errors:
        ~/Desktop# ./ss.sh
        [SEVERE] org.server.Boot: Default heap size of 490m too small, restarting with 768m
    and about 30 different crashes where it just "aborts" with a huge file dump. Each time I've tried something a little different, whether it be updating Java or just changing -Xmx1024 to -Xmx1024m to get rid of the heap message. Personally I think it has something to do with OpenVZ, but Google hasn't saved me this time; I need someone who can get to the bottom of my problem. My current install is:
        java -version
        java version "1.6.0_26"
        Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
        Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)
    Running ss.sh gives me (I'd post the entire log but it's long):
        # A fatal error has been detected by the Java Runtime Environment:
        #
        # SIGILL (0x4) at pc=0x00002b14278e6fa0, pid=9301, tid=47365590714112
        #
        # JRE version: 6.0_26-b03
        # Java VM: Java HotSpot(TM) 64-Bit Server VM (20.1-b02 mixed mode linux-amd64 compressed oops)
        # Problematic frame:
        # C [ld-linux-x86-64.so.2+0x14fa0] _dl_make_stack_executable+0x2b50
        #
        # If you would like to submit a bug report, please visit:
        # http://java.sun.com/webapps/bugreport/crash.jsp
        # The crash happened outside the Java Virtual Machine in native code.
        # See problematic frame for where to report the bug.
    I'm willing to let someone who knows what they are talking about view it and try and sort this out. Any help would be appreciated; I've about pulled out all my hair Googling to no avail.
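
    The java.awt.HeadlessException in the first trace is what Swing throws when a window is constructed on a machine with no display, which is exactly the situation inside a bare OpenVZ container. As a hypothetical illustration (none of this is the OP's code), a program can detect that condition before it tries to build a JFrame:
        import java.awt.GraphicsEnvironment;
        import javax.swing.JFrame;

        // Illustrative sketch: constructing any Swing window on a server with no
        // X display (as on a bare OpenVZ container) throws HeadlessException,
        // matching the first stack trace above.
        public class HeadlessCheck {
            public static void main(String[] args) {
                if (GraphicsEnvironment.isHeadless()) {
                    // No display available: bail out (or fall back to a console
                    // mode) instead of letting new JFrame(...) throw.
                    System.out.println("No display found; run under Xvfb/VNC or use a non-GUI mode.");
                    return;
                }
                JFrame frame = new JFrame("GUI is available");
                frame.setSize(320, 240);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
            }
        }
    Running the jar under a virtual framebuffer such as Xvfb (or inside a VNC session) is the usual workaround when the application insists on creating windows.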

    Read the article

  • Oracle Enterprise Manager 12c Configuration Best Practices (Part 3 of 3)

    - by Bethany Lapaglia
    This is part 3 of a three-part blog series that summarizes the most commonly implemented configuration changes to improve performance and operation of a large Enterprise Manager 12c environment. A “large” environment is categorized by the number of agents, targets and users. See the Oracle Enterprise Manager Cloud Control Advanced Installation and Configuration Guide chapter on Sizing for more details on sizing your environment properly. Part 1 of this series covered recommended configuration changes for the OMS and repository. Part 2 covered recommended changes for the WebLogic server. Part 3 covers general configuration recommendations and a few known issues. The entire series can be found in the My Oracle Support note titled Oracle Enterprise Manager 12c Configuration Best Practices [1553342.1].
    Configuration Recommendations
    Configure E-Mail Notifications for EM-related Alerts
    In some environments, the notifications for events for different target types may be sent to different support teams (i.e. notifications on host targets may be sent to a platform support team). However, the EM application administrators should be well informed of any alerts or problems seen on the EM infrastructure components.
    Recommendation: Create a new incident rule for monitoring all EM components and set up the notifications to be sent to the EM administrator(s). The notification methods available can create or update an incident, send an email or forward to an event connector. To set up the incident rule set, follow the steps below. Note that each individual rule in the rule set can have different actions configured.
    1. To create an incident rule for monitoring the EM components, click on Setup / Incidents / Incident Rules. On the All Enterprise Rules page, click on the out-of-box rule called “Incident management Ruleset for all targets”, then click on the Actions drop-down list and select “Create Like Rule Set…”
    2. For the rule set name, enter a name such as MTM Ruleset. Under the Targets tab, select “All targets of types” and select “OMS and Repository” from the drop-down list. This target type contains all of the key EM components (OMS servers, repository, domains, etc.).
    3. Click on the Rules tab. To edit a rule, click on the rule name and click on Edit.
    4. Modify the following rules:
    a. Incident creation rule for metric alerts
    i. Leave the Type set as is but change the Severity to add Warning by clicking on the drop-down list and selecting “Warning”. Click Next.
    ii. Add or modify the actions as required (i.e. add email notifications). Click Continue and then click Next.
    iii. Leave the Name and Description the same and click Next.
    iv. Click Continue on the Review page.
    b. Incident creation rule for target unreachable
    i. Leave the Type set as is but change the Target type to add OMS and Repository by clicking on the drop-down list and selecting “OMS and Repository”. Click Next.
    ii. Add or modify the actions as required (i.e. add email notifications). Click Continue and then click Next.
    iii. Leave the Name and Description the same and click Next.
    iv. Click Continue on the Review page.
    5. Modify the actions for any other rule as required and be sure to click the “Save” push button to save the rule set, or all changes will be lost.
    Configure Out-of-Band Notifications for the EM Agent
    Out-of-band notifications act as a backup when there is a complete EM outage or a repository database issue. This is configured on the agent of the OMS server and can be used to send emails or execute another script that would create a trouble ticket. It will send notifications about the following issues:
    • Repository database down
    • All OMSes are down
    • Repository-side collection job that is broken or has an invalid schedule
    • Notification job that is broken or has an invalid schedule
    Recommendation: To set up out-of-band notifications, refer to the MOS note “How To Setup Out Of Bound Email Notification In 12c” (Doc ID 1472854.1).
    Modify the Performance Test for the EM Console Service
    The EM Console Service has an out-of-box defined performance test that will be run to determine the status of this service. The test issues a request via an HTTP method to a specific URL. By default, the HTTP method used for this test is GET but, for performance reasons, it should be changed to HEAD. The URL used for this request points to a specific OMS server by default. If a multi-OMS system has been implemented and the OMS servers are behind a load balancer, then the URL in this section must be modified to point to the load balancer name instead of a specific server name. If this is not done and a portion of the infrastructure is down, then the EM Console Service will show down as this test will fail.
    Recommendation: Modify the HTTP method for the EM Console Service test, and the URL if required, following the detailed steps below.
    1. Click on Targets / Services. From the list of services, click on the EM Console Service.
    2. On the EM Console Service page, click on the Test Performance tab.
    3. At the bottom of the page, click on the Web Transaction test called EM Console Service Test.
    4. Click on the Service Tests and Beacons breadcrumb near the top of the page.
    5. Under the Service Tests section, make sure the EM Console Service Test is selected and click on the Edit push button.
    6. Under the Transaction section, make sure the Access Logout page transaction is selected and click on the Edit push button.
    7. Under the Request section, change the HTTP Method from the default of GET to the recommended value of HEAD. The URL in this section must be modified to point to the load balancer name instead of a specific server name if multiple OMSes have been implemented.
    Check for Known Issues
    Job Purge Repository Job is Shown as Down
    This issue is caused by upgrading EM from 12c to 12cR2. On the Repository page under Setup > Manage Cloud Control > Repository, the job called “Job Purge” is shown as down and the Next Scheduled Run is blank. Also, repvfy reports that this is a missing DBMS_SCHEDULER job.
    Recommendation: In EM 12cR2, the apply_purge_policies procedures have been moved from the MGMT_JOB_ENGINE package to the EM_JOB_PURGE package. To remove this error, execute the command below:
    $ repvfy verify core -test 2 -fix
    To confirm that the issue is resolved, execute:
    $ repvfy verify core -test 2
    It can also be verified by refreshing the Job Service page in EM and checking the status of the job; it should now be Up.
    Configure the Listener Targets in EM with the Listener Password (where required)
    EM will report this error every time it is encountered in the listener log file. In a RAC environment, typically the grid home and RDBMS homes are owned by different OS users. The listener always runs from the grid home. Only the listener process owner can query or change the listener properties. The listener uses a password to allow other OS users (e.g. the agent user) to query the listener process for parameters. EM has a default listener target metric that will query these properties. If the agent is not permitted to do this, the TNS incident (TNS-1190) will be logged in the listener’s log file. This means that the listener targets in EM also need to have this password set; not doing so will cause many TNS-1190 incidents.
    Recommendation: Set a listener password and include it in the configuration of the listener targets in EM. For steps on setting the listener passwords, see MOS notes 260986.1 and 427422.1.

    Read the article

  • Debugging/Logging Techniques for End Users

    - by James Burgess
    I searched a bit, but didn't find anything particularly pertinent to my problem - so please do excuse me if I missed something! A few months back I inherited the source of a fairly popular indie game project and have been working, along with another developer, on the code base. We recently made our first release since taking over development, but we're a little stuck. A few users are experiencing slowdowns/lagging in the current version, as compared to the previous version, and we are not able to reproduce these issues in any of our various development environments (debug, release, different OSes, different machines, etc.). What I'd like to know is: how can we go about implementing some form of logging/debugging mechanism in the game that users can enable, and whose reports they can send to us for examination? We're not able to distribute debug binaries using the MSVS 2010 runtimes, due to the licensing - and wouldn't want to, for a variety of reasons. We'd really like to get to the bottom of this issue, even if just to find out it's nothing to do with our code base and everything to do with their system configuration. At the moment we just have no leads, and the community isn't a very technically savvy one, so we're unable to rely on 'expert' bug reports or investigations. I've seen debug logging mechanisms used in other applications and games for everything from logging simple errors to crash dumps. We're really at a loss at this stage as to how to address these issues, having been over every commit to the repository from the previous version to the current one without finding any real issues.

    Read the article

  • Issue with multiple student result printing requests

    - by dotman14
    I'm not really sure how to ask this question, but I'll try my best to make it clear enough. I have a Student Result application in which students' results are managed over several academic sessions, each having two semesters. During each semester students take different courses and have results based on the semester. The application is done now and I'm using a PDF library to crop the final result page to hand over to the students each semester. If a student requests a particular semester's result, it's a straightforward issue and there are no complications when it comes to printing out the result. My issue is this: in a case where a student requests a combination of semesters... say 3rd year rain semester, 4th year rain semester and 5th year harmattan semester. How can I handle this? Does the user pick these options at the user interface level, or is there a special way to handle issues like this? Also, if I'm to display these multiple student results, how could this be done, knowing full well that I'll have to print the different results separately? Hopefully I've been able to make my situation clear enough. Thanks for your time and patience. Expecting your comments and answers. Thanks.

    Read the article

  • Architecture design with MyBatis mappers

    - by Wolf
    I am creating a REST web service for providing data. I am using Spring MVC for handling REST requests and MyBatis for data access. The application should be designed so that it is easy to change the data access implementation (for example to Hibernate or something else) and it has to be fast (so I am trying to avoid unnecessary overcomplication of the design). Now my question is about the general design of the layers. I would normally use a DAO interface and then different implementations for different data access strategies, but MyBatis itself uses interfaces to access the data. So I can think of two possible models, but I am not sure which one is better or if there is any other nice way:
    1. The controller layer uses service layer interfaces; services are then implemented for each data access strategy. For example, for MyBatis: the service implementation uses mapper classes to access data, does whatever it needs to do with them and sends them to the controller layer.
    2. The controller layer uses the service layer; the service layer uses DAO interfaces; DAOs are then implemented for each data access strategy. For example, for MyBatis: the DAO class uses the mapper interface to access data and sends the results to the service layer, and the service layer then does whatever it needs to do with them and sends them to the controller layer.
    I prefer the first strategy as it seems less complicated, but then I would have to write all of the service code again for another data access strategy. What do you think? Thank you
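
    For what the second model looks like in code, here is a minimal sketch. The Product names, the SQL and the mapper are illustrative assumptions, not from the question; the point is that the service layer sees only the technology-neutral DAO interface, while the MyBatis mapper interface (whose implementation MyBatis generates at runtime) stays hidden behind one DAO implementation:
        import org.apache.ibatis.annotations.Select;

        // Hypothetical domain class used only for illustration.
        class Product {
            long id;
            String name;
        }

        // MyBatis mapper interface (annotation style); MyBatis provides the implementation.
        interface ProductMapper {
            @Select("SELECT id, name FROM products WHERE id = #{id}")
            Product selectById(long id);
        }

        // Technology-agnostic DAO interface used by the service layer (option 2).
        interface ProductDao {
            Product findById(long id);
        }

        // MyBatis-specific DAO implementation; a Hibernate/JDBC version could
        // replace it without touching the service layer.
        class MyBatisProductDao implements ProductDao {
            private final ProductMapper mapper;

            MyBatisProductDao(ProductMapper mapper) {
                this.mapper = mapper;
            }

            @Override
            public Product findById(long id) {
                return mapper.selectById(id);
            }
        }

        // The service layer depends only on the DAO interface.
        class ProductService {
            private final ProductDao dao;

            ProductService(ProductDao dao) {
                this.dao = dao;
            }

            Product getProduct(long id) {
                return dao.findById(id);
            }
        }
    Swapping MyBatis for another data access strategy then means writing another ProductDao implementation and leaving ProductService untouched.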

    Read the article

  • Should one always know what an API is doing just by looking at the code?

    - by markmnl
    Recently I have been developing my own API, and with that invested interest in API design I have been keenly interested in how I can improve it. One aspect that has come up a couple of times (not from users of my API but in discussion I have observed about the topic) is: one should know just by looking at the code calling the API what it is doing. For example, see this discussion on GitHub for the Discourse repo. It goes something like: foo.update_pinned(true, true); Just by looking at the code (without knowing the parameter names, documentation, etc.) one cannot guess what it is going to do - what does the 2nd argument mean? The suggested improvement is to have something like: foo.pin(), foo.unpin(), foo.pin_globally(). And that clears things up (the 2nd arg was whether to pin foo globally, I am guessing), and I agree that in this case the latter would certainly be an improvement. However, I believe there can be instances where methods that set different but logically related state would be better exposed as one method call rather than separate ones, even though you would not know what it is doing just by looking at the code. (So you would have to resort to looking at the parameter names and documentation to find out - which personally I would always do anyway if I am unfamiliar with an API.) For example, I expose one method SetVisibility(bool, string, bool) on a FalconPeer, and I acknowledge that just looking at the line falconPeer.SetVisibility(true, "aerw3", true); you would have no idea what it is doing. It is setting 3 different values that control the "visibility" of the falconPeer in the logical sense: accept join requests, only with password, and reply to discovery requests. Splitting this out into 3 method calls could lead a user of the API to set one aspect of "visibility" while forgetting to set the others, which I force them to think about by exposing only the one method that sets all aspects of "visibility". Furthermore, when the user wants to change one aspect they almost always want to change another aspect as well, and they can now do so in one call.
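
    One middle ground worth sketching (this is an illustration in Java, not FalconPeer's actual C# API) is to keep the single atomic call but pass a small named settings object, so the call site describes each aspect itself:
        // Hypothetical sketch: one atomic setVisibility call, self-describing call site.
        final class Visibility {
            final boolean acceptJoinRequests;
            final String joinPassword;        // null means no password required
            final boolean replyToDiscovery;

            private Visibility(boolean acceptJoinRequests, String joinPassword, boolean replyToDiscovery) {
                this.acceptJoinRequests = acceptJoinRequests;
                this.joinPassword = joinPassword;
                this.replyToDiscovery = replyToDiscovery;
            }

            static Visibility acceptingJoinRequests(String joinPassword) {
                return new Visibility(true, joinPassword, true);
            }

            Visibility withoutDiscoveryReplies() {
                return new Visibility(acceptJoinRequests, joinPassword, false);
            }
        }

        class FalconPeerExample {
            // The peer still exposes a single method, so all aspects are set together...
            void setVisibility(Visibility visibility) { /* apply settings */ }

            void demo() {
                // ...but the call site now reads like the intent:
                setVisibility(Visibility.acceptingJoinRequests("aerw3").withoutDiscoveryReplies());
            }
        }
    The call stays atomic, so no aspect can be forgotten, yet the line reads closer to foo.pin_globally() than to a row of anonymous booleans.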

    Read the article

  • Developing a Configurable Pricing Program

    - by Ben DeMott
    The organization I work at has some interesting requirements when it comes to pricing for online commerce. Currently the developers write different 'pricing rules' and those rules can be applied to our items based on attributes of the items. For example:
        INPUTS: [cost, sug_retail, discontinued, warehouse_qty, orderable_qty, brand, type, days_available, shipping_rate, weight, map_protected, map_discount]
        MATCH: brand=x, warehouse_qty 1, discontinued=True, map_protected=False
        SET: retail_price = (sug_retail * 0.95), offer_price1 = (cost * 1.25 + shipping_rate)
    I am looking to allow the merchandising team to have more control over the pricing and formulas - they are, after all, technical enough to write Excel formulas. I've been looking at writing a desktop application that uses something like numexpr http://code.google.com/p/numexpr/ or http://sympy.org/en/index.html to allow non-programmers to integrate their own logic into our pricing backend. We have multiple price tiers we have to set, for multiple markets, so an elegant solution is needed. It's getting frustrating for the dev team to continually tweak/manage all of the pricing rules (we sell over 200 brands in 3 markets). My question is: does this seem like a decent approach? Can you think of a better way to parse a string mathematical grammar? Can you think of a different way for users to provide formulas to integrate into an automated pricing system? Does anyone know of any examples of existing applications that do this? Excel and Access are out of the question - the volume of data we manipulate has already proven the need to automate it - now we just need some visibility into that automation.
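
    numexpr and SymPy are Python libraries; as a language-neutral illustration of the same idea - evaluating merchandiser-supplied formula strings against item attributes at runtime - here is a minimal Java sketch using the standard javax.script API. The attribute names mirror the INPUTS above; everything else is an assumption, not part of any existing system:
        import javax.script.ScriptEngine;
        import javax.script.ScriptEngineManager;
        import javax.script.ScriptException;

        public class PricingRuleDemo {
            public static void main(String[] args) throws ScriptException {
                // "JavaScript" resolves to Nashorn on JDK 8-14; on newer JDKs a script
                // engine (e.g. GraalJS) must be on the classpath or this returns null.
                ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");
                if (engine == null) {
                    throw new IllegalStateException("No JavaScript engine available");
                }

                // Item attributes, mirroring the INPUTS listed in the post.
                engine.put("cost", 42.50);
                engine.put("sug_retail", 99.99);
                engine.put("shipping_rate", 7.95);

                // Formula text as the merchandising team might supply it.
                String retailFormula = "sug_retail * 0.95";
                String offerFormula = "cost * 1.25 + shipping_rate";

                System.out.println("retail_price = " + engine.eval(retailFormula));
                System.out.println("offer_price1 = " + engine.eval(offerFormula));
            }
        }
    The real work is then sandboxing and validating what the merchandising team can type, not the evaluation itself.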

    Read the article

  • Entity Framework with large systems - how to divide models?

    - by jkohlhepp
    I'm working with a SQL Server database with 1000+ tables, another few hundred views, and several thousand stored procedures. We are looking to start using Entity Framework for our newer projects, and we are working on our strategy for doing so. The thing I'm hung up on is how best to split the tables into different models (EDMX or DbContext if we go code first). I can think of a few strategies right off the bat:
    Split by schema - We have our tables split across probably a dozen schemas. We could do one model per schema. This isn't perfect, though, because dbo still ends up being very large, with 500+ tables / views. Another problem is that certain units of work will end up having to do transactions that span multiple models, which adds to complexity, although I assume EF makes this fairly straightforward.
    Split by intent - Instead of worrying about schemas, split the models by intent. So we'll have different models for each application, or project, or module, or screen, depending on how granular we want to get. The problem I see with this is that there are certain tables that inevitably have to be used in every case, such as User or AuditHistory. Do we add those to every model (violates DRY I think), or are those in a separate model that is used by every project?
    Don't split at all - one giant model - This is obviously simple from a development perspective but from my research and my intuition this seems like it could perform terribly, both at design time, compile time, and possibly run time.
    What is the best practice for using EF against such a large database? Specifically what strategies do people use in designing models against this volume of DB objects? Are there options that I'm not thinking of that work better than what I have above? Also, is this a problem in other ORMs such as NHibernate? If so have they come up with any better solutions than EF?

    Read the article

  • Fonts look bad in Microsoft Office using Wine

    - by amfcosta
    Office fonts in Wine look very different from what they look like in Windows or LibreOffice. As can be seen from the attached screenshots, they look blurry at some sizes and aliased at other sizes. You can see the differences not only in the document text but also in the ribbon menu. It happens with a lot of fonts. I'm testing it with Office 2010 now, but it also happens in Office 2007. Things I've tried: changing fontsmooth settings with winetricks - made no difference; copying fonts from a Windows system - made no difference; using Ubuntu's fonts (by removing Windows/Fonts from the wineprefix) - removed the blurriness in some fonts but increased aliasing. The three screenshots correspond to different "configurations": office_wine.png - Office Word in Wine using Wine's original fonts; office_nowinefonts.png - Office Word in Wine using Ubuntu's fonts; office_windows.png - Office Word in Windows. PS: please make sure to view the screenshots without scaling them, to notice the problem. EDIT: A screenshot of how Calibri behaves in Wine here.

    Read the article

  • What is the evidence that an API has exceeded its orthogonality in the context of types?

    - by hawkeye
    Wikipedia defines software orthogonality as: orthogonality in a programming language means that a relatively small set of primitive constructs can be combined in a relatively small number of ways to build the control and data structures of the language. The term is most frequently used regarding assembly instruction sets, as orthogonal instruction set. Jason Coffin has defined software orthogonality as: highly cohesive components that are loosely coupled to each other produce an orthogonal system. C.Ross has defined software orthogonality as: the property that means "changing A does not change B". An example of an orthogonal system would be a radio, where changing the station does not change the volume and vice versa. Now there is a hypothesis published in ACM Queue by Tim Bray - which some have called the Bánffy-Bray type system criteria - which he summarises as: static typing's attractiveness is a direct function (and dynamic typing's an inverse function) of API surface size; dynamic typing's attractiveness is a direct function (and static typing's an inverse function) of unit testing workability. Now Stuart Halloway has reformulated Bánffy-Bray as: the more your APIs exceed orthogonality, the better you will like static typing. My question is: What is the evidence that an API has exceeded its orthogonality in the context of types? Clarification: Tim Bray introduces the idea of orthogonality and APIs. Where you have one API and it is mainly dealing with Strings (i.e. a web server serving requests and responses), then a uni-typed language (Python, Ruby) is 'aligned' to that API - because the type system of these languages isn't sophisticated, but it doesn't matter since you're dealing with Strings anyway. He then moves on to Android programming, which has a whole bunch of sensor APIs, which are all 'different' to the web server API that he was working on previously. Because you're not just dealing with Strings, but with different types, the API is non-orthogonal. Tim's point is that there is an empirical relationship between your 'liking' of types and the API you're programming against (i.e. a subjective point is actually objective depending on your context).

    Read the article

  • Alternative to Game State System?

    - by Ricket
    As far as I can tell, most games have some sort of "game state system" which switches between the different game states; these might be things like "Intro", "MainMenu", "CharacterSelect", "Loading", and "Game". On the one hand, it totally makes sense to separate these into a state system. After all, they are disparate and would otherwise need to be in a large switch statement, which is obviously messy; and they certainly are well represented by a state system. But at the same time, I look at the "Game" state and wonder if there's something wrong with this state system approach. Because it's like the elephant in the room; it's HUGE and obvious, but nobody questions the game state system approach. It seems silly to me that "Game" is put on the same level as "Main Menu", yet there isn't a way to break up the "Game" state. Is a game state system the best way to go? Is there some different, better technique for managing, well, the "game state"? Is it okay to have an intro state which draws a movie and listens for enter, then a loading state which loops on the resource manager, and then a game state which does practically everything? Doesn't this seem sort of unbalanced to you, too? Am I missing something?
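
    One common refinement - offered here only as an illustration, with all names assumed - is to turn the flat state machine into a state stack (a pushdown automaton), so that "Game" can push its own sub-states (pause menu, inventory, cut-scene) on top of itself instead of sitting beside "MainMenu" as one monolith:
        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.Iterator;

        // Minimal sketch of a state stack: "Game" can push sub-states on top of
        // itself rather than being a single monolithic state next to "MainMenu".
        interface GameState {
            void update(double dt);
            void render();
        }

        class StateStack {
            private final Deque<GameState> stack = new ArrayDeque<>();

            void push(GameState state) { stack.push(state); }
            void pop() { stack.pop(); }

            void update(double dt) {
                // Only the top state receives updates; lower states are effectively paused.
                GameState top = stack.peek();
                if (top != null) {
                    top.update(dt);
                }
            }

            void render() {
                // Render bottom-to-top so e.g. a pause menu draws over the frozen game.
                Iterator<GameState> it = stack.descendingIterator();
                while (it.hasNext()) {
                    it.next().render();
                }
            }
        }
    "Game" then stops being special: it is just another state that happens to push children, which addresses the imbalance without abandoning the state system.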

    Read the article

  • Defaults for Exporting Data in Oracle SQL Developer

    - by thatjeffsmith
    I was testing a reported bug in SQL Developer today – the bug I was looking for wasn’t there (YES!) but I found a different one (NO!) – and I was getting frustrated by having to check the same boxes over and over again. What I wanted was INSERT statements to the clipboard. Not what I want! I’m always doing the same thing, over and over again. And I never go to FILE – that’s too permanent for my type of work. I either want stuff on the clipboard or in the worksheet. Surely there’s a way to tell SQL Developer how to behave? Oh yeah, check the preferences. You can set the defaults for this dialog. Go to: Tools – Preferences – Database – Utilities – Export. Now I will always start with ‘INSERT’ and ‘Clipboard’ – woohoo! I can also go INTO the preferences for each of the different formats to save myself a few more clicks. I prefer pointy hats (^) for my delimiters, don’t you? So, spend a few minutes and set each of these to what you’re normally doing, and save yourself a bunch of time going forward.

    Read the article

  • Keeping Screen Aspect Ratio While Staying in Center

    - by David Dimalanta
    I saw and tried this suggestion on PISTACHIO BRAINSTORMIN* on how to make a good and adaptive screen ratio. For every different screen size, let's say I put a perfect circle as a Texture in LibGDX and play it on screen. Here's the blueberry image example and it's perfectly rounded: When I played it on the Google Nexus 7, the circle turned into a slightly oblong shape, as if it had been flattened a bit. Please observe this snapshot below and you can see the blueberry is almost, but not quite, perfectly rounded: Now, when I tried the suggested code for the aspect ratio, the perfect circle was retained but another problem occurred. The problem is that I expected the view to be centered, but instead it has been offset to the right, leaving half the screen black. It looks like this: Here is my code using the suggested screen aspect ratio approach:
        Class fields:
        // Ingredients needed for screen aspect ratio
        private static final int VIRTUAL_WIDTH = 720;
        private static final int VIRTUAL_HEIGHT = 1280;
        private static final float ASPECT_RATIO = ((float) VIRTUAL_WIDTH)/((float) VIRTUAL_HEIGHT);
        private Camera Mother_Camera;
        private Rectangle Viewport;
        render():
        // Camera updating...
        Mother_Camera.update();
        Mother_Camera.apply(Gdx.gl10);
        // Resetting viewport...
        Gdx.gl.glViewport((int) Viewport.x, (int) Viewport.y, (int) Viewport.width, (int) Viewport.height);
        // Clear previous frame.
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
        show():
        Mother_Camera = new OrthographicCamera(VIRTUAL_WIDTH, VIRTUAL_HEIGHT);
    Is this code enough to fix the aspect ratio, or is it still statically dependent on the actual device's width and height? *see http://blog.acamara.es/2012/02/05/keep-screen-aspect-ratio-with-different-resolutions-using-libgdx/#comment-317
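
    The off-center view is usually a sign that the Viewport rectangle is never recomputed (and centered) from the actual screen size. Below is a minimal sketch of that calculation, following the approach of the linked post; it is meant to be called from resize(width, height) in the question's own class (VIRTUAL_WIDTH, VIRTUAL_HEIGHT, ASPECT_RATIO and Viewport are the question's fields, everything else is assumed):
        // Sketch: compute a viewport that preserves the virtual aspect ratio and
        // centers it, letterboxing the leftover space in black.
        void recalculateViewport(int screenWidth, int screenHeight) {
            float screenAspect = (float) screenWidth / (float) screenHeight;
            float scale;
            float cropX = 0f;
            float cropY = 0f;

            if (screenAspect > ASPECT_RATIO) {
                // Screen is wider than the virtual resolution: bars left and right.
                scale = (float) screenHeight / (float) VIRTUAL_HEIGHT;
                cropX = (screenWidth - VIRTUAL_WIDTH * scale) / 2f;
            } else {
                // Screen is taller: bars top and bottom.
                scale = (float) screenWidth / (float) VIRTUAL_WIDTH;
                cropY = (screenHeight - VIRTUAL_HEIGHT * scale) / 2f;
            }

            // Centered viewport; using cropX/cropY instead of 0,0 is what keeps the
            // image in the middle rather than pushed to one side.
            Viewport = new Rectangle(cropX, cropY, VIRTUAL_WIDTH * scale, VIRTUAL_HEIGHT * scale);
        }
    Passing the recomputed Viewport into the glViewport call in render() then gives symmetric black bars instead of a half-black screen.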

    Read the article

  • Need to combine a color, mask, and sprite layer in a shader

    - by Donutz
    My task: to display a sprite using different team colors. I have a sprite graphic, part of which has to be displayed in a team color. The color isn't 'flat', i.e. it shades from brighter to darker. I can't "pre-build" the graphics because there are just too many, so I have to generate them at runtime. I've decided to use a shader and supply it with a texture consisting of the team color, a texture consisting of a mask (black = no color, white = full color, gray = progressively dimmed color), and the sprite graphic, with the areas where the team color shows being transparent. So here's my shader code:
        // Effect attempts to merge a color layer, a mask layer, and a sprite layer
        // to produce a complete sprite
        sampler UnitSampler : register(s0); // the unit
        sampler MaskSampler : register(s1); // the mask
        sampler ColorSampler : register(s2); // the color
        float4 main(float4 color : COLOR0, float2 texCoord : TEXCOORD0) : COLOR0
        {
            float4 tex1 = tex2D(ColorSampler, texCoord); // get the color
            float4 tex2 = tex2D(MaskSampler, texCoord); // get the mask
            float4 tex3 = tex2D(UnitSampler, texCoord); // get the unit
            float4 tex4 = tex1 * tex2.r * tex3; // color * mask * unit
            return tex4;
        }
    My problem is the calculations involving tex1 through tex4. I don't really understand how the manipulations work, so I'm just thrashing around, producing lots of different incorrect effects. So given tex1 through tex3, what calculations do I do in order to take the color (tex1), mask it (tex2), and apply the result to the unit where the mask is not zero? And would I be better off making the mask just on/off (white/black) and putting the color shading in the unit graphic?

    Read the article

  • SQL SERVER – Load Generator – Free Tool From CodePlex

    - by pinaldave
    One of the most common questions I receive is whether there is any tool available to generate load on SQL Server. Absolutely – there is a fabulous free tool available on CodePlex to generate load on SQL Server. This tool was released in 2008 but it is still extremely relevant for generating load on SQL Server, and it works fabulously. CodePlex is a project initiated by Microsoft for hosting open source software. The best part of this SQL Server Load Generator is that users can run multiple simultaneous queries against SQL Server using different login accounts and different application names. The interface of the tool is extremely easy to use and very intuitive as well. One of the things I felt needed improvement was the default configuration, as every single time I added a query the default settings showed up and I had to change them manually. However, when I went to Menu >> Tools >> Options I was really happy, as it has options to change every single default that is available. Here one can give a default username, password and database name as well as various settings related to configuration. Additionally, application logging is also possible through the options. A couple of other important points I noticed were the button to reset counters and a status bar containing useful information on Total Threads, Completed Queries and Failed Queries. I use this frequently for my load testing. What tool do you use to generate load on SQL Server? Download SQL Load Generator Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

  • How to evaluate a user against optimal performance?

    - by Alex K
    I have trouble coming up with a system for assigning a rating to a player's performance. Well, technically there is a trivial rating system, but I don't like it because it would mean assigning negative scores, which I think most players will be discouraged by. The problem is that I only know the ideal number of actions to get the desired result. The worst case is an infinite number of actions, so there is no obvious scale. The trivial way I referred to above is to take score = (#optimal-moves - #players-moves), with the ideal score being zero. However, psychologically people like big numbers. No one wants to win by getting a mark of 0. I wonder if there is a system that someone else has come up with before to solve this problem? Essentially I wish to score the players based on how close they've come to the ideal solution. Different challenges will have different optimal numbers of actions, so the scoring system needs to take that into account, e.g. Challenge 1 - max 10 points, Challenge 2 - max 20 points. I don't mind giving the players negative scores if they've performed exceptionally badly; I just don't want all scores to be <= 0.
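
    One scheme that fits these constraints - offered purely as an illustration, not the only option - is to scale each challenge's maximum points by the ratio of optimal to actual moves, so a perfect run earns the full positive maximum and worse runs decay toward zero instead of going negative. A sketch:
        // Ratio-based score: optimal play earns maxPoints; extra moves decay the
        // score toward zero rather than producing negative numbers.
        static int score(int maxPoints, int optimalMoves, int playerMoves) {
            int moves = Math.max(playerMoves, optimalMoves); // never reward beating "optimal"
            return (int) Math.round(maxPoints * (double) optimalMoves / moves);
        }

        // Examples: score(10, 5, 5) == 10 (perfect run), score(10, 5, 10) == 5,
        // score(20, 8, 8) == 20 - each challenge keeps its own maximum.
    Because the worst case only approaches zero, no player ever sees a negative number, while the per-challenge maximum still rewards harder challenges with bigger scores.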

    Read the article

  • Column order can matter

    - by Dave Ballantyne
    Ordinarily, the column order of a SQL statement does not matter.
        Select a,b,c from table
    will produce the same execution plan as
        Select c,b,a from table
    However, sometimes it can make a difference. Consider this statement (maxdop is used to make a simpler plan and has no impact on the main point):
        select SalesOrderID, CustomerID, OrderDate,
        ROW_NUMBER() over (Partition By CustomerId order by OrderDate asc) as RownAsc,
        ROW_NUMBER() over (Partition By CustomerId order by OrderDate Desc) as RownDesc
        from sales.SalesOrderHeader
        order by CustomerID, OrderDate
        option(maxdop 1)
    If you look at the execution plan you will see something similar to this: three sorts. One for RownAsc, one for RownDesc and the final one for the ‘Order by’ clause. Sorting is an expensive operation and one that should be avoided if possible. So with this in mind, it may come as some surprise that the optimizer does not re-order operations to group them together when the incoming data is in a similar (if not exactly the same) sorted sequence. A simple change swapping the RownAsc and RownDesc columns produces this statement:
        select SalesOrderID, CustomerID, OrderDate,
        ROW_NUMBER() over (Partition By CustomerId order by OrderDate Desc) as RownDesc,
        ROW_NUMBER() over (Partition By CustomerId order by OrderDate asc) as RownAsc
        from Sales.SalesOrderHeader
        order by CustomerID, OrderDate
        option(maxdop 1)
    This results in a different and more efficient query plan with one less sort. The optimizer, although unable to automatically re-order operations, HAS taken advantage of the data ordering where it matches what is required. This is well worth taking advantage of if you have different sorting requirements in one statement: try grouping the functions that require the same order together and save yourself a few extra sorts.

    Read the article

  • iPhone/Android app – chatroom development – what framework & hosting needs?

    - by MikaelW
    I have some experience with iPhone and Android development but I am now struggling to solve a new class of problem: apps that involve a client/server chatroom feature. That is, an app where people can exchange text over the internet without the app having to constantly "pull" content from the server. So that problem can't be solved with a normal PHP/MySQL website; there must be some kind of application running on a server that is able to send messages from the server to the phone, rather than having the phone check for new messages every 10 seconds… So I'm looking for ways to solve the different problems here: What framework should I use on the two sides (phone / server)? It should be some kind of library that doesn't prevent me from writing paid apps. It should also be possible to have the same server for the iPhone and Android versions of the app. What server/hosting solution do I need, and with what sort of features? I just have no experience with server applications that can handle and initiate multiple connections and are hosted on hardware that is always online. I tried to find resources online but couldn't so far; either the libraries had the wrong kind of license/language or I just didn't understand… Sometimes there were nice tutorials, but for different needs such as peer-to-peer chat over a local network… Same with the server and the hosting problem: not sure where to start really. I'm calling for help, and I promise I will complete this page with notes about the experience I gain :-) Obviously the ideal would be to find a tutorial I missed that includes client code, server code and a free scalable server… That being said, if I see something that good, it probably means that I have eaten the wrong kind of mushroom again… So, failing that, any pointer which might help me on that quest would be greatly appreciated. Thanks in advance. Mikael
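
    To make the push-versus-poll distinction concrete, here is a deliberately minimal, illustrative Java sketch (plain TCP sockets, no framework, no authentication - none of this is from the question): each client keeps one connection open, and the server broadcasts every incoming line to all connected clients instead of waiting to be asked.
        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.io.PrintWriter;
        import java.net.ServerSocket;
        import java.net.Socket;
        import java.util.Set;
        import java.util.concurrent.CopyOnWriteArraySet;

        // Minimal broadcast chat server: clients hold a persistent connection and
        // the server pushes each message to everyone, so no client ever polls.
        public class ChatServer {
            private static final Set<PrintWriter> clients = new CopyOnWriteArraySet<>();

            public static void main(String[] args) throws Exception {
                try (ServerSocket server = new ServerSocket(5050)) {
                    while (true) {
                        Socket socket = server.accept();
                        new Thread(() -> handle(socket)).start();
                    }
                }
            }

            private static void handle(Socket socket) {
                PrintWriter out = null;
                try (BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
                    out = new PrintWriter(socket.getOutputStream(), true);
                    clients.add(out);
                    String line;
                    while ((line = in.readLine()) != null) {
                        for (PrintWriter client : clients) {
                            client.println(line);   // push to every connected client
                        }
                    }
                } catch (Exception e) {
                    // client disconnected or stream error; fall through to cleanup
                } finally {
                    if (out != null) {
                        clients.remove(out);
                    }
                    try { socket.close(); } catch (Exception ignored) { }
                }
            }
        }
    In practice one would use an existing push-capable framework or protocol rather than raw sockets, but the shape - long-lived connections plus server-initiated writes - is exactly the piece a plain PHP/MySQL page is missing.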

    Read the article

  • Haskell: Best tools to validate textual input?

    - by Ana
    In Haskell, there are a few different options for parsing text. I know of Alex & Happy, Parsec and Attoparsec, and probably some others. I'd like to put together a library where the user can input pieces of a URL (scheme e.g. HTTP, hostname, username, port, path, query, etc.). I'd like to validate the pieces according to the ABNF specified in RFC 3986. In other words, I'd like to put together a set of functions such as:
        validateScheme :: String -> Bool
        validateUsername :: String -> Bool
        validatePassword :: String -> Bool
        validateAuthority :: String -> Bool
        validatePath :: String -> Bool
        validateQuery :: String -> Bool
    What is the most appropriate tool to use to write these functions? Alex's regexp syntax is very concise, but Alex is a tokenizer and doesn't straightforwardly allow you to parse using specific rules, so it's not quite what I'm looking for - though perhaps it can be wrangled into doing this easily. I've written Parsec code that does some of the above, but it looks very different from the original ABNF and is unnecessarily long. So, there must be an easier and/or more appropriate way. Recommendations?

    Read the article

  • Branching strategy for frequent releases

    - by Technext
    We have very frequent releases and we use Git for version control. When I mention frequency, please assume it includes bug fixes and feature releases too. All releases are eventually merged into ‘mainline’. When a release is deployed to production and a bug is identified, people start fixing the bug on the same branch from which the latest release was deployed to production. They do not create a new bug-fix branch for it. I feel that's not the right way to go. There are several components and each component has a different owner, and thus a different perspective. Though I have not initiated talks with them, I am sure there will be a lot of resistance. The main issue they might cite would be: “There’s a lot of work involved in creating and tracking branches, especially when deployments to production are so frequent. This will consume a lot of dev effort.” Do you think that fixing a bug on the same branch from which the release was made is a good idea? If yes, how do you manage it? Using tags? I know that best practices may not always be applicable due to several factors, but still I would like to know what might be a good approach for branching in a scenario where releases/bug-fixes happen almost on a daily basis.

    Read the article
