Search Results

Search found 28279 results on 1132 pages for 'syntax case'.

Page 26/1132 | < Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >

  • Using WebStorm for Razor Syntax MVC

    - by Jay Stevens
    I am building a lot of client-side heavy, SPA-like apps with VS2010 and MVC3/4. VS2010 JavaScript/HTML/CSS editing (mostly JavaScript) is interminably slow and sluggish. I'd love to use something like JetBrains' WebStorm to edit my .CSHTML files (with embedded JavaScript, etc., because I am using Razor to pop in URL names and the like). WebStorm seems to have all of the things I want: better language recognition ("intellisense"), the ability to integrate additional outside libraries (I'm using Kendo), and so on. Is this possible? How do you get WebStorm to recognize the Razor language inserts invoked with @? Any help or suggestions would be appreciated.

    Read the article

  • Should I use title case in URLs?

    - by Amadiere
    We are currently deciding on a consistent naming convention across a site with multiple web applications. Historically, I've been an advocate of the 'lowercase all the letters!' approach when creating URLs: http://example.com/mysystem/account/view/1551 However, within the last year or two, specifically since I began using ASP.NET MVC and had more dealings with REST-based URLs, I've become a fan of capitalizing the first letter of each section/word within the URL because it makes the URL easier to read (imho): http://example.com/MySystem/Account/View/1551 We're not in a situation where people need to read or understand the URLs, so that's not a driver per se. The main thing we are after is a consistent approach that is rational and makes sense. Are there any standards that declare one way or the other good practice, or issues that we may run into on (at least reasonably modern) setups that would favor one over the other? What is the current consensus in this debate?

    Read the article

  • Business Strategy - Google Case Study

    Business strategy, as defined by SMBTN.com, is a term used in business planning that implies a careful selection and application of resources to obtain a competitive advantage in anticipation of future events or trends. In more general terms, business strategy is positioning a company so that it has the greatest competitive advantage over others in the markets and industries in which it participates. This process involves making corporate decisions regarding which markets to provide goods and services in, pricing, acceptable quality levels, and how to interact with others in the marketplace. The primary objective of business strategy is to create and increase value for all of a company's shareholders and stakeholders through the creation of customer value. According to InformationWeek.com, Google has a distinctive technology advantage over competitors such as Microsoft, eBay, Amazon, and Yahoo. Google utilizes custom high-performance systems which are cost efficient because they can scale to extreme workloads. This hardware gives it a huge cost advantage over its competitors. In addition, InformationWeek.com interviewed Stephen Arnold, who stated that Google's programmers are 50%-100% more productive compared to programmers working for their competitors. He based this theory on Google's competitors having to spend up to four times as much just to keep up. In addition to Google's technological advantage, they have also developed a decentralized management scheme in which employees report directly to multiple managers and team project leaders. This allows the responsibility of the technology department to be shared amongst multiple senior-level engineers and removes the need for a single department head to oversee the activities of the department. This is a departure from the standard management style, in which a department head like a CIO or CTO would oversee the department's global initiatives and business functionality; direction would then be passed down and administered through middle management and implemented by programmers, business analysts, network administrators and database administrators. It goes without saying that an IT professional's responsibilities would be shaped by Google's technological advantage and management strategy, simply because they work within the department, would have to design, develop, and support the high-performance systems, and would have to report to multiple managers and project leaders on a regular basis. Since Google was established on and driven by new and emerging technology, all other departments are directly impacted by the technology department. In fact, they have to cater to the technology department since it is a huge driving force in Google's success. Reference: http://www.smbtn.com/smallbusinessdictionary/#b http://www.informationweek.com/news/software/linux/showArticle.jhtml?articleID=192300292&pgno=1&queryText=&isPrev=

    Read the article

  • tail stops displaying in case of a log rotation

    - by Rudy Vissers
    I have to tail the log of a server (ServiceMix) and log rotation is enabled. As soon as the rotation happens, tail stops displaying anything new. I did some investigation and it is a bug in Debian: Debian Bug Report. The bug has been around for a long time. Does anyone know if this bug is going to be fixed in Ubuntu? I'm on Ubuntu 12.04 64-bit. I don't have to mention that this bug is total hell! Every time I hit the problem, I have to interrupt the tail command and re-execute it!
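
    Some background on the symptom: plain tail -f keeps reading from the file descriptor it opened, so once logrotate moves the file aside the descriptor still points at the rotated copy and nothing new appears; GNU tail's -F (shorthand for --follow=name --retry) re-opens the file by name instead. The following is a minimal Python sketch of that follow-by-name behaviour, assuming a Linux filesystem; the log path in the usage comment is only a placeholder:

        import os
        import time

        def follow(path, poll_interval=0.5):
            """Yield lines appended to `path`, re-opening the file when log
            rotation makes the name point at a new inode (what GNU tail does
            with --follow=name)."""
            f = open(path, "r")
            f.seek(0, os.SEEK_END)
            while True:
                line = f.readline()
                if line:
                    yield line
                    continue
                try:
                    rotated = os.stat(path).st_ino != os.fstat(f.fileno()).st_ino
                except FileNotFoundError:
                    rotated = False      # old file gone, new one not created yet
                if rotated:
                    f.close()
                    f = open(path, "r")  # rotation happened: start on the new file
                else:
                    time.sleep(poll_interval)

        # hypothetical usage -- the actual path depends on the ServiceMix install:
        # for line in follow("/var/log/servicemix/servicemix.log"):
        #     print(line, end="")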

    Read the article

  • How can I solve this SAT edge case?

    - by ssb
    I have an SAT implementation that basically works, and the fact that it works is what's giving me a few headaches. Basically there are some situations where using the SAT doesn't quite give me my intended result. One of these involves movement across multiple collision objects. Or to put it another way, if I have several collision boxes lined up next to each other such as to create something like a wall or a floor, movement along that surface while constantly applying force into that surface sometimes causes hangups, i.e. the player stops moving. This illustration shows what I mean: The 2 boxes on the bottom represent a floor, and the box on top/in the middle represents what my player is doing. There are several squares lined up as world obstacles to create some kind of wall, and if I move to the left across this surface while holding the down key then the issue arises. It only happens at the exact dividing point between two blocks. It only happens when moving to the left. At any rate I think I know why it happens, but I don't know how to solve it. Basically when I update my player movement I consider which directions are pressed, naturally, so if down is pressed I will add the speed to the Y component, and so on. But due to the way my SAT is implemented, when the penetration into the shape is the same from both sides it just goes with the smallest axis that it finds first, and it checks collisions against objects in the order that they were created because it goes through a foreach loop on the list of collidable objects. So this all adds up to the effect of if I'm moving to the left over a series of boxes while holding down, it will resolve me back to the right out of the first box and then up out of the box to the right of it, and this continues as long as the penetration is the same. The odd part is that this doesn't happen every time, which I am going to attribute to some oddity regarding multiplying velocity by the game time and causing some minor discrepancies between the lengths. Ultimately what this boils down to is that it will keep resolving me to the right and up, but this is technically expected behavior. All the solutions I can think of only address the symptoms of this problem and not the actual cause, such as not using many blocks to create walls or shapes, which is an option I'd like to keep open. I could also change which axis my algorithm defaults to, but that would just cause problems when going up/down along the walls. What can I do to fix this?
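
    To make the tie-break concrete, here is a minimal, illustrative Python sketch of axis-aligned minimum-translation resolution. It is not the asker's engine code; the x/y/w/h box representation is assumed purely for the example:

        def mtv(a, b):
            """Minimum translation vector that pushes box `a` out of box `b`.
            Boxes are dicts with top-left x, y and size w, h (an assumption
            made for this sketch)."""
            dx = (a["x"] + a["w"] / 2) - (b["x"] + b["w"] / 2)
            dy = (a["y"] + a["h"] / 2) - (b["y"] + b["h"] / 2)
            px = (a["w"] + b["w"]) / 2 - abs(dx)   # penetration depth along x
            py = (a["h"] + b["h"]) / 2 - abs(dy)   # penetration depth along y
            if px <= 0 or py <= 0:
                return None                        # no overlap
            if px < py:                            # a tie (px == py) falls through to y
                return (px if dx > 0 else -px, 0.0)
            return (0.0, py if dy > 0 else -py)

        def resolve(player, obstacles):
            # Obstacles are visited in creation order, as in the question, so at the
            # seam between two floor boxes a tie can push the player sideways out of
            # the first box and then upward out of its neighbour on the same frame.
            for box in obstacles:
                push = mtv(player, box)
                if push:
                    player["x"] += push[0]
                    player["y"] += push[1]

    When the player straddles the seam while being pressed into the floor, px and py come out (nearly) equal against both boxes, so whichever axis, and whichever box, wins the arbitrary tie decides the push direction, which is the alternating right-then-up resolution described above.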

    Read the article

  • The Case for Complimentary Software Copies

    - by GGBlogger
    As the Geriatric Geek you can understand that I've been writing and studying for over 60 years. That means that I've seen insane changes in the computer software industry. I've made the joke that I get a new college education every 6 months or so. Of course that's an exaggeration, but it doesn't make the feeling go away. I have a long-standing and strong relationship with Microsoft, so I'm armed with virtually every tool they make. It also means that I have access to tons of training material. But here's the rub… Last year I started a definitive read of Professional Visual Basic 2008. The purpose was to fill in holes in my understanding of various things. I'm currently on page 1119 of some 1400 pages. During this sojourn I've decided that the future is web related, which is to say that the future of "thick client" applications running as Windows applications is likely to slowly disappear. To that end I've taken a side trip or two into the world of ASP (including XML), Silverlight and cloud development. After carefully avoiding (that's tongue in cheek) XML for years, I finally had to bite the bullet, so to speak, and start learning XML in earnest. The most recent result of that was trial downloads of Altova's MissionKit 2010 for Software Architects and Liquid Technologies' Liquid XML Studio Developer Edition. These are both beautiful products and I want to learn them and write about them. Now comes the rub… While 30-day evaluations are generous in allowing casual users to assess these technologies for purchase, they are NOT long enough to allow an author to evaluate, learn and ultimately write about them. Even if I devoted the full 30 days to learning, using and writing about, say, Altova's suite, I wouldn't have enough time. Liquid XML may be a little easier to learn (one product as opposed to 8). Add to that the fact that I frequently get sidetracked to add to my kit and the time really blows out. It can be extremely frustrating when I've devoted hours to a project and suddenly discover that to complete it I will either need to purchase a license or abandon the project. Since my lifeblood does not depend on the product, I end up abandoning the project and moving on. So to the folks from whom I request complimentary copies: I guarantee that if I convert your product to doing paid development work I will purchase a license, but as long as I am using your product to study for the purpose of writing samples, teaching its use or otherwise promoting your product to other paying customers, I ask that you give me a license so that I can do that without facing the dreaded expiration of a 30-day trial.

    Read the article

  • Should programming languages be strict or loose?

    - by Ralph
    In Python and JavaScript, semi-colons are optional. In PHP, quotes around array keys are optional ($_GET[key] vs $_GET['key']), although if you omit them it will first look for a constant by that name. It also allows two different styles for blocks (colon- or brace-delimited). I'm creating a programming language now, and I'm trying to decide how strict I should make it. There are a lot of cases where the extra characters aren't really necessary and the code can be interpreted unambiguously thanks to precedence rules, but I'm wondering whether I should still enforce them to encourage good programming habits. What do you think?

    Read the article

  • What is a good use case for scala?

    - by Usman Ismail
    In a current project we have set up the build so that we can mix Java and Scala. I would like to use more Scala in our code base to make the code more readable and concise, and in the process learn the language by delivering real features. So I plan to use Scala for some classes to showcase its benefits and convince other devs to look into using Scala too. For a REST-based web server, or a program in general, what kinds of code structures lend themselves to Scala's functional programming style?

    Read the article

  • Sprint Says Business Case for 4G Is Growing

    Sprint says its 4G service is improving apps that ran adequately at 3G speeds while opening up previously unattainable possibilities for businesses and organizations as diverse as a Chicago food bank and the Portland, Oregon Police Bureau.

    Read the article

  • A Definite Case of Mobile Phone Addiction [Comic]

    - by Asian Angel
    Perhaps it is time to set the phone down and look up toward the sun once again… Note: You can view the full-size version of the comic by visiting the link below. Catch up – Sean McLean (Underwhelmed Comic Blog) [via Neatorama]

    Read the article

  • Eliminating Downtime During Database Upgrades: A Customer Case Study

    - by irem.radzik(at)oracle.com
    Planned outages, such as database, OS, and hardware upgrades and migrations, are a fact of life. Even though they are "planned" and many of them are performed during "off business hours", they can still interrupt operations, especially for global operations and online businesses. For this reason many IT organizations postpone these critical infrastructure improvement projects, which in turn delays advancing business operations. This week, on Thursday, January 13th, we will host a free webcast on this topic, featuring Oracle GoldenGate customer Atmos Energy. Atmos Energy implemented Oracle GoldenGate to eliminate downtime during their database upgrade from Oracle Database 8.1.7 to Oracle Database 11.1.0.7. Jos Francis, Lead DBA for Atmos, and Ronald Nedd, Sr. DBA for Atmos, will be presenting their database upgrade project and their solution architecture. Join us at this live webcast and hear from our customer and product management how to eliminate planned outages with Oracle GoldenGate's real-time, heterogeneous data replication capabilities.

    Read the article

  • n00b needs some PHP syntax guidance [closed]

    - by Michael
    If you look at http://www.cruc.es/?paged=12/ and go to the bottom of the page you'll see the bottom navigation with the next and previous options. I've been able to make the page numbers work by changing page to paged= in the code. I don't know enough about PHP to get the previous/next options to work. Any advice would be appreciated and I've pasted the code below. Thank you: n00b

        if ( $query->found_posts > $query->query_vars["posts_per_page"] ) {
            echo '<ul class="paging">';

            // Previous link?
            if ( $page > 1 ) {
                echo '<li class="previous"><a href="'.$baseURL.'/page/'.($page-1).'/'.$qs.'">previous</a></li>';
            }

            // Loop through pages
            for ( $i=1; $i <= $query->max_num_pages; $i++ ) {
                // Current page or linked page?
                if ( $i == $page ) {
                    echo '<li class="active">'.$i.'</li>';
                } else {
                    echo '<li><a href="'.$baseURL.'/?paged='.$i.'/'.$qs.'">'.$i.'</a></li>';
                }
            }

            // Next link?
            if ( $page < $query->max_num_pages ) {
                echo '<li><a href="'.$baseURL.'/page/'.($page+1).'/'.$qs.'">next</a></li>';
            }

            echo '</ul>';
        }

    Read the article

  • Business case for decentralized version control systems

    - by Keyo
    I searched and couldn't find any business reasons why Git/Mercurial/Bazaar systems are better than centralized systems (Subversion, Perforce). If you were trying to sell a DVCS to a non-technical person, what arguments would you provide for the DVCS increasing profit? I will shortly be pitching Git to my manager; it will take some time to convert our Subversion repositories and some expense in buying SmartGit licences.

    Read the article

  • What defines a language as a scripting language? [closed]

    - by Mathew Foscarini
    Possible Duplicate: What is the main difference between Scripting Languages and Programming Languages? I'd like to know what defines a language as a scripting language compared with other programming languages. Some possible scripting languages might include AutoCAD LISP, Linux Bash, DOS Batch, JavaScript, or ActionScript in Flash. Where is the distinction drawn that makes a language a scripting language? Is there a set of clearly defined rules to classify it as such?

    Read the article

  • Is `break` and `continue` bad programming practice?

    - by Mikhail
    My boss keeps mentioning nonchalantly that bad programmers use break and continue in loops. I use them all the time because they make sense; let me show you the inspiration:

        function verify(object) {
            if (object->value < 0) return false;
            if (object->value > object->max_value) return false;
            if (object->name == "") return false;
            ...
        }

    The point here is that the function first checks that the conditions are correct, then executes the actual functionality. IMO the same applies to loops:

        while (primary_condition) {
            if (loop_count > 1000) break;
            if (time_exect > 3600) break;
            if (this->data == "undefined") continue;
            if (this->skip == true) continue;
            ...
        }

    I think this makes it easier to read and debug, but I also don't see a downside. Please comment.

    Read the article

  • Kernel error during upgrade due to "/etc/default/grub: Syntax error: newline unexpected"

    - by Patrick - Developer
    Summary: upgrading linux-image-3.5.0-2-generic to linux-image-3.5.0-3-generic. The default Ubuntu 12.04 update has been generating the following error for weeks (see the link below). Note: I'm using the default update mechanism of Ubuntu 12.04, i.e. apt-get update. Error log: https://gist.github.com/3036775 Overall, the updater is trying to upgrade linux-image-3.5.0-2-generic to linux-image-3.5.0-3-generic and it fails with this error every time. What should I do?

    Read the article

  • A case for not installing your own software

    - by James Gentsch
    This week I watched some of the Oracle OpenWorld presentations (from the comfort of my Oracle office) and happened upon some of Larry Ellison's comments about cloud computing and engineered systems. Larry said he sees the move to these as analogous to the moves made by the original adopters of electricity. The argument goes that the first consumers of electricity had to set up their own power plant. Then, as the market and infrastructure for electricity matured, power consumers moved from using their own personal power plant to purchasing power from another entity that was focused on power production as their primary product. In the end this was a cheaper and more reliable solution. Now, there are lots of compelling reasons to be looking very seriously at cloud computing and engineered systems for enterprise application deployment. However, speaking as a software developer of enterprise applications, the part of this that I really love (besides Larry's early-electricity-adopter analogy) is that as a mode of application deployment it provides me and my customers a consistent environment in which the applications I am providing will be run. This cuts way down on the environmental surprises that consistently lead to the hated "well, it works here" situation with the support desk. And just to be clear, I think I hate this situation more than my clients, who I think are happy that at least it is working somewhere. I hate it because when a problem happens, and, let's face it, customers are not wasting their time calling in easy problems, we are seriously disabled when we cannot reproduce an issue that is triggered by something unforeseen in the environment where the application is running. This situation is incredibly frustrating and an all-too-frequent occurrence. I selfishly look forward to cloud computing and engineered systems dramatically reducing the occurrence of problems triggered by unforeseen environmental situations in the software I am responsible for. I think this is an evolutionary game changer that will be a huge benefit to the reliability and consistent performance of the software for my customers, and may make "well, it works here" a long-forgotten phrase for future software developers. It may even impact the stress squeeze toy industry. Well, maybe at least for my group.

    Read the article

  • Purchasing Laptop Case Online

    "Laptops are meant to be carried around but to achieve the ultimate ease of carrying it from one place to another and to protect the computer as well as precious information on it you need a quality ... [Author: Jeremy Mezzi - Computers and Internet - May 29, 2010]

    Read the article

  • The case against INFORMATION_SCHEMA views

    - by AaronBertrand
    In SQL Server 2000, INFORMATION_SCHEMA was the way I derived all of my metadata information - table names, procedure names, column names and data types, relationships... the list goes on and on. I used the system tables like sysindexes from time to time, but I tried to stay away from them when I could. In SQL Server 2005, this all changed with the introduction of catalog views. For one thing, they're a lot easier to type. sys.tables vs. INFORMATION_SCHEMA.TABLES? Come on; no contest there - even...(read more)

    Read the article

  • Random MongoDb Syntax: Updates

    - by Liam McLennan
    I have a MongoDB collection called tweets. Each document has a property system_classification. If the value of system_classification is '+' I want to change it to 'positive'. For a regular relational database the query would be:

        update tweets set system_classification = 'positive' where system_classification = '+'

    The MongoDB equivalent is:

        db.tweets.update({system_classification: '+'}, {$set: {system_classification: 'positive'}}, false, true)

    Parameter by parameter:
    - { system_classification: '+' } : the first parameter identifies the documents to select
    - { $set: { system_classification: 'positive' } } : the second parameter is an operation ($set) and the parameter to that operation ({system_classification: 'positive'})
    - false : the third parameter indicates whether this is a regular update or an upsert (true for upsert)
    - true : the final parameter indicates whether the operation should be applied to all selected documents (or just the first)
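
    For comparison, the same multi-document update driven from Python with the PyMongo driver would look roughly like this (a sketch; the connection settings and the database name are placeholders):

        from pymongo import MongoClient

        client = MongoClient()               # assumes a local mongod on the default port
        tweets = client["blogdb"]["tweets"]  # "blogdb" is a placeholder database name

        result = tweets.update_many(         # multi-document, like the shell's final `true`
            {"system_classification": "+"},
            {"$set": {"system_classification": "positive"}},
        )
        print(result.modified_count, "documents updated")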

    Read the article

  • Business Case for investing time developing Stubs and BizUnit Tests

    - by charlie.mott
    I was recently in a position where I had to justify why effort should be spent developing Stubbed Integration Tests for BizTalk solutions. These tests are usually developed using the BizUnit framework. I assumed that most seasoned BizTalk developers would consider this best practice. Even though Microsoft suggest use of BizUnit on MSDN, I've not found a single site listing the justifications for investing time writing stubs and BizUnit tests.

    Stubs

    Stubs should be developed to isolate your development team from external dependencies. This is described by Michael Stephenson here. Failing to do this can result in the following problems:

    - In contract-first scenarios, the external system interface will have been defined, but the interface may not have been set up or even developed yet for the BizTalk developers to work with.
    - By the time you open the target location to see the data BizTalk has sent, it may have been swept away.
    - If you are relying on the UI of the target system to see the data BizTalk has sent, what do you do if it fails to arrive? It may take time for the data to be processed, or it may be scheduled to be processed later.
    - Learning how to use the source\target systems, and investigating where things go wrong in those systems, will slow down the BizTalk development effort.
    - By the time the data is visible in a UI it may have undergone further transformations.
    - In larger development teams working together, do you all use the same source and target instances? How do you know which data was created by whose tests? How do you know which event log error messages are whose? Another developer may have "cleaned up" your data.
    - It is harder to write BizUnit tests that clean up the data\logs after each test run.
    - What if your B2B partners' source or target system cannot support the sort of testing you want to do? They may not even have a development or test instance that you can work with. Their single test instance may be used by the SIT\UAT teams.
    - There may be licensing costs for setting up an instance of the external system.

    The stubs I like to use are generic stubs that can accept\return any message type. Usually I need to create one per protocol. They should be driven by BizUnit steps to validate the data received and select a response message (or error response). Once built, they can be re-used for many integration tests and from project to project.

    I'm not saying that developers should never test against a real instance. Every so often you still need to connect to real developer or test instances of the source and target endpoints\services. The interface developers may ask you to send them some data to see if everything still works, or you might want some messages sent to BizTalk to get confidence that everything still works beyond BizTalk.

    Tests

    Automated "Stubbed Integration Tests" are usually built using the BizUnit framework. These facilitate testing of the entire integration process from source stub to target stub, and ensure that all of the BizTalk components are configured together correctly to meet the requirements. More fine-grained unit testing of individual BizTalk components is still encouraged, but BizUnit provides by far the easiest way to test some component types (e.g. orchestrations). Using BizUnit with the Behaviour Driven Development approach described by Mike Stephenson delivers the following benefits (source: http://biztalkbddsample.codeplex.com – Video 1):

    - Requirements can be easily defined using Given/When/Then
    - Requirements are close to the code, so they are easier to manage as features and scenarios
    - Requirements are defined in domain language
    - The feature files can be used as part of the documentation
    - The documentation is accurate to the build of code and can be published with a release
    - The scenarios are effective at documenting the behaviour without being excessive
    - The scenarios are maintained with the code
    - There's an abstraction between the intention and implementation of tests, making them easier to understand
    - The requirements drive the testing

    These same tests can also be used to drive load testing as described here.

    If you don't do this ...

    If you don't follow the above "Stubbed Integration Tests" approach, the developer will need to manually trigger the tests. This has the following risks:

    - Developers are unlikely to check all the scenarios, and all the expected conditions, each time.
    - After the developer leaves, these manual test steps may be lost. What test scenarios are there? What test messages did they use for each scenario?
    - There is no mechanism to prove adequate test coverage.

    A test team may attempt to automate integration test scenarios in a test environment by triggering tests from a source system UI. If this is a replacement for BizUnit tests, then this carries the following risks:

    - It moves the tests downstream, so problems will be found later in the process.
    - Testers may not check all the expected conditions within the BizTalk infrastructure, such as event logs, suspended messages, etc.
    - These automated tests may also get in the way of manual tests run on these environments.

    Read the article
