Search Results

Search found 10330 results on 414 pages for 'real estate'.


  • How to show other characters in online 2D rpg

    - by Loligans
    I have Player 1 and Player 2. I am using JSON to send and retrieve player data between the client and the server, but when another player logs in and is in the same map, how would I send that data to both players so the graphics engine shows that there are 2 players on the map? About my game: it is a 2D RPG tile-based game; the map is 24x15 tiles; it is real-time action; it should work at anywhere between 10-150 ping; players interact with each other when in the same map and can see each other moving around; the game world is persistent and is saved when the server shuts down. Right now the server sends each player only their own information, which is inside a JSON object. Here is an example of what I am talking about: if you notice, there are 2 separate characters in 2 separate clients, but they are running on the same server. I am trying to get them to show up on both clients, but I don't know how I should accomplish this. Should I send it as an added value in the JSON object? Also, what is the name of this process so I can look it up and find more info on it?
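    A minimal sketch of one way to do this (often called state synchronisation or replication): the server keeps a registry of players per map and broadcasts one JSON snapshot of everyone on that map to every connected client, instead of sending each player only their own record. The map name, player fields and the conn.send() call below are illustrative assumptions, not the asker's actual schema or networking code.

    ```python
    import json

    # Hypothetical in-memory registry of connected players, keyed by map id.
    players_by_map = {
        "town_square": {
            "player1": {"x": 5, "y": 9, "sprite": "knight", "facing": "left"},
            "player2": {"x": 12, "y": 3, "sprite": "mage", "facing": "down"},
        }
    }

    def build_map_snapshot(map_id):
        """Serialize the state of every player on the given map into one JSON payload."""
        snapshot = {
            "type": "map_state",
            "map": map_id,
            "players": players_by_map.get(map_id, {}),
        }
        return json.dumps(snapshot)

    def broadcast_map_state(map_id, connections):
        """Send the same snapshot to every client connected to the map."""
        payload = build_map_snapshot(map_id)
        for player_id, conn in connections.items():
            conn.send(payload)  # conn stands in for whatever socket/session object the server already uses

    print(build_map_snapshot("town_square"))
    ```

    Each client then draws every entry in "players", not just its own, and the same snapshot (or a smaller delta) is re-sent whenever someone moves, joins or leaves the map.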

    Read the article

  • Does Submit to Index on a page with new content update Content Keywords for the site?

    - by Dan Kanze
    Using Google Webmaster Tools I'm trying to update the Content Keywords of my site, and I'm confused about the relationship between Submit to Index and Content Keywords. Does Fetch as Google -- Submit to Index on a previously indexed page containing new content expedite updating the Content Keywords crawled by the real Googlebot? Does Submit to Index only submit new URLs, so that previously indexed URLs still point to the older cached version until Google crawls the new content on its own? Does Submit to Index have anything to do with Content Keywords or with crawling new content at all, whether the page was previously indexed or has never been indexed?

    Read the article

  • SQL Strings vs. Conditional SQL Statements

    - by Yatrix
    Is there an advantage to piecing SQL strings together versus using conditional SQL statements in SQL Server itself? I have only about 10 months of SQL experience, so I could be speaking out of pure ignorance here. Where I work, I see people building entire queries in strings and concatenating strings together depending on conditions. For example:
    Set @sql = 'Select column1, column2 from Table1 '
    If SomeCondition
        Set @sql = @sql + 'where column3 = ' + @param1
    Else
        Set @sql = @sql + 'where column4 = ' + @param2
    That's a really simple example, but what I'm seeing here is multiple joins and huge queries built from strings and then executed. Some of them even write out what's basically a function to execute, including Declare statements, variables, etc. Is there an advantage to doing it this way when you could do it with just conditions in the SQL itself? To me, it seems a lot harder to debug, change and even write compared to adding cases, if-elses or additional where parameters to branch the query.
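    For contrast, here is a hedged sketch of the "conditions in code plus parameters" style the asker prefers, written in Python with the standard sqlite3 module purely to keep it self-contained and runnable (the table and column names are taken from the example above; this is not SQL Server code). The point is that each branch stays a complete, parameterized statement that can be read and debugged on its own, instead of a string assembled by concatenation.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE Table1 (column1, column2, column3, column4)")

    def find_rows(some_condition, param1, param2):
        # Branch between two fully formed, parameterized queries instead of
        # concatenating parameter values into a SQL string.
        if some_condition:
            sql = "SELECT column1, column2 FROM Table1 WHERE column3 = ?"
            args = (param1,)
        else:
            sql = "SELECT column1, column2 FROM Table1 WHERE column4 = ?"
            args = (param2,)
        return conn.execute(sql, args).fetchall()

    print(find_rows(True, "abc", None))   # -> [] on the empty table
    ```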

    Read the article

  • Business Case for investing time developing Stubs and BizUnit Tests

    - by charlie.mott
    I was recently in a position where I had to justify why effort should be spent developing Stubbed Integration Tests for BizTalk solutions. These tests are usually developed using the BizUnit framework. I assumed that most seasoned BizTalk developers would consider this best practice. Even though Microsoft suggests the use of BizUnit on MSDN, I've not found a single site listing the justifications for investing time writing stubs and BizUnit tests.
    Stubs
    Stubs should be developed to isolate your development team from external dependencies. This is described by Michael Stephenson here. Failing to do this can result in the following problems:
    - In contract-first scenarios, the external system interface will have been defined, but the interface may not have been set up or even developed yet for the BizTalk developers to work with.
    - By the time you open the target location to see the data BizTalk has sent, it may have been swept away.
    - If you are relying on the UI of the target system to see the data BizTalk has sent, what do you do if it fails to arrive? It may take time for the data to be processed, or it may be scheduled to be processed later.
    - Learning how to use the source\target systems, and investigations into where things go wrong in those systems, will slow down the BizTalk development effort.
    - By the time the data is visible in a UI it may have undergone further transformations.
    - In larger development teams working together, do you all use the same source and target instances? How do you know which data was created by whose tests? How do you know which event log error messages are whose? Another developer may have "cleaned up" your data.
    - It is harder to write BizUnit tests that clean up the data\logs after each test run.
    - What if your B2B partners' source or target system cannot support the sort of testing you want to do? They may not even have a development or test instance that you can work with. Their single test instance may be used by the SIT\UAT teams.
    - There may be licencing costs for setting up instances of the external system.
    The stubs I like to use are generic stubs that can accept\return any message type. Usually I need to create one per protocol. They should be driven by BizUnit steps to: validate the data received; and select a response message (or error response). Once built, they can be re-used for many integration tests and from project to project.
    I'm not saying that developers should never test against a real instance. Every so often, you still need to connect to real developer or test instances of the source and target endpoints\services. The interface developers may ask you to send them some data to see if everything still works. Or you might want some messages sent to BizTalk to get confidence that everything still works beyond BizTalk.
    Tests
    Automated "Stubbed Integration Tests" are usually built using the BizUnit framework. These facilitate testing of the entire integration process from source stub to target stub. They ensure that all of the BizTalk components are configured together correctly to meet all the requirements. More fine-grained unit testing of individual BizTalk components is still encouraged, but BizUnit provides by far the easiest way to test some component types (e.g. Orchestrations). Using BizUnit with the Behaviour Driven Development approach described by Mike Stephenson delivers the following benefits (source: http://biztalkbddsample.codeplex.com – Video 1):
    - Requirements can be easily defined using Given/When/Then
    - Requirements are close to the code, so they are easier to manage as features and scenarios
    - Requirements are defined in domain language
    - The feature files can be used as part of the documentation
    - The documentation is accurate to the build of code and can be published with a release
    - The scenarios are effective at documenting the scenarios and are not excessive
    - The scenarios are maintained with the code
    - There's an abstraction between the intention and implementation of tests, making them easier to understand
    - The requirements drive the testing
    These same tests can also be used to drive load testing, as described here.
    If you don't do this...
    If you don't follow the above "Stubbed Integration Tests" approach, the developer will need to manually trigger the tests. This has the following risks:
    - Developers are unlikely to check all the scenarios each time and all the expected conditions each time.
    - After the developer leaves, these manual test steps may be lost. What test scenarios are there? What test messages did they use for each scenario?
    - There is no mechanism to prove adequate test coverage.
    A test team may attempt to automate integration test scenarios in a test environment through the triggering of tests from a source system UI. If this is a replacement for BizUnit tests, then this carries the following risks:
    - It moves the tests downstream, so problems will be found later in the process.
    - Testers may not check all the expected conditions within the BizTalk infrastructure, such as event logs, suspended messages, etc.
    - These automated tests may also get in the way of manual tests run on these environments.

    Read the article

  • Data Source Security Part 5

    - by Steve Felts
    If you read through the first four parts of this series on data source security, you should be an expert on this focus area. There is one more small topic to cover related to WebLogic Resource permissions. After that comes the test, I mean example, to see with a real set of configuration parameters what the results are with some concrete values.
    WebLogic Resource Permissions
    All of the discussion so far has been about database credentials that are (eventually) used on the database side. WLS has resource credentials to control which WLS users are allowed to access JDBC resources. These can be defined on the Policies tab on the Security tab associated with the data source. There are four permissions: "reserve" (get a new connection), "admin", "shrink", and "reset" (plus the all-inclusive "ALL"); we will focus on "reserve" here because we are talking about getting connections. By default, JDBC resource permissions are completely open – anyone can do anything. As soon as you add one policy for a permission, then all other users are restricted. For example, if I add a policy so that "weblogic" can reserve a connection, then all other users will fail to reserve connections unless they are also explicitly added. The validation is done for WLS user credentials only, not database user credentials. Configuration of resources in general is described at "Create policies for resource instances" http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/security/CreatePoliciesForResourceInstances.html. This feature can be very useful to restrict what code and users can get to your database.
    There are three use cases:
    - getConnection(): use database credentials may be true or false; the user for permission checking is the current WLS user.
    - getConnection(user, password) with use database credentials = false: the user for permission checking is the user/password from the API.
    - getConnection(user, password) with use database credentials = true: the user for permission checking is the current WLS user.
    If a simple getConnection() is used or database credentials are enabled, the current user that is authenticated to the WLS system is checked. If database credentials are not enabled, then the user and password on the API are used.
    Example
    The following is an actual example of the interactions between identity-based-connection-pooling-enabled, oracle-proxy-session, and use-database-credentials.
    On the database side, the following objects are configured:
    - Database users scott, jdbcqa, and jdbcqa3
    - Permission for proxy: alter user jdbcqa3 grant connect through jdbcqa;
    - Permission for proxy: alter user jdbcqa grant connect through jdbcqa;
    The following WebLogic Data Source objects are configured:
    - Users weblogic and wluser
    - Credential mapping "weblogic" to "scott"
    - Credential mapping "wluser" to "jdbcqa3"
    - Data source descriptor configured with user "jdbcqa"
    - All tests are run with Set Client ID set to true (more about that below).
    - All tests are run with oracle-proxy-session set to false (more about that below).
    The test program:
    - Runs in a servlet
    - Authenticates to WLS as user "weblogic"
    The results for each combination of settings and each getConnection variant are as follows.
    Use DB Credentials = true, Identity based = true:
    - getConnection(scott,***): Identity scott; Client weblogic; Proxy null
    - getConnection(weblogic,***): weblogic fails - not a db user
    - getConnection(jdbcqa3,***): User jdbcqa3; Client weblogic; Proxy null
    - getConnection(): Default user jdbcqa; Client weblogic; Proxy null
    Use DB Credentials = false, Identity based = true:
    - getConnection(scott,***): scott fails - not a WLS user
    - getConnection(weblogic,***): User scott; Client scott; Proxy null
    - getConnection(jdbcqa3,***): jdbcqa3 fails - not a WLS user
    - getConnection(): User scott; Client scott; Proxy null
    Use DB Credentials = true, Identity based = false:
    - getConnection(scott,***): Proxy for scott fails
    - getConnection(weblogic,***): weblogic fails - not a db user
    - getConnection(jdbcqa3,***): User jdbcqa3; Client weblogic; Proxy jdbcqa
    - getConnection(): Default user jdbcqa; Client weblogic; Proxy null
    Use DB Credentials = false, Identity based = false:
    - getConnection(scott,***): scott fails - not a WLS user
    - getConnection(weblogic,***): Default user jdbcqa; Client scott; Proxy null
    - getConnection(jdbcqa3,***): jdbcqa3 fails - not a WLS user
    - getConnection(): Default user jdbcqa; Client scott; Proxy null
    If Set Client ID is set to false, all cases would have Client set to null. If this was not an Oracle thin driver, the one case with the non-null Proxy in the above table would throw an exception because proxy session is only supported, implicitly or explicitly, with the Oracle thin driver.
    When oracle-proxy-session is set to true, the only cases that will pass (with a proxy of "jdbcqa") are the following:
    1. Setting use-database-credentials to true and doing getConnection(jdbcqa3,…) or getConnection().
    2. Setting use-database-credentials to false and doing getConnection(wluser, …) or getConnection().
    Summary
    There are many options to choose from for data source security.  Considerations include the number and volatility of WLS and Database users, the granularity of data access, the depth of the security identity (property on the connection or a real user), performance, coordination of various components in the software stack, and driver capabilities.  Now that you have the big picture (remember that table in part 1), you can make a more informed choice.

    Read the article

  • How much time does Google Webmaster Tools need to generate content keywords if URL masking is enabled? [closed]

    - by user1439968
    Possible Duplicate: What is domain "masking" or "cloaking"? Why should it be avoided for a new web site? My real domain is domain.in, but URL masking has been enabled and the masked URL is domain2.in. I added the URL bputdoubts.21backlogs.in to Google Webmaster Tools a week ago, but content keywords haven't been generated. When can I expect the content keywords to be generated? And is there a problem with getting visitors from Google search if URL masking is enabled?

    Read the article

  • Are there serious companies that don't use version-control and continuous integration? Why?

    - by daramarak
    A colleague of mine was under the impression that our software department was highly advanced, as we used both a build server with continuous integration and version control software. This did not match my point of view, as I only know of one company that made serious software and didn't have either. However, my experience is limited to only a handful of companies. Does anyone know of any real company (larger than 3 programmers) which is in the software business and doesn't use these tools? If such a company exists, is there any good reason for them not doing so?

    Read the article

  • South African MVPs deserve their title.

    - by MarkPearl
    Recently I read a post by someone who felt the Microsoft MVP program had failed. My local experience with the MVP program would tend to make me disagree. On Saturday I attended a free Windows Phone 7 event organized by Robert MacLean and Rudi Grobler, both of whom are local MVPs. First of all, kudos to them for organizing the event, which included a free lunch and a flash stick and had some great content for a free event. Secondly, this is not the first time that either of these two MVPs has organized events. They are active in the community, present at the majority of local events, and are always approachable and give an "honest" opinion. For me, that is what an MVP stands for, and at least in my region I feel that the MVP program is a real success.

    Read the article

  • Clarification of "avoid if-else" advice [duplicate]

    - by deviDave
    This question already has an answer here: Elegant ways to handle if(if else) else (21 answers). The experts in clean code advise against if/else, since it creates unreadable code. They suggest using plain if statements instead, and not waiting until the end of a method without a real need. Now, this if/else advice confuses me. Do they say that I should not use if/else at all (!), or that I should avoid nesting if/else? Also, if they refer to the nesting of if/else, should I avoid even a single level of nesting, or should I limit it to a maximum of 2 levels (as some recommend)? When I say single nesting, I mean this:
    if (...) {
        if (...) {
        } else {
        }
    } else {
    }
    EDIT: Also, tools like ReSharper will suggest reformatting if/else statements. They usually convert them to stand-alone if statements, and sometimes even to a ternary expression.
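    To make the two readings concrete, here is a small, hedged Python illustration (the function and values are invented for the example): the first version nests if/else, the second uses guard clauses (early returns), which is the kind of rewrite tools like ReSharper suggest and one common reading of the "avoid if/else" advice; it is not a ban on conditionals.

    ```python
    def describe_nested(value):
        # Nested if/else: each branch pushes the reader one level deeper.
        if value is not None:
            if value >= 0:
                result = "non-negative"
            else:
                result = "negative"
        else:
            result = "missing"
        return result

    def describe_guarded(value):
        # Guard clauses: handle the special cases first and return early,
        # so the main path stays at a single level of indentation.
        if value is None:
            return "missing"
        if value < 0:
            return "negative"
        return "non-negative"

    for v in (5, -1, None):
        assert describe_nested(v) == describe_guarded(v)
    ```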

    Read the article

  • How to advance in my JavaScript skills? [closed]

    - by IlyaD
    I have been using JavaScript for about two years now, and I feel that I can do really basic stuff. I can write some basic algorithms and mostly use jQuery for interactive elements on webpages, and as I need to do more advanced things I get the feeling that my knowledge is lacking. In most cases I find some code, and it takes me quite some time to understand it, but I don't understand why it is written the way it is. I have no background in computer science, so I'm not sure whether I should go back to the basics or get some advanced JavaScript book/course. How can I make the jump from using JS for scripting to becoming a real programmer?

    Read the article

  • Steps to create a solution for a problem

    - by Mr_Green
    I am a trainee. My teacher says that to solve a problem we should follow steps like these:
    1. Create an algorithm (optional).
    2. Create a datatable: by analyzing the problem, make the main concepts in the problem the columns and the related issues within each main concept the rows.
    3. Create a flowchart based on the datatable (when creating the flowchart, imagine that you are in that situation and design it in your head).
    4. By following the flowchart, solve the problem.
    According to him, these steps should always be considered by a programmer who wants to become a software designer (not just a programmer), because the above approach gives an efficient way of finding a solution to a problem even when the problem is small. He also says this way of working holds up in real-world scenarios. My question is: is this really an efficient way? Please also share your thoughts. Beyond my question, I just want to share some thoughts from my teacher, who is a good mentor for me.

    Read the article

  • need example sql transaction procedures for sales tracking or financial database [closed]

    - by fa1c0n3r
    Hi, I am making a database for an accounting/sales type system, similar to a car sales database, and would like to write transactions for the following real-world actions:
    - salesman creates a new product shipped onto the floor (itempk, car make, year, price)
    - salesman changes the price
    - salesman creates a sale entry for a product sold (salespk, itemforeignkey, price sold, salesman)
    - salesman cancels an item for a removed product
    - salesman cancels a sale for a cancelled sale
    The examples I have found online are too generic ("this is a transaction..."); I would like something resembling what I am trying to do so I can understand it. Does anybody have some good similar or related SQL examples I can look at to design these? Do people use transactions for sales databases? Or, if you have done this kind of SQL transaction before, could you make an outline of how these could be written? Thanks. My thread so far on Stack Overflow: http://stackoverflow.com/q/4975484/613799
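    As a hedged illustration of the "salesman creates a sale entry" action as one transaction, here is a minimal sketch using Python's built-in sqlite3 module so it stays self-contained; the table and column names are guesses based on the question, not a recommended schema, and the same begin/commit/rollback idea carries over to stored procedures in other databases.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE item (item_pk INTEGER PRIMARY KEY, make TEXT, year INTEGER,
                           price REAL, on_floor INTEGER DEFAULT 1);
        CREATE TABLE sale (sale_pk INTEGER PRIMARY KEY, item_fk INTEGER REFERENCES item(item_pk),
                           price_sold REAL, salesman TEXT);
    """)

    def record_sale(item_pk, price_sold, salesman):
        """Insert the sale row and take the item off the floor in one transaction:
        either both changes commit, or an error rolls both back."""
        with conn:  # the connection as a context manager commits on success, rolls back on error
            cur = conn.execute(
                "INSERT INTO sale (item_fk, price_sold, salesman) VALUES (?, ?, ?)",
                (item_pk, price_sold, salesman),
            )
            conn.execute("UPDATE item SET on_floor = 0 WHERE item_pk = ?", (item_pk,))
            return cur.lastrowid

    # Example: put a car on the floor, then sell it.
    conn.execute("INSERT INTO item (make, year, price) VALUES ('Ford', 2012, 15000)")
    conn.commit()
    sale_id = record_sale(1, 14500, 'alice')
    print(sale_id)  # -> 1
    ```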

    Read the article

  • System Slow After Upgrading Ubuntu

    - by Aragon N
    I have an Ubuntu network machine running release 10.04.1 LTS (Lucid). On this system I have Apache, PostgreSQL and Django. For some app development I had to install PHP and php-curl. Because the machine is on a network, I exported the VMware machine to the internet, first upgraded the system and then installed the php5 packages on it. After putting it back in its old place, I noticed that queries on the new system are slower than on the old one:
    Old system query time: 140 ms
    New system query time: 9.11 s
    I have checked /etc/network/interfaces and there seems to be no problem. I have checked /etc/resolv.conf and it seems OK. I have checked /etc/nsswitch.conf and only the hosts section is different from the old one, which had:
    hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
    Then I checked time host -t A services.myapp.com and got:
    real 0m0.355s
    user 0m0.010s
    sys 0m0.020s
    What else should I check to get my system performing as it did before?

    Read the article

  • Fix Your Broken Organization

    - by Michael Snow
    Simple. Powerful. Proven. Face it, your organization is broken. Customers are not the focus they should be. Processes are running amok. Your intranet is a ghost town. And colleagues wonder why it's easier to get things done on the Web than at work. What's the solution? Join us for this Webcast. Christian Finn will talk about three simple, powerful, and proven principles for improving your organization through collaboration. Each principle will be illustrated by real-world examples. Discover:
    - How to dramatically improve workplace collaboration
    - Why improved employee engagement creates better business results
    - What's the value of a fully engaged customer
    Time to Fix What's Broken. Register now for this Webcast, the tenth in the Oracle Social Business Thought Leaders Series.

    Read the article

  • Top 10 Transact-SQL Statements a SQL Server DBA Should Know

    Microsoft SQL Server is a feature-rich database management system, with an enormous number of T-SQL commands. With each feature supporting its own list of commands, it can be difficult to remember them all. MAK shares his top 10 T-SQL statements that a DBA should know. Join SQL Backup's 35,000+ customers to compress and strengthen your backups: "SQL Backup will be a REAL boost to any DBA lucky enough to use it." - Jonathan Allen. Download a free trial now.

    Read the article

  • Free Webinar: Filling the Gap in SharePoint Records Management

    - by CatherineRussell
    Webinar: Filling the Gap in SharePoint Records Management. Find out how you can solve your challenges with conceptClassifier for SharePoint and leverage SharePoint 2007 and 2010 in this free one-hour webinar. This informative webinar will focus on records management in SharePoint and how Concept Searching's award-winning conceptClassifier for SharePoint automatically generates conceptual and descriptor metadata from documents, automatically changes the Content Type, and automatically declares records. Juan J. Celaya, President and CEO of COMPU-DATA International, LLC, will share his expertise and experience using the U.S. Army's Joint Services Records Research Center (JSRRC) as a case study and illustrate how they solved the challenge of processing millions of records to support veterans' claims using conceptClassifier. The webinar is on June 23rd from 11:30am – 12:30pm EST and explores real-world examples of how to simplify your Records Management processes in SharePoint: http://www.clicktoattend.com/?id=149003

    Read the article

  • How to setup passwordless SSH access for root user

    - by Cerin
    I need to configure a machine so software installation can be automated remotely via SSH. Following the wiki, I was able to set up SSH keys so my user can access the machine without a password, but I still need to manually enter my password when I use sudo, which an automated process obviously shouldn't have to do. Although my /etc/ssh/sshd_config has PermitRootLogin yes, I can't seem to log in as root, presumably because it's not a "real" account with a separate password. How do I configure SSH keys so a process can remotely log in as root on Ubuntu?

    Read the article

  • How to explain that writing universally cross-platform C++ code and shipping products for all OSes is not that easy?

    - by sharptooth
    Our company ships a range of desktop products for Windows, and lots of Linux users complain on forums that we should have written versions of our products for Linux years ago, and that the reason we don't is that:
    - we're a greedy corporation
    - all our technical specialists are underqualified idiots
    Our average product is something like 3 million lines of C++ code. My colleagues' and my analysis is the following:
    - writing cross-platform C++ code is not that easy
    - preparing a lot of distribution packages and maintaining them for all widespread versions of Linux takes time
    - our estimate is that the Linux market is something like 5-15% of all users, and those users will likely not want to pay for our effort
    When this is brought up, the response is again that we're greedy underqualified idiots and that, when everything is done right, all of this is easy and painless. How reasonable are our evaluations of the fact that writing cross-platform code and maintaining numerous distribution packages takes a lot of effort? Where can we find some easy yet detailed analysis, with real-life stories, that shows beyond the shadow of a doubt exactly how much effort it takes?

    Read the article

  • GUI question: representing a large tree

    - by Peter
    I have a tree-like data structure some six levels deep that I would like to represent on a single webpage (it could use tabs, trees, ...). At each level both child nodes and content are possible. Presenting it like a real tree would not be very usable (too big). I was thinking along the lines of hiding parts of the tree when you drill down and presenting breadcrumbs or the like to keep you informed as to where you are. I guess my question boils down to: any ideas / examples? Thanks!
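    A tiny, hedged sketch of the drill-down plus breadcrumbs idea in Python (purely illustrative data, not a UI framework): only the children of the current node are shown, and the path back to the root is rendered as a breadcrumb trail.

    ```python
    # Illustrative tree: every node may have both content and children.
    tree = {
        "Products": {
            "content": "Product overview",
            "children": {
                "Hardware": {"content": "Hardware intro", "children": {
                    "Laptops": {"content": "Laptop range", "children": {}},
                }},
                "Software": {"content": "Software intro", "children": {}},
            },
        }
    }

    def view(tree, path):
        """Return (breadcrumb, content, visible_children) for the node at `path`."""
        node, crumbs = {"children": tree}, []
        for key in path:
            node = node["children"][key]
            crumbs.append(key)
        breadcrumb = " > ".join(["Home"] + crumbs)
        return breadcrumb, node.get("content"), sorted(node["children"])

    for path in ([], ["Products"], ["Products", "Hardware"]):
        print(view(tree, path))
    ```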

    Read the article

  • Are there any specific workflows or design patterns that are commonly used to create large functional programming applications?

    - by Andrew
    I have been exploring Clojure for a while now, although I haven't used it on any nontrivial projects. Basically, I have just been getting comfortable with the syntax and some of the idioms. Coming from an OOP background, with Clojure being the first functional language that I have looked into in any depth, I'm naturally not as comfortable with the functional way of doing things. That said, are there any specific workflows or design patterns that are commonly used when creating large functional applications? I'd really like to start using functional programming "for real", but I'm afraid that with my current lack of expertise, it would result in an epic fail. The "Gang of Four" is such a standard for OO programmers, but is there anything similar that is more directed at the functional paradigm? Most of the resources that I have found have great programming nuggets, but they don't step back to give a broader, more architectural look.

    Read the article

  • How do you accept arguments in the main.cpp file and reference another file?

    - by Jason H.
    I have a basic understanding of programming and I am currently learning C++. I'm in the beginning phases of building my own CLI program for Ubuntu. However, I have hit a few snags and I was wondering if I could get some clarification. The program I am working on is called "sat" and will be available via the command line only. I have the main.cpp, but my real question is more of a "best practices" one about programming/organization. When my program "sat" is invoked, I want it to take additional arguments. Here is an example:
    > sat task subtask
    I'm not sure whether the task should be in its own task.cpp file for better organization, or whether it should be a function in main.cpp. If the task should be in its own file, how do you accept arguments in the main.cpp file and reference the other file? Any thoughts on which method is preferred, and reference material to back up the reasoning?

    Read the article

  • Can only connect to file server on second attempt

    - by Ross Fleming
    I have a FreeNAS file server on my local network and I usually connect to it from Windows and Ubuntu computers. Ever since I upgraded from Ubuntu 12.04 to 12.10, Ubuntu will only connect on the second attempt. By which I mean: I browse to it via the file manager, and once I click on the link in "Bookmarks" it complains that it could not connect. If I then try again, it connects successfully and keeps its connection until the laptop is suspended or loses its connection to the LAN for whatever reason. This isn't much of a problem, as I don't mind having to click twice, but my real problem is that my scheduled backup will complain that it cannot connect to the storage device if it has not already been accessed during the current session. Is there some way to either stop the issue altogether or to force the backup tool (the default one) to immediately make a second attempt at connecting?

    Read the article

  • Building Cloud Infrastructure? Don't Miss this Webcast with SEI

    - by Zeynep Koch
    WEBCAST: How did Oracle Linux Enable SEI to Save in Infrastructure Costs and Improve Business Response
    Date: Tuesday, October 30, 2012
    Time: 9:00 AM PDT
    Using the Oracle technology stack, SEI, a leading provider of wealth management solutions, developed an innovative, global platform for its business. That platform is built on a highly integrated infrastructure, operating system, and middleware that allows the organization to scale with customer demand. In this Webcast, join SEI's Martin Breslin as he discusses:
    - Why and how SEI migrated from a mainframe-based infrastructure to an x86-based infrastructure on Oracle Linux
    - Why SEI chose Oracle Linux, Oracle Enterprise Manager, and Oracle Real Application Cluster for its platform-as-a-service (PaaS) environment
    - How Oracle Linux enabled SEI to save costs and improve response time
    - Key success factors and lessons learned when deploying an enterprise cloud
    Speakers: Martin Breslin, Senior Infrastructure Architect, SEI Global; Monica Kumar, Senior Director, Oracle Linux, Virtualization and MySQL Product Marketing
    Register TODAY

    Read the article

  • Will backup using rsync preserve ACLs?

    - by Khaled
    I am using BackupPC to back up my server. The backups are done using rsyncd. Currently I am not using ACLs, but I think it would be good to activate them to have finer control over permissions. My question: will backing up my files using rsync preserve the defined ACLs? By the way, I read an article about ACLs which said that Ubuntu does not support ACLs with tar. Is this still true, or is it outdated? I may not have this problem if I am using rsync. Is this right?

    Read the article

  • Best way to redirect users who land on the _escaped_fragment_ URL back to the pretty one?

    - by Ryan
    I am working on an AJAX site and have successfully implemented Google's AJAX crawling recommendation by creating _escaped_fragment_ versions of each page for it to index. Thus each page has 2 URLs:
    pretty: example.com#!blog
    ugly: example.com?_escaped_fragment_=blog
    However, I have noticed in my analytics that some users are arriving on the site via the "ugly" URL, and I am looking for a clean way to redirect them to the pretty URL without impacting Google's ability to index the site. I have considered using a 301 redirect in the head, but fear that Googlebot might try to follow it and end up in an endless loop. I have also considered using a JavaScript redirect that Googlebot wouldn't execute, but fear that Google may interpret this as cloaking and penalize the website. Is there a good, clean, acceptable way to redirect real users away from the ugly URL if for some reason or another they end up arriving at the site that way?

    Read the article
