Search Results

Search found 18314 results on 733 pages for 'document architecture'.


  • JavaScript source file not loading in IE8 Popup

    - by dkris
    I have an issue where a JavaScript source file loads in a popup in IE6, Chrome, Firefox, Safari and Opera, but the same source file does not load in IE8. As a result, the HTML is not replaced in the popup and IE8 reports an error saying tinyMCE is not defined. I have referred to http://stackoverflow.com/questions/2949234/formatting-this-javascript-line and solved the issue in all browsers except IE8. The JavaScript function is as follows:

        function openSupportPage() {
            var features = "width=700,height=400,status=yes,toolbar=no,menubar=no,location=no,scrollbars=yes";
            var winId = window.open('', '', features);
            winId.document.open();
            winId.document.write('<html><head><title>' + document.title + '</title><link rel="stylesheet" href="../css/default.css" type="text/css">\n');
            var winDoc = winId.document;
            var sEl = winDoc.createElement("script");
            sEl.src = "../js/tiny_mce/tiny_mce.js"; /* TinyMCE source file */
            sEl.type = "text/javascript";
            winDoc.getElementsByTagName("head")[0].appendChild(sEl);
            winId.document.write('<script type="text/javascript">\n');
            winId.document.write('function inittextarea() {\n');
            winId.document.write('tinyMCE.init({ \n');
            winId.document.write('elements : "content",\n');
            winId.document.write('theme : "advanced",\n');
            winId.document.write('readonly : true,\n');
            winId.document.write('mode : "exact",\n');
            winId.document.write('theme : "advanced",\n');
            winId.document.write('readonly : true,\n');
            winId.document.write('setup : function(ed) {\n');
            winId.document.write('ed.onInit.add(function() {\n');
            winId.document.write('tinyMCE.activeEditor.execCommand("mceToggleVisualAid");\n');
            winId.document.write('});\n');
            winId.document.write('}\n');
            winId.document.write('});}</script>\n');
            window.setTimeout(function () { /* using setTimeout to wait for the JS source file to load */
                winId.document.write('</head><body onload="inittextarea()">\n');
                winId.document.write(' \n');
                var hiddenFrameHTML = document.getElementById("HiddenFrame").innerHTML;
                hiddenFrameHTML = hiddenFrameHTML.replace(/&amp;/gi, "&");
                hiddenFrameHTML = hiddenFrameHTML.replace(/&lt;/gi, "<");
                hiddenFrameHTML = hiddenFrameHTML.replace(/&gt;/gi, ">");
                winId.document.write(hiddenFrameHTML);
                winId.document.write('<textarea id="content" rows="10" style="width:100%">\n');
                winId.document.write(document.getElementById(top.document.forms[0].id + ":supportStuff").innerHTML);
                winId.document.write('</textArea>\n');
                var hiddenFrameHTML2 = document.getElementById("HiddenFrame2").innerHTML;
                hiddenFrameHTML2 = hiddenFrameHTML2.replace(/&amp;/gi, "&");
                hiddenFrameHTML2 = hiddenFrameHTML2.replace(/&lt;/gi, "<");
                hiddenFrameHTML2 = hiddenFrameHTML2.replace(/&gt;/gi, ">");
                winId.document.write(hiddenFrameHTML2);
                winId.document.write('</body></html>\n');
                winId.document.close();
            }, 300);
        }

    Additional information: screen shot of the page, rendered HTML, original JSPF. Please help me with this one.
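    One common way to avoid the fixed 300 ms wait is to let the script element itself announce when it has finished loading. A minimal sketch (not the asker's code; `loadScript` is a hypothetical helper) that assumes the same popup document and TinyMCE path as above; note that older IE signals readiness through onreadystatechange rather than onload:

        /* Sketch: wait for the script element to signal it has loaded,
           instead of guessing with setTimeout(300). */
        function loadScript(doc, src, callback) {
            var sEl = doc.createElement("script");
            sEl.type = "text/javascript";
            sEl.src = src;
            if (sEl.readyState) { /* IE6-IE8 */
                sEl.onreadystatechange = function () {
                    if (sEl.readyState === "loaded" || sEl.readyState === "complete") {
                        sEl.onreadystatechange = null;
                        callback();
                    }
                };
            } else { /* Firefox, Chrome, Safari, Opera */
                sEl.onload = callback;
            }
            doc.getElementsByTagName("head")[0].appendChild(sEl);
        }

        /* Usage sketch: loadScript(winId.document, "../js/tiny_mce/tiny_mce.js",
           function () { ...write the rest of the popup document here... }); */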


  • Sharp architecture; Accessing Validation Results

    - by nabeelfarid
    I am exploring Sharp Architecture and I would like to know how to access the validation results after calling Entity.IsValid(). I have two scenarios:

    1) If entity.IsValid() returns false, I would like to add the errors to the ModelState.AddModelError() collection in my controller. E.g. in the Northwind sample we have an EmployeesController.Create() action; when we do employee.IsValid(), how can I get access to the errors?

        public ActionResult Create(Employee employee)
        {
            if (ViewData.ModelState.IsValid && employee.IsValid())
            {
                employeeRepository.SaveOrUpdate(employee);
            }
            // ....
        }

    [I already know that when an action method is called, the model binder enforces validation rules (NHibernate Validator attributes) as it parses incoming values and tries to assign them to the model object, and if it can't parse the incoming values it registers those as errors in ModelState for each model object property. But what if I have some custom validation? That's why we check ModelState.IsValid first.]

    2) In my test methods I would like to test the NHibernate validation rules as well. I can do entity.IsValid(), but that only returns true/false. I would like to assert against the actual error, not just true/false.

    In my previous projects I normally used a wrapper service layer for the repositories: instead of calling repository methods directly from the controller, controllers call service layer methods, which in turn call repository methods. All my custom validation rules reside in the service layer, and its methods throw a custom exception with a NameValueCollection of errors which I can easily add to ModelState in my controller. This way I can also easily implement sophisticated business rules in my service layer. I know Sharp Architecture also provides a Service Layer project. But what I am interested in, and my next question, is: how can I use NHibernate Validators to implement sophisticated custom business rules (not just null, empty, range, etc.) and make Entity.IsValid() verify those rules too?


  • Python libusb pyusb "mach-o, but wrong architecture"

    - by Jon
    I am having some trouble with the pyusb module. I have narrowed down the problem to a single line, and have created a small example script to replicate the error:

        #!/usr/bin/env python
        """
        This module was created to isolate the problem in the pyusb package.
        Operating system: Mac OS 10.6.3
        Python Version: 2.6.4
        libusb 1.0.8 has been successfully installed using: sudo port install libusb
        I have also tried modifying /opt/local/etc/macports/macports.conf to force
        the i386 architecture instead of x86_64.
        """
        from ctypes import *
        import ctypes.util

        libname = ctypes.util.find_library('usb-1.0')
        print 'libname: ', libname
        l = CDLL(libname, RTLD_GLOBAL)

        # RESULT:
        # libname:  /usr/local/lib/libusb-1.0.dylib
        # Traceback (most recent call last):
        #   File "./pyusb_problem.py", line 7, in <module>
        #     l = CDLL(libname, RTLD_GLOBAL)
        #   File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/ctypes/__init__.py", line 353, in __init__
        #     self._handle = _dlopen(self._name, mode)
        # OSError: dlopen(/usr/local/lib/libusb-1.0.dylib, 10): no suitable image found.  Did find:
        #   /usr/local/lib/libusb-1.0.dylib: mach-o, but wrong architecture
        # End of File

    This same script runs on Ubuntu 10.04 successfully. I have tried building the libusb module (directly from source AND through MacPorts) for 32-bit (i386) instead of x86_64 (the default on OS 10.6), but I receive the same error. Thank you in advance for your help!
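    The "wrong architecture" message means the dylib that ctypes found does not contain a slice for the architecture the Python process is running as. Two standard OS X commands are often useful for confirming this (a suggested diagnostic, not part of the original script). Note also that find_library picked up /usr/local/lib, not the MacPorts copy in /opt/local/lib, so the rebuilt library may not be the one actually being loaded:

        # Show which architectures the located library actually contains
        file /usr/local/lib/libusb-1.0.dylib

        # Force the interpreter to run 32-bit, to match an i386-only library
        arch -i386 /usr/bin/python2.6 ./pyusb_problem.py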


  • sharp architecture question

    - by csetzkorn
    I am trying to get my head around Sharp Architecture and follow the tutorial. I am using this code:

        using Bla.Core;
        using System.Collections.Generic;
        using Bla.Core.DataInterfaces;
        using System.Web.Mvc;
        using SharpArch.Core;
        using SharpArch.Web;
        using Bla.Web;

        namespace Bla.Web.Controllers
        {
            public class UsersController
            {
                public UsersController(IUserRepository userRepository)
                {
                    Check.Require(userRepository != null, "userRepository may not be null");
                    this.userRepository = userRepository;
                }

                public ActionResult ListStaffMembersMatching(string filter)
                {
                    List<User> matchingUsers = userRepository.FindAllMatching(filter);
                    return View("ListUsersMatchingFilter", matchingUsers);
                }

                private readonly IUserRepository userRepository;
            }
        }

    I get this error:

        The name 'View' does not exist in the current context

    I have used all the correct using statements and referenced the assemblies as far as I can see. The views live in Bla.Web in this architecture. Can anyone see the problem? Thanks, Christian


  • Architecture for data layer that uses both localStorage and a REST remote server

    - by Zack
    Anybody have any ideas or references on how to implement a data persistence layer that uses both localStorage and a REST remote storage?

    - The data of a certain client is stored with localStorage (using an ember-data indexedDB adapter).
    - The locally stored data is synced with the remote server (using the ember-data RESTAdapter).
    - The server gathers all data from clients.

    Using mathematical set notation: Server = Client1 ∪ Client2 ∪ ... ∪ ClientN, where, in general, a record may not be unique to a certain client. Here are some scenarios:

    - A client creates a record. The id of the record cannot be set on the client, since it may conflict with a record stored on the server. Therefore a newly created record needs to be committed to the server, receive its id, and only then be created in localStorage.
    - A record is updated on the server, and as a consequence the data in localStorage and on the server go out of sync. Only the server knows that, so the architecture needs to implement a push mechanism (?).

    Would you use 2 stores (one for localStorage, one for REST) and sync between them, or use a hybrid indexedDB/REST adapter and write the sync code within the adapter? Can you see any way to avoid implementing push (Web Sockets, ...)?
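    A minimal sketch of the first scenario in plain JavaScript (the /records endpoint and the localStorage key scheme are hypothetical, and this deliberately ignores ember-data specifics): commit to the server first, then store the record locally under the server-assigned id.

        // Sketch of scenario 1: the server assigns the id, then the client stores locally.
        function createRecord(data, onDone) {
            var xhr = new XMLHttpRequest();
            xhr.open("POST", "/records", true);
            xhr.setRequestHeader("Content-Type", "application/json");
            xhr.onreadystatechange = function () {
                if (xhr.readyState === 4 && xhr.status === 201) {
                    var saved = JSON.parse(xhr.responseText); // e.g. { id: 42, ... }
                    // Only now does the record exist locally, under the server's id.
                    localStorage.setItem("record:" + saved.id, JSON.stringify(saved));
                    if (onDone) onDone(saved);
                }
            };
            xhr.send(JSON.stringify(data));
        }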


  • Web application architecture, and application servers?

    - by seanieb
    Hi, I'm building a web application, and I need to use an architecture that allows me to run it on two servers. The application scrapes information from other sites periodically and on input from the end user. To do this I'm using PHP + curl to scrape the information, and PHP or Python to parse it and store the results in a MySQL DB. Then I will use Python to run some algorithms on the data; this will happen both periodically and on input from the end user. I'm going to cache some of the results in the MySQL DB, and sometimes, if the data is specific to the user, skip storing it and serve it directly to the user. I'm thinking of using PHP for the website front end on a separate web server, and running the PHP spider, MySQL DB and Python on another server. As you can see, I'm fairly clueless. I'm familiar with using PHP, MySQL and the basics of Python, but bringing this all together using something more complex than a cron job is new to me. How do I go about implementing this? What framework(s) should I use? Is MVC a good architecture for this? (I'm new to MVC, architectures, etc.) Is CakePHP a good solution? If so, will I be able to control and monitor the Python code using it?


  • make arm architecture c library in mac

    - by gamegamelife
    I'm trying to make my own C library on a Mac and include it in my iPhone program. The C code is simple, like this:

        /* math.h */
        int myPow2(int);

        /* math.c */
        #include "math.h"
        int myPow2(int num)
        {
            return num*num;
        }

    I searched for how to make the C library file (.a or .lib etc.); it seems I need to use the gcc compiler (is there another method?), so I used these commands:

        gcc -c math.c -o math.o
        ar rcs libmath.a math.o

    and included the library in the iPhone project. Now there is a problem when building the Xcode iPhone project:

        file was built for unsupported file format which is not the architecture being linked

    I found some pages discussing the problem, but no detail on how to make an i386/ARM architecture library. I finally used these commands to do it:

        gcc -arch i386 -c math.c -o math.o
        /Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/arm-apple-darwin10-gcc-4.2.1 -c math.c -o math.o

    I don't know if this method is correct? Or is there another method to do it?
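    For reference, the usual way to get one library that links both in the simulator (i386) and on the device (ARM) is to build each slice separately, as above, and then merge them with lipo into a universal ("fat") archive. A sketch reusing the same compilers from the post (the intermediate file names are illustrative):

        # Build each architecture into its own archive
        gcc -arch i386 -c math.c -o math_i386.o
        ar rcs libmath_i386.a math_i386.o
        /Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/arm-apple-darwin10-gcc-4.2.1 -c math.c -o math_arm.o
        ar rcs libmath_arm.a math_arm.o

        # Merge the slices into one universal library
        lipo -create libmath_i386.a libmath_arm.a -output libmath.a

        # Verify which architectures the result contains
        lipo -info libmath.a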


  • UCM 11g is 4 days old!

    - by kyle.hatlestad
    Ok... so I missed posting a blog entry when UCM 11g and the entire ECM suite were released on Tuesday. Hopefully you've already seen the announcements on any number of the Oracle ECM blogs out there such as ECM Alerts, Fusion ECM, bex huff, or C4. So I won't bore you with the same talking points like 179 million check-ins per day or 124 web site page hits per second. Instead, I thought I'd show some screenshots of the new features in UCM and URM 11g.

    WebLogic Server and Enterprise Manager

    So probably the biggest change in 11g is that UCM and URM now run on top of the WebLogic Server application server. This is a huge step as ECM is now on a standard platform with the rest of Oracle Fusion Middleware, which makes installation, configuration, and integration consistent among all the products. From a feature perspective, it's also beneficial because it's now integrated with Oracle Enterprise Manager. Enterprise Manager provides a lot of provisioning control over servers as well as performance monitoring and access to logs and debugging information.

    Desktop Integration Suite

    Desktop Integration Suite got a complete overhaul for 11g. It exposes a lot more features within Windows Explorer such as saved searches, the workflow queue, and checked-out items. It also now supports metadata pop-up screens to let users fill in additional metadata when they drag-n-drop files in! And the integration within Office applications has changed significantly by introducing a dedicated UCM menu to do open, save, compare, etc.

    Site Studio for External Applications

    In UCM Site Studio 10gR4, a major architectural shift was introduced which brought several new objects such as elements, region definitions, region templates, and placeholder definitions. This truly separated the content from the display and from the definition. It also allowed separation of the content from needing to be rendered on a complete Site Studio page. Well, the new Site Studio for External Applications takes advantage of that architecture and introduces pre-built tags and plug-ins to JDeveloper that allow you to go from simply adding a content area to your web application page to building an entire web site, just like you would have done in Site Studio Designer. In addition to these changes, enhancements to the core Site Studio have been added as well. One of the big ones is called Designer Mode, which allows power users to bypass the standard rules defined by the placeholder definition or template and perform any number of additional actions. This reduces the need to go back to Site Studio Designer or JDeveloper to make more advanced changes to the site.

    Dashboards

    As part of the updated records management functionality in both UCM and URM, users can now set a dashboard view on their home page to surface common functions in a single view. It has pre-built "portlets" users can choose from to display and organize the way they want. Behind the scenes, these dashboards are stored as Content Folios. So the dashboards themselves are content items that can be revisioned and shared between users. And new dashboard portlets can be easily added (like the User Profile one in the screenshots) by getting a copy of an existing one, modifying the display, and then checking it in as a new one to select from.

    URM Interface Enhancements

    URM includes several new UI and usability enhancements in 11g. There is a new view for physical records, a place to configure "favorite" items to quickly get to, and a new placement of the records management menu.

    BI Publisher Reports

    Records management in UCM and URM now offers reports generated through embedded BI Publisher. Templates are controlled by rich text files checked directly into the repository, so they can be easily modified.

    Other Features

    - A new Inbound Refinery conversion option is available that does native Microsoft Office HTML conversion. If your IBR is on Windows and you have the native applications loaded, the IBR can use them to produce HTML.
    - A new GUI template editor for Dynamic Converter is available. It's written in Java, so it is available through all the supported browsers and platforms. The original ActiveX-based editor is also still available.
    - The Component Manager interface has changed to help provide an easier and more descriptive way to enable core components that are installed along with UCM. All of the supported components are immediately available to turn on and do not have to be installed separately as in previous versions.
    - My Downloads is located in the My Content Server menu and provides for easy download of client installs including Desktop Integration Suite and Site Studio Designer.

    Well, hopefully that gives you a taste of some of the new things in 11g. We're all pretty excited here at Oracle about all the new changes and enhancements. Over the next few months I hope to highlight some of these features more in depth, so keep your eye out for those posts.


  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that.

    Planning for HA in IaaS

    IaaS involves Virtual Machines, so in effect an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy and so on from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching.

    In almost every cloud vendor I've studied, to ensure your application will be protected by any level of HA, you need to have at least two of the Instances (VMs) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason) then access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter or to a local location) to ensure you can continue to serve your clients.

    You'll also need to host the same VMs in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need multiple locations to handle this. This means that you'll have to figure out how to manage state between the geos. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load-balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a stateful application to VMs, you may not be able to easily implement an HA solution.

    Planning for HA in PaaS

    Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless application deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. Also, you need to deploy the application again in a separate geo, but the advantage here is that you could work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site; the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service to route the requests as a type of auto-balancer. Even with these benefits, I recommend a second backup of storage in another geographic location. Storage is inexpensive, and that second copy can be used for not only HA but DR.

    Planning for HA in SaaS

    In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure) you have far less control over the HA solution, although you still maintain the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA - and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have massive HA built in with automatic load balancing (which is often the case).

    All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project - if you try to tack it on later in the process, the business will push back and potentially not implement HA.

    References: http://www.bing.com/search?q=windows+azure+High+Availability (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)
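    The "manual or automatic process to detect the failure and re-route the traffic" mentioned above can start as something as small as a periodic health probe that flips traffic to the secondary location when the primary stops answering. A minimal, vendor-neutral sketch in JavaScript (the endpoints and the routeTo callback are placeholders, not any specific cloud API):

        // Vendor-neutral failover probe; the URL and routeTo() are hypothetical.
        var PRIMARY = "https://primary.example.com/health";

        function probeAndRoute(routeTo) {
            fetch(PRIMARY)
                .then(function (res) {
                    // A healthy primary keeps receiving traffic.
                    routeTo(res.ok ? "primary" : "secondary");
                })
                .catch(function () {
                    // Unreachable primary: alias traffic to the secondary geo.
                    routeTo("secondary");
                });
        }

        // e.g. probe once a minute; routeTo might flip a DNS alias or a load balancer.
        // setInterval(function () { probeAndRoute(updateDnsAlias); }, 60 * 1000);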


  • SOA, Governance, and Drugs

    Why is IT governance important in service oriented architecture (SOA)? IT governance provides a framework for making appropriate decisions based on company guidelines and accepted standards. This framework also outlines each stakeholder's responsibilities and authority when making important architectural or design decisions. Furthermore, this framework of governance defines parameters and constraints that are used to give context and perspective when making decisions. The use of governance as it applies to SOA ensures that specific design principles and patterns are used when developing and maintaining services. When governance is consistently applied to systems, the following benefits are achieved, according to Anne Thomas Manes (2010):

    - Governance makes sure that services conform to standard interface patterns and common data modeling practices, and promotes the incorporation of existing system functionality by building on top of other available services across a system.
    - Governance defines development standards based on proven design principles and patterns that promote reuse and composition.
    - Governance provides developers a set of proven design principles, standards and practices that promote the reduction in system-based component dependencies. By following these guidelines, individual components will be easier to maintain.

    For me personally, I am a fan of IT governance, and feel that it is a valuable part of any corporate IT department. However, how it is implemented can really affect the value of using IT governance. Companies need to find a way to ensure that governance does not become extreme in its policies and procedures. I know for me personally, I would really dislike working under a completely totalitarian or laissez-faire version of governance. Developers need to be able to be creative in their designs, and too much governance can really impede the design process and prevent the most optimal design from being developed. On the other hand, with no governance enforced, no standards will be followed and accepted design patterns will be ignored. I have personally had to spend a lot of time working on this particular scenario, and I have found that the concept of code reuse and composition is almost nonexistent. Because of this, too much time and money is wasted on redeveloping existing aspects of an application that already exist within the system as a whole.

    I think moving forward we will see a staggered form of IT governance, regardless of whether it is for SOA or IT in general. Depending on the size of a company and the size of its IT department, I can see IT governance as a layered approach in which the top layer is defined by enterprise architects that focus on abstract concepts pertaining to high-level design, general guidelines, acceptable best practices, and recommended design patterns. The next layer is defined by solution architects or department managers that further expand on the abstracted guidelines defined by the enterprise architects. This layer contains further definitions as to when various design patterns, coding standards, and best practices are to be applied based on the context of the solutions that are being developed by the department. The final layer is defined by the system designer or a solutions architect assigned to a project, in that they will define what design patterns will be used in a solution and what naming conventions apply, as well as outline how a system will function based on the best practices defined by the previous layers.

    This layered approach allows IT departments to be flexible in that system designers have creative leeway in designing solutions to meet the needs of the business, but they must operate within the confines of the abstracted IT governance guidelines. A real-world example of this can be seen in the United States as it pertains to governance of the people, in that the US government defines rules and regulations in the abstract, and the state governments then apply them based on the will of the people in each individual state. Furthermore, the county or city governments are the ones that actually enforce these rules based on how they are interpreted by the local community. To further define my example, the United States government defines that marijuana is illegal. Each individual state has the option to handle this regulation as it wishes, in that the state of Florida determines that all uses of the drug are illegal, but the state of California legally allows the use of marijuana for medicinal purposes only. Based on these accepted practices, each local government enforces these rules, in that a police officer will arrest anyone in the state of Florida for having this drug on them if they walk down the street, but in California, if a person has a medical prescription for the drug, they will not get arrested.

    REFERENCES
    Thomas Manes, Anne. (2010). Understanding SOA Governance: http://www.soamag.com/I40/0610-2.php


  • Design considerations on JSON schema for scalars with a consistent attachment property

    - by casperOne
    I'm trying to create a JSON schema for the results of doing statistical analysis based on disparate pieces of data. The current schema I have looks something like this:

        {
            // Basic key information.
            video : "http://www.youtube.com/watch?v=7uwfjpfK0jo",
            start : "00:00:00",
            end : null,

            // For results of analysis, to be populated:
            // *** This is where it gets interesting ***
            analysis : {
                game : {
                    value : "Super Street Fighter 4: Arcade Edition Ver. 2012",
                    confidence : 0.9725
                },
                teams : [
                    {
                        player : {
                            value : "Desk",
                            confidence : 0.95
                        },
                        characters : [
                            {
                                value : "Hakan",
                                confidence : 0.80
                            }
                        ]
                    }
                ]
            }
        }

    The issue is the tuples that are used to store a value and the confidence related to that value (i.e. { value : "some value", confidence : 0.85 }), populated after the results of the analysis. This leads to a creep of this tuple for every value. Take a fully fleshed-out value from the characters array:

        {
            name : {
                value : "Hakan",
                confidence : 0.80
            },
            ultra : {
                value : 1,
                confidence : 0.90
            }
        }

    As the structures that represent the values become more and more detailed (and more analysis is done on them to try and determine the confidence behind that analysis), the nesting of the tuples adds a great deal of noise to the overall structure, considering that the final result (when verified) will be:

        {
            name : "Hakan",
            ultra : 1
        }

    (And recall that this is just a nested value.) In .NET (which I'll be using to work with this data), I'd have a little helper like this:

        public class UnknownValue<T>
        {
            T Value { get; set; }
            double? Confidence { get; set; }
        }

    Which I'd then use like so:

        public class Character
        {
            public UnknownValue<Character> Name { get; set; }
        }

    While the same as the JSON representation in code, it doesn't have the same creep because I don't have to redefine the tuple every time, and property accessors hide the appearance of creep. Of course, this is an apples-to-oranges comparison; the above is code while the JSON is data. Is there a more formalized/cleaner/best-practice way of containing the creep of these tuples in JSON, or is the approach above an accepted approach for the type of data I'm trying to store (and I'm just perceiving it the wrong way)? Note: this is being represented in JSON because it will ultimately go in a document database (something like RavenDB or elasticsearch). I'm not concerned about being able to serialize into the object above, because I can always use data transfer objects to facilitate getting data into/out of my underlying data store.
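    One commonly seen alternative (an illustration, not something from the post) is to keep the document in its final, flat shape and move the confidences into one parallel map keyed by field name. The noise is then confined to a single sibling object, and once a field is verified its entry is simply dropped from the map, leaving exactly the final flat document:

        {
            "name" : "Hakan",
            "ultra" : 1,
            "confidence" : {
                "name" : 0.80,
                "ultra" : 0.90
            }
        }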


  • New security configuration flag in UCM PS3

    - by kyle.hatlestad
    While the recent Patch Set 3 (PS3) release was mostly focused on bug fixes and such, a new configuration flag was added for security. In 10gR3 and prior versions, UCM had a component called Collaboration Manager which allowed project folders to be created and groups of users assigned as members to collaborate on documents. With this component came access control lists (ACLs) for content and folders. Users could assign specific security rights on each and every document and folder within a project. And it was possible to enable these ACLs without having the Collaboration Manager component enabled, but it took some special instructions (see technote# 603148.1) and added some extraneous pieces still related to Collaboration Manager. When 11g came out, Collaboration Manager was no longer available, but the configuration settings to turn on ACLs were still there. Well, in PS3 they've been cleaned up a bit, and a new configuration flag has been added to simply turn on the ACL fields and none of the other collaboration bits. To enable ACLs:

        UseEntitySecurity=true

    Along with this configuration flag to turn ACLs on, you also need to define which Security Groups will honor the ACL fields. If an ACL is applied to a content item with a Security Group outside this list, it will be ignored.

        SpecialAuthGroups=HumanResources,Legal,Marketing

    Save the settings and restart the instance. Upon restart, two new metadata fields will be created: xClbraUserList, xClbraAliasList. If you are using OracleTextSearch as the search indexer, be sure to run a Fast Rebuild on the collection. On the Check In, Search, and Update pages, values are added by simply typing in the value and getting a type-ahead list of possible values. Select the value, click Add, and then set the level of access (Read, Write, Delete, or Admin). If all of the fields are blank, then it simply falls back to just Security Group and Account access. As for how they are stored in the metadata fields, each entry starts with its identifier: ampersand (&) symbol for users, "at" (@) symbol for groups, and colon (:) for roles. Following that is the entity name, and at the end is the level of access in parentheses, e.g. (RWDA). Each entry is separated by a comma. So if you were populating values through Batch Loader or an external source, the values would be defined this way. Detailed information on Access Control Lists can be found in the Oracle Fusion Middleware System Administrator's Guide for Oracle Content Server.
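    Putting that format together, a value granting a (hypothetical) user jsmith read/write access, the Marketing group read access, and a contributor role full rights would look like this:

        &jsmith(RW),@Marketing(R),:contributor(RWDA)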


  • How do I keep user input and rendering independent of the implementation environment?

    - by alex
    I'm writing a Tetris clone in JavaScript. I have a fair amount of experience in programming in general, but am rather new to game development. I want to separate the core game code from the code that would tie it to one environment, such as the browser. My quick thoughts led me to having the rendering and input functions external to my main game object. I could pass the current game state to the rendering method, which could render using canvas, elements, text, etc. I could also map input to certain game input events, such as move piece left, rotate piece clockwise, etc. I am having trouble designing how this should be implemented in my object. Should I pass references to functions that the main object will use to render and process user input? For example...

        var TetrisClone = function(renderer, inputUpdate) {
            this.renderer = renderer || function() {};
            this.inputUpdate = inputUpdate || function() {};
            this.state = {};
        };

        TetrisClone.prototype = {
            update: function() {
                // Get user input via the function passed to the constructor.
                var inputEvents = this.inputUpdate();

                // Update game state.

                // Render the current game state via the function passed to the constructor.
                this.renderer(this.state);
            }
        };

        var renderer = function(state) {
            // Render blocks to the browser page.
        };

        var inputEvents = {};
        var charCodesToEvents = {
            37: "move-piece-left"
            /* ... */
        };

        document.addEventListener("keypress", function(event) {
            inputEvents[event.which] = true;
        });

        var inputUpdate = function() {
            var translatedEvents = [], event, translatedEvent;
            for (event in inputEvents) {
                if (inputEvents.hasOwnProperty(event)) {
                    translatedEvent = charCodesToEvents[event];
                    translatedEvents.push(translatedEvent);
                }
            }
            inputEvents = {};
            return translatedEvents;
        };

        var game = new TetrisClone(renderer, inputUpdate);

    Is this a good game design? How would you modify this to suit best practice in regard to making a game as platform/input-independent as possible?
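    One nice consequence of this injection style is that the same game object can run headless in tests by passing stub functions instead of browser-bound ones. A small illustration (hypothetical code, not from the post):

        // Headless wiring for tests: no DOM, canvas, or key events involved.
        var recorded = [];
        var testRenderer = function (state) { recorded.push(state); };
        var testInput = function () { return ["move-piece-left"]; };

        var testGame = new TetrisClone(testRenderer, testInput);
        testGame.update(); // exercises input -> state -> render using only stubs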


  • script only works in IE

    - by Alex
    I have the following JavaScript for showing a running line:

        <script type="text/javascript" language="javascript">
        // Change script's width (in pixels)
        var marqueewidth = 800
        // Change script's height (in pixels, pertains only to NS)
        var marqueeheight = 20
        // Change script's scroll speed (larger is faster)
        var speed = 3
        // Change script's contents
        var marqueecontents = 'Your text here'

        if (document.all)
            document.write('<marquee scrollAmount=' + speed + ' style="width:' + marqueewidth + '">' + marqueecontents + '</marquee>')

        function regenerate() {
            window.location.reload()
        }
        function regenerate2() {
            if (document.layers) {
                setTimeout("window.onresize=regenerate", 450)
                intializemarquee()
            }
        }
        function intializemarquee() {
            document.cmarquee01.document.cmarquee02.document.write('<nobr>' + marqueecontents + '</nobr>')
            document.cmarquee01.document.cmarquee02.document.close()
            thelength = document.cmarquee01.document.cmarquee02.document.width
            scrollit()
        }
        function scrollit() {
            if (document.cmarquee01.document.cmarquee02.left >= thelength * (-1)) {
                document.cmarquee01.document.cmarquee02.left -= speed
                setTimeout("scrollit()", 100)
            }
            else {
                document.cmarquee01.document.cmarquee02.left = marqueewidth
                scrollit()
            }
        }
        window.onload = regenerate2
        </script>

    What should I change in the script to make it work in FF and Chrome? Thanks
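    For what it's worth, the script above only ever takes the document.all branch (old IE) or the document.layers branch (Netscape 4), so modern Firefox and Chrome get neither. A minimal standards-based sketch of the same effect, moving an absolutely positioned element with setInterval (the element ids and markup are illustrative):

        <div id="marqueebox" style="position:relative; overflow:hidden; width:800px; height:20px;">
            <span id="marqueetext" style="position:absolute; left:800px; white-space:nowrap;">Your text here</span>
        </div>
        <script type="text/javascript">
        var box = document.getElementById("marqueebox");
        var text = document.getElementById("marqueetext");
        var x = box.offsetWidth;            // start just past the right edge
        setInterval(function () {
            x -= 3;                         // same "speed" as the original
            if (x < -text.offsetWidth) {    // fully scrolled off the left side
                x = box.offsetWidth;        // wrap around to the right
            }
            text.style.left = x + "px";
        }, 100);
        </script>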


  • Windows 7: Menu 'New->Text document' is missing, when not admin user

    - by Isamux
    Hi, when I'm logged in as a user that is not a member of the administrators group, the entry to create a new text file is missing from the right-click "New" menu. If I give the user admin rights, or start Explorer with admin rights, the "New - Text document" menu entry magically appears. As far as I can see, the registry entries are correct. Anybody got a solution for that side effect of being a normal user in Windows? Regards


  • How to enlarge a PDF document on Kindle?

    - by Gustavo
    Kindle doesn't zoom in on a PDF document, so I'd like to know if there is a way to enlarge the file myself before it is displayed on the Kindle screen. I've tried to convert some PDF files to the .azw Kindle format, but the images weren't converted, so I have to find another way.


  • How to document mail setup after hand-over.

    - by BradyKelly
    I've just moved a client's email services over from my host to Google Apps. I would like to hand over a document providing everything they (or their agent) would need should I not be available, etc. How are such documents normally structured, and what level of detail should they contain? I know user names and passwords are essential, and step-by-step instructions on how to manage domains on Google Apps are over the top, but what is a commonly used middle ground?


  • Missing fields in editform for SharePoint document library

    - by noesgard
    I have a document library where one of the columns does not show up in the edit and new forms. How do I get it back in the forms, so that the users can edit (and create) items including this column/field? The field is perfectly visible in "datasheet" mode, where you can edit it too, but I would love to have the forms back in order :o)


