Search Results

Search found 18314 results on 733 pages for 'document architecture'.


  • What is the best way to build a database from a MS Word document?

    - by Jayron Soares
    Please advise me on how to approach this problem: I have a sequential list of metadata in an MS Word document. The basic idea is to write a Python algorithm that iterates over the information and retrieves the name of each PROCESS so that the records can be queued into a database. Example metadata:

        Process: Process Walker (1965)
        Exact reference: Walker Process Equipment, Inc. v. Food Machinery Corp.
        Link: http://caselaw.lp.findlaw.com/scripts/getcase.pl?court=US&vol=382&invol=
        Type of procedure: Certiorari to the United States Court of Appeals for the Seventh Circuit.
        Parties: Walker Process Equipment, Inc.
        Sector: Systems is ...
        Start Date: Argued October 12-13, 1965
        Summary: Food Machinery Company has initiated a process to stop or slow the entry of competitors through the use of a patent obtained by fraud. The case concerned a patent on "knee action swing diffusers" used in aeration equipment for sewage treatment systems, and the question was whether "the maintenance and enforcement of a patent obtained by fraud before the patent office" may be a basis for antitrust punishment.
        Report of the evolution process: petitioner, in answer to respond...
        Importance: a) First case which established an analysis for the diagnosis of dispute…

    There are about 200 pages containing information like the above. My plan is to implement a Python algorithm that breaks this sequence of information apart and stores it in a web database (an open-source application that I'm looking for) in order to allow free consultation.
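    A minimal sketch of the record-splitting step (shown in C# here; the asker's target language is Python, but the same regex logic ports directly). It assumes the Word document has been exported to plain text; "cases.txt" is a hypothetical path:

        // Split the exported text into records and pull out each PROCESS name.
        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Text.RegularExpressions;

        class ProcessExtractor
        {
            static void Main()
            {
                string text = File.ReadAllText("cases.txt");

                // Each record starts at "Process:"; capture the name on that line
                // and the body up to the next "Process:" or the end of the file.
                MatchCollection records = Regex.Matches(text,
                    @"Process:\s*(?<name>.+?)\r?\n(?<body>.*?)(?=Process:|\z)",
                    RegexOptions.Singleline);

                var names = new List<string>();
                foreach (Match record in records)
                {
                    names.Add(record.Groups["name"].Value.Trim());
                    // record.Groups["body"] holds the remaining fields
                    // (Exact reference, Link, Type of procedure, ...) for
                    // later insertion into the database.
                }

                foreach (string name in names)
                    Console.WriteLine(name);
            }
        }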

    Read the article

  • Guest Blog: Secure your applications based on your business model, not your application architecture, by Yaldah Hakim

    - by Darin Pendergraft
    Today's businesses are looking for new ways to engage their customers and embrace mobile applications while staying in compliance, improving security, and driving down costs. For many, the solution is to host their applications with a Cloud Services provider, but concerns that a hosted application will be less secure continue to cause doubt. Oracle is recognized by Gartner as a leader in the User Provisioning and Identity and Access Governance magic quadrants, and has helped thousands of companies worldwide to secure their enterprise applications and identities. Now those same world-class IDM capabilities are available as a managed service, both for enterprise applications and for Oracle hosted applications.

    Listen to our IDM in the cloud podcast to hear Yvonne Wilson, Director of the IDM Practice in Cloud Service, explain how Oracle Managed Services provides IDM as a service.

    Selecting Oracle Managed Cloud Services to deploy and manage Oracle Identity Management Services is a smart business decision for a variety of reasons. Oracle hosted Identity Management infrastructure is deployed securely, is resilient to failures, and is supported by Oracle experts. In addition, Oracle Managed Cloud Services monitors customer solutions from several perspectives to ensure they continue to work smoothly over time. Customers gain the benefit of Oracle Identity Management expertise to achieve predictable and effective results for their organization. Customers can select Oracle to host and manage any number of Oracle IDM products as a service, as well as other Oracle security products, providing a flexible, cost-effective alternative to onsite hardware and software costs.

    Security is a major concern for all organizations, making it increasingly important to partner with a company like Oracle to ensure consistency and a layered approach to security and compliance when selecting a cloud provider. Oracle Managed Cloud Services makes this possible for our customers by taking away the headache and complexity of managing Identity Management infrastructure and other security solutions.

    For more information: http://www.oracle.com/us/solutions/cloud/managed-cloud-services/overview/index.html
    Twitter: https://twitter.com/OracleCloudZone
    Facebook: http://www.facebook.com/OracleCloudComputing

    Read the article

  • Architecture for Social Graph data that has a Time Frame Associated?

    - by Jay Stevens
    I am adding some "social" type features to an existing application. There is a limited number of node and edge types. Overall the data itself is relatively small (50,000 - 70,000 instances of each type of node), and there will be a number of edges (relationships) between them (almost all directional). This, I know, is relatively easy to represent with an RDF store (such as BrightstarDB) or something like Microsoft's Trinity (or really many of the NoSQL options). The thing that, I think, makes this a unique use case is that each relationship will have a timeframe associated with it (start and end dates). Right now, I'm thinking of just storing this in a relational structure and dealing with the headaches of "traversing the graph", but I'm looking for suggestions on a better approach (both in terms of data structure and server):

        Column
        ================
        From_Node_ID
        Relationship
        To_Node_ID
        StartDate
        EndDate

    Any suggestions or thoughts are welcome.
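    To see what the relational option looks like in practice, here is a sketch (added here, not from the question) of the time-filtered one-hop query that graph traversal reduces to; the Edges table name and the NULL-EndDate convention for open-ended relationships are assumptions:

        // One traversal step: all neighbours of @nodeId whose relationship was
        // active on @asOf. Traversing the graph is repeated application of this
        // query; it is kept as a string constant to stay provider-neutral.
        using System;

        class TimeFramedGraph
        {
            const string OneHopAsOf = @"
                SELECT To_Node_ID, Relationship
                FROM   Edges
                WHERE  From_Node_ID = @nodeId
                  AND  StartDate <= @asOf
                  AND  (EndDate IS NULL OR EndDate >= @asOf);";

            static void Main()
            {
                // Wire this up to your database provider of choice.
                Console.WriteLine(OneHopAsOf);
            }
        }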

    Read the article

  • How do I install Cacti 8.7i with PIA 3.1 (Plug-In Architecture) for 12.04 Server LTS?

    - by Pininy
    I am planning to upgrade my current 9.04 server to 12.04 LTS server. I've successfully backed up and restored my Cacti 8.7d installation to the 8.7i version that comes with the distribution. However, the plugin patch installation for PIA 3.1 is not working on Ubuntu. Can you assist? Or is there a way to install Cacti 8.8a, a stable version that comes with PIA 3.1 preinstalled?

    regards, thomas

    Read the article

  • Architecture Standards – BPMN vs. BPEL for Business Process Management

    - by pat.shepherd
    I often get asked which business process standard an organization should use: BPMN or BPEL? As I explain to folks, they both have strengths. Here is a great article by Mark Nelson that helps explain the benefits of both and where to use them. The good news is that, with Oracle SOA Suite and BPM Suite, you have the option and flexibility to use both in the same SCA model and runtime container. Good stuff.

    The right tool for the right job: BPEL and BPMN are both 'languages' or 'notations' for describing and executing business processes. Both are open standards. Most business process engines will support one or the other of these languages. Oracle, however, has chosen to support both and treat them as equals. This means that you have the freedom to choose which language to use on a process-by-process basis. And you can freely mix and match, even within a single composite. (A composite is the deployment unit in an SCA environment.) So why support both? Well, it turns out that BPEL is really well suited to modeling some kinds of processes and BPMN is really well suited to modeling other kinds of processes. Of course, there is a pretty significant overlap where either will do a great job. Read the full article: What BPM adds to SOA Suite | RedStack

    Read the article

  • Organization & Architecture UNISA Studies – Chap 6

    - by MarkPearl
    Learning Outcomes

    - Discuss the physical characteristics of magnetic disks
    - Describe how data is organized and accessed on a magnetic disk
    - Discuss the parameters that play a role in the performance of magnetic disks
    - Describe different optical memory devices

    Magnetic Disk

    The way data is stored on and retrieved from magnetic disks: data is recorded on, and later retrieved from, the disk via a conducting coil named the head (in many systems there are two heads). The write mechanism exploits the fact that electricity flowing through a coil produces a magnetic field. Electric pulses are sent to the write head, and the resulting magnetic patterns are recorded on the surface below, with different patterns for positive and negative currents.

    The physical characteristics of a magnetic disk: summarize from the book.

    The factors that play a role in the performance of a disk:

    - Seek time – the time it takes to position the head at the track
    - Rotational delay / latency – the time it takes for the beginning of the sector to reach the head
    - Access time – the sum of the seek time and rotational delay
    - Transfer time – the time it takes to transfer data

    RAID

    The rate of improvement in secondary storage performance has been considerably less than the rate for processors and main memory, so secondary storage has become a bit of a bottleneck. RAID works on the concept that, if one disk can only be pushed so far, additional gains in performance are to be had by using multiple components in parallel. Points to note about RAID:

    - RAID is a set of physical disk drives viewed by the operating system as a single logical drive
    - Data is distributed across the physical drives of an array in a scheme known as striping
    - Redundant disk capacity is used to store parity information, which guarantees data recoverability in case of a disk failure (not supported by RAID 0 or RAID 1)

    It is interesting to note that increasing the number of drives increases the probability of failure. To compensate for this decreased reliability, RAID makes use of stored parity information that enables the recovery of data lost due to a disk failure.
    The RAID scheme consists of 7 levels:

    - Level 0 (striping – non-redundant): N disks. Data availability: lower than a single disk. Large I/O transfer capacity: very high. Small I/O request rate: very high for both read and write.
    - Level 1 (mirroring – mirrored): 2N disks. Availability: higher than RAID 2-5 but lower than RAID 6. Transfer capacity: higher than a single disk. Request rate: up to twice that of a single disk for read.
    - Level 2 (parallel access – redundant via Hamming code): N + m disks. Availability: much higher than a single disk. Transfer capacity: highest of all listed alternatives. Request rate: approximately twice that of a single disk.
    - Level 3 (parallel access – bit-interleaved parity): N + 1 disks. Availability: much higher than a single disk. Transfer capacity: highest of all listed alternatives. Request rate: approximately twice that of a single disk.
    - Level 4 (independent access – block-interleaved parity): N + 1 disks. Availability: much higher than a single disk. Transfer capacity: similar to RAID 0 for read, significantly lower than a single disk for write. Request rate: similar to RAID 0 for read, significantly lower than a single disk for write.
    - Level 5 (independent access – block-interleaved distributed parity): N + 1 disks. Availability: much higher than a single disk. Transfer capacity: similar to RAID 0 for read, lower than a single disk for write. Request rate: similar to RAID 0 for read, generally lower than a single disk for write.
    - Level 6 (independent access – block-interleaved dual distributed parity): N + 2 disks. Availability: highest of all listed alternatives. Transfer capacity: similar to RAID 0 for read, lower than RAID 5 for write. Request rate: similar to RAID 0 for read, significantly lower than RAID 5 for write.

    Read pages 215-221 for a detailed explanation of the RAID levels.

    Optical Memory

    There are a variety of optical-disk systems available (read through the table on pages 222-223). Some of the devices include: CD, CD-ROM, CD-R, CD-RW, DVD, DVD-R, DVD-RW, Blu-ray DVD.

    Magnetic Tape

    Most modern systems use serial recording – data is laid out as a sequence of bits along each track. The typical recording technique used with serial tapes is referred to as serpentine recording. In this technique, when data is being recorded, the first set of bits is recorded along the whole length of the tape. When the end of the tape is reached, the heads are repositioned to record a new track, and the tape is again recorded along its whole length, this time in the opposite direction. That process continues, back and forth, until the tape is full. To increase speed, the read-write head is capable of reading and writing a number of adjacent tracks simultaneously. Data is still recorded serially along individual tracks, but blocks in sequence are stored on adjacent tracks. A tape drive is a sequential-access device. Magnetic tape was the first kind of secondary memory, and it is still widely used as the lowest-cost, slowest-speed member of the memory hierarchy.
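    A quick worked example of the disk performance parameters from earlier in these notes, with assumed figures (a 7,200 rpm drive and a 4 ms average seek time):

        \[
        \text{average rotational delay} = \frac{1}{2} \cdot \frac{60}{7200}\,\text{s} \approx 4.17\,\text{ms},
        \qquad
        \text{access time} = T_{\text{seek}} + T_{\text{rot}} \approx 4 + 4.17 = 8.17\,\text{ms}
        \]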

    Read the article

  • At which architecture level are you running BDD tests (e.g. Cucumber)

    - by Pete
    In the last year I have gotten quite fond of using SpecFlow (which is a .NET port of Cucumber). I have used it both to test an ASP.NET MVC application at the web layer, i.e. using browser automation, and at the controller layer. The first gives me higher confidence in the correctness of the application, because JavaScript is tested and improper controller configuration is also caught. But those tests are slower to execute, and more complex to implement, than those testing just at the controller layer. My tests are full functional tests, i.e. they exercise all layers of the application, all the way down to the database. So the first thing before any scenario is that the database is cleared of data, allowing the test to assume that only data specified in the "Given" block exists. I have also seen examples where the tests exercise just the model layer. So what are your experiences with these tools? Which layer of the application do you test?
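    For comparison, a minimal SpecFlow binding sketch at the controller layer (every type name here is a hypothetical stand-in; a real project would bind to its own controllers and reset its own database):

        using TechTalk.SpecFlow;

        // Hypothetical controller under test, standing in for an ASP.NET MVC controller.
        public class AccountsController
        {
            public string Register(string email) =>
                email.Contains("@") ? "Registered" : "Invalid";
        }

        [Binding]
        public class RegistrationSteps
        {
            private AccountsController _controller;
            private string _outcome;

            [Given(@"an empty database")]
            public void GivenAnEmptyDatabase()
            {
                // The post clears all data before each scenario so that only
                // data from the "Given" block exists; a real step would
                // truncate the tables here.
                _controller = new AccountsController();
            }

            [When(@"I register with email ""(.*)""")]
            public void WhenIRegisterWithEmail(string email)
            {
                // Calling the controller directly: faster than browser
                // automation, but JavaScript and routing go untested.
                _outcome = _controller.Register(email);
            }

            [Then(@"the outcome is ""(.*)""")]
            public void ThenTheOutcomeIs(string expected)
            {
                if (_outcome != expected)
                    throw new System.Exception(
                        "Expected " + expected + ", got " + _outcome);
            }
        }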

    Read the article

  • How do I cross-compile my application for Ubuntu 12.04 armhf architecture on a Ubuntu 12.04 i386 host?

    - by Jonathan Cave
    I have written a large application. I can successfully compile it in the following scenarios:

    - in a native compilation for the i386 host running Ubuntu 12.04
    - natively on a PandaBoard running Ubuntu 12.04 (this takes a long time)
    - using QEMU and a chroot on the host PC for the armhf PandaBoard target (this takes a very long time)

    I would like to cross-compile the application on the i386 host to run on a target such as the PandaBoard, so that builds complete in a timely fashion. So far, attempts using the arm-linux-gnueabihf toolchain from the repositories have produced binaries that do not run correctly. At this stage, I have no plans to package the software. What is the recommended way to achieve a successful cross-compile?

    Read the article

  • Oracle B2B - Synchronous Request Reply

    - by cdwright
    Introduction

    So first off, let me say I didn't create this demo (although I did modify it some); I got it from a member of the B2B development technical staff. Since it came with only a simple readme file, I thought I would take some time and write a more detailed explanation of how it works. Beginning with Oracle SOA Suite PS5 (11.1.1.6), B2B supports synchronous request/reply over HTTP using the b2b/syncreceiver servlet. I'm attaching the demo to this blog; it includes a SOA composite archive that needs to be deployed using JDeveloper, a B2B repository with two agreements that need to be deployed using the B2B console, and a test XML file that gets sent to the b2b/syncreceiver servlet using your favorite SOAP test tool (I'm using Firefox Poster here). You can download the zip file containing the demo here.

    The demo works by sending the sample XML request file (req.xml) to http://<b2bhost>:8001/b2b/syncreceiver using the SOAP test tool. The syncreceiver servlet keeps the socket connection open between itself and the test tool so that it can synchronously send the reply message back. When B2B receives the inbound request message, it is passed to the SOA composite through the default B2B Fabric binding. A simple reply is created in BPEL and returned to B2B, which then sends the message back to the test tool using that same socket connection. I'll show you the B2B configuration first, then we'll look at the SOA composite.

    Configuring B2B

    No additional configuration is necessary in order to use the syncreceiver servlet; it is already running when you start SOA. After importing the GC_SyncReqRep.zip repository file into B2B, you'll have the typical GlobalChips host trading partner and the Acme remote trading partner.

    Document Management

    The repository contains two very simple custom XML document definitions called Orders and OrdersResponse. In order to determine the trading partner agreement needed to process the inbound Orders document, you need to know two things about it: what it is and where it came from. So let's look at how B2B identifies the appropriate document definition for the message. The XSDs for these two document definitions themselves are not particularly interesting. Whenever you're dealing with custom XML documents, B2B identifies the appropriate document definition for each XML message using an XPath identification expression. The expression is entered for each of these document definitions under the document administration tab in the B2B console. The full XPath expression for the Orders document is //*[local-name()='shiporder']/*[local-name()='shipto']/*[local-name()='name']/text(). You can see this path in the XSD diagram and how it uniquely identifies this message. The OrdersResponse document is identified in the same way. The XPath expression for it is //*[local-name()='Response']/*[local-name()='Status']/text(). You can see how its path differs, uniquely identifying the reply from the request.

    Trading Partner Profile

    The trading partner profiles are very simple too. For GlobalChips, a generic identifier is used to identify the sender of the response document using the host trading partner name. For Acme, a generic identifier is also used to identify the sender of the inbound request using the remote trading partner name. The document types are added for the remote trading partner as usual. So the remote trading partner Acme is the sender of the Orders document, and it is the receiver of the OrdersResponse document.
    For the remote trading partner only, there needs to be a dummy channel, which gets used in the outbound response agreement. The channel is not actually used; it is just a necessary placeholder that needs to be there when creating the agreement.

    Trading Partner Agreement

    The agreements are equally simple. There is no validation, and translation is not an option for a custom XML document type. For the InboundAgreement (request), the document definition is set to OrdersDef. In the Agreement Parameters section, the generic identifiers have been added for the host and remote trading partners. That's all that is needed for the inbound transaction. For the OutboundAgreement (response), the document definition is set to OrdersResponseDef and the generic identifiers for the two trading partners are added. The remote trading partner dummy delivery channel is also added to the agreement.

    SOA Composite

    Import the SOA composite archive into JDeveloper as an EJB JAR file, then open the composite project. In the composite, open the b2bInboundSyncSvc exposed service and advance through the setup wizard. Select your Application Server Connection and advance to the Operations window. Notice here that the B2B binding is set to Receive; it is not set for Synchronous Request Reply. Continue advancing through the wizard as you normally would and select Finish at the end. Now open BPELProcess1 in the composite. The BPEL process is set up as a synchronous request/reply. The while loop is there just to give the process something to do. The actual reply message is prepared in the assignResponseValues assignment, followed by an Invoke of the B2B binding. Open the replyResponse Invoke and go to the Properties tab. You'll see that the fromTradingPartnerId, toTradingPartner, documentTypeName, and documentProtocolRevision properties have been set.

    Testing the Configuration

    To test the configuration, I used Firefox Poster. Enter the URL for the b2b/syncreceiver servlet and browse for the req.xml file that contains the test request message. In the Headers tab, add the property 'from' and give it the value 'Acme'. This is how B2B will know where the message is coming from; it will use that information along with the document type name to find the right trading partner agreement. Now post the message. You should get back a response with a status of '200 OK'. That's all there is to it.
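    A quick way to sanity-check those identification expressions outside the B2B console (a sketch, not part of the demo itself; it runs the Orders XPath from above against the sample req.xml):

        using System;
        using System.Xml;

        class XPathCheck
        {
            static void Main()
            {
                var doc = new XmlDocument();
                doc.Load("req.xml"); // the sample request shipped with the demo

                // Identification expression for the Orders document definition.
                string ordersXPath =
                    "//*[local-name()='shiporder']/*[local-name()='shipto']" +
                    "/*[local-name()='name']/text()";

                XmlNode match = doc.SelectSingleNode(ordersXPath);
                Console.WriteLine(match != null
                    ? "Matched Orders document; value: " + match.Value
                    : "No match - B2B would not identify this as an Orders document.");
            }
        }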

    Read the article

  • How to model inter-entity membership in entity-component architecture?

    - by croxis
    I'm falling in love with the simple grace of entity-component design, although I still have issues breaking away from MVC and OOP practices. Some of my game entities have membership relationships with each other (e.g. a player is a member of a city, a city is a member of a nation), and I am unsure of the best way to implement this. My initial reaction is to have a MemberOfCity component that points to the appropriate city component, but components are supposed to have no references to each other. My other option is to have a System do it, but that would require the system to persist data outside of a component. Is there a clean way to do this in an entity-component design, or am I trying to use a hammer on a screw and should I use a hybrid/another approach?
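    One common answer (an assumption on my part, not from the post): keep MemberOfCity as pure data that stores the city's entity id rather than a reference to another component, and let a system resolve the id through the component store. A sketch, with all names hypothetical:

        using System;
        using System.Collections.Generic;

        struct MemberOfCity { public int CityEntity; }  // data only: an entity id
        struct CityName     { public string Name; }

        class World
        {
            public Dictionary<int, MemberOfCity> Memberships = new Dictionary<int, MemberOfCity>();
            public Dictionary<int, CityName> Cities = new Dictionary<int, CityName>();
        }

        static class MembershipSystem
        {
            // The system, not the component, performs the lookup, so
            // components stay plain data with no cross-references.
            public static string CityOf(World w, int playerEntity)
            {
                if (w.Memberships.TryGetValue(playerEntity, out var m) &&
                    w.Cities.TryGetValue(m.CityEntity, out var city))
                    return city.Name;
                return null;
            }
        }

        class Demo
        {
            static void Main()
            {
                var w = new World();
                w.Cities[1] = new CityName { Name = "Riverfall" };       // hypothetical city
                w.Memberships[42] = new MemberOfCity { CityEntity = 1 }; // player 42 joins it
                Console.WriteLine(MembershipSystem.CityOf(w, 42));       // Riverfall
            }
        }

    The same indirection extends to city-in-nation membership.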

    Read the article

  • Organization & Architecture UNISA Studies – Chap 4

    - by MarkPearl
    Learning Outcomes

    - Explain the characteristics of memory systems
    - Describe the memory hierarchy
    - Discuss cache memory principles
    - Discuss issues relevant to cache design
    - Describe the cache organization of the Pentium

    Computer Memory Systems

    There are key characteristics of memory:

    - Location – internal or external
    - Capacity – expressed in terms of bytes
    - Unit of transfer – the number of bits read out of or written into memory at a time
    - Access method – sequential, direct, random or associative

    From a user's perspective, the two most important characteristics of memory are:

    - Capacity
    - Performance – access time, memory cycle time, transfer rate

    The trade-off for memory happens along three axes:

    - Faster access time, greater cost per bit
    - Greater capacity, smaller cost per bit
    - Greater capacity, slower access time

    This leads to a tiered approach in the use of memory. As one goes down the hierarchy, the following occurs:

    - Decreasing cost per bit
    - Increasing capacity
    - Increasing access time
    - Decreasing frequency of access of the memory by the processor

    The use of two levels of memory to reduce average access time works in principle, but only if conditions 1 to 4 above apply. A variety of technologies exist that allow us to accomplish this. Thus it is possible to organize data across the hierarchy such that the percentage of accesses to each successively lower level is substantially less than that of the level above.

    A portion of main memory can be used as a buffer to temporarily hold data that is to be read out to disk. This is sometimes referred to as a disk cache, and it improves performance in two ways:

    - Disk writes are clustered. Instead of many small transfers of data, we have a few large transfers of data. This improves disk performance and minimizes processor involvement.
    - Some data destined for write-out may be referenced by a program before the next dump to disk. In that case the data is retrieved rapidly from the software cache rather than slowly from disk.

    Cache Memory Principles

    Cache memory is substantially faster than main memory. A caching system works as follows: when a processor attempts to read a word of memory, a check is made to see if it is in cache memory. If it is, the data is supplied. If it is not in the cache, a block of main memory, consisting of a fixed number of words, is loaded into the cache. Because of the phenomenon of locality of reference, when a block of data is fetched into the cache, it is likely that there will be future references to that same memory location or to other words in the block.

    Elements of Cache Design

    While there are a large number of cache implementations, there are a few basic design elements that serve to classify and differentiate cache architectures:

    - Cache addresses
    - Cache size
    - Mapping function
    - Replacement algorithm
    - Write policy
    - Line size
    - Number of caches

    Cache Addresses

    Almost all non-embedded processors support virtual memory. Virtual memory in essence allows a program to address memory from a logical point of view without needing to worry about the amount of physical memory available. When virtual addresses are used, the designer may choose to place the cache between the MMU (memory management unit) and the processor, or between the MMU and main memory.
    The disadvantage of a virtual (logical) cache is that most virtual memory systems supply each application with the same virtual memory address space (each application sees virtual memory starting at address 0), which means the cache memory must be completely flushed with each application context switch, or extra bits must be added to each line of the cache to identify which virtual address space the address refers to.

    Cache Size

    We would like the size of the cache to be small enough that the overall average cost per bit is close to that of main memory alone, and large enough that the overall average access time is close to that of the cache alone. Also, larger caches are slightly slower than smaller ones.

    Mapping Function

    Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. The choice of mapping function dictates how the cache is organized. Three techniques can be used:

    - Direct – the simplest technique; maps each block of main memory into only one possible cache line
    - Associative – permits each main memory block to be loaded into any line of the cache
    - Set associative – exhibits the strengths of both the direct and associative approaches while reducing their disadvantages

    For detailed explanations of each approach, read the textbook (pages 148-154).

    Replacement Algorithm

    For associative and set-associative mapping, a replacement algorithm is needed to determine which of the existing blocks in the cache must be replaced by a new block. There are four common approaches:

    - LRU (least recently used)
    - FIFO (first in, first out)
    - LFU (least frequently used)
    - Random selection

    Write Policy

    When a block resident in the cache is to be replaced, there are two cases to consider. If no writes to that block have happened in the cache, it can simply be discarded. If a write has occurred, the changes in the cache must be propagated back to main memory. There are several approaches to achieve this, including:

    - Write through – all writes to the cache are done to main memory as well, at the point of the change
    - Write back – when a block is replaced, all dirty bits are written back to main memory

    The problem is complicated when we have multiple caches; there are techniques to accommodate this, but I have not summarized them.

    Line Size

    When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality, which states that data in the vicinity of a referenced word are likely to be referenced in the near future. As the block size increases, more useful data are brought into the cache. The hit ratio will begin to decrease, however, as the block becomes even bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced. Two specific effects come into play:

    - Larger blocks reduce the number of blocks that fit into a cache. Because each block fetch overwrites older cache contents, a small number of blocks results in data being overwritten shortly after it is fetched.
    - As a block becomes larger, each additional word is farther from the requested word and therefore less likely to be needed in the near future.

    The relationship between block size and hit ratio is complex, and no single approach is judged to be the best in all circumstances.
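    Before moving on to concrete organizations, a worked example of the two-level access time principle from earlier, with assumed figures (cache access T1 = 10 ns, main memory access T2 = 100 ns, hit ratio h = 0.95):

        \[
        T_{\text{avg}} = h\,T_1 + (1-h)(T_1 + T_2)
                       = 0.95 \times 10 + 0.05 \times (10 + 100)
                       = 15\ \text{ns}
        \]

    Even a modest miss rate dominates the average, which is why the hierarchy only pays off when the hit ratio is high.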
    Pentium 4 and ARM Cache Organizations

    The processor core consists of four major components:

    - Fetch/decode unit – fetches program instructions in order from the L2 cache, decodes these into a series of micro-operations, and stores the results in the L1 instruction cache
    - Out-of-order execution logic – schedules execution of the micro-operations subject to data dependencies and resource availability; thus micro-operations may be scheduled for execution in a different order than they were fetched from the instruction stream. As time permits, this unit schedules speculative execution of micro-operations that may be required in the future
    - Execution units – these units execute micro-operations, fetching the required data from the L1 data cache and temporarily storing results in registers
    - Memory subsystem – this unit includes the L2 and L3 caches and the system bus, which is used to access main memory when the L1 and L2 caches have a cache miss, and to access the system I/O resources

    Read the article

  • Working with documents and SharePoint - Best practices

    - by KunaalKapoor
    Follow these simple guidelines to make collaboration using SharePoint easier:

    1. File name: While you are allowed to use spaces in your filename (and it may even seem logical to do so), don't use them if your file will end up (or is born) on SharePoint. When you use the "download a copy" functionality, SharePoint will replace the spaces with an "_". This might (will) result in inconsistency when you upload the "same" file again, since SharePoint will see it as a different file (the filename is different). I recommend using a filename following the capitalization naming guideline. For instance, the document "Overall governance model.docx" would be named "OverallGovernanceModel.docx". Use the TITLE field in the Office applications to give your document a title (and subtitle, keywords, etc.). The Title column can be used in a view in a library. You can get to the document properties by clicking on Office Button/Prepare/Properties (Office 2007). This is metadata that is stored with the document and will remain with it (even if you exchange the document via e-mail or an external hard drive). The filename cannot be longer than 128 characters (and that is IMHO far beyond reasonable). You cannot use any of these characters: " # % & * : < > ? \ / { | } ~

    2. Versioning: SharePoint has a built-in versioning system. You can work with major (published) versions and minor (draft) versions. For each of these two version types, you can configure how many versions are kept. Watch out: each version is saved in full, not only the delta between two versions, and this counts toward your site collection quota. (Example: you have a Word document with a size of 2 MB; when you keep 5 drafts, this results in storing (and consuming) 10 MB.) So don't call your document "NewUserAccountProcessDRAFTv1.docx", but "NewUserAccountProcess.docx", and use the versioning settings in your library. You can enable views on your library to display the version number, and you can have the version number displayed in a Word document.

    3. Use metadata: Use metadata to assign other properties to documents, so they can be easily identified, sorted, or grouped by.
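    A small sketch that turns the file-name rules above into an automated check (the forbidden-character list and the 128-character limit come straight from the guidelines; the class name is arbitrary):

        using System;
        using System.Linq;

        class SharePointFileNameCheck
        {
            // The characters SharePoint rejects, per the list above.
            static readonly char[] Forbidden = "\"#%&*:<>?\\/{|}~".ToCharArray();

            static bool IsValid(string name) =>
                name.Length > 0 &&
                name.Length <= 128 &&
                name.IndexOf(' ') < 0 &&            // avoid the "_" substitution
                !name.Any(c => Forbidden.Contains(c));

            static void Main()
            {
                Console.WriteLine(IsValid("OverallGovernanceModel.docx"));   // True
                Console.WriteLine(IsValid("Overall governance model.docx")); // False: spaces
            }
        }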

    Read the article

  • If most of the team can't follow the architecture, what do you do?

    - by Chris
    Hi all, I'm working on a greenfield project with two other developers. We're all contractors; I and one other just started working on the project, while the original developer has been doing most of the basic framework coding. In the past month, my fellow programmer and I have been frustrated by the design decisions made by our co-worker. Here's a little background information: at face value, the application appeared to be your standard n-layered web application using C# on the 3.5 framework. We have a data layer, business layer, and a web interface. But as we got deeper into the project, we found some very interesting things that have caused us some trouble. There is a custom data-access SqlHelper-type base which only accepts dictionary key/value entries and returns only data tables. There are no entity objects, but there are some massive objects which do everything and are then tossed into session for persistence. The general idea is that the pages (.aspx) don't do anything, while the controls (.ascx) do everything. The general flow is that a client clicks on a button, which goes to a user control base, which passes a process request to the 'BLL' class, which goes to the page processor, which then goes to a getControlProcessor, which at last actually processes the request. The request itself is made up of a dictionary passing a string-valued method name, a stored procedure name, a control name, and possibly a value. All switching of the processing is done by comparing the string values of the control names and method names. Pages are linked together via a common header control that uses a combination of JavaScript and tables to create a hyperlink effect. And as I found out yesterday, a simple hyperlink between one page and another does not work, because quite a bit of information needs to be in session to determine which control to display on a page. My fellow programmer and I both believe this is a strange and uncommon approach to web application development. Both of us have been in this business for over five years, and neither of us has seen this approach before. My question is this: how should we approach our co-worker and voice our concerns, and what should we do if he does not want to accept the critique? We both do not want to insult the work that has been done, but we feel that continuing this way will create a nightmare for development. Thanks for your comments.

    Read the article

  • Is there any kind of established architecture for browser-based MMO games?

    - by black_puppydog
    I am beginning the development of a browser-based game in which players take certain actions at any point in time. Big parts of gameplay will happen in real life and just have to be entered into the system. I believe a good comparison might be a platform for managing fantasy football, although I have virtually no experience playing that, so please correct me if I am mistaken here. The point is that some events happen in the program (i.e. on the server, out of reach for the players), like pulling new results from some data source, the starting of a new round by a game master, and such. Other events happen in real life (two players closing a deal on the transfer of some team member or whatnot - again: I have never played fantasy football) and have to be entered into the system. The first part is pretty easy, since the game masters will be "staff" and thus can be trusted to a certain degree not to mess with the system. But the second part bothers me quite a lot, especially since the actions may involve multiple steps and interactions with different players, like registering a deal with the system that then has to be approved by the other party, or denied and passed on to a game master to decide. I would of course like to separate the game logic as far as possible from the presentation and basic form validation, but I am unsure how to do this in a clean fashion. Of course I could (and will) put some effort into making my own architectural decisions and prototyping different ideas. But I am bound to make some stupid mistakes at some point, so I would like to avoid some of that by getting a little "book smart" beforehand. So the question is: is there any kind of architectural work that I can read up on? Papers, blogs, maybe design documents or even source code? Writing this down, it seems more like a business application with business rules, workflows and such... Any good entry points for that?
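    As a starting point while better references come in, one way (an assumption, not an established architecture) to keep the multi-step deal workflow described above out of the presentation layer is to model a deal as a small state machine whose transitions are guarded by the game logic:

        using System;

        enum DealState { Proposed, Approved, Denied, Escalated }

        class Deal
        {
            public DealState State { get; private set; } = DealState.Proposed;

            public void Approve()          // the other party accepts
            {
                Require(DealState.Proposed);
                State = DealState.Approved;
            }

            public void Deny()             // denied deals pass to a game master
            {
                Require(DealState.Proposed);
                State = DealState.Escalated;
            }

            public void Rule(bool accept)  // a game master decides
            {
                Require(DealState.Escalated);
                State = accept ? DealState.Approved : DealState.Denied;
            }

            void Require(DealState expected)
            {
                if (State != expected)
                    throw new InvalidOperationException(
                        "invalid transition from " + State);
            }
        }

        class Demo
        {
            static void Main()
            {
                var deal = new Deal();
                deal.Deny();        // other party rejects -> escalated
                deal.Rule(true);    // game master approves anyway
                Console.WriteLine(deal.State); // Approved
            }
        }

    The web layer then only submits transition requests and renders the current state.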

    Read the article

  • Calling a webservice via JavaScript

    - by jeroenb
    If you want to consume a webservice, it's not always necessary to do a postback. And it's not even that hard!

    1. Webservice

    You have to add the ScriptService attribute to the webservice:

        [System.Web.Script.Services.ScriptService]
        public class PersonsInCompany : System.Web.Services.WebService
        {

    Create a WebMethod:

        [WebMethod]
        public Person GetPersonByFirstName(string name)
        {
            List<Person> personSelect = persons.Where(p => p.FirstName.ToLower().StartsWith(name.ToLower())).ToList();
            if (personSelect.Count > 0)
                return personSelect.First();
            else
                return null;
        }

    2. Webpage

    Add a reference to your service in your ScriptManager. Then add some JavaScript: first call your webservice (ClassName.WebMethod, so here PersonsInCompany.GetPersonByFirstName), and add a callback to catch the result from the webservice and use it to update your page:

        <script type="text/javascript">
        function GetPersonInCompany() {
            var val = document.getElementById("MainContent_TextBoxPersonName");
            PersonsInCompany.GetPersonByFirstName(val.value, FinishCallback);
        }
        function FinishCallback(result) {
            document.getElementById("MainContent_LabelFirstName").innerHTML = result.FirstName;
            document.getElementById("MainContent_LabelName").innerHTML = result.Name;
            document.getElementById("MainContent_LabelAge").innerHTML = result.Age;
            document.getElementById("MainContent_LabelCompany").innerHTML = result.Company;
        }
        </script>

    If you have any questions, feel free to contact me! You can download the code here.

    Read the article

  • Software Architecture: How to divide work across a network of computers?

    - by Morpork
    Imagine a scenario as follows: let's say you have a central computer which generates a lot of data. This data must go through some processing which, unfortunately, takes longer than the generation. In order for the processing to catch up with real time, we plug in more slave computers. Further, we must take into account the possibility of slaves dropping out of the network mid-job, as well as additional slaves being added. The central computer should ensure that all jobs are finished to its satisfaction, and that jobs dropped by a slave are retasked to another. The main question is: what approach should I use to achieve this? But perhaps the following would help me arrive at an answer:

    - Is there a name or design pattern for what I am trying to do?
    - What domain of knowledge do I need to get these computers talking to each other? (E.g. will a database, which I have some knowledge of, be enough, or will this involve sockets, of which I have no knowledge yet?)
    - Are there any examples of such a system? The main question is a bit general, so it would be good to have a starting/reference point.

    Note: I am assuming constraints of C++ and Windows, so solutions pointing in that direction would be appreciated.
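    What is being described is usually called the master/worker (or task farm) pattern. A minimal in-process sketch (in C# rather than the asker's C++, to keep it short; over a network the same queue sits behind sockets or a message queue):

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        class Master
        {
            static readonly BlockingCollection<int> Jobs = new BlockingCollection<int>();

            static void Worker(string name, bool flaky)
            {
                foreach (int job in Jobs.GetConsumingEnumerable())
                {
                    if (flaky && job == 3)
                    {
                        // Simulated drop-out: the job goes back on the queue
                        // so another slave picks it up.
                        Console.WriteLine(name + " dropped job " + job + "; re-queuing");
                        Jobs.Add(job);
                        return;
                    }
                    Console.WriteLine(name + " finished job " + job);
                }
            }

            static void Main()
            {
                for (int i = 0; i < 8; i++) Jobs.Add(i);

                var a = new Thread(() => Worker("slave-1", flaky: true));
                var b = new Thread(() => Worker("slave-2", flaky: false));
                a.Start(); b.Start();

                // A real master would track acknowledgements and heartbeats;
                // here we simply wait for the queue to drain.
                while (Jobs.Count > 0) Thread.Sleep(10);
                Jobs.CompleteAdding();
                a.Join(); b.Join();
            }
        }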

    Read the article

  • In choosing a service-oriented architecture framework that needs to work with .NET and with Java, what to look for?

    - by cm007
    I am planning to write an application in which there will be a service (call it A) listening for particular commands. This service will then relay those commands to other services (call them B and C) which are written in .NET and Java, respectively (service A chooses whether to relay to service B or C depending on the contents of the request to service A). I am looking for a framework that will allow interoperability with both .NET and Java, for example WCF or JAX-WS, or writing a custom framework (e.g., JSON REST commands over HTTP, similar to http://code.google.com/p/selenium/wiki/JsonWireProtocol). What questions/aspects should I consider in deciding?
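    Under the custom JSON-over-HTTP option, service A's job reduces to routing. A sketch (the URLs and the "target" field are hypothetical, and both endpoints must exist for the call to succeed):

        using System;
        using System.Net.Http;
        using System.Text;
        using System.Threading.Tasks;

        class CommandRelay
        {
            static readonly HttpClient Http = new HttpClient();

            static async Task<string> RelayAsync(string commandJson)
            {
                // Routing rule (assumption): commands flagged for .NET go to
                // service B, everything else to the Java service C. A real
                // relay would parse the JSON instead of string-matching.
                string target = commandJson.Contains("\"target\":\"dotnet\"")
                    ? "http://service-b.example/api/commands"  // .NET (B)
                    : "http://service-c.example/api/commands"; // Java (C)

                var content = new StringContent(commandJson, Encoding.UTF8, "application/json");
                HttpResponseMessage response = await Http.PostAsync(target, content);
                return await response.Content.ReadAsStringAsync();
            }

            static async Task Main()
            {
                Console.WriteLine(await RelayAsync("{\"target\":\"dotnet\",\"op\":\"ping\"}"));
            }
        }

    Whichever framework is chosen, the comparison largely comes down to the message contract (how commands are encoded), the transport, and how errors from B and C surface back through A.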

    Read the article

  • MMORPG Server architecture: How to handle player input (messages/packets) while the server has to update many other things at the same time?

    - by Renann
    Yes, the question is very difficult. This is more or less what I'm thinking of up to now:

        while (true)
        {
            if (hasMessage)
            {
                handleTheMessage();
            }
        }

    But while I'm receiving the players' input, I also have objects that need to be updated and, of course, monsters walking around (whose locations need to be updated on the game client all the time), among other things. What should I do? Make a thread to handle the things that can't be stopped no matter what? Code an "else" into the infinite loop where I update the other things when I have no player input to handle? Or even: should I only update the things that at least one player can see? These are just suggestions... I'm really confused about it. If there's a book that covers these things, I'd like to know. It's not that important, but I'm using the Lidgren lib, C# and XNA to code both server and client. Thanks in advance.
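    One common shape for this (a sketch, not Lidgren-specific): a fixed-tick loop that first drains whatever messages have arrived, then updates the world, so neither input handling nor monster updates starve the other:

        using System;
        using System.Collections.Concurrent;
        using System.Diagnostics;
        using System.Threading;

        class GameServer
        {
            // The network thread (e.g. Lidgren's) would enqueue messages here.
            static readonly ConcurrentQueue<string> Inbox = new ConcurrentQueue<string>();

            static void Main()
            {
                const int tickMs = 50; // 20 ticks per second
                var clock = Stopwatch.StartNew();
                long nextTick = 0;

                Inbox.Enqueue("player1: move north"); // stand-in for a real packet

                while (true)
                {
                    // 1) Handle everything that arrived since the last tick.
                    string msg;
                    while (Inbox.TryDequeue(out msg))
                        Console.WriteLine("handled: " + msg);

                    // 2) Advance the world whether or not input arrived.
                    UpdateMonstersAndObjects();

                    // 3) Sleep until the next tick boundary.
                    nextTick += tickMs;
                    int wait = (int)(nextTick - clock.ElapsedMilliseconds);
                    if (wait > 0) Thread.Sleep(wait);
                }
            }

            static void UpdateMonstersAndObjects()
            {
                // Per-tick world update. Broadcasting only what each player
                // can see (interest management) is the usual answer to the
                // last question in the post.
            }
        }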

    Read the article

  • TinyMCE include crashes IE8

    - by dkris
    I am trying to open a popup onclick using a function call, as shown below:

        <a id="forgotPasswordLink" href="#"
           onclick="openSupportPage(document.getElementById('forgotPasswordLink').innerHTML);">
          Some Text
        </a>

    I am creating the HTML for the popup page on the fly and also including the TinyMCE source file there. The code is as shown below:

        <script type="text/javascript">
        <!--
        function openSupportPage(unsafeSupportText) {
            var features = "width=700,height=400,status=yes,toolbar=no,menubar=no,location=no,scrollbars=yes";
            var winId = window.open('', 'Test', features);
            winId.focus();
            winId.document.open('text/html', 'replace');
            winId.document.write('<html><head><title>' + document.title + '</title><link rel="stylesheet" href="./css/default.css" type="text/css">\n');
            winId.document.write('<script src="./js/tiny_mce/tiny_mce.js" type="text/javascript" language="javascript">Script_1</script>\n');
            winId.document.write('<script src="./js/support_page.js" type="text/javascript" language="javascript">Script_2</script>\n');
            winId.document.write('</head><body onload="inittextarea()">\n'); /* function call which will use the TinyMCE source file */
            winId.document.write(' \n');
            winId.document.write('<p>&#160;</p>');
            var hiddenFrameHTML = document.getElementById("HiddenFrame").innerHTML;
            hiddenFrameHTML = hiddenFrameHTML.replace(/&amp;/gi, "&");
            hiddenFrameHTML = hiddenFrameHTML.replace(/&lt;/gi, "<");
            hiddenFrameHTML = hiddenFrameHTML.replace(/&gt;/gi, ">");
            winId.document.write(hiddenFrameHTML);
            winId.document.write('<textarea id="content" rows="10" style="width:100%">\n');
            winId.document.write(document.getElementById(top.document.forms[0].id + ":supportStuff").innerHTML);
            winId.document.write('</textArea>\n');
            var hiddenFrameHTML2 = document.getElementById("HiddenFrame2").innerHTML;
            hiddenFrameHTML2 = hiddenFrameHTML2.replace(/&amp;/gi, "&").replace(/&lt;/gi, "<").replace(/&gt;/gi, ">");
            winId.document.write(hiddenFrameHTML2);
            winId.document.write('</body></html>\n');
            winId.document.close();
        }
        // -->
        </script>

    The support_page.js file contains the following:

        function inittextarea() {
            tinyMCE.init({
                elements : "content",
                mode : "exact",
                theme : "advanced",
                readonly : true,
                setup : function(ed) {
                    ed.onInit.add(function() {
                        tinyMCE.activeEditor.execCommand("mceToggleVisualAid");
                    });
                }
            });
        }

    The problem arises when the onclick event is fired and the popup opens: IE8 stops responding and seems to hang. It works fine on Chrome, Firefox and Safari. I suspect the issue is the TinyMCE script inclusion, because when I comment out the lines that include tiny_mce.js and support_page.js, the popup renders with no issues. I am also using the latest TinyMCE release. Please let me know why this is happening and what the resolution could be.

    Read the article

  • What is better: Developing a Web project in MVC or N-Tier Architecture?

    - by Starx
    I have asked a similar question before and got a convincing answer as well: http://stackoverflow.com/questions/2843311/what-is-difference-of-developing-a-website-in-mvc-and-3-tier-or-n-tier-architectu. Following the conclusion of that question, I started developing projects in an n-tier architecture. Just about an hour ago, I asked another question about the best design pattern for designing the interface of a webpage; there, the most-voted answer suggests I use the MVC architecture: http://stackoverflow.com/questions/2930300/what-is-the-best-desing-pattern-to-design-the-interface-of-an-webpage. Now I am confused. The first post suggested that both are similar, the difference being that in n-tier the tiers are physically and logically separated, and one layer has access only to the one above and below it, not to all the layers. I think ASP.NET uses a 3-tier architecture for developing applications and web applications, whereas frameworks like Zend and Symfony use MVC. I just want to stick to the pattern that is best suited for web project development. Maybe this is a very silly confusion, but if someone could clear it up, I would be very grateful.

    Read the article

  • How do I add a version number field to an Office 2007 docx document?

    - by Jon Cage
    I've been having a crack at using fields in Word 2007 and have hit a slight stumbling block. I want to add a field which I can use in several parts of the document to represent the current version (something of the form v0.1), but I can't see an obvious way to do it. The only provision I've found for this is something called RevNum, but that gets updated every time I save the document. Is there a field I've missed, a way of adding custom fields, or something similar?
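    One approach worth trying (a suggestion added here, not from the original question): Word 2007 supports custom document properties, which act like user-defined fields. Define one via Office Button > Prepare > Properties > Document Properties > Advanced Properties > Custom (e.g. a property named Version with the value v0.1), then insert it wherever needed via Insert > Quick Parts > Field > DocProperty; the field code is { DOCPROPERTY Version }. Change the property once, then select all (Ctrl+A) and press F9 to refresh every occurrence.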

    Read the article

  • Server and Application architecture for large outgoing email volume.

    - by Ezequiel
    Hi, we need to develop an application to send a large volume of emails (newsletters). We estimate 15 million emails per month (6-10 emails per second). Would you recommend a proper architecture for this application? Should we have several MTA agents and use them in round-robin fashion? What considerations should we take into account so as not to be treated as spammers (what we are going to send really isn't spam)? Thanks for your help. Ezequiel
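    A quick check on the stated rate, assuming a 30-day month and a perfectly even spread:

        \[
        \frac{15{,}000{,}000\ \text{messages}}{30 \times 24 \times 3600\ \text{s}} \approx 5.8\ \text{messages/s}
        \]

    so the 6-10 per second estimate implies near-continuous sending with only modest headroom for bursts.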

    Read the article
