Search Results

Search found 66971 results on 2679 pages for 'nested set model'.


  • OSI Model

    - by kaleidoscope
    The Open System Interconnection Reference Model (OSI Reference Model or OSI Model) is an abstract description for layered communications and computer network protocol design. In its most basic form, it divides network architecture into seven layers which, from top to bottom, are the Application, Presentation, Session, Transport, Network, Data Link, and Physical layers. It is therefore often referred to as the OSI Seven Layer Model. A layer is a collection of conceptually similar functions that provides services to the layer above it and receives services from the layer below it.

    Description of the OSI layers:

    Layer 1: Physical Layer
    - Defines the electrical and physical specifications for devices; in particular, it defines the relationship between a device and a physical medium.
    - Handles establishment and termination of a connection to a communications medium.
    - Participates in the process whereby the communication resources are effectively shared among multiple users.
    - Performs modulation, or conversion between the representation of digital data in user equipment and the corresponding signals transmitted over a communications channel.

    Layer 2: Data Link Layer
    - Provides the functional and procedural means to transfer data between network entities.
    - Detects and possibly corrects errors that may occur in the Physical Layer; the error check is performed using the Frame Check Sequence (FCS).
    - Examines the destination address to decide whether to process the rest of the frame itself or pass it on to another host.
    - Is divided into two sublayers: the Media Access Control (MAC) layer and the Logical Link Control (LLC) layer.
    - The MAC sublayer controls how a computer on the network gains access to the data and permission to transmit it.
    - The LLC sublayer controls frame synchronization, flow control, and error checking.

    Layer 3: Network Layer
    - Provides the functional and procedural means of transferring variable-length data sequences from a source to a destination via one or more networks.
    - Performs network routing functions, and may also perform fragmentation and reassembly, and report delivery errors.
    - Routers operate at this layer, sending data throughout the extended network and making the Internet possible.

    Layer 4: Transport Layer
    - Provides transparent transfer of data between end users, providing reliable data transfer services to the upper layers.
    - Controls the reliability of a given link through flow control, segmentation/desegmentation, and error control.
    - Can keep track of the segments and retransmit those that fail.

    Layer 5: Session Layer
    - Controls the dialogues (connections) between computers.
    - Establishes, manages, and terminates the connections between the local and remote application.
    - Provides for full-duplex, half-duplex, or simplex operation, and establishes checkpointing, adjournment, termination, and restart procedures.
    - Is implemented explicitly in application environments that use remote procedure calls.

    Layer 6: Presentation Layer
    - Establishes a context between Application Layer entities, in which the higher-layer entities can use different syntax and semantics, as long as the presentation service understands both and the mapping between them. The presentation service data units are then encapsulated into session protocol data units and moved down the stack.
    - Provides independence from differences in data representation (e.g., encryption) by translating between application and network formats. The presentation layer transforms data into the form that the application layer can accept. This layer formats and encrypts data to be sent across a network, providing freedom from compatibility problems. It is sometimes called the syntax layer.

    Layer 7: Application Layer
    - Interacts with software applications that implement a communicating component.
    - Identifies communication partners, determines resource availability, and synchronizes communication.
      - When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit.
      - When determining resource availability, the application layer must decide whether sufficient network resources for the requested communication exist.
      - In synchronizing communication, all communication between applications requires cooperation that is managed by the application layer.
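    To make the layering concrete, here is a minimal sketch (Python; example.com and the hand-written HTTP request are illustrative assumptions, not from the post) showing where the layers surface in everyday socket code: the application supplies the layer-7 data, asking for a TCP stream delegates layer 4 to the OS, and layers 3 down to 1 are handled by the kernel and the network hardware:

        import socket

        # Layer 7 (Application): compose an HTTP request by hand
        request = b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"

        # Layer 4 (Transport): create_connection gives us TCP's reliable byte stream
        with socket.create_connection(("example.com", 80)) as sock:
            sock.sendall(request)   # Layers 3-1 (IP, data link, physical) happen below this API
            reply = sock.recv(4096)

        print(reply.split(b"\r\n", 1)[0])  # e.g. b'HTTP/1.1 200 OK'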

    Read the article

  • Bancassurers Seek IT Solutions to Support Distribution Model

    - by [email protected]
    Oracle Insurance's director of marketing for EMEA, John Sinclair, attended the third annual Bancassurance Forum in Vienna last month. He reports that the outlook for bancassurance in EMEA remains positive, despite changing market conditions that have led a number of bancassurers to re-examine their business models.

    Vienna is at the crossroads between mature Western European markets, where bancassurance is now an established best practice, and more recently tapped Eastern European markets that offer the greatest growth potential. Attendance at the Bancassurance Forum was good, with 87 bancassurance attendees, most in very senior positions in the industry. The conference provided the chance for a lively discussion among bancassurers looking to keep abreast of the latest trends in one of Europe's most successful distribution models for insurance.

    Even under normal business conditions, there is a great demand for best practice sharing within the industry, as there is no standard formula for success. Each company has to chart its own course and choose the strategies for sales, product development and the structure of ownership that make sense for its business, and as soon as they get the mix right, bancassurers need to adapt it to keep up with ever-changing regulations, competition and economic conditions. To optimize the overall relationship between banking and insurance for mutual benefit, a balance needs to be struck between potentially conflicting interests. The banking side of the house is looking for greater wallet share from its customers and the ability to increase profitability by bundling insurance products with higher margins - especially in light of the recent economic crisis, where margins for traditional banking products are low and competition is high. The insurance side of the house seeks access to new customers through a complementary distribution channel that is efficient and cost-effective. To make the relationship work, it is important that both sides of the same house forge strategic and long-term relationships - irrespective of whether the underlying business model is supported by a distribution agreement, cross-ownership or other forms of capital structure.

    However, this third annual conference was not held under normal business conditions. The conference took place in challenging, yet interesting times. ING's forced spinoff of its insurance operations under pressure from the EU Commission and the troubling losses suffered by Allianz as a result of the Dresdner Bank sale were fresh in everyone's mind. One year after markets crashed, there is now enough hindsight to better understand the implications for bancassurance and the best practices that are emerging to deal with them. The loan-driven business that has been crucial to bancassurance up till now evaporated during the crisis, leaving bancassurers grappling with how to change their overall strategy from a loan-driven to a more diversified model. Attendees came to the conference to learn what strategies were working - not only to cope with the market shift, but to take advantage of it as markets pick up. Over the course of 14 customer case studies and numerous analyst presentations, topical issues ranging from getting the business model right to the impact of Solvency II on capital structuring were debated openly.
    Many speakers alluded to the need to specifically design insurance products with the banking distribution channel in mind, which brings with it specific requirements, such as a high degree of standardization to achieve efficiency and reduce training costs. Moreover, products must be engineered to suit end consumers who consider banks a one-stop shop. The importance of IT to the successful implementation of bancassurance strategies was a theme that surfaced regularly throughout the conference. The cross-selling opportunity - which will ultimately determine the success or failure of any bancassurance model - can only be fully realized through a flexible IT architecture that enables banking and insurance processes to be integrated and presented to front-line staff through a common interface. However, the reality is that most bancassurers have legacy IT systems, which constrain the business's ability to implement new strategies for maintaining competitiveness in turbulent times.

    My colleague Glenn Lottering, who chaired the conference, believes that the primary opportunities for bancassurers to extract value from their IT infrastructure investments lie in distribution management, risk management with the advent of Solvency II, and achieving operational excellence. "Oracle is ideally suited to meet the needs of bancassurance," Glenn noted, "supplying market-leading software for both banking and insurance. Oracle provides adaptive systems that let customers easily integrate hybrid business processes from both worlds while leveraging existing IT infrastructure."

    Overall, the consensus at the conference was that the outlook for bancassurance in EMEA remains positive, despite changing market conditions that have led a number of bancassurers to re-examine their business models.

    John Sinclair is marketing director for Oracle Insurance in EMEA. He has more than 20 years of experience in insurance and financial services.

    Read the article

  • Set-Cookie Headers getting stripped in ASP.NET HttpHandlers

    - by Rick Strahl
    Yikes, I ran into a real bummer of an edge case yesterday in one of my older low-level handler implementations (for West Wind Web Connection in this case). Basically this handler is a connector for a backend Web framework that creates self-contained HTTP output. An ASP.NET handler captures the full output, and then shoves the result down the ASP.NET Response object pipeline, writing the content into Response.OutputStream and separately sending the HTTP headers via the Response.Headers collection. The headers turned out to be the problem, and specifically HTTP cookies, which for some reason ended up getting stripped out in some scenarios.

    My handler works like this: the HTTP response from the backend app returns a full set of HTTP headers plus the content. The ASP.NET handler reads the headers one at a time and then dumps them out via Response.AppendHeader(). But I found that in some situations Set-Cookie headers sent along were simply stripped inside of the HTTP handler. After a bunch of back and forth with some folks from Microsoft (thanks Damien and Levi!) I managed to pin this down to a very narrow edge scenario. It's easiest to demonstrate the problem with a simple example HttpHandler implementation. The following simulates the very much simplified output generation process that fails in my handler. Specifically, I have a couple of headers, including a Set-Cookie header, and some output that gets written into the Response object:

        using System.Web;

        namespace wwThreads
        {
            public class Handler : IHttpHandler
            {
                /* NOTE:
                 *
                 * Run as a web.config-defined handler (see entry below)
                 *
                 * Best way is to look at the HTTP headers in Fiddler
                 * or Chrome/FireBug/IE tools and look for the
                 * WWTHREADSID cookie in the outgoing Response headers
                 * (if the cookie is not there you see the problem!)
                 */
                public void ProcessRequest(HttpContext context)
                {
                    HttpRequest request = context.Request;
                    HttpResponse response = context.Response;

                    // If ClearHeaders is used, the Set-Cookie header gets removed!
                    // If commented out, the header is sent...
                    response.ClearHeaders();
                    response.ClearContent();

                    // Demonstrate that other headers make it
                    response.AppendHeader("RequestId", "asdasdasd");

                    // This cookie gets removed when ClearHeaders above is called.
                    // When ClearHeaders is omitted above, the cookie renders.
                    response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsTheValue; path=/");

                    // *** This always works, even when the explicit
                    //     Set-Cookie above fails and ClearHeaders is called
                    //response.Cookies.Add(new HttpCookie("WWTHREADSID", "ThisIsTheValue"));

                    response.Write(@"Output was created.<hr/>
                        Check output with Fiddler or an HTTP proxy to see whether the cookie was sent.");
                }

                public bool IsReusable
                {
                    get { return false; }
                }
            }
        }

    In order to see the problem behavior this code has to be inside of an HttpHandler, and specifically in a handler defined in web.config with:

        <add name=".ck_handler"
             path="handler.ck"
             verb="*"
             type="wwThreads.Handler"
             preCondition="integratedMode" />

    Note: Oddly enough this problem manifests only when configured through web.config, not in an ASHX handler, nor if you paste that same code into an ASPX page or MVC controller.

    What's the problem exactly? The code above simulates the more complex code in my live handler that picks up the HTTP response from the backend application and then peels out the headers and sends them one at a time via Response.AppendHeader(). One of the headers in my app can be one or more Set-Cookie headers. I found that the Set-Cookie headers were not making it into the Response headers output.
    Here's the Chrome HTTP inspector trace: notice, no Set-Cookie header in the Response headers! Now, running the very same request after removing the call to Response.ClearHeaders(), the cookie header shows up just fine. As you might expect, it took a while to track this down. At first I thought my backend was not sending the headers, but after closer checks I found that indeed the headers were set in the backend HTTP response, and they were indeed getting set via Response.AppendHeader() in the handler code. Yet, no cookie in the output. In the simulated example the problem is this line:

        response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsTheValue; path=/");

    which in my live code is more dynamic (i.e. AppendHeader(token[0], token[1])) as it parses through the headers.

    Bizarro Land: Response.ClearHeaders() causes the cookie to get stripped. Now, here is where it really gets bizarre. The problem occurs only if:

    - Response.ClearHeaders() was called before the headers are added
    - The handler is declared in web.config

    Clearly this is an edge of an edge case, but of course - knowing my relationship with Mr. Murphy - I ended up running smack into this problem. So in the code above, if you remove the call to ClearHeaders(), the cookie gets set! Add it back in and the cookie is not there. If I run the above code in an ASHX handler it works. If I paste the same code (with a Response.End()) into an ASPX page or an MVC controller, it all works. Only in the HttpHandler configured through web.config does it fail! Cue the Twilight Zone music.

    Workarounds: As is often the case, the fix - once you know the problem - is not too difficult. The difficulty lies in tracking inconsistencies like this down. Luckily there are a few simple workarounds for the cookie issue.

    Don't use AppendHeader for cookies. The easiest and most obvious solution to this problem is simply not to use Response.AppendHeader() to set cookies. Duh! Under normal circumstances in application-level code there's rarely a reason to write out a cookie like this:

        response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsTheValue; path=/");

    but rather create the cookie using the Response.Cookies collection:

        response.Cookies.Add(new HttpCookie("WWTHREADSID", "ThisIsTheValue"));

    Unfortunately, in my case - where I dynamically read headers from the original output and then dynamically write header key/value pairs back programmatically into the Response.Headers collection - I don't actually look at each header specifically, so the cookie is just another header. My first thought was to simply trap for the Set-Cookie header, parse out the cookie and create a Cookie object instead. But given that cookies can have a lot of different options this is not exactly trivial, plus I don't really want to mess around with cookie values, which can be notoriously brittle.

    Don't use Response.ClearHeaders(). The real mystery in all this is why calling Response.ClearHeaders() causes a cookie value later written with Response.AppendHeader() to be stripped. I fired up Reflector and took a quick look at System.Web and HttpResponse.ClearHeaders(). There's all sorts of resetting going on, but nothing that seems to indicate that headers should be removed later on in the request. The code in ClearHeaders() does access the HttpWorkerRequest, which is the low-level interface directly into IIS, so I suspect it's actually IIS that's stripping the headers and not ASP.NET, but it's hard to know. Somebody from Microsoft on the IIS team would have to comment on that.
    In my application it's probably safe to simply skip ClearHeaders() in my handler. The ClearHeaders/ClearContent calls were mainly for safety, but after reviewing my code there really should never be a reason for headers to be set prior to this method firing. However, if for whatever reason headers do need to be cleared, it's easy enough to clear them out manually:

        private void RemoveHeaders(HttpResponse response)
        {
            List<string> headers = new List<string>();
            foreach (string header in response.Headers)
            {
                headers.Add(header);
            }
            foreach (string header in headers)
            {
                response.Headers.Remove(header);
            }
            response.Cookies.Clear();
        }

    Now I can replace the call to Response.ClearHeaders() with RemoveHeaders() and I don't get the funky side effects.

    Summary: I realize this is a total edge case, as it occurs only in HttpHandlers that are manually configured. It looks like you'll never run into this in any of the higher-level ASP.NET frameworks or even in ASHX handlers - only in web.config-defined handlers - which is really, really odd. After all, those frameworks use the same underlying ASP.NET architecture. Hopefully somebody from Microsoft has an idea what crazy dependency was triggered here to make this fail. In any case, there are workarounds should you run into it, although I bet when you do run into it, it'll likely take a bit of time to find the problem - or even this post in a search - because it's not easy to correlate the problem to the solution. It's quite possible that more than cookies are affected by this behavior. Searching for a solution, I read a few other accounts where headers like Referer were mysteriously disappearing, and it's possible that something similar is happening in those cases. Again, an extreme edge case, but I'm writing this up here as documentation for myself and possibly some others that might have run into this.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, IIS7

    Read the article

  • How to save data from model without any association in cakephp [on hold]

    - by Abhishek
    I have a base model in which I use a dataManipulation method for updates in my code. I want to save data to the Receipt and ReceiptLine models and also to OpeningBankStatement, but I have created associations only for Receipt and ReceiptLine, not for OpeningBankStatement. So I want to save data to the OpeningBankStatement model without any association. My demo data is:

        Array
        (
            [Receipt] => Array
                (
                    [ID] => 566
                    [ObjectType] => 84
                    [TXNName] => bbnm
                    [TXNDate] => 03-06-2014
                    [BranchID] => 1
                    [Narration1] => 267
                    [Narration] => Cheque Received
                    [ExecutiveID] => 805
                    [AccountType] => 104
                    [Account] => 68
                    [ReferenceNo] =>
                    [TXNCurrencyID] => 3
                    [ExchangeRate] => 1.00000
                    [ManualAdiustment] => 0
                    [RevisionNumber] => 1
                    [CompanyID] => 1
                    [Status] => 633
                )
            [ReceiptLine] => Array
                (
                    [0] => Array
                        (
                            [TXNID] => 566
                            [LineNo] => 0
                            [LineType_072] => 429
                            [BranchID] => 1
                            [AccountID] => 68
                            [ContactID] =>
                            [Amount] => 0
                            [CancelAmount] => 0
                            [OpenAmount] => 0
                            [Narration] => Cheque Received
                            [CreatedBy] => 229
                            [ModifiedBy] => 229
                            [CreatedDate] => 2014-06-03 00:00:00
                            [ModifiedDate] => 2014-06-03 00:00:00
                            [Status] => 1
                            [RevisionNumber] => 1
                            [RowState] =>
                            [tmpInstrumentDate] =>
                        )
                    [1] => Array
                        (
                            [LineNo] => 0
                            [RowState] => 436
                            [TXNID] => 0
                            [BranchID] => 1
                            [ContactID] =>
                            [AccountID] => 68
                            [Narration] => Cheque Received
                            [Amount] => 0
                            [RevisionNumber] => 1
                            [LineType_072] => 460
                            [CancelAmount] => 0
                            [OpenAmount] => 0
                            [Status] => 1
                        )
                )
            [OpeningBankStatement] => Array
                (
                    [ObjectType] => 131
                    [TXNSeries] => 1
                    [TXNNo] => 12345
                    [TXNName] => bbnm
                    [TXNDate] => 03-06-2014
                    [CompanyID] => 1
                    [AccountID] => 68
                    [ExecutiveID] => 805
                    [Narration] => Cheque Received
                    [ReferenceNo] =>
                    [ParentObjectType] => 84
                    [ParentTXNID] => 1
                    [CancelledBy] => 1
                    [CancelledDate] => 2014-02-02
                    [CancellationRemarks] => hfg
                    [Status] => 1
                    [RevisionNumber] => 1
                )
        )

    Can this be solved by dynamic model association or a callback method? Please suggest a solution.
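    CakePHP can load a model on demand without any association; a minimal sketch (CakePHP 1.3/2.x conventions; the data layout above is assumed to already be in $this->data):

        // Inside a controller action:
        $this->loadModel('OpeningBankStatement');
        if ($this->OpeningBankStatement->save($this->data['OpeningBankStatement'])) {
            // saved independently of the Receipt/ReceiptLine associations
        }

        // Or, from anywhere (including a base model method), via the class registry:
        $opening = ClassRegistry::init('OpeningBankStatement');
        $opening->save($this->data['OpeningBankStatement']);

    Either way, the save touches only the OpeningBankStatement table, so no association definition is required.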

    Read the article

  • Organizing Git repositories with common nested sub-modules

    - by André Caron
    I'm a big fan of Git sub-modules. I like to be able to track a dependency along with its version, so that you can roll back to a previous version of your project and have the corresponding version of the dependency to build safely and cleanly. Moreover, it's easier to release our libraries as open source projects, as the history for the libraries is separate from that of the applications that depend on them (and which are not going to be open sourced). I'm setting up a workflow for multiple projects at work, and I was wondering how it would be if we took this approach to a bit of an extreme, instead of having a single monolithic project. I quickly realized there is a potential can of worms in really using sub-modules.

    Suppose a pair of applications, studio and player, and dependent libraries core, graph and network, where the dependencies are as follows:

    - core is standalone
    - graph depends on core (sub-module at ./libs/core)
    - network depends on core (sub-module at ./libs/core)
    - studio depends on graph and network (sub-modules at ./libs/graph and ./libs/network)
    - player depends on graph and network (sub-modules at ./libs/graph and ./libs/network)

    Suppose that we're using CMake and that each of these projects has unit tests and all the works. Each project (including studio and player) must be able to be compiled standalone to perform code metrics, unit testing, etc. The thing is, after a recursive git submodule fetch you get the following directory structure:

        studio/
        studio/libs/                   (sub-module depth: 1)
        studio/libs/graph/
        studio/libs/graph/libs/        (sub-module depth: 2)
        studio/libs/graph/libs/core/
        studio/libs/network/
        studio/libs/network/libs/      (sub-module depth: 2)
        studio/libs/network/libs/core/

    Notice that core is cloned twice in the studio project. Aside from this wasting disk space, I have a build system problem because I'm building core twice and I potentially get two different versions of core.

    Question: How do I organize sub-modules so that I get the versioned dependency and standalone build without getting multiple copies of common nested sub-modules?

    Possible solution: If the library dependency is somewhat of a suggestion (i.e. in a "known to work with version X" or "only version X is officially supported" fashion) and potential dependent applications or libraries are responsible for building with whatever version they like, then I could imagine the following scenario:

    - Have the build system for graph and network tell them where to find core (e.g. via a compiler include path; see the CMake sketch below).
    - Define two build targets, "standalone" and "dependency", where "standalone" is based on "dependency" and adds the include path to point to the local core sub-module.
    - Introduce an extra dependency: studio on core. Then studio builds core, sets the include path to its own copy of the core sub-module, and builds graph and network in "dependency" mode.

    The resulting folder structure looks like:

        studio/
        studio/libs/                   (sub-module depth: 1)
        studio/libs/core/
        studio/libs/graph/
        studio/libs/graph/libs/        (empty folder, sub-modules not fetched)
        studio/libs/network/
        studio/libs/network/libs/      (empty folder, sub-modules not fetched)

    However, this requires some build system magic (I'm pretty confident this can be done with CMake) and a bit of manual work on the part of version updates (updating graph might also require updating core and network to get a compatible version of core in all projects). Any thoughts on this?
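    A minimal CMake sketch of the two-target idea above (the option and variable names are hypothetical, not from the question): the standalone build defaults to the bundled sub-module, while a parent project such as studio overrides the include path to point at its own copy of core:

        # libs/graph/CMakeLists.txt (sketch)
        option(GRAPH_STANDALONE "Build graph against its bundled core sub-module" ON)

        if(GRAPH_STANDALONE)
            # "standalone" mode: use the nested sub-module
            set(CORE_INCLUDE_DIR "${CMAKE_CURRENT_SOURCE_DIR}/libs/core/include")
        endif()
        # "dependency" mode: the parent project sets CORE_INCLUDE_DIR itself

        include_directories("${CORE_INCLUDE_DIR}")

    studio would then add its own libs/core first and configure graph and network with GRAPH_STANDALONE=OFF and CORE_INCLUDE_DIR pointing at studio/libs/core.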

    Read the article

  • jQuery open and close nested ul nav depending on location

    - by firefusion
    I'm making a sub nav in WordPress and have a nested-list-style menu. An example of the HTML is below. The current item's li has the class "current_page_item". I need all child menus collapsed unless the parent or one of the children has the current_page_item class.

        <ul>
            <li class="current_page_item"><a href="#">Parent Item</a>
                <ul class="children">
                    <li><a href="#">Child page</a></li>
                    <li><a href="#">Child page</a></li>
                    <li><a href="#">Child page</a></li>
                    <li><a href="#">Child page</a></li>
                </ul>
            </li>
            <li><a href="#">Parent Item</a>
                <ul class="children">
                    <li><a href="#">Child page</a></li>
                    <li><a href="#">Child page</a></li>
                </ul>
            </li>
            <li><a href="#">Parent Item</a>
                <ul class="children">
                    <li><a href="#">Child page</a></li>
                    <li><a href="#">Child page</a></li>
                </ul>
            </li>
            <li><a href="#">Parent Item</a></li>
            <li><a href="#">Parent Item</a></li>
        </ul>

    This is what I have so far. It works, but I wonder if it can be improved, as there is some flashing open and then closed again:

        jQuery('ul.children').slideUp();
        jQuery('li.current_page_item ul.children').slideDown('medium');
        jQuery('li.current_page_item').parent().slideDown('medium');
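    One way to avoid the flash (a sketch, not from the post): hide the irrelevant sub-menus instantly with hide() instead of animating them shut, and only animate the one menu that actually needs to open:

        var $current = jQuery('li.current_page_item');

        // Instantly hide every child menu except the ones containing,
        // or contained by, the current item - no animation, no flash.
        jQuery('ul.children')
            .not($current.children('ul.children'))  // the current item's own child menu
            .not($current.parents('ul.children'))   // menus the current item sits inside
            .hide();

        // Animate open only the current page's menu.
        $current.children('ul.children').slideDown('medium');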

    Read the article

  • How do I get the Zend_Application's database into a model class?

    - by Billy ONeal
    I have a Zend Framework application, which has a whole bunch of model classes. I need these model classes to be able to access the application's database (naturally). Currently I've put this in my index.php:

        Zend_Registry::set('db', $application->bootstrap()->getBootstrap()
            ->getPluginResource('db')->getDbAdapter());

    And then, in each of my model classes that requires the database:

        $db = Zend_Registry::get('db');

    But this seems like a horrible, horrible hack. Am I missing something basic here?
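    A common alternative (a sketch; the table and class names are assumptions) is to register the db resource as the default adapter for Zend_Db_Table inside the Bootstrap, so table-based models find it without touching the registry or index.php:

        // application/Bootstrap.php
        class Bootstrap extends Zend_Application_Bootstrap_Bootstrap
        {
            protected function _initDbAdapter()
            {
                $this->bootstrap('db');
                $db = $this->getResource('db');
                Zend_Db_Table::setDefaultAdapter($db);  // models pick this up automatically
                return $db;
            }
        }

        // application/models/DbTable/Plans.php
        class Application_Model_DbTable_Plans extends Zend_Db_Table_Abstract
        {
            protected $_name = 'plans';  // table name (assumed)
        }

    If memory serves, setting resources.db.isDefaultTableAdapter = 1 in application.ini achieves the same without any Bootstrap code.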

    Read the article

  • Is this type of calculation to be put in Model or Controller?

    - by Hadi
    I have Product and SalesOrder models (to simplify, one sales order is only for one product):

        Product has_many SalesOrders
        SalesOrder belongs_to Product

        pa  = Product A, stock: 2000
        so1 = SalesOrder 1, orders 1000 of product A, date: yesterday
        so2 = SalesOrder 2, orders 999 of product A, date: yesterday
        so3 = SalesOrder 3, orders 1000 of product A, date: now

    Based on the date, pa.find_sales_orders_that_can_be_delivered will give:

        SalesOrder 1: 1000 of product A, date: yesterday
        SalesOrder 2: 999 of product A, date: yesterday
        SalesOrder 3: 1 of product A, date: now  <-- the newest; only 1 unit of stock remains

    The question is: should find_sales_orders_that_can_be_delivered be in the model? I can do it in the controller. And the general question is: what goes in the model and what goes in the controller? Thank you.
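    Conventionally this belongs in the model ("fat model, skinny controller"): the controller should only orchestrate requests, while domain logic like allocating stock across orders is the model's job. A sketch of the allocation walk (Rails 3-style; the column names stock, quantity and created_at are assumptions):

        class Product < ActiveRecord::Base
          has_many :sales_orders

          # Walk orders oldest-first, allocating whatever stock remains to each.
          def sales_orders_that_can_be_delivered
            remaining = stock
            sales_orders.order(:created_at).map do |order|
              deliverable = [order.quantity, remaining].min
              remaining -= deliverable
              [order, deliverable]  # e.g. [so3, 1] once only one unit is left
            end
          end
        end

    The controller then reduces to a single call such as @deliveries = @product.sales_orders_that_can_be_delivered.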

    Read the article

  • Child transforms problem when loading 3DS models using assimp

    - by MhdSyrwan
    I'm trying to load a textured 3D model into my scene using the assimp model loader. The problem is that child meshes are not situated correctly (they don't have the correct transformations). In brief: all the mTransformation matrices are identity matrices. Why would that be? I'm using this code to render the model:

        void recursive_render(const struct aiScene *sc, const struct aiNode *nd, float scale)
        {
            unsigned int i;
            unsigned int n = 0, t;

            aiMatrix4x4 m = nd->mTransformation;
            m.Scaling(aiVector3D(scale, scale, scale), m); // update transform
            m.Transpose();

            glPushMatrix();
            glMultMatrixf((float*)&m);

            // draw all meshes assigned to this node
            for (; n < nd->mNumMeshes; ++n) {
                const struct aiMesh* mesh = scene->mMeshes[nd->mMeshes[n]];
                apply_material(sc->mMaterials[mesh->mMaterialIndex]);

                if (mesh->HasBones()) {
                    printf("model has bones");
                    abort();
                }

                if (mesh->mNormals == NULL)
                    glDisable(GL_LIGHTING);
                else
                    glEnable(GL_LIGHTING);

                if (mesh->mColors[0] != NULL)
                    glEnable(GL_COLOR_MATERIAL);
                else
                    glDisable(GL_COLOR_MATERIAL);

                for (t = 0; t < mesh->mNumFaces; ++t) {
                    const struct aiFace* face = &mesh->mFaces[t];
                    GLenum face_mode;
                    switch (face->mNumIndices) {
                        case 1:  face_mode = GL_POINTS;    break;
                        case 2:  face_mode = GL_LINES;     break;
                        case 3:  face_mode = GL_TRIANGLES; break;
                        default: face_mode = GL_POLYGON;   break;
                    }

                    glBegin(face_mode);
                    for (i = 0; i < face->mNumIndices; i++) {       // go through all vertices in face
                        int vertexIndex = face->mIndices[i];        // get group index for current index
                        if (mesh->mColors[0] != NULL)
                            Color4f(&mesh->mColors[0][vertexIndex]);
                        if (mesh->mNormals != NULL)
                            if (mesh->HasTextureCoords(0)) {        // HasTextureCoords(texture_coordinates_set)
                                glTexCoord2f(mesh->mTextureCoords[0][vertexIndex].x,
                                             1 - mesh->mTextureCoords[0][vertexIndex].y); // mTextureCoords[channel][vertex]
                            }
                        glNormal3fv(&mesh->mNormals[vertexIndex].x);
                        glVertex3fv(&mesh->mVertices[vertexIndex].x);
                    }
                    glEnd();
                }
            }

            // draw all children
            for (n = 0; n < nd->mNumChildren; ++n) {
                recursive_render(sc, nd->mChildren[n], scale);
            }

            glPopMatrix();
        }

    What's the problem in my code? I've added some code to abort the program if there's any bone in the meshes, but the program doesn't abort. This means there are no bones - is that normal?

        if (mesh->HasBones()) {
            printf("model has bones");
            abort();
        }

    Note: I am using OpenGL, SFML, and assimp.
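    One thing worth checking (an observation about the code above, not a confirmed fix from the post): aiMatrix4x4::Scaling() is a static factory that writes a pure scaling matrix into its second argument, so m.Scaling(..., m) overwrites the node transform that was just copied into m. Building the scale matrix separately and multiplying preserves the hierarchy:

        aiMatrix4x4 m = nd->mTransformation;
        aiMatrix4x4 s;
        aiMatrix4x4::Scaling(aiVector3D(scale, scale, scale), s); // s becomes a pure scale matrix
        m = s * m;       // apply the scale without discarding the node's own transform
        m.Transpose();   // OpenGL's glMultMatrixf expects column-major order

    Separately, if the import uses the aiProcess_PreTransformVertices post-processing flag, assimp bakes all node transforms into the vertices and leaves identity matrices behind, which would also explain what you're seeing.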

    Read the article

  • Doctrine: Relating a model to itself using a link table, like "This event is related to the following other events"

    - by mattalexx
    So in English, the relationship would sound like "This event is related to the following other events". My first instinct is to create an EventEvent model, with a first_event_id field and a second_event_id field. Then I would define the following two relationships in the Event model:

        $this->hasMany('Event as FirstRelatedEvents', array(
            'local'    => 'first_event_id',
            'foreign'  => 'second_event_id',
            'refClass' => 'EventEvent'
        ));
        $this->hasMany('Event as SecondRelatedEvents', array(
            'local'    => 'second_event_id',
            'foreign'  => 'first_event_id',
            'refClass' => 'EventEvent'
        ));

    But I would rather not have to use two relationships on the Event model. Is there a better way to do this?
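    If memory serves, Doctrine 1.x has an 'equal' option for exactly this kind of self-referencing many-to-many, which surfaces both directions of the link table as a single relation (a sketch; worth verifying against your Doctrine version):

        // In Event::setUp() - one relation that matches rows in either direction
        $this->hasMany('Event as RelatedEvents', array(
            'local'    => 'first_event_id',
            'foreign'  => 'second_event_id',
            'refClass' => 'EventEvent',
            'equal'    => true,
        ));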

    Read the article

  • In MVC frameworks (such as Ruby on Rails), is the model usually singular and the controller and view plural?

    - by Jian Lin
    I usually see Ruby on Rails books using

        script/generate model Story name:string link:string

    which is a singular Story, while for the controller it is

        script/generate controller Stories index

    so the Story now is Stories, which is plural. Is this a standard in Ruby on Rails? Is it true in other MVC frameworks too, like CakePHP, Symfony, Django, or TurboGears? I see that in the book RailsSpace, the controller is also called User, which is the same as the model name, and it is the only exception I see.

    Update: also, when scaffolding is done in Ruby on Rails, the model is automatically singular while the controller and view are both plural.
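    For reference, a sketch of how the Rails naming conventions line up (the convention itself, not taken from the book):

        # Model (singular):     app/models/story.rb                     -> class Story < ActiveRecord::Base
        # Table (plural):       stories
        # Controller (plural):  app/controllers/stories_controller.rb  -> class StoriesController
        # Views (plural path):  app/views/stories/index.html.erb

    The intuition is that a model describes one Story record, while a controller and its views manage the collection of stories.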

    Read the article

  • Should I call redirect() from within my Controller or Model in an MVC framework?

    - by justinl
    I'm using the PHP MVC framework CodeIgniter, and I have a straightforward question about where to call redirect() from: the controller or the model?

    Scenario: A user navigates to www.example.com/item/555. In my model I search the item database for an item with the ID of 555. If I find the item, I'll return the result to my controller. However, if an item is not found, I want to redirect the user somewhere. Should this call to redirect() come from inside the model or the controller? Why?
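    The usual answer is the controller: the model reports what it found, and the controller decides how to respond, since a redirect is an HTTP concern rather than a data concern. A sketch using CodeIgniter conventions (the model and route names are assumptions):

        // In the controller
        public function item($id)
        {
            $this->load->model('item_model');
            $item = $this->item_model->get_item($id);  // model returns data or FALSE

            if ($item === FALSE) {
                redirect('items/not_found');           // flow control stays in the controller
                return;
            }
            $this->load->view('item_view', array('item' => $item));
        }

    Keeping redirect() out of the model also keeps the model reusable in contexts (CLI scripts, APIs) where a redirect makes no sense.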

    Read the article

  • Data structure for an ordered set with many defined subsets; retrieve subsets in same order

    - by Aaron
    I'm looking for an efficient way of storing an ordered list/set of items where:

    - The order of items in the master set changes rapidly (subsets maintain the master set's order)
    - Many subsets can be defined and retrieved
    - The number of members in the master set grows rapidly
    - Members are added to and removed from subsets frequently
    - The structure must allow for somewhat efficient merging of any number of subsets
    - Performance would ideally be biased toward retrieval of the first N items of any subset (or merged subset), and storage would be in-memory (and maybe eventually persistent on disk)

    Read the article

  • What's the difference between the box-model bug and the new box-sizing CSS?

    - by RKS
    In days (long past for some, and still present for others) the box-model bug was a bane to developers' existence. The idea that an element's specified width included the border and padding was blasphemous and an abomination to their senses. So we got away from it, after thousands of internet blog posts about the box-model hack. Now we get box-sizing, which will - wait for it - allow you to specify that a width contains the border and the padding. We plaster a trendy new label on it (I've even seen it lumped in with "CSS3 Flexbox", though box-sizing is a separate property), and now it's the freedom designers have been looking for. For those logical people who saw the box-model bug as not the bug, and the W3C behavior as the actual bug, this comes as a surprise. A reintroduction of this so-called bug, and now we call it an enhancement? So can someone explain why this is different? I am honestly confused about this.
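    A short sketch of the difference (the pixel values are illustrative): with the default content-box, width sets only the content area and padding/border are added on top; with border-box, width sets the border edge and the content shrinks to fit - essentially the old IE behavior, now opt-in per element:

        .a { box-sizing: content-box; width: 200px; padding: 20px; border: 5px solid; }
        /* rendered box: 200 + 2*20 + 2*5 = 250px wide; content area stays 200px */

        .b { box-sizing: border-box; width: 200px; padding: 20px; border: 5px solid; }
        /* rendered box: exactly 200px wide; content area shrinks to 150px */

        /* Margin sits outside the box in BOTH models. */

    The practical difference from the quirks-mode era is that border-box is an explicit, per-element choice rather than a browser-wide rendering mode you couldn't rely on.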

    Read the article

  • Dsquery nested groups

    - by Doctor Trout
    Hi there, how would I write a dsquery to get a list of all the members of a d-list, expanding any nested groups to get the members of those groups? I've written this:

        dsquery * -filter "(&(memberOf=cn=...))" -r -limit 0 -attr CUSTOMFIELD sAMAccountName displayName > export.txt

    but it returns nested d-lists and I want to expand these. I then tried this:

        dsquery group -samid "NAME" | dsget group -members -expand > export.txt

    But this just lists the DN of each member, and I want the account name and a custom field returned. Is there any way either to choose which fields to return from dsget, or to expand dsquery to show nested group membership? Thanks.
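    One approach worth trying (a sketch; the group DN is a placeholder, and the domain controllers need to be Windows Server 2003 SP2 or newer): the LDAP_MATCHING_RULE_IN_CHAIN matching rule, OID 1.2.840.113556.1.4.1941, expands nested membership inside the LDAP filter itself, so you keep dsquery's -attr field selection:

        rem Walks memberOf transitively, so members of nested groups are included
        dsquery * -filter "(memberOf:1.2.840.113556.1.4.1941:=CN=MyDlist,OU=Groups,DC=example,DC=com)" -limit 0 -attr CUSTOMFIELD sAMAccountName displayName > export.txt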

    Read the article

  • Backbone.js Model change events in nested collections not firing as expected

    - by Pallavi Kaushik
    I'm trying to use backbone.js in my first "real" application and I need some help debugging why certain model change events are not firing as I would expect. I have a web service at /employees/{username}/tasks which returns a JSON array of task objects, with each task object nesting a JSON array of subtask objects. For example:

        [{
            "id": 45002,
            "name": "Open Dining Room",
            "subtasks": [
                {"id": 1, "status": "YELLOW", "name": "Clean all tables"},
                {"id": 2, "status": "RED",    "name": "Clean main floor"},
                {"id": 3, "status": "RED",    "name": "Stock condiments"},
                {"id": 4, "status": "YELLOW", "name": "Check / replenish trays"}
            ]
        }, {
            "id": 47003,
            "name": "Open Registers",
            "subtasks": [
                {"id": 1, "status": "YELLOW", "name": "Turn on all terminals"},
                {"id": 2, "status": "YELLOW", "name": "Balance out cash trays"},
                {"id": 3, "status": "YELLOW", "name": "Check in promo codes"},
                {"id": 4, "status": "YELLOW", "name": "Check register promo placards"}
            ]
        }]

    Another web service allows me to change the status of a specific subtask in a specific task, and looks like this:

        /tasks/45002/subtasks/1/status/red

    [aside - I intend to change this to an HTTP POST-based service, but the current implementation is easier for debugging]

    I have the following classes in my JS app. The Subtask model and Subtask collection:

        var Subtask = Backbone.Model.extend({});
        var SubtaskCollection = Backbone.Collection.extend({
            model: Subtask
        });

    The Task model with a nested instance of a Subtask collection:

        var Task = Backbone.Model.extend({
            initialize: function() {
                // each Task has a reference to a collection of Subtasks
                this.subtasks = new SubtaskCollection(this.get("subtasks"));
                // status of each Task is based on the status of its Subtasks
                this.update_status();
            },
            ...
        });
        var TaskCollection = Backbone.Collection.extend({
            model: Task
        });

    The Task view, which renders the item and listens for change events on the model:

        var TaskView = Backbone.View.extend({
            tagName: "li",
            template: $("#TaskTemplate").template(),
            initialize: function() {
                _.bindAll(this, "on_change", "render");
                this.model.bind("change", this.on_change);
            },
            ...
            on_change: function(e) {
                alert("task model changed!");
            }
        });

    When the app launches, I instantiate a TaskCollection (using the data from the first web service listed above), bind a listener for change events to the TaskCollection, and set up a recurring setInterval to fetch() the TaskCollection instance:

        ...
        TASKS = new TaskCollection();
        TASKS.url = ".../employees/" + username + "/tasks";
        TASKS.fetch({
            success: function() {
                APP.renderViews();
            }
        });
        TASKS.bind("change", function() {
            alert("collection changed!");
            APP.renderViews();
        });
        // Poll every 5 seconds to keep the models up-to-date.
        setInterval(function() {
            TASKS.fetch();
        }, 5000);
        ...

    Everything renders as expected the first time. But at this point, I would expect either (or both) a collection change event or a model change event to get fired if I change a subtask's status using my second web service, but this does not happen. Funnily, I did get change events to fire when I added one additional level of nesting, with the web service returning a single object that has the tasks collection embedded, for example:

        "employee": "pkaushik",
        "tasks": [{"id": 45002, "subtasks": [{"id": 1, .....

    But this seems kludgey... and I'm afraid I haven't architected my app right. I'll include more code if it helps, but this question is already rather verbose. Thoughts?
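    One likely culprit (a hedged suggestion, not a confirmed fix; Backbone's behavior here changed across versions): by default fetch() resets the collection, discarding the old models and building new ones, so "change" never fires - the models your views bound to are simply gone, and each new Task rebuilds its nested subtasks collection from scratch. If your Backbone version supports it, fetching in update/merge mode keeps existing models alive and fires their change events, and the Task can then re-sync its nested collection:

        // Backbone 0.9.9+: merge fetched attributes into existing models instead of resetting
        setInterval(function() {
            TASKS.fetch({ update: true });  // fires "change" on models whose attributes differ
        }, 5000);

        // Inside Task.initialize(): keep the nested collection in step with the raw attribute
        this.bind("change:subtasks", function(model, subtasks) {
            model.subtasks.reset(subtasks);  // fires "reset" on the nested collection
        });

    The extra level of nesting "worked" for a similar reason: the whole payload became one model whose tasks attribute changed, so that single model's "change" event fired.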

    Read the article

  • MVC 3 Client Validation works intermittently

    - by Gutek
    I have MVC 3; the System.Web.Mvc product version is 3.0.20105.0, modified on 5th of Jan 2011 - I think that's the latest. I've noticed that client validation is not working as it's supposed to in the application we are creating, so I made a quick test. I created a basic MVC 3 application using the Internet Application template, and added a Test controller:

        using System.Web.Mvc;
        using MvcApplication3.Models;

        namespace MvcApplication3.Controllers
        {
            public class TestController : Controller
            {
                public ActionResult Index()
                {
                    return View();
                }

                public ActionResult Create()
                {
                    Sample model = new Sample();
                    return View(model);
                }

                [HttpPost]
                public ActionResult Create(Sample model)
                {
                    if (!ModelState.IsValid)
                    {
                        return View();
                    }
                    return RedirectToAction("Display");
                }

                public ActionResult Display()
                {
                    Sample model = new Sample();
                    model.Age = 10;
                    model.CardNumber = "1324234";
                    model.Email = "[email protected]";
                    model.Title = "hahah";
                    return View(model);
                }
            }
        }

    A model:

        using System.ComponentModel.DataAnnotations;

        namespace MvcApplication3.Models
        {
            public class Sample
            {
                [Required]
                public string Title { get; set; }

                [Required]
                public string Email { get; set; }

                [Required]
                [Range(4, 120, ErrorMessage = "Oi! Common!")]
                public short Age { get; set; }

                [Required]
                public string CardNumber { get; set; }
            }
        }

    And three views. Create:

        @model MvcApplication3.Models.Sample
        @{
            ViewBag.Title = "Create";
            Layout = "~/Views/Shared/_Layout.cshtml";
        }
        <h2>Create</h2>
        <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script>
        <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>

        @*@{ Html.EnableClientValidation(); }*@
        @using (Html.BeginForm())
        {
            @Html.ValidationSummary(false)
            <fieldset>
                <legend>Sample</legend>
                <div class="editor-label">@Html.LabelFor(model => model.Title)</div>
                <div class="editor-field">
                    @Html.EditorFor(model => model.Title)
                    @Html.ValidationMessageFor(model => model.Title)
                </div>
                <div class="editor-label">@Html.LabelFor(model => model.Email)</div>
                <div class="editor-field">
                    @Html.EditorFor(model => model.Email)
                    @Html.ValidationMessageFor(model => model.Email)
                </div>
                <div class="editor-label">@Html.LabelFor(model => model.Age)</div>
                <div class="editor-field">
                    @Html.EditorFor(model => model.Age)
                    @Html.ValidationMessageFor(model => model.Age)
                </div>
                <div class="editor-label">@Html.LabelFor(model => model.CardNumber)</div>
                <div class="editor-field">
                    @Html.EditorFor(model => model.CardNumber)
                    @Html.ValidationMessageFor(model => model.CardNumber)
                </div>
                <p><input type="submit" value="Create" /></p>
            </fieldset>
            @*<fieldset>
                @Html.EditorForModel()
                <p><input type="submit" value="Create" /></p>
            </fieldset>*@
        }
        <div>
            @Html.ActionLink("Back to List", "Index")
        </div>

    Display:

        @model MvcApplication3.Models.Sample
        @{
            ViewBag.Title = "Display";
            Layout = "~/Views/Shared/_Layout.cshtml";
        }
        <h2>Display</h2>
        <fieldset>
            <legend>Sample</legend>
            <div class="display-label">Title</div>
            <div class="display-field">@Model.Title</div>
            <div class="display-label">Email</div>
            <div class="display-field">@Model.Email</div>
            <div class="display-label">Age</div>
            <div class="display-field">@Model.Age</div>
            <div class="display-label">CardNumber</div>
            <div class="display-field">@Model.CardNumber</div>
        </fieldset>
        <p>
            @Html.ActionLink("Back to List", "Index")
        </p>

    Index:

        @{
            ViewBag.Title = "Index";
            Layout = "~/Views/Shared/_Layout.cshtml";
        }
        <h2>Index</h2>
        <p>@Html.ActionLink("Create", "Create")</p>
        <p>@Html.ActionLink("Display", "Display")</p>

    Everything is default here - the controller and the views were generated with "Add View" from the controller actions, with the model specified, the proper scaffold template, and the provided layout of the sample application.

    When I go to /Test/Create, client validation in most cases works only for the Title and Age fields; after clicking Create it works for all fields (Create does not go to the server). However, in some cases (after a build) the Title validation is not working but Email is, or CardNumber is, or Title and CardNumber are but Email is not. Validation never works for all fields before clicking Create. I've tried creating the form with Html.EditorForModel, as well as enforcing client validation just before BeginForm:

        @{ Html.EnableClientValidation(); }

    I'm providing the source code for this sample on Dropbox - maybe our dev environment is broken :/ I've done the tests on IE 8 and Chrome 10 beta. Just in case: validation scripts are enabled in web.config:

        <appSettings>
            <add key="ClientValidationEnabled" value="true"/>
            <add key="UnobtrusiveJavaScriptEnabled" value="true"/>
        </appSettings>

    So my questions are:

    1. Is there a way to ensure that client validation works as it's supposed to, and not intermittently?
    2. Is this desired behavior, and am I missing something in configuration/implementation?

    Read the article

  • Backbone.js Adding Model to Collection Issue

    - by jtmgdevelopment
    I am building a test application in Backbone.js (my first app using Backbone). The app goes like this:

    1. Load data from the server ("plans")
    2. Build a list of plans and show them on screen
    3. There is a button to add a new plan
    4. Once a new plan is added, add it to the collection (do not save to the server as of now)
    5. Redirect to the index page and show the new collection (including the plan you just added)

    My issue is with item 5. When I save a plan, I add the model to the collection, then redirect to the initial view. At this point, I fetch data from the server. When I fetch data from the server, this overwrites my collection and my added model is gone. How can I prevent this from happening? I have found a way to do this, but it is definitely not the correct way at all. Below you will find my code examples for this. Thanks for the help.

    PlansListView view:

        var PlansListView = Backbone.View.extend({
            tagName: 'ul',
            initialize: function() {
                _.bindAll(this, 'render', 'close');
                // reset the view if the collection is reset
                this.collection.bind('reset', this.render, this);
            },
            render: function() {
                _.each(this.collection.models, function(plan) {
                    $(this.el).append(new PlansListItemView({ model: plan }).render().el);
                }, this);
                return this;
            },
            close: function() {
                $(this.el).unbind();
                $(this.el).remove();
            }
        }); // end

    NewPlanView save method:

        var NewPlanView = Backbone.View.extend({
            tagName: 'section',
            template: _.template($('#plan-form-template').html()),
            events: {
                'click button.save': 'savePlan',
                'click button.cancel': 'cancel'
            },
            intialize: function() {
                _.bindAll(this, 'render', 'save', 'cancel');
            },
            render: function() {
                $('#container').append($(this.el).html(this.template(this.model.toJSON())));
                return this;
            },
            savePlan: function(event) {
                this.model.set({
                    name: 'bad plan',
                    date: 'friday',
                    desc: 'blah',
                    id: Math.floor(Math.random() * 11),
                    total_stops: '2'
                });
                this.collection.add(this.model);
                app.navigate('', true);
                event.preventDefault();
            },
            cancel: function() {}
        });

    Router (default method):

        index: function() {
            this.container.empty();
            var self = this;
            // This is a hack to get this to work.
            // On default page load, fetch all plans from the server.
            // If the page has loaded (this.plans is defined), set the updated
            // plans collection to the view. There has to be a better way!!
            if (!this.plans) {
                this.plans = new Plans();
                this.plans.fetch({
                    success: function() {
                        self.plansListView = new PlansListView({ collection: self.plans });
                        $('#container').append(self.plansListView.render().el);
                        if (self.requestedID) self.planDetails(self.requestedID);
                    }
                });
            } else {
                this.plansListView = new PlansListView({ collection: this.plans });
                $('#container').append(self.plansListView.render().el);
                if (this.requestedID) self.planDetails(this.requestedID);
            }
        },

    New plan route:

        newPlan: function() {
            var plan = new Plan({ name: 'Cool Plan', date: 'Monday', desc: 'This is a great app' });
            this.newPlan = new NewPlanView({ model: plan, collection: this.plans });
            this.newPlan.render();
        }

    FULL CODE:

        (function($) {
            var Plan = Backbone.Model.extend({
                defaults: {
                    name: '',
                    date: '',
                    desc: ''
                }
            });

            var Plans = Backbone.Collection.extend({
                model: Plan,
                url: '/data/'
            });

            $(document).ready(function(e) {
                var PlansListView = Backbone.View.extend({
                    tagName: 'ul',
                    initialize: function() {
                        _.bindAll(this, 'render', 'close');
                        this.collection.bind('reset', this.render, this);
                    },
                    render: function() {
                        _.each(this.collection.models, function(plan) {
                            $(this.el).append(new PlansListItemView({ model: plan }).render().el);
                        }, this);
                        return this;
                    },
                    close: function() {
                        $(this.el).unbind();
                        $(this.el).remove();
                    }
                }); // end

                var PlansListItemView = Backbone.View.extend({
                    tagName: 'li',
                    template: _.template($('#list-item-template').html()),
                    events: {
                        'click a': 'listInfo'
                    },
                    render: function() {
                        $(this.el).html(this.template(this.model.toJSON()));
                        return this;
                    },
                    listInfo: function(event) {}
                }); // end

                var PlanView = Backbone.View.extend({
                    tagName: 'section',
                    events: {
                        'click button.add-plan': 'newPlan'
                    },
                    template: _.template($('#plan-template').html()),
                    initialize: function() {
                        _.bindAll(this, 'render', 'close', 'newPlan');
                    },
                    render: function() {
                        $('#container').append($(this.el).html(this.template(this.model.toJSON())));
                        return this;
                    },
                    newPlan: function(event) {
                        app.navigate('newplan', true);
                    },
                    close: function() {
                        $(this.el).unbind();
                        $(this.el).remove();
                    }
                }); // end

                var NewPlanView = Backbone.View.extend({
                    tagName: 'section',
                    template: _.template($('#plan-form-template').html()),
                    events: {
                        'click button.save': 'savePlan',
                        'click button.cancel': 'cancel'
                    },
                    intialize: function() {
                        _.bindAll(this, 'render', 'save', 'cancel');
                    },
                    render: function() {
                        $('#container').append($(this.el).html(this.template(this.model.toJSON())));
                        return this;
                    },
                    savePlan: function(event) {
                        this.model.set({
                            name: 'bad plan',
                            date: 'friday',
                            desc: 'blah',
                            id: Math.floor(Math.random() * 11),
                            total_stops: '2'
                        });
                        this.collection.add(this.model);
                        app.navigate('', true);
                        event.preventDefault();
                    },
                    cancel: function() {}
                });

                var AppRouter = Backbone.Router.extend({
                    container: $('#container'),
                    routes: {
                        '': 'index',
                        'viewplan/:id': 'planDetails',
                        'newplan': 'newPlan'
                    },
                    initialize: function() {},
                    index: function() {
                        this.container.empty();
                        var self = this;
                        if (!this.plans) {
                            this.plans = new Plans();
                            this.plans.fetch({
                                success: function() {
                                    self.plansListView = new PlansListView({ collection: self.plans });
                                    $('#container').append(self.plansListView.render().el);
                                    if (self.requestedID) self.planDetails(self.requestedID);
                                }
                            });
                        } else {
                            this.plansListView = new PlansListView({ collection: this.plans });
                            $('#container').append(self.plansListView.render().el);
                            if (this.requestedID) self.planDetails(this.requestedID);
                        }
                    },
                    planDetails: function(id) {
                        if (this.plans) {
                            this.plansListView.close();
                            this.plan = this.plans.get(id);
                            if (this.planView) this.planView.close();
                            this.planView = new PlanView({ model: this.plan });
                            this.planView.render();
                        } else {
                            this.requestedID = id;
                            this.index();
                        }
                        if (!this.plans) this.index();
                    },
                    newPlan: function() {
                        var plan = new Plan({ name: 'Cool Plan', date: 'Monday', desc: 'This is a great app' });
                        this.newPlan = new NewPlanView({ model: plan, collection: this.plans });
                        this.newPlan.render();
                    }
                });

                var app = new AppRouter();
                Backbone.history.start();
            });
        })(jQuery);
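    One way out of the re-fetch hack (a sketch, assuming the ~0.9-era Backbone API the code appears to use): fetch exactly once when the router is created and keep the same collection instance for the app's lifetime, letting views react to "add" as well as "reset" - the unsaved model then survives the navigation back to the index route:

        var AppRouter = Backbone.Router.extend({
            initialize: function() {
                // Fetch once; every route reuses this same collection instance.
                this.plans = new Plans();
                this.plans.fetch();
            },
            index: function() {
                this.container.empty();
                this.plansListView = new PlansListView({ collection: this.plans });
                $('#container').append(this.plansListView.render().el);
            }
            // ... other routes unchanged
        });

        // In PlansListView.initialize, re-render on adds as well as resets:
        this.collection.bind('add', this.render, this);
        this.collection.bind('reset', this.render, this);

    Since index() no longer fetches, adding the new plan to the collection in savePlan() is enough for it to appear in the list.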

    Read the article

  • How to set MinWorkingSet and MaxWorkingSet in a 64-bit .NET process?

    - by Gravitas
    How do I set MinWorkingSet and MaxWorkingSet for a 64-bit .NET process?

    P.S. I can set MinWorkingSet and MaxWorkingSet for a 32-bit process as follows:

        [DllImport("KERNEL32.DLL", EntryPoint = "SetProcessWorkingSetSize", SetLastError = true,
            CallingConvention = CallingConvention.StdCall)]
        internal static extern bool SetProcessWorkingSetSize(IntPtr pProcess,
            int dwMinimumWorkingSetSize, int dwMaximumWorkingSetSize);

        [DllImport("KERNEL32.DLL", EntryPoint = "GetCurrentProcess", SetLastError = true,
            CallingConvention = CallingConvention.StdCall)]
        internal static extern IntPtr MyGetCurrentProcess();

        // In main():
        SetProcessWorkingSetSize(Process.GetCurrentProcess().Handle, int.MaxValue, int.MaxValue);

    Update: Unfortunately, even if we make this call, garbage collection trims the working set down anyway, bypassing MinWorkingSet. (The original post included a diagram here: the working set was the green line, dips were labeled "Automatic GC.Collect()", and page-fault spikes were the red lines.)

    Question: Is there a way to lock the working set (the green line) to 1 GB, to avoid the spike in page faults (the red lines) that occurs when allocating new memory in the process?

    P.S. Every time a page fault occurs, it blocks the thread for 250 µs, which hits application performance badly.
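    The working-set sizes are SIZE_T in Win32, so on x64 the parameters should be pointer-sized rather than int. A sketch (hedged: the flag value is from the SetProcessWorkingSetSizeEx documentation; the call also needs PROCESS_SET_QUOTA access and may need extra privileges for large minimums) that declares the Ex variant with IntPtr sizes and asks the OS to treat the minimum as a hard limit:

        using System;
        using System.Diagnostics;
        using System.Runtime.InteropServices;

        static class WorkingSet
        {
            // QUOTA_LIMITS_HARDWS_MIN_ENABLE = 0x1: the minimum working set becomes a hard limit
            const uint QUOTA_LIMITS_HARDWS_MIN_ENABLE = 0x00000001;

            [DllImport("kernel32.dll", SetLastError = true)]
            static extern bool SetProcessWorkingSetSizeEx(IntPtr hProcess,
                IntPtr dwMinimumWorkingSetSize,   // SIZE_T: pointer-sized on x64
                IntPtr dwMaximumWorkingSetSize,
                uint flags);

            public static void Lock1GB()
            {
                IntPtr oneGb = new IntPtr(1L << 30);
                if (!SetProcessWorkingSetSizeEx(Process.GetCurrentProcess().Handle,
                        oneGb, oneGb, QUOTA_LIMITS_HARDWS_MIN_ENABLE))
                    throw new System.ComponentModel.Win32Exception();
            }
        }

    Note also that System.Diagnostics.Process already exposes MinWorkingSet/MaxWorkingSet as IntPtr properties, which may suffice if you don't need the hard-limit flag.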

    Read the article

  • Mysql SET NAMES UTF8 - how to get rid of?

    - by Nir
    In a very busy PHP script we have a call at the beginning to SET NAMES utf8, which sets the character set in which MySQL should interpret data from the client and send results back to it (see http://dev.mysql.com/doc/refman/5.0/en/charset-applications.html).

    I want to get rid of it, so I set default-character-set=utf8 in our server ini file (see the link above). The setting seems to be working, since the relevant server variables are:

        character_set_client      utf8
        character_set_connection  utf8
        character_set_database    latin1
        character_set_filesystem  binary
        character_set_results     utf8
        character_set_server      latin1
        character_set_system      utf8

    But after this change, with the SET NAMES utf8 call commented out, the data starts to come out garbled. Please advise.
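    Two commonly suggested server-side alternatives (sketches; test them first, since each has caveats - e.g. init_connect is not executed for SUPER users): force the connection charset regardless of what the client negotiates, or run SET NAMES automatically on each new connection. In PHP itself, mysql_set_charset('utf8', $link) is the per-connection equivalent that also keeps the client library's string escaping charset-aware:

        # my.cnf sketch
        [mysqld]
        character-set-server = utf8
        collation-server     = utf8_general_ci
        # Ignore the charset the client asks for and keep the server defaults:
        skip-character-set-client-handshake

        # Alternative: run automatically for each connecting non-SUPER user
        # init_connect = 'SET NAMES utf8'

    The garbling you see is likely because default-character-set in the [client]/[mysql] sections only affects the command-line tools, not PHP's connection, which still negotiates its own charset unless one of the above (or an explicit call) pins it.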

    Read the article

  • what is the underlying data structure of a set c++?

    - by zebraman
    I would like to know how a set is implemented in C++. If I were to implement my own set container without using the STL-provided container, what would be the best way to go about this task? I understand STL sets are based on the abstract data structure of a binary search tree. So what is the underlying data structure? An array? Also, how does insert() work for a set? How does the set check whether an element already exists in it? I read on Wikipedia that another way to implement a set is with a hash table. How would this work?
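    For context, a minimal sketch of the two standard-library flavors (the red-black tree detail is how mainstream implementations such as libstdc++ and MSVC do it; the standard itself only mandates the complexity guarantees): std::set is an ordered, node-based tree, while std::unordered_set (C++11) is the hash-table variant:

        #include <iostream>
        #include <set>
        #include <unordered_set>

        int main() {
            std::set<int> tree_set;            // typically a red-black tree: O(log n) insert/find, kept sorted
            auto r = tree_set.insert(42);      // walks the tree comparing with operator<
            std::cout << r.second << '\n';     // 1: newly inserted
            r = tree_set.insert(42);           // the walk finds an equivalent element...
            std::cout << r.second << '\n';     // 0: ...so nothing is inserted

            std::unordered_set<int> hash_set;  // hash table: O(1) average insert/find, no ordering
            hash_set.insert(42);               // hashes 42 to pick a bucket, then scans it for duplicates
            std::cout << hash_set.count(42) << '\n';  // 1
        }

    So insert() answers "does this element already exist?" as a by-product of locating the insertion point: the tree walk (or bucket scan) either finds an equal element or finds where the new node belongs.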

    Read the article

  • Identifying Data Model Changes Between EBS 12.1.3 and Prior EBS Releases

    - by Steven Chan
    The EBS 12.1.3 Release Content Document (RCD, Note 561580.1) summarizes the latest functional and technology-stack-related updates in a specific release. The E-Business Suite Electronic Technical Reference Manual (eTRM) summarizes the database objects in a specific EBS release. Those are useful references, but sometimes you need to find out which database objects have changed between one EBS release and another. This kind of information about the differences or deltas between two releases is useful if you have customized or extended your EBS instance and plan to upgrade to EBS 12.1.3. Where can you find that information?

    Answering that question has just gotten a lot easier. You can now use a new tool: the EBS Data Model Comparison Report (see EBS Data Model Comparison Report Overview, Note 1290886.1). This new tool lists the database object definition changes between the following source and target EBS releases:

    - EBS 11.5.10.2 and EBS 12.1.3
    - EBS 12.0.4 and EBS 12.1.3
    - EBS 12.1.1 and EBS 12.1.3
    - EBS 12.1.2 and EBS 12.1.3

    For example, here's part of the report comparing Bill of Materials changes between 11.5.10.2 and 12.1.3 (screenshot in the original post).

    Read the article

  • The entity type String is not part of the model for the current context error [migrated]

    - by Michael V
    I am getting the following error in my controller after the view submits the collection:

        The entity type String is not part of the model for the current context.

        Description: An unhandled exception occurred during the execution of the current web request.
        Please review the stack trace for more information about the error and where it originated in the code.

        Exception Details: System.InvalidOperationException: The entity type String is not part of the
        model for the current context.

        Source Error:
        Line 51:     foreach (var survey in mysurveys)
        Line 52:     {
        Line 53:         db.Entry(survey).State = EntityState.Modified;
        Line 54:
        Line 55:         // db.Entry(survey).State = EntityState.Modified;

    Here is the code:

        [HttpPost]
        public ActionResult UpdateTest(FormCollection mysurveys)
        {
            System.Diagnostics.Debug.WriteLine("iam in test post " + mysurveys.Count);
            foreach (var survey in mysurveys)
            {
                db.Entry(survey).State = EntityState.Modified;
            }
            db.SaveChanges();
            return View(mysurveys);
        }

    Similar code with one record only (no foreach) works fine.
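    The likely cause (a hedged reading of the error, not a confirmed answer from the post): enumerating a FormCollection yields strings - the form field keys - so db.Entry() is handed a String rather than an entity, which is exactly what the exception complains about. Binding the post to a typed list sidesteps that; a sketch, assuming the entity class is named Survey and the view's inputs are indexed so the default model binder can build the list:

        [HttpPost]
        public ActionResult UpdateTest(List<Survey> mysurveys)
        {
            // Each element is now a Survey entity, not a form-field key
            foreach (var survey in mysurveys)
            {
                db.Entry(survey).State = EntityState.Modified;
            }
            db.SaveChanges();
            return View(mysurveys);
        }

    The single-record version "works" because it binds one Survey directly instead of iterating the raw form keys.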

    Read the article

  • Database model for keeping track of likes/shares/comments on blog posts over time

    - by gage
    My goal is to keep track of the popular posts on different blog sites based on social network activity at any given time. The goal is not to simply get the most popular now, but instead to find posts that are popular compared to other posts on the same blog.

    For example, I follow a tech blog, a sports blog, and a gossip blog. The tech blog gets way more readership than the other two, so in raw numbers every post on the tech blog will always outnumber the other two. Let's say the average tech blog post gets 500 Facebook likes and the other two get an average of 50 likes per post. Then, when there is a sports blog post with 200 FB likes and a gossip blog post with 300, while the tech blog posts today have 500 likes, I want to highlight the sports and gossip blog posts (more likes than their blogs' averages, versus the tech blog's higher raw count that is merely average for that blog).

    The approach I am thinking of taking is to make an entry in a database for each blog post. Every x minutes (say every 15 minutes) I will check how many likes/shares/comments an entry has received on all the social networks (Facebook, Twitter, Google+, LinkedIn). So over time there will be a history of likes for each blog post, e.g. for post 1234:

        after 15 min:  10 FB likes,  4 tweets,  6 G+
        after 30 min:  15 FB likes, 15 tweets, 10 G+
        ...
        after 48 hrs: 200 FB likes, 25 tweets, 15 G+

    By keeping a history like this for each blog post, I can know the average number of likes/shares/tweets at any given time interval. So, for example, if the average number of FB likes for all blog posts 48 hours after posting is 50, and a particular post has 200, I can mark that as a popular post and feature/highlight it. A consideration in the design is to be able to easily query the values (likes/shares) for a specific time frame, i.e. FB likes after 30 min or tweets after 24 hrs, in order to compute the averages to compare against (or should averages be stored in their own table?). If this approach is flawed or could use improvement please let me know, but it is not my main question.

    My main question is: what should a database schema for storing this info look like? Assuming the above approach is taken, I am trying to figure out what a database schema for storing the likes over time would look like. I am brand new to databases; in doing some basic reading I see that it is advisable to make a 3NF database. I have come up with the following possible schema.

    Schema 1:

        Table: Post
            post_id (primary key, pk)
            url
            title

        Table: SocialActivity
            activity_id (pk)
            url (fk)
            type (i.e. facebook, twitter, g+)
            value
            timestamp

    This was my initial instinct (based on my very limited db knowledge). As far as I understand, this schema would be 3NF? I searched for designs of similar database models and found this question on Stack Overflow: http://stackoverflow.com/questions/11216080/data-structure-for-storing-height-and-weight-etc-over-time-for-multiple-users. The scenario in that question is similar (recording the weight/height of users over time). Taking the accepted answer for that question and applying it to my model results in something like:

    Schema 2 (same as above, but breaking the social activity down into two tables):

        Table: Post
            post_id (pk)
            url
            title

        Table: SocialMeasurement
            measurement_id (pk)
            post_id (fk)
            timestamp

        Table: SocialStat
            stat_id (pk)
            measurement_id (fk)
            type (i.e. facebook, twitter, g+)
            value

    The advantage I see in Schema 2 is that I will likely want to access all the values for a given time; i.e.
    when making a measurement 30 minutes after a post is published, I will simultaneously check the number of FB likes, FB shares, FB comments, tweets, G+ shares, and LinkedIn shares. With this schema it may be easier to get all stats for a measurement_id corresponding to a certain time, i.e. all social stats for post 1234 at time x.

    Another thought I had: since it doesn't make sense to compare the number of FB likes with the number of tweets or G+ shares, maybe it makes sense to separate each social measurement into its own table?

    Schema 3:

        Table: Post
            post_id (pk)
            url
            title

        Table: fb_likes
            fb_like_id (pk)
            post_id (fk)
            timestamp
            value

        Table: fb_shares
            fb_shares_id (pk)
            post_id (fk)
            timestamp
            value

        Table: tweets
            tweets_id (pk)
            post_id (fk)
            timestamp
            value

        Table: google_plus
            google_plus_id (pk)
            post_id (fk)
            timestamp
            value

    As you can see, I am generally lost/unsure of what approach to take. I'm sure this is a typical type of database problem (storing measurements over time, like temperature statistics) that must have a common solution. Is there a design pattern/model for this? Does it have a name? I tried searching for "database periodic data collection" and "database measurements over time" but didn't find anything specific. What would be an appropriate model to solve the needs of this problem?
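    A sketch of Schema 2 in SQL (table and column names follow the question; the published column and the bucketed-averages query are illustrative assumptions) showing how "average FB likes N minutes after publishing" falls out of a join:

        -- Schema 2, sketched in MySQL-flavored SQL
        CREATE TABLE post (
            post_id    INT PRIMARY KEY AUTO_INCREMENT,
            url        VARCHAR(512) NOT NULL,
            title      VARCHAR(255),
            published  DATETIME NOT NULL
        );

        CREATE TABLE social_measurement (
            measurement_id INT PRIMARY KEY AUTO_INCREMENT,
            post_id        INT NOT NULL REFERENCES post (post_id),
            taken_at       DATETIME NOT NULL
        );

        CREATE TABLE social_stat (
            stat_id        INT PRIMARY KEY AUTO_INCREMENT,
            measurement_id INT NOT NULL REFERENCES social_measurement (measurement_id),
            type           VARCHAR(20) NOT NULL,  -- 'fb_like', 'tweet', 'g+', ...
            value          INT NOT NULL
        );

        -- Average FB likes roughly 30 minutes after publishing, across all posts:
        SELECT AVG(s.value)
        FROM social_stat s
        JOIN social_measurement m ON m.measurement_id = s.measurement_id
        JOIN post p               ON p.post_id = m.post_id
        WHERE s.type = 'fb_like'
          AND TIMESTAMPDIFF(MINUTE, p.published, m.taken_at) BETWEEN 25 AND 35;

    This shape (one row per measurement event, one row per stat) is essentially a time-series / measurement log design, which is the "common solution" you sensed must exist.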

    Read the article

  • MODx character encoding

    - by Piet
    Ahhh character encodings. Don't you just love them? Having character issues in MODx? Then probably the MODx manager character encoding, the character encoding of the site itself, your database's character encoding, or the encoding MODx/PHP uses to talk to MySQL isn't correct.

    The Website Encoding: Your MODx site's character encoding can be configured in the manager under Tools/Configuration/Site/Character encoding. This is the encoding your website's visitors will get.

    The Manager's Encoding: The manager's encoding can be changed by setting $modx_manager_charset in manager/includes/lang/<language>.inc.php like this (for example):

        $modx_manager_charset = 'iso-8859-1';

    To find out what language you're using (and thus which file you need to change), check Tools/Configuration/Site/Language (one line above the character encoding setting). This needs to be the same encoding as your site. You can't have your manager in utf8 and your site in iso-8859-1.

    Your Database's Encoding: The charset MODx/PHP uses to talk to your database can be set by changing $database_connection_charset in manager/includes/config.inc.php. This needs to be the same as your database's charset. Make sure you use the correct corresponding charset; for iso-8859-1 you need to use 'latin1'. Utf8 is just utf8. Example:

        $database_connection_charset = 'latin1';

    Now, if you check Reports/System info, the 'Database Charset' might say something else. This is because the MySQL variable 'character_set_database' is displayed here, which contains the character set used by the default database and not the one for the current database/connection. However, if you'd change this to display 'character_set_connection', it could still say something else, because the 'set character set' statement used by MODx doesn't change this value either. The 'set names' statement does, but since it turns out my MODx install works now as expected, I'll just leave it at this before I get a headache. If I saved you a potential headache, or you think I'm totally wrong or overlooked something, let me know in the comments.

    btw: I want to be able to use a real editor with MODx. Somehow.
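    To summarize the three settings that have to agree, a consolidated sketch (file paths from the post; the UTF-8 values are just an example target):

        // 1. Site: Tools > Configuration > Site > Character encoding -> UTF-8

        // 2. Manager: manager/includes/lang/<language>.inc.php
        $modx_manager_charset = 'UTF-8';

        // 3. Connection: manager/includes/config.inc.php
        //    (must match the database's actual charset: 'utf8' in MySQL terms)
        $database_connection_charset = 'utf8';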

    Read the article
