Search Results

Search found 67075 results on 2683 pages for 'data model'.


  • How can I get data off of a (probably) very corrupt drive?

    - by Mercury1964
    So, my younger brother wanted my help today. Apparently, he was helping a friend install Ubuntu on his laptop, and midway through the install (on a filled hard drive with Windows 8, from the live USB): 1) they realized that they had accidentally chosen "Erase and install" and decided the best course of action was to 2) force-shutdown the computer (in the middle of an install, remember). After I had put my eyeballs back into their sockets, my bro asked if I could do anything about his friend's data, which he wanted back. This, however, drifts out of my normal comfort zone. Here is what I know about the install: 1) it was on a fairly new Windows 8 laptop, so the whole drive used whatever filesystem Windows uses nowadays; 2) they didn't choose the "zero out drive data" option; 3) they stopped the install at some point (not sure if before or after the wipe). I imagine the drive now holds two corrupted filesystems (whatever Windows 8 used, plus ext3) and that some of the data still exists (assuming it wasn't overwritten already). Is there anything that can be done for whatever data is left on the drive?
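
    A hedged aside on the recovery step itself: the standard first move is to image the whole drive read-only (e.g. with GNU ddrescue), then run recovery tools such as testdisk/photorec against the image, never against the original disk. The sketch below only illustrates the error-tolerant copying idea; the device path and block size are assumptions, and a real tool should be preferred.

        import os

        BLOCK = 1024 * 1024  # 1 MiB; the block size is an arbitrary choice

        def image_drive(device="/dev/sdb", out="rescue.img"):
            # Copy readable blocks, zero-fill unreadable ones; never write to the source.
            src = os.open(device, os.O_RDONLY)
            size = os.lseek(src, 0, os.SEEK_END)
            with open(out, "wb") as dst:
                for offset in range(0, size, BLOCK):
                    want = min(BLOCK, size - offset)
                    os.lseek(src, offset, os.SEEK_SET)
                    try:
                        data = os.read(src, want)
                    except OSError:
                        data = b""  # unreadable region: skip it and keep going
                    dst.write(data.ljust(want, b"\x00"))
            os.close(src)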


  • Can I use all my RAM for application data?

    - by gsedej
    Hi! I have yet another question about "where is my Linux memory?" The question: can I use the cache for application data? On my laptop I have 1 GB of RAM. Situation after some time of work: the browser takes 400 MB and all other apps ca. 300 MB (quickly summed in System Monitor). System Monitor says I use 90% of RAM and I already have 200 MB in swap. The laptop gets slower when I start new things (e.g. open a new browser tab or a new Nautilus window), probably because memory is being pushed to swap. So there should be 1200 MB (RAM + swap) used, but all the apps I see use only 600 MB. Where are the other 600 MB? Of this 600 MB, 400 MB is real RAM. I am not copying or doing any other massive IO activity. I've read that Linux smartly uses all the RAM it has for buffers and cache. So the kernel (cache) uses 300 MB. What if I don't want the disk mirrored in memory, and want to use that memory for application data (e.g. a new browser tab)? I don't need 200 MB of cached disk data, because I (for example) won't open the same photos on the data partition I've just seen. So can I use all my RAM for application data (including browser, desktop, Xorg, other services)? How?
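
    For what it's worth, the split between reclaimable cache and application memory can be read directly from /proc/meminfo; the Cached and Buffers fields are page cache that the kernel drops automatically when applications need the RAM. A minimal Linux-only sketch (field names as documented in proc(5)):

        # Separate reclaimable page cache from memory actually held by apps/kernel.
        fields = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":")
                fields[key] = int(value.split()[0])  # values are reported in kB

        cache = fields["Cached"] + fields["Buffers"]
        used = fields["MemTotal"] - fields["MemFree"] - cache
        print(f"apps/kernel: {used} kB, reclaimable cache: {cache} kB")
        print(f"swap in use: {fields['SwapTotal'] - fields['SwapFree']} kB")

    In other words, the cache is already "available" for application data; if the box still swaps too eagerly, lowering vm.swappiness is the usual knob to try.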


  • Telerik Releases a new Visual Entity Designer

    Love LINQ to SQL but are concerned that it is a second-class citizen? Need to connect to databases other than SQL Server? Think that the Entity Framework is too complex? Want a domain model designer for data access that is easy, yet powerful? Then the Telerik Visual Entity Designer is for you. Built on top of Telerik OpenAccess ORM, a very mature and robust product, Telerik's Visual Entity Designer is a new way to build your domain model that is very powerful and also really easy to use. How easy? I'll show you here.

    First Look: Using the Telerik Visual Entity Designer

    To get started, you need to install the Telerik OpenAccess ORM Q1 release for Visual Studio 2008 or 2010. You don't need to use any of the Telerik OpenAccess wizards, designers, or using statements. Just right-click on your project and select Add|New Item from the context menu. Choose Telerik OpenAccess Domain Model from the Visual Studio project templates. (Note to existing OpenAccess users: don't run the Enable ORM wizard or any other OpenAccess menu unless you are building OpenAccess Entities.) You will then have to specify the database backend (SQL Server, SQL Azure, Oracle, MySQL, etc.) and connection. After you establish your connection, select the database objects you want to add to your domain model. You can also name your model; by default it will be NameofyourdatabaseEntityDiagrams. You can click Finish here if you are comfortable, or tweak some advanced settings. Many users of domain models like to add prefixes and suffixes to classes, fields, and properties, as well as handle pluralization. I personally accept the defaults; however, I hate how DBAs force underscores on me, so I click on the option to remove them. You can also tweak your namespace, mapping options, and define your own code generation template to gain further control over the generated code. This is a very powerful feature, but for now I will just accept the defaults. When we click Finish, you can see your domain model as a file with the .rlinq extension in the Solution Explorer. You can also bring up the visual designer to view or further tweak your model by double-clicking on the model in the Solution Explorer. Time to use the model!

    Writing a LINQ Query

    Programming against the domain model is very simple using LINQ. Just set a reference to the model (line 12 of the code below) and write a standard LINQ statement (lines 14-16). (OpenAccess users: notice that you don't need any using statements for OpenAccess or an IObjectScope, just raw LINQ against your model.)
        1: using System;
        2: using System.Linq;
        3: //no need for an OpenAccess using statement
        4:
        5: namespace ConsoleApplication3
        6: {
        7:     class Program
        8:     {
        9:         static void Main(string[] args)
        10:         {
        11:             //a reference to the data context
        12:             NorthwindEntityDiagrams dat = new NorthwindEntityDiagrams();
        13:             //LINQ Statement
        14:             var result = from c in dat.Customers
        15:                          where c.Country == "Germany"
        16:                          select c;
        17:
        18:             //Print out the company name
        19:             foreach (var cust in result)
        20:             {
        21:                 Console.WriteLine("CompanyName: " + cust.CompanyName);
        22:             }
        23:             //keep the console window open
        24:             Console.Read();
        25:         }
        26:     }
        27: }

    Lines 19-24 loop through the result of our LINQ query and display the results. That's it! All of the super powerful features of OpenAccess are available to you to further enhance your experience; however, in most cases this is all you need. In future posts I will show how to use the Visual Designer in some other scenarios. Stay tuned. Enjoy!


  • What to include in metadata?

    - by shyam
    I'm wondering if there are any general guidelines or best practices regarding when to split data out into a metadata format, as opposed to directly embedding it within the data. (Specific example below.) My understanding of metadata is that it describes data (without the need to actually look at the data), allowing data to be quickly searched/filtered for easy access. Let's take for example a simple 3D model format. The actual data file itself is a binary file containing vertices and colors. Things like creation date, modified date and author name describe the binary data, so I would say these belong in metadata (outside of the binary file). But what if the application had no need to search or filter by these fields? Would it be acceptable to embed these fields directly into the binary data itself? Could they be duplicated in both the binary data and the metadata, or would this be considered bad practice? What about more ambiguous fields such as the model name, which could be considered part of the data itself, but also as data describing the binary data? How do you decide which data to embed in the actual binary file, as opposed to separating it into a more flexible metadata format? Thanks!
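
    One commonly suggested compromise, sketched below, is to keep the binary self-describing (a small fixed header carrying name, author and dates) and to duplicate only the searchable fields in a sidecar metadata file, treating the binary header as the source of truth when the two disagree. The header layout and field sizes here are illustrative assumptions, not any standard 3D format:

        import json
        import struct
        import time

        # Hypothetical header: 8-byte magic, 64-byte name, 64-byte author, creation time.
        HEADER = struct.Struct("<8s64s64sq")

        def write_model(path, name, author, payload):
            # Self-describing binary: descriptive fields travel with the data.
            header = HEADER.pack(b"MODEL\x00\x00\x00", name.encode(), author.encode(), int(time.time()))
            with open(path, "wb") as f:
                f.write(header)
                f.write(payload)  # the vertex/color data follows the header

        def write_sidecar(path, name, author):
            # Sidecar duplicates only the fields tools need to search/filter on.
            meta = {"name": name, "author": author, "modified": time.time()}
            with open(path + ".meta.json", "w") as f:
                json.dump(meta, f)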


  • Should data structures be integrated into the language (as in Python) or be provided in the standard library (as in Java)?

    - by Anto
    In Python, and most likely many other programming languages, common data structures can be found as an integrated part of the core language with their own dedicated syntax. If we put LISP's integrated list syntax aside, I can't think of any other language I know that provides some kind of data structure above the array as an integrated part of its syntax, though all of them (but C, I guess) seem to provide them in the standard library. From a language design perspective, what are your opinions on having a specific syntax for data structures in the core language? Is it a good idea, and does the purpose of the language (etc.) change how good a choice this is? Edit: I'm sorry for (apparently) causing some confusion about which data structures I mean. I'm talking about the basic and commonly used ones, but still not the most basic ones. This excludes trees (too complex, uncommon), stacks (too seldom used), and arrays (too simple), but includes e.g. sets, lists and hashmaps.
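
    For readers comparing, this is the kind of dedicated syntax in question, shown in Python, where the structures named in the edit are literals rather than library constructors:

        primes = [2, 3, 5, 7]               # list literal
        vowels = {"a", "e", "i", "o", "u"}  # set literal (Python 2.7+/3.x)
        ages = {"alice": 30, "bob": 25}     # dict (hashmap) literal

        # The standard-library style that languages like Java require instead:
        ages2 = dict()
        ages2["alice"] = 30
        ages2["bob"] = 25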


  • What is the correct and most efficient approach of streaming vertex data?

    - by Martijn Courteaux
    Usually, I do this in my current OpenGL ES project (for iOS):

    Initialization:
    1. Create two VBOs and one index buffer (since I will use the same indices), same size.
    2. Create two VAOs and configure them, both bound to the same index buffer.

    Each frame:
    1. Choose a VBO/VAO couple (different from the previous frame, so I'm alternating).
    2. Bind that VBO.
    3. Upload new data using glBufferSubData(GL_ARRAY_BUFFER, ...).
    4. Bind the VAO.
    5. Render my stuff using glDrawElements(GL_***, ...).
    6. Unbind the VAO.

    However, someone told me to avoid uploading data (step 3) and then immediately rendering the new data (step 5). I should avoid this because the glDrawElements call will stall until the buffer is effectively uploaded to VRAM. So he suggested drawing all the geometry I uploaded the previous frame, and uploading in the current frame what will be drawn in the next frame. Thus, everything is rendered with a delay of one frame. Is this true, or is my current approach the right way to work with streaming vertex data? (I do know that the pipeline will stall the other way around, i.e. when you draw and immediately try to change the buffer data. But I'm not doing that, since I implemented double buffering.)
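
    For reference, the alternating scheme described above looks roughly like this as a PyOpenGL sketch; the buffer handles, index count, and numpy vertex array are assumptions, and (as in the question) it uploads and draws the same buffer in one frame, whereas the suggested variant would instead draw the couple filled on the previous frame:

        from OpenGL.GL import (GL_ARRAY_BUFFER, GL_TRIANGLES, GL_UNSIGNED_SHORT,
                               glBindBuffer, glBindVertexArray, glBufferSubData,
                               glDrawElements)

        frame = 0

        def draw_frame(vbos, vaos, index_count, vertex_data):
            global frame
            i = frame % 2  # alternate between the two VBO/VAO couples
            glBindBuffer(GL_ARRAY_BUFFER, vbos[i])
            glBufferSubData(GL_ARRAY_BUFFER, 0, vertex_data.nbytes, vertex_data)
            glBindVertexArray(vaos[i])
            glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, None)
            glBindVertexArray(0)
            frame += 1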


  • How to store 250TB of data and develop a backup/recovery plan?

    - by luccio
    I'm really new to this topic, so big apologies for stupid questions. I have a school project and I want to know how to store 250 TB of data with an 18-month life cycle. That means every record is stored for 18 months and after this period of time can be deleted. There are two issues: storing the data, and backing it up. Due to the amount of data I will probably need to combine data tapes and hard drives. I'd like to have "fast" access to the most recent 3 months of data, so ~42 TB on disk. I really don't know which RAID level I should use, or whether there is any better solution than combining disk and data tapes. Thanks for any advice, article, anything. I'm getting lost.
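
    The sizing arithmetic behind those numbers, worked through under the assumption of roughly uniform ingest:

        total_tb = 250
        retention_months = 18
        monthly_ingest = total_tb / retention_months     # ~13.9 TB/month
        disk_tier = monthly_ingest * 3                   # ~41.7 TB for the 3 "fast" months
        tape_tier = total_tb - disk_tier                 # ~208 TB that can live on tape
        print(monthly_ingest, disk_tier, tape_tier)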


  • The Use-Case Driven Approach to Change Management

    - by Lauren Clark
    In the third entry of the series on OUM and PMI’s Pulse of the Profession, we took a look at the continued importance of change management and risk management. The topic of change management and OUM’s use-case driven approach has come up in a few recent conversations, so I thought I would jot down a few thoughts on how the use-case driven approach aids a project team in managing the project’s scope. The use-case model is one of several tools in OUM that is used to establish and manage the project's scope. Because a use-case model can be understood by both business and IT project team members, it can serve as a bridge for ongoing collaboration as well as a visual diagram that encapsulates all agreed-upon functionality. This makes it a vital artifact in identifying changes to the project’s scope. Here are some of the primary benefits of using the use-case model as part of the effort for establishing and managing project scope:

    • The use-case model quickly communicates scope in a straightforward manner. All project stakeholders can have a common foundation for the decisions regarding architecture and design and how they relate to the project's objectives.
    • Once agreed upon, the model can be put under change control, and any updates to the model can then be quickly identified as potentially affecting the project’s scope. Changes requested or discovered later in the project can be analyzed objectively for their impact on the project's budget, resources and schedule.
    • A modular foundation for the design of the software solution can be established in Elaboration. This permits work to be divided up effectively and executed so that the most important and riskiest use cases can be tackled early in the project.
    • The use-case model helps the team make informed decisions about implementation priorities, which allows effective allocation of limited project resources. This is very helpful not only in managing scope, but also in doing iterative and incremental planning, which relies heavily on the ability to identify project priorities.

    Bottom line: the use-case model gives the project team a solid understanding of scope early in the project. Combine this understanding with effective project management and communication and you have an effective tool for reducing the risk of overruns in budget and/or time due to out-of-control scope changes. Now that you’ve had a chance to read these thoughts on the use-case model and project scope, please let me know your feedback based on your experience.


  • How granular should a command be in a CQ[R]S model?

    - by Aaronaught
    I'm considering a project to migrate part of our WCF-based SOA over to a service bus model (probably nServiceBus) and using some basic pub-sub to achieve Command-Query Separation. I'm not new to SOA, or even to service bus models, but I confess that until recently my concept of "separation" was limited to run-of-the-mill database mirroring and replication. Still, I'm attracted to the idea because it seems to provide all the benefits of an eventually-consistent system while sidestepping many of the obvious drawbacks (most notably the lack of proper transactional support). I've read a lot on the subject from Udi Dahan who is basically the guru on ESB architectures (at least in the Microsoft world), but one thing he says really puzzles me: As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts. [...] A core element of CQRS is rethinking the design of the user interface to enable us to capture our users’ intent such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they’ve gotten married. Using an Excel-like UI for data changes doesn’t capture intent, as we saw above. -- Udi Dahan, Clarified CQRS From the perspective described in the quotation, it's hard to argue with that logic. But it seems to go against the grain with respect to SOAs. An SOA (and really services in general) are supposed to deal with coarse-grained messages so as to minimize network chatter - among many other benefits. I realize that network chatter is less of an issue when you've got highly-distributed systems with good message queuing and none of the baggage of RPC, but it doesn't seem wise to dismiss the issue entirely. Udi almost seems to be saying that every attribute change (i.e. field update) ought to be its own command, which is hard to imagine in the context of one user potentially updating hundreds or thousands of combined entities and attributes as it often is with a traditional web service. One batch update in SQL Server may take a fraction of a second given a good highly-parameterized query, table-valued parameter or bulk insert to a staging table; processing all of these updates one at a time is slow, slow, slow, and OLTP database hardware is the most expensive of all to scale up/out. Is there some way to reconcile these competing concerns? Am I thinking about it the wrong way? Does this problem have a well-known solution in the CQS/ESB world? If not, then how does one decide what the "right level" of granularity in a Command should be? Is there some "standard" one can use as a starting point - sort of like 3NF in databases - and only deviate when careful profiling suggests a potentially significant performance benefit? Or is this possibly one of those things that, despite several strong opinions being expressed by various experts, is really just a matter of opinion?
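
    To make the granularity question concrete, here is a hedged sketch of the two extremes (the names are illustrative, not from any CQRS framework): a coarse, Excel-like "update the whole entity" message versus the intent-capturing commands Udi describes.

        from dataclasses import dataclass

        # Coarse-grained, classic SOA: one message, user intent is lost.
        @dataclass
        class UpdateCustomer:
            customer_id: int
            name: str
            address: str
            preferred: bool  # a marketing decision or a typo fix? Can't tell.

        # Fine-grained, CQRS-style: each command is one unit of user intent.
        @dataclass
        class MakeCustomerPreferred:
            customer_id: int

        @dataclass
        class RecordCustomerMove:
            customer_id: int
            new_address: str

    One reconciliation often suggested is to keep commands fine-grained logically but batch many of them into a single physical message or transaction, so intent is preserved without per-field network chatter.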


  • Shrink system volume, create new partition, & enlarge current partitions w/o losing data?

    - by studiohack
    My machine has a 300 GB hard drive. I'm planning to reinstall Windows 7 on this machine soon. It has three partitions: a System partition, and two equal data partitions for particular types of data (music, photos, docs, etc). These data partitions are too small for the data they hold, and the system partition is large enough to be shrunk. I want to create four partitions in preparation for the fresh reinstall, a System partition, and three data partitions of varying sizes (40, 50, & 60 GB respectively). My question: Can I shrink the System volume, create another partition, and make the existing two data partitions bigger without losing any data that is currently on the hard drive? Or is it better to format the whole thing, and create partitions after (thus removing the current OS) with a partitioning tool booting from CD? Can I reformat and create partitions with the Windows 7 set up DVD?


  • In which directory to write game save files/data?

    - by Klaim
    I need a definitive list of directories, one or more per platform, of where to put game save files and other game-generated data: either based on the OS developer's specification, or on common usage if there is no recommendation. Please provide one answer per platform, with the different directories. Also, examples of how to get the directory location in C++ or C are best, as that's the language where you'll have the hardest time. Locations:

    • Player's game data (saved games, config).
    • Shared game data (like high scores or config for all computer users).
    • Temporary game data (aka cache directory).
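
    Not an authoritative answer, but the lookup logic is compact enough to sketch; the directories follow the published conventions (APPDATA/LOCALAPPDATA on Windows, ~/Library on Mac OS X, the XDG Base Directory spec on Linux), and the game name is a placeholder:

        import os
        import sys

        def player_data_dir(game="MyGame"):
            # Player's game data: saved games and per-user config.
            if sys.platform.startswith("win"):
                return os.path.join(os.environ["APPDATA"], game)
            if sys.platform == "darwin":
                return os.path.expanduser("~/Library/Application Support/" + game)
            base = os.environ.get("XDG_DATA_HOME", os.path.expanduser("~/.local/share"))
            return os.path.join(base, game)

        def cache_dir(game="MyGame"):
            # Temporary game data (cache directory).
            if sys.platform.startswith("win"):
                return os.path.join(os.environ["LOCALAPPDATA"], game, "cache")
            if sys.platform == "darwin":
                return os.path.expanduser("~/Library/Caches/" + game)
            base = os.environ.get("XDG_CACHE_HOME", os.path.expanduser("~/.cache"))
            return os.path.join(base, game)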


  • Syncing client and server CRUD operations using JSON and PHP

    - by Justin
    I'm working on some code to sync the state of models between client (a JavaScript application) and server. Often I end up writing redundant code to track the client and server objects so I can map the client-supplied data to the server models. Below is some code I am thinking about implementing to help. What I don't like about the code below is that this method won't handle nested relationships very well; I would have to create multiple object trackers. One workaround is, for each server model after creating or loading, to simply do $model->clientId = $clientId; IMO this is a nasty hack and I want to avoid it. Adding a setClientId method to all my model objects would be another way to make it less hacky, but this seems like overkill to me. Really, clientIds are only good for inserting/updating data in some scenarios. I could go with a decorator pattern, but auto-generating a proxy class seems a bit involved. I could use a generic proxy class that uses a __call function to allow the original object data to be accessed, but this seems wrong too. Any thoughts or comments?

        $clientData = '[{name: "Bob", action: "update", id: 1, clientId: 200},
                        {name: "Susan", action: "create", clientId: 131}]';
        $jsonObjs = json_decode($clientData);

        $objectTracker = new ObjectTracker();
        $objectTracker->trackClientObjs($jsonObjs);

        $query = $this->em->createQuery("SELECT x FROM Application_Model_User x WHERE x.id IN (:ids)");
        $query->setParameters("ids", $objectTracker->getClientSpecifiedServerIds());
        $models = $query->getResults();

        //Apply client data to server model
        foreach ($models as $model) {
            $clientModel = $objectTracker->getClientJsonObj($model->getId());
            ...
        }

        //Create new models and persist
        foreach ($objectTracker->getNewClientObjs() as $newClientObj) {
            $model = new Application_Model_User();
            ....
            $em->persist($model);
            $objectTracker->trackServerObj($model);
        }

        $em->flush();

        $resourceResponse = $objectTracker->createResourceResponse();
        //Id mappings will be an associative array representing server id resources with the client-side id.
        //This method doesn't seem too flexible if we want to return additional data with each resource...
        //Would have to modify the returned data structure; seems like tight coupling...
        //Ex return value:
        //[{clientId: 200, id: 1}, {clientId: 131, id: 33}];
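
    For comparison only, a minimal Python sketch of the ObjectTracker idea (all names assumed; it just shows the two lookups and the [{clientId, id}] response shape the post describes):

        class ObjectTracker:
            """Maps client-side temporary ids to server models and back."""

            def __init__(self, client_objs):
                self.updates = {o["id"]: o for o in client_objs if o["action"] == "update"}
                self.creates = [o for o in client_objs if o["action"] == "create"]
                self._created = []  # (clientId, server model) pairs

            def client_obj_for(self, server_id):
                return self.updates.get(server_id)

            def track_created(self, client_id, model):
                self._created.append((client_id, model))

            def response(self):
                return [{"clientId": cid, "id": m.id} for cid, m in self._created]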


  • Using Google Webmaster & Analytics, what data to look at to improve website performance?

    - by Rob
    Using data from Google Analytics and Webmaster Tools, what data should I be looking at to improve my website's performance? I want to improve the SEO, usability and just the general performance of my website. EDIT: It's a portfolio website that we've done the initial SEO for; we've also optimised all images etc. and made the site as fast as possible. What kind of things should I be looking out for in the Analytics and Webmaster data to improve performance for both the SEO and each individual page?


  • Ray Picking Problems

    - by A Name I Haven't Decided On
    I've read so many answers on here about how to do ray picking that I thought I had the idea of it down. But when I try to implement it in my game, I get garbage. I'm working with LWJGL. Here's the code:

        public static Ray getPick(int mouseX, int mouseY){
            glPushMatrix();

            //Setting up the Mouse Clip
            Vector4f mouseClip = new Vector4f((float)mouseX * 2 / 960f - 1, 1 - (float)mouseY * 2 / 640f, 0, 1);

            //Loading Matrices
            FloatBuffer modMatrix = BufferUtils.createFloatBuffer(16);
            FloatBuffer projMatrix = BufferUtils.createFloatBuffer(16);
            glGetFloat(GL_MODELVIEW_MATRIX, modMatrix);
            glGetFloat(GL_PROJECTION_MATRIX, projMatrix);

            //Assigning Matrices
            Matrix4f proj = new Matrix4f();
            Matrix4f model = new Matrix4f();
            model.load(modMatrix);
            proj.load(projMatrix);

            //Multiplying the Projection Matrix by the Model View Matrix
            Matrix4f tempView = new Matrix4f();
            Matrix4f.mul(proj, model, tempView);
            tempView.invert();

            //Getting the Camera Position in World Space. The 4th Column of the Model View Matrix.
            model.invert();
            Point cameraPos = new Point(model.m30, model.m31, model.m32);

            //Theoretically getting the vector the Picking Ray goes
            Vector4f rayVector = new Vector4f();
            Matrix4f.transform(tempView, mouseClip, rayVector);
            rayVector.translate((float)-cameraPos.getX(), (float)-cameraPos.getY(), (float)-cameraPos.getZ(), 0f);
            rayVector.normalise();

            glPopMatrix();

            //This Basically Spits out a value that changes as the Camera moves.
            //When the Mouse moves, the values change around 0.001 points from screen edge to edge.
            System.out.format("Vector: %f %f %f%n", rayVector.x, rayVector.y, rayVector.z);

            //return new Ray(cameraPos, rayVector);
            return null;
        }

    I don't really know why this isn't working. I was hoping some more experienced eyes might be able to help me out. I can get the camera position like a champ; it's the direction vector of the ray that I can't seem to get right. Thanks.


  • Time to Get Started with Oracle Data Integrator 12c!

    - by Sandrine Riley
    It is time to get started with Oracle Data Integrator 12c! We would like to highlight for you a great place to begin your journey with ODI. Here you will find the Getting Started section for ODI, which provides a few options.

    Step 1 – Start by downloading the Getting Started (PDF). The document provides general background information and detailed examples to help you learn how to use Oracle Data Integrator. This document guides you through a first project with Oracle Data Integrator 12c, and the instructions in this document are required for using the Getting Started Demonstration Environment. If you would like to run the Getting Started Demonstration Environment on your system, go to Step 2A. If you would like to run the Getting Started Demonstration Environment installed and pre-configured in a Virtual Image, go to Step 2B.

    Step 2A – If you would like to run the Getting Started Demo Environment on your system, you will want to then download the Demo Environment for Getting Started (ZIP) archive containing ODI metadata, configuration scripts and sample data.

    Step 2B – Alternatively, a pre-configured virtual machine is available with the Getting Started installation and configuration. The Virtual Machine platform uses Oracle VirtualBox technology, and use of this configuration would necessitate the download/installation of VirtualBox and the Getting Started virtual machine. If you prefer this route and would rather run the Getting Started Demonstration Environment installed and preconfigured in a Virtual Machine, please download the ODI 12c Getting Started Virtual Machine.

    Step 3 – In continuation with the Getting Started Demonstration Environment and the Getting Started Guide, you will now be able to run through a first project with Oracle Data Integrator.

    Now that you have successfully done the exercises above, you may want to play on your own, and develop with Oracle Data Integrator on your environment... here you can download a full version of Oracle Data Integrator 12c. Enjoy, and happy developing! Learn more about Oracle Data Integrator, including FAQ, the Oracle by Example Series, Sample Code, and White Papers.


  • How to get this wavefront .obj data onto the frustum?

    - by NoobScratcher
    I've finally figured out how to get the data from a .obj file and store the vertex positions x, y, z into a structure called Point with members x, y, z, which are of type float. I want to know how to get this data onto the screen. Here is my attempt at doing so:

        //make a file object and store list and the index of that list in a C string
        ifstream file (list[index].c_str() );

        std::vector<int> faces;
        std::vector<Point> points;
        points.push_back(Point());
        Point p;
        int face[4];

        while ( !file.eof() )
        {
            char modelbuffer[10000];
            //Get lines and store it in line string
            file.getline(modelbuffer, 10000);

            switch(modelbuffer[0])
            {
                case 'v' :
                    sscanf(modelbuffer, "v %f %f %f", &p.x, &p.y, &p.z);
                    points.push_back(p);
                    cout << "Getting Vertex Positions" << endl;
                    cout << "v" << p.x << endl;
                    cout << "v" << p.y << endl;
                    cout << "v" << p.z << endl;
                    break;
                case 'f':
                    sscanf(modelbuffer, "f %d %d %d %d", face, face+1, face+2, face+3 );
                    cout << face[0] << endl;
                    cout << face[1] << endl;
                    cout << face[2] << endl;
                    cout << face[3] << endl;
                    faces.push_back(face[0]);
                    faces.push_back(face[1]);
                    faces.push_back(face[2]);
                    faces.push_back(face[3]);
            }

            GLuint vertexbuffer;
            glGenBuffers(1, &vertexbuffer);
            glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
            glBufferData(GL_ARRAY_BUFFER, points.size(), points.data(), GL_STATIC_DRAW);
            //glBufferData(GL_ARRAY_BUFFER, sizeof(points), &(points[0]), GL_STATIC_DRAW);
            glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
            glVertexPointer(3, GL_FLOAT, sizeof(points), points.data());
            glIndexPointer(GL_DOUBLE, 0, faces.data());
            glDrawArrays(GL_QUADS, 0, points.size());
            glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data());
        }

    As you can see I've clearly failed the end part, but I really don't know why it's not rendering the data onto the frustum. Does anyone have a solution for this?


  • How to search unique dynamic data in a sheet and then copy the relevant row to a different sheet?

    - by Hemant
    I am getting data from the internet (Data From Web) in Sheet1. Then I disperse that data into three sheets based on three unique texts, like a, b and c. Rows are copied to sheets a, b and c depending on the text (a, b or c) they contain. Every row also has one unique text (like a URL) by which it can be searched, and I have added static data corresponding to each row. The problem is that whenever the internet data changes (rows added, substituted or reordered), my static data loses its connection with the original row for which it was written. I want to search the data based on that one unique key and put it back in its original place, where it used to be alongside the static data.
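
    The re-matching step being asked for is essentially a join on the unique key rather than on row position. A sketch of the idea outside Excel (pandas, with all column names assumed); in VBA the equivalent is a Dictionary keyed on the URL column:

        import pandas as pd

        # fresh: rows just pulled from the web; static: the hand-entered notes.
        fresh = pd.DataFrame({"url": ["u1", "u2"], "price": [10, 12]})
        static = pd.DataFrame({"url": ["u2", "u1"], "note": ["watch", "ok"]})

        # Left-join on the key: notes follow their row even if the feed reorders.
        merged = fresh.merge(static, on="url", how="left")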


  • How to link individual Excel cells to variable data from records in an external SQL table

    - by user273476
    I have a SQL table containing date-oriented financial data, e.g. multiple daily records with fields for Date, Account code and Value. I want to set up dynamic links (formulas) from cells in an Excel spreadsheet to this data, so that when the spreadsheet is loaded the data is fetched from all the relevant records. The spreadsheet has the Account codes on the x axis and Dates on the y axis. Each day the SQL table gains new data for the new day, and I want the spreadsheet to reference this new data in the column for the new day. Any ideas? I have seen how you can generally bring in data from a SQL table (in our case using ODBC, as it is not MS SQL), but this is not simply bringing in multiple records as you would a CSV file; rather, specific records in the SQL table must be referenced from specific cells and columns in the spreadsheet.
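
    Conceptually, each cell needs a lookup keyed on its row and column headers, i.e. (date, account code) -> value. A sketch of that structure with an assumed table layout (sqlite3 standing in for the real ODBC source):

        import sqlite3  # stand-in for the ODBC connection described above

        conn = sqlite3.connect("finance.db")
        rows = conn.execute("SELECT date, account, value FROM daily_values")

        # One spreadsheet cell = one lookup by its row (date) and column (account).
        table = {(date, account): value for date, account, value in rows}
        cell = table.get(("2024-01-31", "ACC-42"))  # both keys are assumptions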


  • Is there an application or method to log data transfers?

    - by Gaurav_Java
    My friend asked me for some files, which I let him take from my system. I did not watch him doing it, and was left with a doubt: what extra files or data did he take from my system? I was wondering: is there any application or method which shows what data was copied to which USB device (showing the device name if available, otherwise the device id) and what data was copied onto the Ubuntu machine? Something like a history of USB and system data transfers. I think this feature exists in KDE. This would be really useful in many ways, as it would provide a real-time monitoring utility for USB mass storage device activity on any machine.


  • JMeter: how to assign a single distinct value from CSV Data Set Config to each thread in a thread group?

    - by JohnnyM
    I have to make a load test with a relatively large number of users, so I can't really use the User Parameters pre-processor to parametrize each thread with custom user data. I've read that I should use CSV Data Set Config instead. However, I ran into a problem with how JMeter interprets the input of this config. Example: I have a thread group of 3 threads with Loop Count: 10, and one HTTP Request sampler with server www.example.com and path ${user}. The CSV file for the CSV Data Set Config to extract the user parameter contains one value per line:

    1
    2
    3
    4
    5

    The expected output is that for thread 1-x the path of the request should be /x, so the output file should consist of 10 samples per thread, namely:

    • for thread 1-1: 10 requests to www.example.com/1
    • for thread 1-2: 10 requests to www.example.com/2
    • for thread 1-3: 10 requests to www.example.com/3

    Instead I get requests to each of /1 through /5 and then to <EOF>. Does anyone know how to achieve the expected effect with CSV Data Set Config in JMeter 2.9?


  • Is sending data to a server via a script tag an outdated paradigm?

    - by KingOfHypocrites
    I inherited some old JavaScript code for a website tracker that submits data to the server using a script URL:

        var src = "http://domain.zzz/log/method?value1=x&value2=x";
        var e = document.createElement('script');
        e.src = src;

    I guess the idea was that cross-domain requests didn't have to be enabled. Also, it was written back in 2005, and I'm not sure how well XmlHttpRequests were supported at the time. Anyone could stick this on their website and send data to our server for logging, and it would ideally work in most any browser with JavaScript. The main limitation is that all the server can do is send back JavaScript code, and each request has to wait for a response from the server (in the form of a generic acknowledgement JavaScript method call) to confirm it was received before sending the next one. I can't find anyone doing this online, or any metrics as to whether it is faster or more secure than XmlHttpRequests. I don't know if this is just an old way of doing things or if it's still the best way to send data to the server when you are mostly sending data one way and need the best performance possible. So, in summary: is sending data via a script tag an outdated paradigm? Should I abandon it in favor of using XmlHttpRequests?


  • Is there a way to determine if a database has been altered, then push new data to the application?

    - by TeamGB
    All our (my company's) current applications pull information from a database. Is there a way to get the following types of data sources to either push data or push an event that lets the application know to pull new data?

    • Access
    • SQL
    • Oracle
    • File systems (files and folders)

    The issue today is that most of our applications spend a large amount of time constantly polling databases and file systems to check whether data has changed. It would be better for the database to inform the application when data has changed. Are there tools within Visual Studio to allow this, or are there tools within the database/filesystem to do this? I already asked this on Stack Overflow but got no answer. I've been doing some more research but I can't seem to get any further. My manager has asked me to investigate it, as it would make our applications much quicker and more efficient.


  • Can I share data between two users in an ASP.NET Application?

    - by Dave
    I have an issue with the Roles.IsUserInRole function. It takes a huge amount of time just to check if the logged-in user is in a particular role (typically 3-9 seconds). I searched for a solution and arrived at this one, but I am not sure I have fully grasped it. What I got from the above: a new derived class is created. Inside that class there is a list which retrieves all users at once. The next time you check IsUserInRole, you do not use the actual IsUserInRole method, but rather the one you overrode in your class. Is this the correct description? Am I on track? My question is: can data be shared between two different users in an ASP.NET application? If yes, will the shared data exist only while there is at least one user logged in, so that if all users log out, the shared data is destroyed? My point is that this data would be created only once, whenever a user logs in; all subsequent users could use this data and check their roles against the list. I need a detailed answer. My application has users and different roles, and we are using ASP.NET Roles.


  • How to intercept the raw SOAP request/response (data) from a WCF client

    - by JohnIdol
    This question seems to be pretty close to what I am looking for. I was able to set up tracing and I am looking at the log entries for my calls to the service. However, I need to see the raw SOAP request with the data I am sending to the service, and I see no way of doing that from the SvcTraceViewer (only log entries are shown, with no data sent to the service). Am I just missing configuration? Here's what I've got in my web.config:

        <system.diagnostics>
          <sources>
            <source name="System.ServiceModel" switchValue="Verbose" propagateActivity="true">
              <listeners>
                <add name="sdt" type="System.Diagnostics.XmlWriterTraceListener"
                     initializeData="App_Data/Logs/WCFTrace.svclog" />
              </listeners>
            </source>
          </sources>
        </system.diagnostics>

    Any help appreciated!

    UPDATE: this is all I see in my trace:

        <E2ETraceEvent xmlns="http://schemas.microsoft.com/2004/06/E2ETraceEvent">
          <System xmlns="http://schemas.microsoft.com/2004/06/windows/eventlog/system">
            <EventID>262163</EventID>
            <Type>3</Type>
            <SubType Name="Information">0</SubType>
            <Level>8</Level>
            <TimeCreated SystemTime="2010-05-10T13:10:46.6713553Z" />
            <Source Name="System.ServiceModel" />
            <Correlation ActivityID="{00000000-0000-0000-1501-0080000000f6}" />
            <Execution ProcessName="w3wp" ProcessID="3492" ThreadID="23" />
            <Channel />
            <Computer>MY_COMPUTER_NAME</Computer>
          </System>
          <ApplicationData>
            <TraceData>
              <DataItem>
                <TraceRecord xmlns="http://schemas.microsoft.com/2004/10/E2ETraceEvent/TraceRecord" Severity="Information">
                  <TraceIdentifier>http://msdn.microsoft.com/en-US/library/System.ServiceModel.Channels.MessageSent.aspx</TraceIdentifier>
                  <Description>Sent a message over a channel.</Description>
                  <AppDomain>MY_DOMAIN</AppDomain>
                  <Source>System.ServiceModel.Channels.HttpOutput+WebRequestHttpOutput/50416815</Source>
                  <ExtendedData xmlns="http://schemas.microsoft.com/2006/08/ServiceModel/MessageTraceRecord">
                    <MessageProperties>
                      <Encoder>text/xml; charset=utf-8</Encoder>
                      <AllowOutputBatching>False</AllowOutputBatching>
                      <Via>http://xxx.xx.xxx.xxx:9080/MyWebService/myService</Via>
                    </MessageProperties>
                    <MessageHeaders></MessageHeaders>
                  </ExtendedData>
                </TraceRecord>
              </DataItem>
            </TraceData>
          </ApplicationData>
        </E2ETraceEvent>


  • WPF MenuItem.Command binding to ElementName results in System.Windows.Data Error: 4 : Cannot find source

    - by e28Makaveli
    I have the following XAML:

        <UserControl x:Class="EMS.Controls.Dictionary.TOCControl"
                     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                     xmlns:vm="clr-namespace:EMS.Controls.Dictionary.Models"
                     xmlns:diagnostics="clr-namespace:System.Diagnostics;assembly=WindowsBase"
                     x:Name="root">
          <TreeView x:Name="TOCTreeView" Background="White" Padding="3,5"
                    ContextMenuOpening="TOCTreeView_ContextMenuOpening"
                    ItemsSource="{Binding Children}" BorderBrush="{x:Null}">
            <TreeView.ItemTemplate>
              <HierarchicalDataTemplate ItemsSource="{Binding Path=Children, Mode=OneTime}">
                <Grid>
                  <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="Auto"/>
                    <!--<ColumnDefinition Width="Auto"/>-->
                    <ColumnDefinition Width="*"/>
                  </Grid.ColumnDefinitions>
                  <!--<CheckBox VerticalAlignment="Center" IsChecked="{Binding IsVisible}"/>-->
                  <ContentPresenter Grid.Column="0" Height="16" Width="20" Content="{Binding LayerRepresentation}" />
                  <!--<ContentPresenter Grid.Column="1">
                    <ContentPresenter.Content>
                      Test
                    </ContentPresenter.Content>
                  </ContentPresenter>-->
                  <TextBlock Grid.Column="2" FontWeight="Normal" Text="{Binding Path=Alias, Mode=OneWay}">
                    <ToolTipService.ToolTip>
                      <TextBlock Text="{Binding Description}" TextWrapping="Wrap"/>
                    </ToolTipService.ToolTip>
                  </TextBlock>
                </Grid>
                <HierarchicalDataTemplate.ItemTemplate>
                  <HierarchicalDataTemplate ItemsSource="{Binding Path=Children, Mode=OneTime}">
                    <!--<DataTemplate>-->
                    <Grid>
                      <Grid.ColumnDefinitions>
                        <ColumnDefinition Width="Auto"/>
                        <ColumnDefinition Width="Auto"/>
                        <ColumnDefinition Width="*"/>
                      </Grid.ColumnDefinitions>
                      <CheckBox VerticalAlignment="Center" IsChecked="{Binding IsVisible}"/>
                      <ContentPresenter Grid.Column="1" Content="{Binding LayerRepresentation, Mode=OneWay}" />
                      <TextBlock Margin="0,1,0,1" Text="{Binding Path=Alias, Mode=OneWay}" Grid.Column="2">
                        <ToolTipService.ToolTip>
                          <TextBlock Text="{Binding Description}" TextWrapping="Wrap"/>
                        </ToolTipService.ToolTip>
                      </TextBlock>
                    </Grid>
                    <!--</DataTemplate>-->
                  </HierarchicalDataTemplate>
                </HierarchicalDataTemplate.ItemTemplate>
              </HierarchicalDataTemplate>
            </TreeView.ItemTemplate>
            <TreeView.ContextMenu>
              <ContextMenu>
                <MenuItem Name="miRemove" Header="Remove"
                          Command="{Binding ElementName=root, Path=RemoveItemCmd, diagnostics:PresentationTraceSources.TraceLevel=High}">
                  <MenuItem.Icon>
                    <Image Source="../images/16x16/Delete.png"/>
                  </MenuItem.Icon>
                </MenuItem>
                <MenuItem Header="Properties" Command="{Binding ElementName=root, Path=GetItemPropertiesCmd}"/>
              </ContextMenu>
            </TreeView.ContextMenu>
          </TreeView>
        </UserControl>

    The code-behind for this UserControl has two ICommand properties named RemoveItemCmd and GetItemPropertiesCmd. However, when the UserControl is constructed I get:

        System.Windows.Data Error: 4 : Cannot find source for binding with reference 'ElementName=root'. BindingExpression:Path=RemoveItemCmd; DataItem=null; target element is 'MenuItem' (Name='miRemove'); target property is 'Command' (type 'ICommand')
        System.Windows.Data Error: 4 : Cannot find source for binding with reference 'ElementName=root'. BindingExpression:Path=GetItemPropertiesCmd; DataItem=null; target element is 'MenuItem' (Name=''); target property is 'Command' (type 'ICommand')

    Why is this, and how do I resolve it?

