Search Results

Search found 60391 results on 2416 pages for 'data generation'.

Page 793/2416 | < Previous Page | 789 790 791 792 793 794 795 796 797 798 799 800  | Next Page >

  • Need advice on framework design: how to make extending easy

    - by beginner_
    I'm creating a framework/library for a rather specific use case (data type). It uses diverse Spring components, including Spring Data. The library has a set of entity classes properly set up, with corresponding service and DAO layers. The main work, and the main benefit, of the framework lies in the DAO and service layers. Developers using the framework should be able to extend my entity classes to add the additional fields they require, so I made the DAO and service layers generic to support such extended entity classes. I now face an issue in the IO part of the framework. It must be able to import the "special data type" into the database. In this part I need to create a new entity instance, and hence I need the actual class used. My current solution is to configure, in Spring, a bean of the actual class used. The problem with this is that an application using the framework could only use one implementation of the entity (the original one from me, or exactly one subclass, but not two different classes of the same hierarchy). I'm looking for suggestions / designs for solving this issue. Any ideas?
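
    One way to address this, shown as a minimal Java sketch with hypothetical class names rather than the poster's actual framework, is to stop wiring a single entity bean and instead have the importer accept the concrete entity type (here as a Supplier factory) per call, so several subclasses of the same hierarchy can coexist in one application:

        import java.util.function.Supplier;

        // Hypothetical base entity and a developer-supplied extension.
        class BaseRecord {
            String payload;
        }

        class ExtendedRecord extends BaseRecord {
            String extraField;
        }

        // The importer no longer instantiates a fixed, Spring-configured class;
        // each import call supplies its own factory for the concrete entity type.
        class RecordImporter {
            <T extends BaseRecord> T importLine(String line, Supplier<T> factory) {
                T entity = factory.get();   // concrete subclass chosen by the caller
                entity.payload = line;      // the framework fills the fields it knows about
                return entity;              // the caller (or a subclass hook) fills the rest
            }
        }

        public class ImportSketch {
            public static void main(String[] args) {
                RecordImporter importer = new RecordImporter();
                // Two different entity classes of the same hierarchy in the same application.
                BaseRecord plain = importer.importLine("raw data", BaseRecord::new);
                ExtendedRecord extended = importer.importLine("raw data", ExtendedRecord::new);
                System.out.println(plain.getClass() + " / " + extended.getClass());
            }
        }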

    Read the article

  • Hijacking ASP.NET Sessions

    - by Ricardo Peres
    So, you want to be able to access other users' session state from the session id, right? Well, I don't know if you should, but you definitely can do that! Here is an extension method for that purpose. It uses a bit of reflection, which means it may not work with future versions of .NET (I tested it with .NET 4.0/4.5).

        public static class HttpApplicationExtensions
        {
            private static readonly FieldInfo storeField = typeof(SessionStateModule).GetField("_store", BindingFlags.NonPublic | BindingFlags.Instance);

            public static ISessionStateItemCollection GetSessionById(this HttpApplication app, String sessionId)
            {
                var module = app.Modules["Session"] as SessionStateModule;

                if (module == null)
                {
                    return (null);
                }

                var provider = storeField.GetValue(module) as SessionStateStoreProviderBase;

                if (provider == null)
                {
                    return (null);
                }

                Boolean locked;
                TimeSpan lockAge;
                Object lockId;
                SessionStateActions actions;

                var data = provider.GetItem(HttpContext.Current, sessionId.Trim(), out locked, out lockAge, out lockId, out actions);

                if (data == null)
                {
                    return (null);
                }

                return (data.Items);
            }
        }

    As you can see, it extends the HttpApplication class; that is because we need to access the modules collection to get at the Session module. Use with care!

    Read the article

  • Should components have sub-components in a component-based system like Artemis?

    - by Daniel Ingraham
    I am designing a game using Artemis, although this is more of a philosophical question about component-based design in general. Let's say I have non-primitive data which applies to a given component (an "animal" component may have qualities such as "teeth" or "diet"). There are three ways to approach this in data-driven design, as I see it:
    1) Generate classes for these qualities using "traditional" OOP. I imagine this has negative implications for performance, as systems must then be made aware of these qualities in order to process them. It also seems counter to the overall philosophy of data-driven design.
    2) Include these qualities as sub-components. This seems off, in that we are now confusing the role of components with that of entities. Moreover, out of the box, Artemis isn't capable of mapping these sub-components onto their parent components.
    3) Add "teeth", "diet", etc. as components on the overall entity, alongside "animal". While this feels odd hierarchically, it may simply be a peculiarity of component-based systems.
    I suspect 3 is the correct way to think about things, but I was curious about other ideas.
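
    A minimal sketch of option 3, in plain Java with hypothetical names and no Artemis dependency (in Artemis each of these classes would simply extend the framework's Component base class): "teeth" and "diet" become flat components that sit on the entity beside "animal", and a system declares which of them it needs.

        import java.util.HashMap;
        import java.util.Map;

        // Flat "qualities" modelled as ordinary components, not sub-components.
        class Animal { String species; }
        class Teeth  { int count; }
        class Diet   { String type; /* e.g. "herbivore" */ }

        // A tiny stand-in for an entity: a bag of components keyed by type.
        class Entity {
            private final Map<Class<?>, Object> components = new HashMap<>();

            <T> void add(T component)  { components.put(component.getClass(), component); }
            <T> T get(Class<T> type)   { return type.cast(components.get(type)); }
            boolean has(Class<?> type) { return components.containsKey(type); }
        }

        public class FlatComponentsSketch {
            public static void main(String[] args) {
                Entity wolf = new Entity();
                wolf.add(new Animal());
                wolf.add(new Teeth());   // qualities live beside Animal, not inside it
                wolf.add(new Diet());

                // A "feeding system" would simply require (Animal, Diet) and ignore the rest.
                System.out.println(wolf.has(Animal.class) && wolf.has(Diet.class));
            }
        }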

    Read the article

  • Determine All SQL Server Table Sizes

    I'm doing some work to migrate and optimize a large-ish (40GB) SQL Server database at the moment.  Moving such a database between data centers over the Internet is not without its challenges.  In my case, virtually all of the size of the database is the result of one table, which has over 200M rows of data.  To determine the size of this table on disk, you can run the sp_spaceused stored procedure, like so:

        EXEC sp_spaceused lq_ActivityLog

    This returns the row count and the reserved, data, index and unused space for that one table. Of course, this only shows one table; if you have a lot of tables and need to know which ones are taking up the most space, it would be nice if you could run a query to list all of the tables, perhaps ordered by the space they're taking up.  Thanks to Mitchel Sellers (and Gregg Starks' CURSOR template) and a tiny bit of my own edits, now you can!  Create the stored procedure below and call it to see a listing of all user tables in your database, ordered by their reserved space.

        -- Lists Space Used for all user tables
        CREATE PROCEDURE GetAllTableSizes
        AS
        DECLARE @TableName VARCHAR(100)

        DECLARE tableCursor CURSOR FORWARD_ONLY
        FOR select [name]
            from dbo.sysobjects
            where OBJECTPROPERTY(id, N'IsUserTable') = 1
        FOR READ ONLY

        CREATE TABLE #TempTable
        (
            tableName varchar(100),
            numberofRows varchar(100),
            reservedSize varchar(50),
            dataSize varchar(50),
            indexSize varchar(50),
            unusedSize varchar(50)
        )

        OPEN tableCursor
        WHILE (1=1)
        BEGIN
            FETCH NEXT FROM tableCursor INTO @TableName
            IF(@@FETCH_STATUS<>0)
                BREAK;
            INSERT #TempTable
                EXEC sp_spaceused @TableName
        END
        CLOSE tableCursor
        DEALLOCATE tableCursor

        UPDATE #TempTable
        SET reservedSize = REPLACE(reservedSize, ' KB', '')

        SELECT tableName 'Table Name',
               numberofRows 'Total Rows',
               reservedSize 'Reserved KB',
               dataSize 'Data Size',
               indexSize 'Index Size',
               unusedSize 'Unused Size'
        FROM #TempTable
        ORDER BY CONVERT(bigint, reservedSize) DESC

        DROP TABLE #TempTable
        GO

    Read the article

  • ArchBeat Link-o-Rama for October 18, 2013

    - by OTN ArchBeat
    Enriching XMLType data using relational data – XQuery and fn:collection in action | Lucas Jellema
    Another detailed technical post from the always prolific Lucas Jellema.

    Evil Behind ChangeEventPolicy PPR in CRUD ADF 12c and WebLogic Stuck Threads | Andrejus Baranovskis
    The latest post from Oracle ACE Director Andrejus Baranovskis is a bit of a preview of his presentation at the upcoming UKOUG 2013 event.

    Podcast: Interview with authors of "Hudson Continuous Integration in Practice"
    For your listening pleasure... Here's an Oracle Author Podcast Interview with "Hudson Continuous Integration in Practice" authors Ed Burns and Winston Prakash.

    Manual Recovery Mechanisms in SOA Suite and AIA | Shreenidhi Raghuram
    Solution architect Shreenidhi Raghuram's post combines information from several sources to provide "a quick reference for Manual Recovery of Faults within the SOA and AIA contexts."

    Event: Harnessing Oracle Weblogic and Oracle Coherence
    This OTN Virtual Developer Day event features eight sessions in two tracks, with presentations and hands-on labs for developers and architects delivered by experts in Weblogic, Coherence, and ADF. Registration is free. November 5th, 2013. 9am-1pm PT / 12pm-4pm ET / 1pm-5pm BRT

    Podcast: IoT Challenges and Opportunities - Part 2
    Part 2 of the OTN ArchBeat Internet of Things podcast features a roundtable discussion of IoT challenges: massive data streams, security and privacy issues, evolving standards and protocols. Listen!

    Video: Design - ADF Architectural Patterns - Two for One Deal | Chris Muir
    Chris Muir explores the reuse of BTF workspaces across multiple applications and the advantages and disadvantages of reuse at the application level.

    Thought for the Day
    "Can't nothing make your life work if you ain't the architect." — Terry McMillan, American author (Born October 18, 1951)
    Source: brainyquote.com

    Read the article

  • Ensure Payroll Success with PeopleSoft Year-End Training for U.S. and Canada

    - by Breanne Cooley
    Year-end payroll processing and reporting is a requirement for your business. If you're responsible for completing these processes in either Canada or the United States using the PeopleSoft Payroll application, and if you're new to PeopleSoft Payroll or to performing these processes, consider enrolling in Oracle University's expert training. Our PeopleSoft Payroll specialists will guide you through the necessary steps to ensure you can smoothly and successfully perform your job. Training is specific to the country for which you are performing the processing and reporting. Training lasts one day and is delivered in our Live Virtual Class Format, which helps you avoid travel during this busy season. Here's the training we recommend:

    PeopleSoft Year-End Payroll - U.S.
    This course teaches you how to complete U.S. year-end processing and reporting using PeopleSoft Payroll for North America, step-by-step:
    - Update tax reporting setup tables and update employees' income and tax records.
    - Load each employee's year-end data into a single year-end record for processing and reporting.
    - Identify reports needed to reconcile the year-end data.
    - Correct tax balances and other data as necessary.
    - Generate final print and online W-2 forms and prepare the electronic file for the Social Security Administration.
    - Enter corrected W-2 information and print a W-2c form.
    - Report periodic retirement distributions and related tax withholding amounts on form 1099-R.
    Please Note: this course is intended for organizations using PeopleSoft release 8.81 or higher.

    PeopleSoft Year-End Payroll – Canada
    This course covers the steps necessary to perform Canadian year-end processing using Oracle's PeopleSoft Payroll for North America:
    - Explore adjustments, balances, year-end slip processing, common pitfalls and errors, and balancing reports.
    - Produce accurate year-end reporting results such as T4, T4A, RL-1 and RL-2.
    Please Note: this course is intended for organizations using PeopleSoft release 8.81 or higher.

    See you in class!
    -Oracle University Marketing Team

    Read the article

  • Which Graphics/Geometry abstraction to choose?

    - by Robz
    I've been thinking about the design for a browser app on the HTML5 canvas that simulates a 2D robot zooming around, sensing the world around it. I decided to do this from scratch just for fun. I need shapes, like polygons, circles, and lines in order to model the robot and the world it lives in. These shapes need to be drawn with different appearance attributes, like border/fill style/width/color. I also need to have geometry functions to detect intersections and containment for the robot's sensors and so that the robot doesn't go inside stuff. One idea for functions is to have two totally separate libraries, one to implement graphics (like drawShape(context, shape)) and one for geometry operations (like shapeIntersectsShape(shape1, shape2)). Or, in a more object-oriented approach, the shape objects themselves could implement methods to do their own graphics (shape.draw(context)) and geometry operations (shape1.intersects(shape2)). Then there is the data itself: whether the data to draw a shape and the data to do geometric operations on that shape should be encapsulated within the same object, or be separate structures (where one would contain the other, or both be contained inside another structure). How do existing applications that do graphics/geometry stuff deal with this? Is there one model that is best, or is each good for certain applications? Should the fact that I'm using Javascript instead of a more classical language change how I approach the design?

    Read the article

  • Http handler for classic ASP application for introducing a layer between client and server

    - by JPReddy
    I've a huge classic ASP application wherein thousands of users manage their company/business data. Currently it is not multi-user in the sense that application users could create users of their own and authorize them to access certain areas in the system. I'm thinking of writing a handler which will act as a middle man between client and server, go through every request, and find out who the user is and whether he is authorized to access the data he is trying to reach. For the moment, ignore the part about how I'm going to check the authorization and all that stuff. I just want to know whether I can implement an ASP.NET handler and use it as a middle man for the requests coming in for an ASP website. I just want to read the URL and see what page the user is trying to access, what parameters he is passing in the URL, and the posted data. Is this possible? I read that an ASP.NET handler cannot be used with an ASP website and that I need to use an ISAPI filter or extension for that, which can be developed only in C/C++. Can anybody throw some light on this and guide me on whether I'm in the right direction or not?

    Read the article

  • Oracle as a Service in the Cloud

    - by Jason Williamson
    This should really be a Tweet, but I guess I'm writing a bit more. In the theme of data migration and legacy modernization, I am seeing more and more of a push to consolidate data sources, especially from non-Oracle to Oracle, in an effort to save dollars. From a modernization perspective, this fits in quite well. We are able to migrate things like Teradata, Sybase and DB2, put all of that into an Oracle database, and then provide it as an OaaS (Oracle as a Service) cloud offering. This seems to be a growing trend, not unlike the cool RDS Amazon Database cloud being built on Oracle as well. We again find ourselves back at the same theme of migration, however. The target technology sounds like a winner (COBOL to Java/SOA... DB2/Datacom/Adabas to Oracle), but the age-old problem of how to get there still persists. It is not trivial to migrate large amounts of pre-relational or "devolved" relational data. To do this, we again must fall back on a tight roadmap to migration and leverage the growing set of tools and services that we have. I'm working on a couple of new sections and chapters for a book that Tom and Prakash and I are writing around Database Migration and Consolidation. I'll share some snippets shortly.

    Read the article

  • Is this a ridiculous way to structure a DB schema, or am I completely missing something?

    - by Jim
    I have done a fair bit of work with relational databases, and think I understand the basic concepts of good schema design pretty well. I was recently tasked with taking over a project where the DB was designed by a highly-paid consultant. Please let me know if my gut instinct - "WTF??!?" - is warranted, or is this guy such a genius that he's operating out of my realm? The DB in question is an in-house app used to enter requests from employees. Just looking at a small section of it, you have information on the users, and information on the request being made. I would design this like so:

        User table:
            UserID (primary key, indexed, no dupes)
            FirstName
            LastName
            Department

        Request table:
            RequestID (primary key, indexed, no dupes)
            <...> various data fields containing request details
            UserID -- foreign key associated with User table

    Simple, right? The consultant designed it like this (with sample data):

        UsersTable
            UserID  FirstName  LastName
            234     John       Doe
            516     Jane       Doe
            123     Foo        Bar

        DepartmentsTable
            DepartmentID  Name
            1             Sales
            2             HR
            3             IT

        UserDepartmentTable
            UserDepartmentID  UserID  Department
            1                 234     2
            2                 516     2
            3                 123     1

        RequestTable
            RequestID  UserID  <...>
            1          516     blah
            2          516     blah
            3          234     blah

    The entire database is constructed like this, with every piece of data encapsulated in its own table, with numeric IDs linking everything together. Apparently the consultant had read about OLAP and wanted the "speed of integer lookups". He also has a large number of stored procedures to cross-reference all of these tables. Is this valid design for a small to mid-sized SQL DB? Thanks for comments/answers...

    Read the article

  • What to do when TDD tests reveal new functionality that is needed that also needs tests?

    - by Joshua Harris
    What do you do when you are writing a test, you get to the point where you need to make the test pass, and you realize that you need an additional piece of functionality that should be separated into its own function? That new function needs to be tested as well, but the TDD cycle says to make a test fail, make it pass, then refactor. If I am on the step where I am trying to make my test pass, I'm not supposed to go off and start another failing test to test the new functionality that I need to implement. For example, I am writing a Point class that has a function CollidesWithLine(LineSegment):

        public class Point
        {
            // Point data and constructor ...

            public bool CollidesWithLine(LineSegment lineSegment)
            {
                Vector PointEndOfMovement = new Vector(Position.X + Velocity.X,
                                                       Position.Y + Velocity.Y);
                LineSegment pointPath = new LineSegment(Position, PointEndOfMovement);

                if (lineSegment.Intersects(pointPath))
                    return true;

                return false;
            }
        }

    I was writing a test for CollidesWithLine when I realized that I would need a LineSegment.Intersects(LineSegment) function. But should I just stop what I am doing in my test cycle to go create this new functionality? That seems to break the "Red, Green, Refactor" principle. Should I just write the code that detects that line segments intersect inside of the CollidesWithLine function and refactor it after it is working? That would work in this case, since I can access the data from LineSegment, but what about in cases where that kind of data is private?

    Read the article

  • The MDM Journey: From the Customer Perspective

    - by Mala Narasimharajan
    Master Data Management is more than just a single version of the truth or a 360-degree view of the customer. It spans multiple domains, ranging from customers to suppliers to products and beyond. MDM is pivotal to providing a solid customer experience - one that results in repeat business, continued loyalty and, last but not least, high customer satisfaction. Customer experience is not only defined as accurate information about the customer for the enterprise, but also as presenting the customer with the right information about products, orders, product availability, etc. Let's take a look at a couple of customer use cases with Oracle MDM, drawn from a recent customer panel. Oracle MDM is a key platform for increasing upsell/cross-sell opportunities, improving the targeting of customers, uncovering new sales opportunities, reducing inaccuracies in mailing marketing materials to prospects, and tapping into the full value of a customer across business units more accurately. A leading investment and private bank leverages Oracle MDM to do a better job of identifying clients and their levels of investment, as well as to consistently manage them through a series of areas such as credit, risk, new accounts, etc. Ultimately, they are looking to understand client investments and touchpoints across the company's offerings. Another use case for Oracle MDM is with a major financial and insurance services company with clients worldwide, looking to resolve customer data inaccuracies and client information stored differently across multiple systems. For more information on Oracle Master Data Management, click here.

    Read the article

  • iphone problem receiving UDP packets

    - by SooDesuNe
    I'm using sendto() and recvfrom() to send some simple packets via UDP over WiFi. I've tried using two phones and a simulator; the results I'm getting are:
    - Packets sent from phones - received by the simulator.
    - Packets sent from the simulator - the simulator's recvfrom remains blocking.
    - Packets sent from phones - the other phone's recvfrom remains blocking.
    I'm not sure how to start debugging this one, since the simulator/Mac is able to receive the packets, but the phones don't appear to be getting the message. A slight aside: do I need to keep my packets below the MTU for my network? Or is fragmentation handled by the OS or some other lower-level software?

    UPDATE: I forgot to include the packet size and structure. I'm transmitting:

        typedef struct PacketForTransmission {
            int32_t packetTypeIdentifier;
            char data[64]; // size to fit my biggest struct
        } PacketForTransmission;

    of which the char data[64] is:

        typedef struct PacketHeader {
            uint32_t identifier;
            uint32_t datatype;
        } PacketHeader;

        typedef struct BasePacket {
            PacketHeader header;
            int32_t cardValue;
            char sendingDeviceID[41]; // don't forget to save room for the NULL terminator!
        } BasePacket;

        typedef struct PositionPacket {
            BasePacket basePacket;
            int32_t x;
            int32_t y;
        } PositionPacket;

    Sending a packet looks like:

        PositionPacket packet;
        bzero(&packet, sizeof(packet));
        // fill packet with its associated data

        PacketForTransmission transmissionPacket;
        transmissionPacket.packetTypeIdentifier = kPositionPacketType;
        memcpy(&transmissionPacket.data, (void*)&packet, sizeof(packet)); // put the PositionPacket into data[64]

        size_t sendResult = sendto(_socket, &transmissionPacket, sizeof(transmissionPacket), 0, [address bytes], [address length]);
        NSLog(@"packet sent of size: %i", sendResult);

    and receiving packets looks like:

        while(1) {
            char dataBuffer[8192];
            struct sockaddr addr;
            socklen_t socklen = sizeof(addr);
            ssize_t len = recvfrom(_socket, dataBuffer, sizeof(dataBuffer), 0, &addr, &socklen); // continues blocking here
            NSLog(@"packet received of length: %i", len);
            // do some more stuff
        }

    Read the article

  • 9 Gigapixel Photo Captures 84 Million Stars

    - by Jason Fitzpatrick
    The European Southern Observatory has released an absolutely enormous picture of the center of the Milky Way captured by their VISTA telescope; the image is 9 gigapixels and captures over 84 million stars. From the press release: The large mirror, wide field of view and very sensitive infrared detectors of ESO's 4.1-metre Visible and Infrared Survey Telescope for Astronomy (VISTA) make it by far the best tool for this job. The team of astronomers is using data from the VISTA Variables in the Via Lactea programme (VVV), one of six public surveys carried out with VISTA. The data have been used to create a monumental 108 200 by 81 500 pixel colour image containing nearly nine billion pixels. This is one of the biggest astronomical images ever produced. The team has now used these data to compile the largest catalogue of the central concentration of stars in the Milky Way ever created. Want to check out all 9 billion glorious pixels in their uncompressed state? Be prepared to wait a bit: the uncompressed image is available for download, but it weighs in at a massive 24.6GB. 84 Million Stars and Counting [via Wired]

    Read the article

  • OAuth2 vs Public API

    - by Adam Tannon
    My understanding of OAuth (2.0) is that it's a software stack and protocol to allow two or more web apps to share information about a single end user. User A is a member of Site B and Site C; Site B wants to fetch some data from Site C about User A, and this is where OAuth steps in. So first off, if this assessment is incorrect, please begin by clarifying this for me and correcting me! Assuming I'm on the right track, then I guess I'm not seeing the need for OAuth to begin with (!). I'm sure I'm just not seeing the "forest for the trees" here, but the way I see it, couldn't Site C just expose a public API that Site B could use to fetch the same data (sans OAuth)? If Site C required user credentials to access the data, couldn't this public API just use HTTPS for secure transport and require username/password as a part of each API call? Again, I'm sure I'm missing something, but I'm just not understanding why I would need OAuth when a secure, public API written and exposed by Site C seems more than capable of delivering what Site B needs regarding User A. In general, I'm looking for a set of guidelines to go by when deciding between using OAuth for my web apps or just writing my own web service (exposing a public API). Thanks in advance!

    Read the article

  • java multipart POST library

    - by tom
    Is there a multipart POST library out there that achieves the same effect as doing a POST from an HTML form? For example, uploading a file programmatically in Java versus uploading the file using an HTML form. On the server side, it just blindly expects the request from the client side to be a multipart POST request and parses out the data as appropriate. Has anyone tried this? Specifically, I am trying to see if I can simulate the following with Java:
    1. The user creates a blob by submitting an HTML form that includes one or more file input fields. Your app sets blobstoreService.createUploadUrl() as the destination (action) of this form, passing the function a URL path of a handler in your app.
    2. When the user submits the form, the user's browser uploads the specified files directly to the Blobstore.
    3. The Blobstore rewrites the user's request and stores the uploaded file data, replacing the uploaded file data with one or more corresponding blob keys, then passes the rewritten request to the handler at the URL path you provided to blobstoreService.createUploadUrl(). This handler can do additional processing based on the blob key.
    4. Finally, the handler must return a headers-only redirect response (301, 302, or 303), typically a browser redirect to another page indicating the status of the blob upload.
    Set blobstoreService.createUploadUrl as the form action, passing the application path to load when the POST of the form is completed:

        <body>
            <form action="<%= blobstoreService.createUploadUrl("/upload") %>" method="post" enctype="multipart/form-data">
                <input type="file" name="myFile">
                <input type="submit" value="Submit">
            </form>
        </body>

    Note that this is how the upload form would look if it were created as a JSP. The form must include a file upload field, and the form's enctype must be set to multipart/form-data. When the user submits the form, the POST is handled by the Blobstore API, which creates the blob. The API also creates an info record for the blob, stores the record in the datastore, and passes the rewritten request to your app on the given path as a blob key.
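
    For the client side, here is a minimal sketch of a hand-rolled multipart/form-data POST over plain HttpURLConnection (the URL and file name are hypothetical; a library such as Apache HttpClient would normally take care of this). The part name "myFile" matches the form field above:

        import java.io.DataOutputStream;
        import java.io.File;
        import java.io.FileInputStream;
        import java.io.InputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class MultipartPostSketch {
            public static void main(String[] args) throws Exception {
                String boundary = "----JavaMultipartBoundary" + System.currentTimeMillis();
                File file = new File("report.pdf");                 // hypothetical file to upload
                URL url = new URL("http://example.com/upload");     // hypothetical upload handler

                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setDoOutput(true);
                conn.setRequestMethod("POST");
                conn.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + boundary);

                try (DataOutputStream out = new DataOutputStream(conn.getOutputStream());
                     InputStream in = new FileInputStream(file)) {
                    // One part per form field; here a single file field named "myFile".
                    out.writeBytes("--" + boundary + "\r\n");
                    out.writeBytes("Content-Disposition: form-data; name=\"myFile\"; filename=\""
                                   + file.getName() + "\"\r\n");
                    out.writeBytes("Content-Type: application/octet-stream\r\n\r\n");

                    byte[] buffer = new byte[8192];
                    int read;
                    while ((read = in.read(buffer)) != -1) {
                        out.write(buffer, 0, read);
                    }

                    // The closing boundary terminates the multipart body.
                    out.writeBytes("\r\n--" + boundary + "--\r\n");
                }

                System.out.println("Server responded: " + conn.getResponseCode());
            }
        }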

    Read the article

  • Dynamic Multiple Choice (Like a Wizard) - How would you design it? (e.g. Schema, AI model, etc.)

    - by henry74
    This question can probably be broken up into multiple questions, but here goes... In essence, I'd like to allow users to type in what they would like to do, and provide a wizard-like interface to ask for the information which is missing to complete the requested query. For example, let's say a user types: "What is the weather like in Springfield?" We recognize the user is interested in weather, but it could be Springfield, IL or a Springfield in another state. A follow-up question would be:
    What Springfield did you want weather for?
    1 - Springfield, IL
    2 - Springfield, WI
    You can probably think of a million examples where a request is missing key data or is ambiguous. Make the assumption that the gist of what the user wants can be understood, but there are missing pieces of data required to complete the request. Perhaps you can take it as far back as asking what the user wants to do and "leading" them to a query. This is not AI in the sense of taking any input and truly understanding it. I'm not referring to having some way to hold a conversation with a user. It's about inferring what a user wants, checking to see if there is an applicable service to be provided, identifying the inputs needed, overlaying that on top of what's missing from the request, then asking the user for the remaining information. That's it! :-) How would you want to store the information about services? How would you go about determining what was missing from the input data? My thoughts:
    - Use regular expressions to identify clear pieces of information. These will be matched to the parameters of a service.
    - Figure out which parameters do not have matching data and look up the associated question for those parameters.
    - Ask those questions and capture answers. Re-run the service, passing in the newly captured data. These would be more free-form questions.
    - For multiple choice, identify the ambiguity and search for potential matches ranked in order of likelihood (add in user history/preferences to help decide). Provide the top 3 as choices.
    Thoughts appreciated. Cheers, Henry
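
    A minimal sketch of the "store services, find missing parameters" idea in Java (all names are hypothetical): each service descriptor maps its required parameters to follow-up questions, and the wizard asks only the questions whose parameters the parser could not fill.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Map;

        // A service declares the parameters it needs and a question to ask for each one.
        class ServiceDescriptor {
            final String name;
            final Map<String, String> questionsByParameter; // parameter -> follow-up question

            ServiceDescriptor(String name, Map<String, String> questionsByParameter) {
                this.name = name;
                this.questionsByParameter = questionsByParameter;
            }

            // Compare what was extracted from the user's text against what the service requires.
            List<String> missingParameterQuestions(Map<String, String> extracted) {
                List<String> questions = new ArrayList<>();
                for (Map.Entry<String, String> entry : questionsByParameter.entrySet()) {
                    if (!extracted.containsKey(entry.getKey())) {
                        questions.add(entry.getValue());
                    }
                }
                return questions;
            }
        }

        public class WizardSketch {
            public static void main(String[] args) {
                ServiceDescriptor weather = new ServiceDescriptor("weather",
                        Map.of("city",  "Which city did you want weather for?",
                               "state", "Which state is that city in?"));

                // Suppose the parser only recognized the city "Springfield".
                Map<String, String> extracted = Map.of("city", "Springfield");

                // Ask only the questions whose parameters are still missing.
                weather.missingParameterQuestions(extracted).forEach(System.out::println);
            }
        }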

    Read the article

  • UIScrollView Infinite Scrolling

    - by Ben Robinson
    I'm attempting to set up a scroll view with infinite (horizontal) scrolling. Scrolling forward is easy - I have implemented scrollViewDidScroll, and when the contentOffset gets near the end I make the scroll view's contentSize bigger and add more data into the space (I'll have to deal with the crippling effect this will have later!). My problem is scrolling back - the plan is to see when I get near the beginning of the scroll view; when I do, I make the contentSize bigger, move the existing content along, add the new data to the beginning, and then - importantly - adjust the contentOffset so the data under the viewport stays the same. This works perfectly if I scroll slowly (or enable paging), but if I go fast (not even very fast!) it goes mad! Here's the code:

        - (void)scrollViewDidScroll:(UIScrollView *)scrollView {
            float pageNumber = scrollView.contentOffset.x / 320;
            float pageCount = scrollView.contentSize.width / 320;

            if (pageNumber > pageCount-4) {
                // Add 10 new pages to end
                mainScrollView.contentSize = CGSizeMake(mainScrollView.contentSize.width + 3200, mainScrollView.contentSize.height);
                // add new data here at (320*pageCount, 0);
            }

            // *** the problem is here - I use updatingScrollingContent to make sure it's only called once (for accurate testing!)
            if (pageNumber < 4 && !updatingScrollingContent) {
                updatingScrollingContent = YES;
                mainScrollView.contentSize = CGSizeMake(mainScrollView.contentSize.width + 3200, mainScrollView.contentSize.height);
                mainScrollView.contentOffset = CGPointMake(mainScrollView.contentOffset.x + 3200, 0);

                for (UIView *view in [mainContainerView subviews]) {
                    view.frame = CGRectMake(view.frame.origin.x+3200, view.frame.origin.y, view.frame.size.width, view.frame.size.height);
                }

                // add new data here at (0, 0);
            }

            // ** MY CHECK!
            NSLog(@"%f", mainScrollView.contentOffset.x);
        }

    As the scrolling happens, the log reads:

        1286.500000 1285.500000 1284.500000 1283.500000 1282.500000 1281.500000 1280.500000

    Then, when pageNumber < 4 (we're getting near the beginning):

        4479.500000 4479.500000

    Great! - but the numbers should continue to go down in the 4,000s, yet the next log entries read:

        1278.000000 1277.000000 1276.500000 1275.500000 etc....

    Continuing from where it left off! Just for the record, if scrolled slowly the log reads:

        1294.500000 1290.000000 1284.500000 1280.500000 4476.000000 4476.000000 4473.000000 4470.000000 4467.500000 4464.000000 4460.500000 4457.500000 etc....

    Any ideas???? Thanks, Ben.

    Read the article

  • Base64 encoding breaks S/MIME-encrypted email data

    - by Streuner
    I'm using MIME::Lite to create and send e-mails. Now I need to add support for S/MIME encryption, and I could finally encrypt my e-mail (the only Perl lib I could install seems broken, so I'm using a system call and openssl smime), but when I try to create a MIME object with it, the e-mail is broken as soon as I set the Content-Transfer-Encoding to base64. To make it even more curious, this happens only if I set it via $myMessage->attr. If I use the constructor ->new, everything is fine, besides a little warning which I suppress by using MIME::Lite->quiet(1); Is it a bug or my fault? Here are the two ways I create the MIME object.

    Setting the Content-Transfer-Encoding via the constructor and suppressing the warning:

        MIME::Lite->quiet(1);
        my $msgEncr = MIME::Lite->new(
            From    => '[email protected]',
            To      => '[email protected]',
            Subject => 'SMIME Test',
            Data    => $myEncryptedMessage,
            'Content-Transfer-Encoding' => 'base64'
        );
        $msgEncr->attr('Content-Disposition' => 'attachment');
        $msgEncr->attr('Content-Disposition.filename' => 'smime.p7m');
        $msgEncr->attr('Content-Type' => 'application/x-pkcs7-mime');
        $msgEncr->attr('Content-Type.smime-type' => 'enveloped-data');
        $msgEncr->attr('Content-Type.name' => 'smime.p7m');
        $msgEncr->send;
        MIME::Lite->quiet(0);

    Setting the Content-Transfer-Encoding via $myMessage->attr, which breaks the encrypted data but causes no warning:

        my $msgEncr = MIME::Lite->new(
            From    => '[email protected]',
            To      => '[email protected]',
            Subject => 'SMIME Test',
            Data    => $myEncryptedMessage
        );
        $msgEncr->attr('Content-Disposition' => 'attachment');
        $msgEncr->attr('Content-Disposition.filename' => 'smime.p7m');
        $msgEncr->attr('Content-Type' => 'application/x-pkcs7-mime');
        $msgEncr->attr('Content-Type.smime-type' => 'enveloped-data');
        $msgEncr->attr('Content-Type.name' => 'smime.p7m');
        $msgEncr->attr('Content-Transfer-Encoding' => 'base64');
        $msgEncr->send;

    I just don't get why my message is broken when I'm using the attribute setter. Thanks in advance for your help! Besides that, I'm unable to attach any file to this e-mail without breaking the encrypted message again.

    Read the article

  • thick client migration to web based application

    - by user1151597
    This query is related to application design and the technology I should consider during migration. The scenario: I have a C#.NET WinForms application which communicates with a device. One of the main features of this application is monitoring cyclic data (at a 200 ms rate) sent from the device to the application. The request to start the cyclic data is sent only once at the beginning, and then the application keeps receiving data from the device until it sends a stop request. Now this same application needs to be deployed over the web on an intranet. The application is composed of a business logic layer and a communication layer which talks to the device through UDP ports. I am trying to find a solution which will allow me to have a single instance of the application on the server, so that the device thinks it is connected as usual, and then manage the clients from the business logic layer. I want to reuse the code of the business layer and the communication layer as much as possible. Please let me know whether web services, WCF, etc. are what I should consider when designing the migration. Thanks in advance.

    Read the article

  • What are my choices for server side sandboxed scripting?

    - by alfa64
    I'm building a public website where users share data and scripts to run over some data. The scripts are run server-side in some sort of sandbox, without other interaction, in this cycle: my Perl program reads a user-made script from a database, adds the data to be processed into the script (i.e. a JSON document), then calls the interpreter; the interpreter returns the response (a JSON document or plain text), and I save it to the database with my Perl script. The script should be able to have some access to built-in functions added to the scripting language by myself, but nothing more. So I've stumbled upon node.js as a JavaScript interpreter, and an hour or so ago upon Google's V8 (does V8 make sense for this kind of thing?). CoffeeScript also came to my mind, since it looks nice and it's still JavaScript. I think JavaScript is widespread enough and more "sandboxable", since it doesn't have OS calls or anything remotely insecure (I think). By the way, I'm writing the system in Perl and PHP for the front end. To improve the question: I'm choosing JavaScript because I think it is secure and simple enough to implement with node.js, but what other alternatives are there for achieving this kind of task? Lua? Python? I just can't find information on how to run a sandboxed interpreter in a proper way.

    Read the article

  • Best design for a "Command Executer" class

    - by Justin984
    Sorry for the vague title, I couldn't think of a way to condense the question. I am building an application that will run as a background service and intermittently collect data about the system it's running on. A second Android controller application will query the system over TCP/IP for statistics about it. Currently, the background service has a TCP listener class that reads/writes bytes from a socket. When data is received, it raises an event to notify the service. The service takes the bytes, feeds them into a command parser to figure out what is being requested, and then passes the parsed command to a command executer class. When the service receives a "query statistics" command, it should return statistics over the TCP/IP connection. Currently, all of these classes are fully decoupled from each other, but in order for the command executer to return statistics, it will obviously need access to the socket somehow. For reasons I can't completely articulate, it feels wrong for the command executer to have a direct reference to the socket. I'm looking for strategies and/or design patterns I can use to return data over the socket while keeping the classes decoupled, if this is possible. Hopefully this makes sense; please let me know if I can include any info that would make the question easier to understand.
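
    One common answer is to hand the executer a small "reply" abstraction instead of the socket itself. A minimal sketch in Java (the poster's service may well be in another language, and all names here are hypothetical): the network layer implements the callback and wraps the socket, so the executer never touches it.

        // The executer depends only on this small callback interface.
        interface ResponseWriter {
            void write(byte[] payload);
        }

        class StatisticsCommand { /* parsed command details would live here */ }

        class SystemStatistics {
            byte[] snapshotAsBytes() {
                return "cpu=12%;mem=48%".getBytes(); // placeholder payload
            }
        }

        class CommandExecuter {
            private final SystemStatistics stats;

            CommandExecuter(SystemStatistics stats) {
                this.stats = stats;
            }

            // The executer never sees the socket; it only knows "something that can reply".
            void execute(StatisticsCommand command, ResponseWriter reply) {
                reply.write(stats.snapshotAsBytes());
            }
        }

        public class DecouplingSketch {
            public static void main(String[] args) {
                CommandExecuter executer = new CommandExecuter(new SystemStatistics());
                // In the real service the TCP layer would pass a writer that wraps the
                // socket's output stream; here a lambda that prints stands in for it.
                executer.execute(new StatisticsCommand(),
                                 payload -> System.out.println(new String(payload)));
            }
        }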

    Read the article

  • How to perform game object smoothing in multiplayer games

    - by spaceOwl
    We're developing an infrastructure to support multiplayer games for our game engine. In simple terms, each client (player) engine sends some pieces of data regarding the relevant game objects at a given time interval. On the receiving end, we step the incoming data to the current time (to compensate for latency), followed by a smoothing step (which is the subject of this question). I was wondering how smoothing should be performed. Currently the algorithm is similar to this:
    1. Receive incoming state for an object (position, velocity, acceleration, rotation, custom data like visual properties, etc).
    2. Calculate a diff between the local object position and the position we have after the previous prediction steps.
    3. If the diff doesn't exceed some threshold value, start a smoothing step: mark the object's current position and the target position, then linearly interpolate between these values for 0.3 seconds.
    I wonder if this scheme is any good, or if there is any other common implementation or algorithm that should be used. (For example, should I only smooth out the position, or other values as well, such as speed?) Any help will be appreciated.
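
    A minimal sketch of that interpolation step in Java (hypothetical names, position only): when a correction arrives, remember where the object currently is and where it should be, then blend between the two over a fixed 0.3-second window each frame.

        public class PositionSmoother {
            private static final float SMOOTH_TIME = 0.3f; // seconds

            private float startX, startY;    // local position when the correction arrived
            private float targetX, targetY;  // position reported by the remote peer
            private float elapsed = SMOOTH_TIME;

            // Called when a (latency-compensated) remote state comes in.
            public void beginSmoothing(float currentX, float currentY,
                                       float newTargetX, float newTargetY) {
                startX = currentX;
                startY = currentY;
                targetX = newTargetX;
                targetY = newTargetY;
                elapsed = 0f;
            }

            // Called every frame with the frame's delta time; returns the smoothed position.
            public float[] update(float deltaSeconds) {
                elapsed = Math.min(elapsed + deltaSeconds, SMOOTH_TIME);
                float t = elapsed / SMOOTH_TIME;           // 0..1 over the smoothing window
                float x = startX + (targetX - startX) * t;
                float y = startY + (targetY - startY) * t;
                return new float[] { x, y };
            }
        }

    Whether velocity and rotation should be smoothed the same way is a separate choice; one common approach is to interpolate only position and snap the rest, since position errors tend to be the most visible.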

    Read the article

  • Is there a canonical source supporting "all-surrogates"?

    - by user61852
    Background
    The "all-PK-must-be-surrogates" approach is not present in Codd's relational model or any SQL standard (ANSI, ISO or other). Canonical books seem to avoid this restriction too. Oracle's own data dictionary schema uses natural keys in some tables and surrogate keys in other tables. I mention this because these people must know a thing or two about RDBMS design. PPDM (Professional Petroleum Data Management Association) recommends the same thing the canonical books do: use surrogate keys as primary keys when:
    - There are no natural or business keys
    - Natural or business keys are bad (change often)
    - The value of the natural or business key is not known at the time of inserting the record
    - Multicolumn natural keys (usually several FKs) exceed three columns, which makes joins too verbose
    Also, I have not found a canonical source that says natural keys need to be immutable. All I find is that they need to be very stable, i.e. they need to be changed only on very rare occasions, if ever. I mention PPDM because these people must know a thing or two about RDBMS design too. The origins of the "all-surrogates" approach seem to come from recommendations from some ORM frameworks. It's true that the approach allows for rapid database modeling by not having to do much business analysis, but at the expense of maintainability and readability of the SQL code. Much provision is made for something that may or may not happen in the future (the natural PK changed, so we will have to use the RDBMS cascade update functionality) at the expense of day-to-day tasks like having to join more tables in every query and having to write code for importing data between databases, an otherwise very straightforward procedure (due to the need to avoid PK collisions and having to create stage/equivalence tables beforehand). Another argument is that indexes based on integers are faster, but that has to be supported with benchmarks. Obviously, long, varying varchars are not good for PKs. But indexes based on short, fixed-length varchars are almost as fast as integers.
    The questions
    - Is there any canonical source that supports the "all-PK-must-be-surrogates" approach?
    - Has Codd's relational model been superseded by a newer relational model?

    Read the article

  • How to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, and so I need to know how to store bitmaps in memory (24 bpp / 32 bpp, compressed/raw, etc). I'm not working with 3D graphics or DirectX / OpenGL rendering, so I don't need to use graphics-card-compatible bitmap formats.
    My questions:
    - What is the "usual" or "normal" way to store bitmaps in memory? (in C++ engines/projects?)
    - How should bitmaps be stored for high-performance algorithms, such that read/write times are the fastest? (fixed array? with/without padding? 24-bpp or 32-bpp?)
    - How should bitmaps be stored for applications handling a lot of bitmap data, to minimize memory usage? (JPEG? or a faster [de]compression algorithm?)
    Some possible methods:
    - Use a fixed, packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer access; all pixels are allocated in one continuous memory chunk (could be 1-10 MB).
    - Use a form of "sparse" data storage so each line of the bitmap is allocated separately, reusing more memory and requiring smaller contiguous memory segments.
    - Store bitmaps in compressed form (PNG, JPG, GIF, etc) and unpack only when needed, reducing the amount of memory used. Delete the unpacked data if it's not used for 10 seconds.
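
    A minimal sketch of the first option, written in Java rather than C++ for consistency with the other examples here (names are hypothetical): one contiguous packed 32-bpp ARGB buffer addressed with plain index arithmetic, similar to how Java's BufferedImage with TYPE_INT_ARGB stores its pixels.

        public class PackedBitmap {
            private final int width;
            private final int height;
            private final int[] pixels; // one int per pixel, packed as 0xAARRGGBB

            public PackedBitmap(int width, int height) {
                this.width = width;
                this.height = height;
                this.pixels = new int[width * height]; // single contiguous allocation
            }

            public int getPixel(int x, int y) {
                return pixels[y * width + x];
            }

            public void setPixel(int x, int y, int argb) {
                pixels[y * width + x] = argb;
            }

            public static void main(String[] args) {
                PackedBitmap bmp = new PackedBitmap(640, 480);
                bmp.setPixel(10, 20, 0xFFFF0000);   // opaque red
                System.out.printf("pixel = %08X%n", bmp.getPixel(10, 20));
            }
        }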

    Read the article
