Search Results

Search found 59579 results on 2384 pages for 'data loss'.


  • SubSonic Stored Procedure Issue - Data Generated at Stored Procedure is different from Data Received

    - by ShaShaIn
    Hi All, I am facing an unknown problem while using a stored procedure with SubSonic. I have written a stored procedure and application code that takes a first name and last name as input parameters and returns the last login id as an output parameter. It creates the login id as the first character of the first name plus the complete last name when that login id does not yet exist; otherwise it appends an incrementing number to the last login id, e.g. First Name - Mark, Last Name - Waugh, First Login Id - MWaugh, Second Login Id - MWaugh1, Third Login Id - MWaugh2, etc.

    Stored Procedure:

        SET ANSI_NULLS OFF
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE PROCEDURE [dbo].[Users_FetchLoginId]
        (
            @FirstName nvarchar(64),
            @LastName nvarchar(64),
            @LoginId nvarchar(256) OUTPUT
        )
        AS
        DECLARE @UserId nvarchar(256);
        SET @UserId = NULL;
        SET @LoginId = NULL;

        SELECT @UserId = LoweredUserName FROM aspnet_Users
        WHERE LoweredUserName LIKE (LOWER(SUBSTRING(@FirstName,1,1) + @LastName))

        IF @@rowcount = 0 OR @UserId IS NULL
        BEGIN
            SET @LoginId = (SUBSTRING(@FirstName, 1, 1) + @LastName);
            print @LoginId
            RETURN 1;
        END
        ELSE
        BEGIN
            SELECT TOP 1 LoweredUserName FROM aspnet_Users
            WHERE LoweredUserName LIKE (LOWER(SUBSTRING(@FirstName,1,1) + @LastName + '%'))
            ORDER BY LoweredUserName DESC
            RETURN 2;
        END

    Application Code:

        public string FetchLoginId(string firstName, string lastName)
        {
            SubSonic.StoredProcedure sp = SPs.UsersFetchLoginId(firstName, lastName, null);
            sp.Command.AddReturnParameter();
            sp.Execute();

            if (sp.Command.Parameters.Find(delegate(QueryParameter queryParameter)
                { return queryParameter.Mode == ParameterDirection.ReturnValue; }).ParameterValue != System.DBNull.Value)
            {
                int returnCode = Convert.ToInt32(
                    sp.Command.Parameters.Find(delegate(QueryParameter queryParameter)
                    { return queryParameter.Mode == ParameterDirection.ReturnValue; }).ParameterValue,
                    CultureInfo.InvariantCulture);

                if (returnCode == 1)
                {
                    // UserName as first character of first name & full last name
                    return sp.Command.Parameters[2].ParameterValue.ToString();
                }
                if (returnCode == 2)
                {
                    DataSet ds = sp.GetDataSet();
                    if (null == ds || null == ds.Tables[0] || 0 == ds.Tables[0].Rows.Count)
                        return "";

                    string maxLoginId = ds.Tables[0].Rows[0]["LoweredUserName"].ToString();
                    string initialLoginId = firstName.Substring(0, 1) + lastName;
                    int maxLoginIdIndex = 0;
                    int initialLoginIdLength = initialLoginId.Length;

                    if (maxLoginId.Substring(initialLoginIdLength).Length == 0)
                    {
                        maxLoginIdIndex++;
                        // UserName as max lowered user name found & incrementer as suffix
                        // (here, first incrementer, i.e. 1)
                        return (initialLoginId + maxLoginIdIndex);
                    }
                    if (int.TryParse(maxLoginId.Substring(initialLoginIdLength), out maxLoginIdIndex))
                    {
                        if (maxLoginIdIndex > 0)
                        {
                            maxLoginIdIndex++;
                            // UserName as max lowered user name found & incrementer as suffix
                            return (initialLoginId + maxLoginIdIndex);
                        }
                    }
                }
            }
            return ""; // no return code matched
        }

    Now the problem: for some input (see the test data below), the login id is created correctly at the SQL Server end, but on the application (SubSonic DAL) side some characters are truncated.

    First Name - Jenelia, Last Name - Kanupatikenalaalayampentyalavelugoplansubhramanayam

    Executing [dbo].[Users_FetchLoginId] separately, the login id is correct: JKanupatikenalaalayampentyalavelugoplansubhramanayam

    FetchLoginId(string firstName, string lastName) on the application DAL side receives the LoginId wrongly from the stored procedure: JKanupatikenalaalayampentyalavelugoplansubhramanay

    You can easily see that 2 characters are removed.
    If the data is correctly generated by the stored procedure, why are the characters removed when the data is received in the output parameter of the stored procedure? Is it due to an internal, known or unknown, bug in SubSonic? Your help is significant. Thanks in advance...
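    The symptom (a 52-character value coming back with the last 2 characters missing) is consistent with the output parameter being declared with too small a Size on the ADO.NET side; SubSonic generates that parameter for you, so it is worth checking what Size it assigned. A minimal plain-ADO.NET sketch of the distinction (not SubSonic; connection string and wrapper class are assumptions):

        // Sketch only: for variable-length output parameters, the declared Size
        // is a hard ceiling; anything longer that the procedure assigns is
        // silently truncated on the way back to the client.
        using System.Data;
        using System.Data.SqlClient;

        static class LoginIdFetcher
        {
            public static string FetchLoginId(string connectionString, string firstName, string lastName)
            {
                using (SqlConnection conn = new SqlConnection(connectionString))
                using (SqlCommand cmd = new SqlCommand("dbo.Users_FetchLoginId", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@FirstName", firstName);
                    cmd.Parameters.AddWithValue("@LastName", lastName);

                    // Size must match the procedure's nvarchar(256) declaration;
                    // a smaller value here reproduces exactly the truncation above.
                    SqlParameter loginId = cmd.Parameters.Add("@LoginId", SqlDbType.NVarChar, 256);
                    loginId.Direction = ParameterDirection.Output;

                    conn.Open();
                    cmd.ExecuteNonQuery();
                    return loginId.Value as string;
                }
            }
        }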


  • Adding Core Data to existing iPhone project

    - by swalkner
    I'd like to add Core Data to an existing iPhone project, but I still get a lot of compile errors:

        NSManagedObjectContext undeclared
        Expected specifier-qualifier-list before 'NSManagedObjectModel'
        ...

    I already added the Core Data framework to the target (right-click on my project under "Targets", "Add" - "Existing Frameworks", "CoreData.framework"). My header file:

        NSManagedObjectModel *managedObjectModel;
        NSManagedObjectContext *managedObjectContext;
        NSPersistentStoreCoordinator *persistentStoreCoordinator;

        [...]

        @property (nonatomic, retain, readonly) NSManagedObjectModel *managedObjectModel;
        @property (nonatomic, retain, readonly) NSManagedObjectContext *managedObjectContext;
        @property (nonatomic, retain, readonly) NSPersistentStoreCoordinator *persistentStoreCoordinator;

    What am I missing? Starting a new project is not an option... Thanks a lot!

    Edit: sorry, I do have those implementations... but it seems like the library is missing... the implementation methods are full of compile errors like "managedObjectContext undeclared" and "NSPersistentStoreCoordinator undeclared", but also "Expected ')' before NSManagedObjectContext" (although it seems like the parentheses are correct)...

        #pragma mark -
        #pragma mark Core Data stack

        /**
         Returns the managed object context for the application. If the context
         doesn't already exist, it is created and bound to the persistent store
         coordinator for the application.
         */
        - (NSManagedObjectContext *)managedObjectContext {
            if (managedObjectContext != nil) {
                return managedObjectContext;
            }
            NSPersistentStoreCoordinator *coordinator = [self persistentStoreCoordinator];
            if (coordinator != nil) {
                managedObjectContext = [[NSManagedObjectContext alloc] init];
                [managedObjectContext setPersistentStoreCoordinator:coordinator];
            }
            return managedObjectContext;
        }

        /**
         Returns the managed object model for the application. If the model
         doesn't already exist, it is created by merging all of the models found
         in the application bundle.
         */
        - (NSManagedObjectModel *)managedObjectModel {
            if (managedObjectModel != nil) {
                return managedObjectModel;
            }
            managedObjectModel = [[NSManagedObjectModel mergedModelFromBundles:nil] retain];
            return managedObjectModel;
        }

        /**
         Returns the persistent store coordinator for the application. If the
         coordinator doesn't already exist, it is created and the application's
         store added to it.
         */
        - (NSPersistentStoreCoordinator *)persistentStoreCoordinator {
            if (persistentStoreCoordinator != nil) {
                return persistentStoreCoordinator;
            }
            NSURL *storeUrl = [NSURL fileURLWithPath:[[self applicationDocumentsDirectory]
                stringByAppendingPathComponent:@"Core_Data.sqlite"]];
            NSError *error = nil;
            persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc]
                initWithManagedObjectModel:[self managedObjectModel]];
            if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType
                    configuration:nil URL:storeUrl options:nil error:&error]) {
                /*
                 Replace this implementation with code to handle the error appropriately.
                 abort() causes the application to generate a crash log and terminate.
                 You should not use this function in a shipping application, although
                 it may be useful during development. If it is not possible to recover
                 from the error, display an alert panel that instructs the user to quit
                 the application by pressing the Home button.

                 Typical reasons for an error here include:
                 * The persistent store is not accessible
                 * The schema for the persistent store is incompatible with the
                   current managed object model
                 Check the error message to determine what the actual problem was.
                 */
                NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
                abort();
            }
            return persistentStoreCoordinator;
        }
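    Those particular errors ("Expected specifier-qualifier-list", "undeclared", stray-parenthesis complaints on otherwise valid lines) are what the compiler produces when the Core Data types are simply unknown to it: adding CoreData.framework to the target links the library but does not import its headers. A sketch of the top of the header (the class name is an assumption; template-generated projects usually put the import in the precompiled prefix header, <AppName>_Prefix.pch, so every file sees it):

        // MyAppDelegate.h -- sketch; the crucial line is the CoreData import.
        #import <UIKit/UIKit.h>
        #import <CoreData/CoreData.h>   // declares NSManagedObjectContext, NSManagedObjectModel, ...

        @interface MyAppDelegate : NSObject <UIApplicationDelegate> {
            NSManagedObjectModel *managedObjectModel;
            NSManagedObjectContext *managedObjectContext;
            NSPersistentStoreCoordinator *persistentStoreCoordinator;
        }

        @property (nonatomic, retain, readonly) NSManagedObjectModel *managedObjectModel;
        @property (nonatomic, retain, readonly) NSManagedObjectContext *managedObjectContext;
        @property (nonatomic, retain, readonly) NSPersistentStoreCoordinator *persistentStoreCoordinator;

        @end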


  • algorithm q: Fuzzy matching of structured data

    - by user86432
    I have a fairly small corpus of structured records sitting in a database. Given a tiny fraction of the information contained in a single record, submitted via a web form (so structured in the same way as the table schema), let us call it the test record, I need to quickly draw up a list of the records that are the most likely matches for the test record, as well as provide a confidence estimate of how closely the search terms match a record. The primary purpose of this search is to discover whether someone is attempting to input a record that is a duplicate of one in the corpus. There is a reasonable chance that the test record will be a dupe, and a reasonable chance it will not be.

    The records are about 12000 bytes wide and the total count of records is about 150,000. There are 110 columns in the table schema and 95% of searches will be on the top 5% most commonly searched columns. The data is stuff like names, addresses, telephone numbers, and other industry-specific numbers. In both the corpus and the test record it is entered by hand and is semistructured within an individual field.

    You might at first blush say "weight the columns by hand and match word tokens within them", but it's not so easy. I thought so too: if I get a telephone number I thought that would indicate a perfect match. The problem is that there isn't a single field in the form whose token frequency does not vary by orders of magnitude. A telephone number might appear 100 times in the corpus or 1 time in the corpus. The same goes for any other field. This makes weighting at the field level impractical. I need a more fine-grained approach to get decent matching.

    My initial plan was to create a hash of hashes, the top level being the field name. Then I would select all of the information from the corpus for a given field, attempt to clean up the data contained in it, and tokenize the sanitized data, hashing the tokens at the second level, with the tokens as keys and frequency as value. I would use the frequency count as a weight: the higher the frequency of a token in the reference corpus, the less weight I attach to that token if it is found in the test record.

    My first question is for the statisticians in the room: how would I use the frequency as a weight? Is there a precise mathematical relationship between n, the number of records, f(t), the frequency with which a token t appeared in the corpus, the probability o that a record is an original and not a duplicate, and the probability p that the test record is really a record x, given that the test and x contain the same t in the same field? How about the relationship for multiple token matches across multiple fields? Since I sincerely doubt that there is, is there anything that gets me close but is better than a completely arbitrary hack full of magic factors?

    Barring that, has anyone got a way to do this? I'm especially keen on suggestions that do not involve maintaining another table in the database, such as a token frequency lookup table :). This is my first post on StackOverflow, thanks in advance for any replies you may see fit to give.
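    There is no exact closed form for the probabilities asked about without a model of how duplicates are generated, but one standard, less-than-arbitrary starting point is inverse document frequency weighting, which formalizes exactly the intuition above: a token seen once in the corpus is far more identifying than one seen 100 times. As a sketch, with n records, f(t) the number of records containing token t, and T_F(r) denoting the token set of field F in record r:

        w(t) \;=\; \log\frac{n}{f(t)}, \qquad
        \operatorname{score}(x) \;=\; \sum_{\text{fields } F}\;\sum_{t \,\in\, T_F(\text{test}) \,\cap\, T_F(x)} w(t)

    Normalizing score(x) by the maximum score attainable for the test record gives a rough confidence in [0, 1]. Under a naive-Bayes-style independence assumption across tokens, these log weights behave like additive log-likelihood evidence, which is the closest simple justification for summing them across multiple token matches and multiple fields.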


  • Core Data managed object context thread synchronisation

    - by Ben Reeves
    I have an issue where I'm updating a many-to-many relationship on a background thread. It works fine on that thread, but when I send the object back to the main thread the changes do not show. If I close the app and reopen it, the data has saved fine and the changes show on the main thread. Also, using [context lock] instead of a different managed object context works fine.

    I have tried NSManagedObjectContext's

        - (BOOL)save:(NSError **)error;
        - (void)refreshObject:(NSManagedObject *)object mergeChanges:(BOOL)flag;

    at different stages throughout the process, but it doesn't seem to help. My Core Data code uses the following getter to ensure any operations are thread safe:

        - (NSManagedObjectContext *)managedObjectContext {
            NSThread *thisThread = [NSThread currentThread];
            if (thisThread == [NSThread mainThread]) {
                // Main thread, just return the default context
                return managedObjectContext;
            } else {
                // Thread-safe trickery
                NSManagedObjectContext *threadManagedObjectContext =
                    [[thisThread threadDictionary] objectForKey:CONTEXT_KEY];
                if (threadManagedObjectContext == nil) {
                    threadManagedObjectContext = [[[NSManagedObjectContext alloc] init] autorelease];
                    [threadManagedObjectContext setPersistentStoreCoordinator:[self persistentStoreCoordinator]];
                    [[thisThread threadDictionary] setObject:threadManagedObjectContext forKey:CONTEXT_KEY];
                }
                return threadManagedObjectContext;
            }
        }

    and when I pass objects between threads I'm using:

        - (NSManagedObject *)makeSafe:(NSManagedObject *)object {
            if ([object managedObjectContext] != [self managedObjectContext]) {
                NSError *error = nil;
                object = [[self managedObjectContext] existingObjectWithID:[object objectID] error:&error];
                if (error) {
                    NSLog(@"Error makeSafe: %@", error);
                }
            }
            return object;
        }

    Any help appreciated.
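    One detail not shown in the code above is how the main-thread context learns about the background save; separate contexts do not observe each other, so without an explicit merge the main thread keeps serving its cached snapshot (which matches the symptom of the changes appearing only after a relaunch). A sketch of the usual pattern, assuming the ivar names from the getter above:

        // Register once, e.g. where the Core Data stack is set up:
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(contextDidSave:)
                                                     name:NSManagedObjectContextDidSaveNotification
                                                   object:nil];

        // Merge background saves into the main-thread context, on the main thread:
        - (void)contextDidSave:(NSNotification *)notification {
            if ([notification object] == managedObjectContext) {
                return; // a save from the main context itself; nothing to merge
            }
            [managedObjectContext performSelectorOnMainThread:@selector(mergeChangesFromContextDidSaveNotification:)
                                                   withObject:notification
                                                waitUntilDone:YES];
        }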


  • .Net Windows Service Throws EventType clr20r3 system.data.sqlclient.sql error

    - by William Edmondson
    I have a .NET/C# 2.0 Windows service. The entry point is wrapped in a try/catch block, yet when I look at the server's application event log I see a number of "EventType clr20r3" errors that are causing the service to die unexpectedly. The catch block has a "catch (Exception ex)". Each SQL command is of type "CommandType.StoredProcedure" and is executed with a SqlDataReader. These sproc calls function correctly 99% of the time and have all been thoroughly unit tested, profiled, and QA'd. I additionally wrapped these calls in try/catch blocks just to be sure, and am still experiencing these unhandled exceptions. This happens only in our production environment and cannot be duplicated in our dev or staging environments (even under heavy load). Why would my error handling not catch this particular error? Is there any way to capture more detail as to the root cause of the problem? Here is an example of the event log:

        EventType clr20r3, P1 RDC.OrderProcessorService, P2 1.0.0.0, P3 4ae6a0d0,
        P4 system.data, P5 2.0.0.0, P6 4889deaf, P7 2490, P8 2c,
        P9 system.data.sqlclient.sql, P10 NIL.

    Additionally:

        The Order Processor service terminated unexpectedly. It has done this 1 time(s).
        The following corrective action will be taken in 60000 milliseconds:
        Restart the service.
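    A try/catch around the entry point only covers the thread that runs Main; in .NET 2.0, an unhandled exception on any other thread (thread-pool work items, timers, async SqlClient callbacks) terminates the process and surfaces as exactly this kind of clr20r3 event. The termination can't be prevented from this hook, but it does capture the stack trace before the service dies. A sketch (the service class name is assumed, based on the event log above):

        using System;
        using System.Diagnostics;
        using System.ServiceProcess;

        static class Program
        {
            static void Main()
            {
                // Fires for unhandled exceptions on *any* thread; from .NET 2.0 on
                // the process still terminates afterwards, but at least the root
                // cause lands in the event log with a full stack trace.
                AppDomain.CurrentDomain.UnhandledException +=
                    delegate(object sender, UnhandledExceptionEventArgs e)
                    {
                        EventLog.WriteEntry("OrderProcessorService",
                            "Unhandled exception: " + e.ExceptionObject,
                            EventLogEntryType.Error);
                    };

                ServiceBase.Run(new OrderProcessorService()); // hypothetical service class
            }
        }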


  • Handling multiple column data with Java

    - by Ender
    I am writing an application that reads in a large number of basic user details in the following format; once read in, it then allows the user to search for a user's details using their email:

        NAME             ROLE         EMAIL
        ---------------------------------------------------
        Joe Bloggs       Manager      [email protected]
        John Smith       Consultant   [email protected]
        Alan Wright      Tester       [email protected]
        ...

    The problem I am facing is that I need to store a large number of details of all the people that have worked at the company. The file containing these details will be written on a yearly basis, simply for reporting purposes, but the program will need to be able to access these details quickly. The way I aim to access these files is to have a program that asks the user for the unique email of the member of staff, and then returns the name and the role from that line of the file. I've played around with text files, but am struggling with how I would handle multiple columns of data when it comes to searching this large file. What is the best format to store such data in? A text file? XML? The size doesn't bother me, but I'd like to be able to search it as quickly as possible. The file will need to contain a lot of entries, probably over the 10K mark over time.
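    For 10K-plus records with lookups always by a unique email, the on-disk format matters much less than the in-memory index: load the file once into a HashMap keyed by email and every search becomes a constant-time get. A sketch, assuming a tab-separated flat file with the three columns above:

        import java.io.BufferedReader;
        import java.io.File;
        import java.io.FileReader;
        import java.io.IOException;
        import java.util.HashMap;
        import java.util.Map;

        public class StaffDirectory {
            static final class Staff {
                final String name, role, email;
                Staff(String name, String role, String email) {
                    this.name = name;
                    this.role = role;
                    this.email = email;
                }
            }

            private final Map<String, Staff> byEmail = new HashMap<String, Staff>();

            // Assumes one record per line, tab-separated: name<TAB>role<TAB>email
            public void load(File file) throws IOException {
                BufferedReader in = new BufferedReader(new FileReader(file));
                try {
                    String line;
                    while ((line = in.readLine()) != null) {
                        String[] f = line.split("\t");
                        if (f.length == 3) {
                            byEmail.put(f[2], new Staff(f[0], f[1], f[2]));
                        }
                    }
                } finally {
                    in.close();
                }
            }

            // Constant time on average, regardless of file size
            public Staff lookup(String email) {
                return byEmail.get(email);
            }
        }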


  • Compression algorithm for IEEE-754 data

    - by David Taylor
    Anyone have a recommendation on a good compression algorithm that works well with double precision floating point values? We have found that the binary representation of floating point values results in very poor compression rates with common compression programs (e.g. Zip, RAR, 7-Zip, etc.). The data we need to compress is a one-dimensional array of 8-byte values sorted in monotonically increasing order. The values represent temperatures in Kelvin with a span typically under 100 degrees. The number of values ranges from a few hundred to at most 64K.

    Clarifications:

      • All values in the array are distinct, though repetition does exist at the byte level due to the way floating point values are represented.
      • A lossless algorithm is desired since this is scientific data.
      • Conversion to a fixed point representation with sufficient precision (~5 decimals) might be acceptable provided there is a significant improvement in storage efficiency.

    Update: Found an interesting article on this subject. Not sure how applicable the approach is to my requirements. http://users.ices.utexas.edu/~burtscher/papers/dcc06.pdf
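    For what it's worth, a common trick for exactly this shape of data (sorted, slowly varying doubles) is to decorrelate before compressing: XOR each value's 64-bit pattern with its predecessor's, so the mostly-identical high-order bytes cancel and a generic compressor sees long runs of zeros. A rough, self-contained experiment (synthetic data, not a claim about any particular corpus):

        import struct, zlib

        def xor_delta(values):
            """XOR each double's 64-bit pattern with its predecessor's."""
            prev = 0
            out = bytearray()
            for v in values:
                bits = struct.unpack('<Q', struct.pack('<d', v))[0]
                out += struct.pack('<Q', bits ^ prev)
                prev = bits
            return bytes(out)

        temps = [273.15 + i * 0.0015 for i in range(65536)]  # synthetic sorted Kelvin values
        raw = struct.pack('<%dd' % len(temps), *temps)

        print(len(zlib.compress(raw, 9)))               # plain doubles
        print(len(zlib.compress(xor_delta(temps), 9)))  # decorrelated stream, usually much smaller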


  • IIS error hosting WCF Data Service on shared web host

    - by jkohlhepp
    My client has a website hosted on a shared web server. I don't have access to IIS. I am trying to deploy a WCF Data Service onto his site. I am getting this error:

        IIS specified authentication schemes 'IntegratedWindowsAuthentication, Anonymous',
        but the binding only supports specification of exactly one authentication scheme.
        Valid authentication schemes are Digest, Negotiate, NTLM, Basic, or Anonymous.
        Change the IIS settings so that only a single authentication scheme is used.

    I have searched SO and other sites quite a bit but can't seem to find someone with my exact situation. I cannot change the IIS settings because this is a third party's server and it is a shared web server. So my only option is to change things in code or in the service config. My service config looks like this:

        <system.serviceModel xdt:Transform="Insert">
          <serviceHostingEnvironment>
            <baseAddressPrefixFilters>
              <add prefix="http://www.somewebsite.com"/>
            </baseAddressPrefixFilters>
          </serviceHostingEnvironment>
          <bindings>
            <webHttpBinding>
              <binding name="{Binding Name}">
                <security mode="None" />
              </binding>
            </webHttpBinding>
          </bindings>
          <services>
            <service name="{Namespace to Service}">
              <endpoint address="" binding="webHttpBinding"
                        bindingConfiguration="{Binding Name}"
                        contract="System.Data.Services.IRequestHandler">
              </endpoint>
            </service>
          </services>
        </system.serviceModel>

    As you can see, I tried to set the security mode to "None" but that didn't seem to help. What should I change to resolve this error?
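    One possibility to investigate (this is an assumption about the host, not a verified fix): on .NET 4.0 and later, WCF's HTTP transport accepts clientCredentialType="InheritedFromHost", which tells the binding to take whatever authentication schemes IIS has enabled instead of demanding exactly one. A sketch of the binding change:

        <bindings>
          <webHttpBinding>
            <binding name="{Binding Name}">
              <!-- Assumption: the shared host runs .NET 4.0+. InheritedFromHost
                   defers scheme selection to IIS, sidestepping the
                   "exactly one authentication scheme" check. -->
              <security mode="TransportCredentialOnly">
                <transport clientCredentialType="InheritedFromHost" />
              </security>
            </binding>
          </webHttpBinding>
        </bindings>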


  • Flex / Flash builder : no returning data using database

    - by Tristan
    Hello, I'm following some Flex tutorials and everything's working as wanted except for one thing: when I use my function getServerByBrand($brand), no data is returned into my datagrid and I don't know why, because it uses the same schema as getAllServers(), which is working. I don't know whether it's caused by the function itself or by the configuration in Flash Builder:

        protected function RechercheGSP_clickHandler(event:MouseEvent):void
        {
            getServerByBrandResult.token = dbClass.getServerByBrand(SearchInput.text);
        }

    Here's what I've got in Data/Services:

        getServerByBrand(brand : string) : Object

    And finally the function:

        public function getServerByBrand($brand) {
            $stmt = mysqli_prepare($this->connection,
                "SELECT DISTINCT * FROM $this->tablename where GSP_nom=? ");
            $this->throwExceptionOnError();

            mysqli_stmt_execute($stmt);
            $this->throwExceptionOnError();

            $rows = array();

            mysqli_stmt_bind_result($stmt, $row->idServ, $row->GSP_nom, $row->IPserv,
                $row->port, $row->tickrate, $row->membre, $row->nomPays, $row->finContrat,
                $row->actif, $row->timestamp, $row->type, $row->jeux, $row->slot,
                $row->ipClient, $row->essai, $row->reussite, $row->echec, $row->valide,
                $row->email);

            while (mysqli_stmt_fetch($stmt)) {
                $row->timestamp = new DateTime($row->timestamp);
                $rows[] = $row;
                $row = new stdClass();
                mysqli_stmt_bind_result($stmt, $row->idServ, $row->GSP_nom, $row->IPserv,
                    $row->port, $row->tickrate, $row->membre, $row->nomPays, $row->finContrat,
                    $row->actif, $row->timestamp, $row->type, $row->jeux, $row->slot,
                    $row->ipClient, $row->essai, $row->reussite, $row->echec, $row->valide,
                    $row->email);
            }

            mysqli_stmt_free_result($stmt);
            mysqli_close($this->connection);

            return $rows;
        }

    I tested the settings with "configure return type" and it tells me: "the operation returned a primitive "object"". Test settings: Parameters (brand) / Input type (String) / Value (woop). To conclude, there is no returned object at all. Do you see the problem? Thanks
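    The likely problem is visible in the function itself: the prepared statement declares a ? placeholder for GSP_nom, but nothing is ever bound to it, so the query executes with no usable parameter and matches no rows (getAllServers() presumably works because it has no placeholder). A sketch of the missing step:

        public function getServerByBrand($brand) {
            $stmt = mysqli_prepare($this->connection,
                "SELECT DISTINCT * FROM $this->tablename WHERE GSP_nom = ?");
            $this->throwExceptionOnError();

            // The missing call: attach $brand to the ? placeholder ('s' = string).
            mysqli_stmt_bind_param($stmt, 's', $brand);
            $this->throwExceptionOnError();

            mysqli_stmt_execute($stmt);
            $this->throwExceptionOnError();

            // ... bind_result / fetch loop unchanged from the original ...
        }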


  • Single value data to multiple values of data in database relation

    - by Sofiane Merah
    I have such a hard time picturing this. I just don't have the brain to do it. I have a table called reports:

        ---------------------------------------------
        | report_id | set_of_bads | field1 | field2 |
        ---------------------------------------------
        | 123       | set1        | qwe    | qwe    |
        | 321       | 123112      | ewq    | ewq    |
        ---------------------------------------------

    I have another table called bads. This table contains a list of bad data:

        ------------------------------------------------
        | bad_id | set_it_belongs_to | field2 | field3 |
        ------------------------------------------------
        | 1      | set1              | qwe    | qwe    |
        | 2      | set1              | qee    | tte    |
        | 3      | set1              | q44w   | 3qwe   |
        | 4      | 234               | qoow   | 3qwe   |
        ------------------------------------------------

    Now, I have set the first field of each table as the primary key. My question is: how do I connect the field set_of_bads to set_it_belongs_to in the bads table? That way, if I want to get the entire set of data that is set1 by calling on the reports table, I can do it. Example: hey reports table, bring up the row that has report_id 123. Okay, thank you. Now get all the rows from bads that have the set_of_bads value from the row with report_id 123. Thanks.
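    What is being described is a plain foreign key from bads.set_it_belongs_to to reports.set_of_bads, plus a join to pull the set. A sketch in generic SQL (this assumes set_of_bads is unique in reports, which it must be for the relationship to make sense):

        -- Declare the relationship (requires a unique index on reports.set_of_bads):
        ALTER TABLE bads
            ADD CONSTRAINT fk_bads_reports
            FOREIGN KEY (set_it_belongs_to) REFERENCES reports (set_of_bads);

        -- "Bring up report 123, then get all its bad rows" as a single query:
        SELECT b.*
        FROM reports AS r
        JOIN bads AS b ON b.set_it_belongs_to = r.set_of_bads
        WHERE r.report_id = 123;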


  • How To: Group Column Headings In WPF.

    - by VoidDweller
        --------------------------------------------------------------------
        |[ Common Category    ]|[ Dynamic Category 0 ]|[ Dynamic Category N ]|
        --------------------------------------------------------------------
        |[Header 1]|[Header 2]|[ Type 0 ]|[ Type N ]|[ Type 0 ]|[ Type N ]|
        --------------------------------------------------------------------
        |[Data 2 Group]                                                    |
        --------------------------------------------------------------------
        | Data A   | Data 2  || Null    | Data 1  || Data 0  | Data 1  ||
        | Data B   | Data 2  || Data 0  | Null    || Data 0  | Data 1  ||
        --------------------------------------------------------------------
        |[Data 1 Group]                                                    |
        --------------------------------------------------------------------
        | Data C   | Data 1  || Null    | Data 1  || Data 0  | Data 1  ||
        | Data D   | Data 1  || Null    | Null    || Data 0  | Null    ||
        --------------------------------------------------------------------

    Dynamic Category columns:

      • Can be 0 or more.
      • Must contain 1 or more Type columns.
      • Will only be displayed if any row contains Type column data associated with them.

    Data rows:

      • Will be added asynchronously.
      • Will be grouped by a Common Category column.
      • Will add a Dynamic Category if it does not yet exist.
      • Will add a Type column if it does not yet exist within its appropriate Dynamic Category.

    Platform info: WPF, .NET 3.5 SP1, C#.

    I have a few partially functional prototypes, but each has its own major set of problems. Can any of you give me some guidance on this? A working prototype would be even better! Envision this nicely styled. :-)
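    One way to get the two-level header without a third-party grid is to give the header band and every data row its own Grid and align their columns with SharedSizeGroup, so category headers can simply span the Type columns beneath them. A minimal hand-written sketch (a real version would generate the columns from the dynamic categories with an ItemsControl):

        <!-- Grid.IsSharedSizeScope keeps the column widths in the header grid
             and every row grid in sync via the shared group names. -->
        <StackPanel Grid.IsSharedSizeScope="True">
          <Grid>
            <Grid.ColumnDefinitions>
              <ColumnDefinition SharedSizeGroup="Col0"/>
              <ColumnDefinition SharedSizeGroup="Col1"/>
              <ColumnDefinition SharedSizeGroup="Col2"/>
              <ColumnDefinition SharedSizeGroup="Col3"/>
            </Grid.ColumnDefinitions>
            <Grid.RowDefinitions>
              <RowDefinition/>
              <RowDefinition/>
            </Grid.RowDefinitions>
            <!-- Category band: each heading spans the type columns it owns -->
            <TextBlock Grid.ColumnSpan="2" Text="Common Category" TextAlignment="Center"/>
            <TextBlock Grid.Column="2" Grid.ColumnSpan="2" Text="Dynamic Category 0" TextAlignment="Center"/>
            <!-- Per-column headers -->
            <TextBlock Grid.Row="1" Grid.Column="0" Text="Header 1"/>
            <TextBlock Grid.Row="1" Grid.Column="1" Text="Header 2"/>
            <TextBlock Grid.Row="1" Grid.Column="2" Text="Type 0"/>
            <TextBlock Grid.Row="1" Grid.Column="3" Text="Type N"/>
          </Grid>
          <!-- Each data row repeats the same ColumnDefinitions/SharedSizeGroup
               names, typically stamped out by an ItemsControl's ItemTemplate. -->
        </StackPanel>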


  • Finding Common Phrases in SQL Server TEXT Column

    - by regex
    Short Desc: I'm curious to see if I can use SQL Analysis Services or some other SQL Server service to mine some data for me that will show commonalities between SQL TEXT fields in a dataset.

    Long Desc: I am looking at a subset of data that consists of about 10,000 rows of TEXT blobs which are used as a notes column in issue tracking (ticketing) software. I would like to use something out of the box (without having to build something) that might be able to parse through all of the rows and find commonly used byte sequences in the "Notes" column. In other words, I want to find commonly used phrases (two to three word phrases, so 9 - 20 character sections of the TEXT blob). This will help me better determine if associates' notes contain similar phrases (troubleshooting techniques) that we could standardize in our troubleshooting process flow.

    Closing Note: I'd really rather not build an application to do this, as my method would probably not be the most efficient way to do it. Hopefully all this makes sense. Please let me know in the comments if anything needs clarification.
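    Out of the box, the closest fit in the SQL Server stack is the SSIS Term Extraction transformation, which pulls frequent terms and short phrases from text columns. If a one-off query is acceptable instead, phrase counting can be approximated in T-SQL; this sketch assumes a modern SQL Server (2022+, for STRING_SPLIT's ordinal argument) and hypothetical table/column names Tickets(TicketId, Notes):

        -- Count three-word phrases across all notes by joining each word
        -- to the two words that follow it in the same ticket.
        WITH words AS (
            SELECT t.TicketId,
                   s.ordinal AS pos,
                   LOWER(s.value) AS word
            FROM Tickets AS t
            CROSS APPLY STRING_SPLIT(CAST(t.Notes AS nvarchar(max)), N' ', 1) AS s
        )
        SELECT w1.word + N' ' + w2.word + N' ' + w3.word AS phrase,
               COUNT(*) AS occurrences
        FROM words AS w1
        JOIN words AS w2 ON w2.TicketId = w1.TicketId AND w2.pos = w1.pos + 1
        JOIN words AS w3 ON w3.TicketId = w1.TicketId AND w3.pos = w2.pos + 1
        GROUP BY w1.word, w2.word, w3.word
        HAVING COUNT(*) >= 10
        ORDER BY occurrences DESC;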


  • Core Data + Core Animation/CALayer together??

    - by ivanTheTerrible
    I am making a Cocoa app with custom interfaces. So far I have implemented one version of the app using CALayer to do the rendering, which has been great given the hierarchical structure of CALayers and the hitTest: method for handling mouse events. In this early version, the model of the app consists of my custom classes. However, as the program grows I feel the urge to use Core Data for the model, not just for the ease of binding/undo management, but also to try out the new technology.

    My method so far: in Core Data, create a Block entity with attributes xPos, yPos, width, height... etc. Then create a BlockView : CALayer class for drawing, which uses calls such as self.position.x = [self valueForKey:@"xPos"] to fetch the values from the model. In this case, every BlockView object also has to keep a local copy of xPos, which is NOT good. Do any of you guys have better suggestions?

    Edit: This app is an information visualization tool, so the positions and dimensions of the blocks are important and should be persisted for later analysis.
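    One way to drop the duplicated state is key-value observing: the layer watches the managed object's attributes and derives its position on change, so the model values are only ever read, never copied. A sketch (the ivar and attribute names are assumptions):

        // BlockView.m -- sketch; 'block' is an assumed NSManagedObject ivar.
        - (void)setBlock:(NSManagedObject *)aBlock {
            block = aBlock; // retain/release management elided for brevity
            [block addObserver:self forKeyPath:@"xPos" options:0 context:NULL];
            [block addObserver:self forKeyPath:@"yPos" options:0 context:NULL];
            [self observeValueForKeyPath:nil ofObject:block change:nil context:NULL]; // initial sync
        }

        - (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                                change:(NSDictionary *)change context:(void *)context {
            // Read straight from the model; no local xPos/yPos copies to keep in sync.
            self.position = CGPointMake([[block valueForKey:@"xPos"] floatValue],
                                        [[block valueForKey:@"yPos"] floatValue]);
        }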


  • Appending data to NSFetchedResultsController during find or create loop

    - by Justin Williams
    I have a table view that is managed by an NSFetchedResultsController. I am having an issue with a find-or-create operation, however. When the user hits the bottom of my table view, I query my server for another batch of content. If an item doesn't exist in the local cache, we create it and store it. If it does exist, however, I want to append that data to the fetched results controller and display it. I can't quite figure that part out. Here's what I'm doing thus far:

      1. Pass the returned array of values from my server to an NSOperation to process.
      2. In the operation, create a new managed object context to work with.
      3. In the operation, iterate through the array and execute a fetch request to see if each object exists (based on its server id). If the object doesn't exist, create it and insert it into the operation's managed object context.
      4. After the iteration completes, save the managed object context, which triggers a merge notification on my main thread.

    At this point, any objects that weren't locally cached in my Core Data store before will appear, but the ones that previously existed do not come along for the ride. I feel like it's something simple I'm missing, and I could use a nudge in the right direction.
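    A plausible explanation for the missing rows: the merge only propagates inserted, updated, and deleted objects, so records that already existed locally and were untouched by the operation never produce a change the fetched results controller can react to. Re-running the fetch after the merge picks them up; a sketch (property names assumed):

        // Called on the main thread when the operation's context saves.
        - (void)contextDidSave:(NSNotification *)notification {
            [self.managedObjectContext mergeChangesFromContextDidSaveNotification:notification];

            // Untouched pre-existing objects generate no change notifications,
            // so force the controller to re-evaluate its fetch request.
            NSError *error = nil;
            if (![self.fetchedResultsController performFetch:&error]) {
                NSLog(@"Re-fetch failed: %@", error);
            }
            [self.tableView reloadData];
        }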


  • Need some advice on Core Data modeling strategy

    - by Andy
    I'm working on an iPhone app and need a little advice on modeling the Core Data schema. My idea is a utility that allows the user to speed-dial their contacts using user-created rules based on the time of day. In other words, I would tell the app that my wife is commuting from 6am to 7am, at work from 7am to 4pm, commuting from 4pm to 5pm, and home from 5pm to 6am, Monday through Friday. Then, when I tap her name in my app, it would select the number to dial based on the current day and time. I have the user interface nearly complete (thanks in no small part to help I've received here), but now I've got some questions regarding the persistent store. The user can select start and stop times in 5-minute increments. This means there are 2,016 possible "time slots" in a week (7 days * 24 hours * 12 5-minute intervals per hour). I see a few options for setting this up.

    Option #1: One array of time slots, with 2,016 entries. Each entry would be a dictionary containing a contact identifier and an associated phone number to dial. I think this means I'd need a "Contact" entity to store the contact information, and a "TimeSlot" entity for each of the 2,016 possible time slots.

    Option #2: Each Contact has its own array of time slots, each with 2,016 entries. Each array entry would simply be a string indicating which phone number to dial.

    Option #3: Each Contact has a dictionary of time slots. An entry would only be added to the dictionary for time slots with an active rule. If a search for, say, time slot 1,299 (Friday 12:15pm) didn't find a key @"1299" in the dictionary, then a default number would be dialed instead.

    I'm not sure any of these is the "right" way or the "best" way. I'm not even sure I need to use Core Data to manage it; maybe just saving arrays would be simpler. Any input you can offer would be appreciated.
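    For what it's worth, a fourth option in the spirit of Option #3 avoids materializing slots entirely: store one Rule object per interval (contact, weekday, start/end minute, phone number) and resolve a tap with a fetch. A sketch of the lookup, with entity and attribute names assumed:

        // Find the rule covering "now" for this contact; fall back to a
        // default number when the fetch comes back empty.
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        [request setEntity:[NSEntityDescription entityForName:@"Rule"
                                       inManagedObjectContext:context]];
        [request setPredicate:[NSPredicate predicateWithFormat:
            @"contact == %@ AND weekday == %d AND startMinute <= %d AND endMinute > %d",
            contact, weekday, minuteOfDay, minuteOfDay]];

        NSArray *matches = [context executeFetchRequest:request error:NULL];
        NSString *number = ([matches count] > 0)
            ? [[matches objectAtIndex:0] valueForKey:@"phoneNumber"]
            : defaultNumber;
        [request release];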


  • Data Mappers, Models and Images

    - by James
    Hi, I've seen and read plenty of blog posts and forum topics talking about and giving examples of Data Mapper / Model implementations in PHP, but I've not seen any that also deal with saving files/images. I'm currently working on a Zend Framework based project and I'm doing some image manipulation in the model (which is passed a file path), and then leaving it to the mapper to save that file to the appropriate location - is this common practise? But then, how do you deal with creating, say, 3 different size images from the one passed in?

    At the moment I have a "setImage($path_to_tmp_name)" which checks the image type, resizes and then saves back to the original filename. A call to "getImagePath()" then returns the current file path, which the data mapper can use and then change with a call to "setImagePath($path)" once it has saved the file to the appropriate location, say "/content/my_images". Does this sound practical to you?

    Also, how would you deal with getting the URL to that image? Do you see that as being something that the model should be providing? It seems to me that the model shouldn't worry about where the images are being stored or ultimately how they're accessed through a browser, so I'm inclined to put that in the ini file and just pass the URL prefix to the view through the controller. Does that sound reasonable?

    I'm using GD for image manipulation - not that that's of any relevance.

    UPDATE: I've been wondering if the image resizing should be done in the model at all. The model could require that it's provided a "main" image and a "thumb" image, both of certain dimensions. I've thought about creating a "getImageSpecs()" function in the model that would return something that defines the required sizes, then a separate image manipulation class could carry out the resizing (perhaps in the controller?) and just pass the final paths in to the model using something like "setImagePaths($images)". Any thoughts much appreciated :) James.
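    The getImageSpecs() idea from the update sketches out naturally: the model declares what it needs, and a separate resizer service satisfies it, keeping the GD calls out of both model and mapper. A rough shape (all class and method names here are hypothetical):

        // The model only declares requirements and records results:
        class ProductImage
        {
            public function getImageSpecs()
            {
                // name => array(width, height)
                return array(
                    'main'  => array(800, 600),
                    'thumb' => array(150, 150),
                );
            }

            public function setImagePaths(array $paths)
            {
                $this->imagePaths = $paths;
            }
        }

        // A controller (or the mapper) wires them together:
        // $paths = $resizer->createVersions($tmpName, $image->getImageSpecs());
        // $image->setImagePaths($paths);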


  • Finding Common Phrases in MS SQL TEXT Column

    - by regex
    Hello All,

    Short Desc: I'm curious to see if I can use SQL Analysis Services or some other MS SQL service to mine some data for me that will show commonalities between SQL TEXT fields in a dataset.

    Long Desc: I am looking at a subset of data that consists of about 10,000 rows of TEXT blobs which are used as a notes column in issue tracking (ticketing) software. I would like to use something out of the box (without having to build something) that might be able to parse through all of the rows and find commonly used byte sequences in the "Notes" column. In other words, I want to find commonly used phrases (two to three word phrases, so 9 - 20 character sections of the TEXT blob). This will help me better determine if associates' notes contain similar phrases (troubleshooting techniques) that we could standardize in our troubleshooting process flow.

    Closing Note: I'd really rather not build an application to do this, as my method would probably not be the most efficient way to do it. Hopefully all this makes sense. Please let me know in the comments if anything needs clarification. Thanks in advance for your help.


  • Let multiple highcharts charts appear automatically from mysql data

    - by martini1993
    I have the following problem. I want to make multiple Highcharts web charts appear automatically based on the data from the database. Let's say we have the following database table:

        ___________________________________________________________________
        |       |           |      |               |            |          |
        | Year  | Month     | ID   | Name User     | Wins       | Losses   |
        |_______|___________|______|_______________|____________|__________|
        | 2013  | 1         | 21   | Tony Stark    | 3          | 12       |
        | 2013  | 1         | 52   | Bruce Wayne   | 5          | 4        |
        | 2013  | 1         | 76   | Clark Kent    | 9          | 5        |
        |__________________________________________________________________|

    (This table is an example; there are a lot more rows in the real database.) And I have the following query:

        SELECT a.year AS year1, a.month AS month1, a.id AS id, a.name AS nameuser,
               a.wins AS wins, a.losses AS losses
        FROM Sales a
        WHERE a.month = 1 AND a.year = YEAR(NOW())

    With this, it is very easy to hardcode a chart with Highcharts. But what I want is one web chart per user. So instead of a single web chart with all the users in it, I want multiple charts next to each other, based on the data from the database. So instead of this: http://jsfiddle.net/CWSb6/ I want this (but then next to each other): http://jsfiddle.net/DReMD/ It has to be generated automatically with PHP and MySQL. So if there is a new user starting this month, and the new user is saved in the database, the page automatically displays the new user with the related web chart. I find this very hard to accomplish and I need some help getting in the right direction for the solution. Many thanks in advance! (Sorry for my bad English.)
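    The usual shape of the solution is a single query plus a loop that emits one container div and one chart per row; nothing about the number of users is hardcoded, so a new user in the table produces a new chart automatically. A sketch (classic Highcharts renderTo API; table and column names taken from the example above, $db connection assumed):

        <?php
        $result = mysqli_query($db,
            "SELECT id, name, wins, losses FROM Sales
             WHERE month = 1 AND year = YEAR(NOW())");

        while ($row = mysqli_fetch_assoc($result)) {
            $divId = 'chart-' . (int)$row['id'];   // element ids must not contain spaces
            $name  = htmlspecialchars($row['name'], ENT_QUOTES);

            echo "<div id='$divId' style='width:300px;float:left;'></div>\n";
            echo "<script>
                new Highcharts.Chart({
                    chart:  { renderTo: '$divId', type: 'column' },
                    title:  { text: '$name' },
                    xAxis:  { categories: ['This month'] },
                    series: [
                        { name: 'Wins',   data: [" . (int)$row['wins']   . "] },
                        { name: 'Losses', data: [" . (int)$row['losses'] . "] }
                    ]
                });
            </script>\n";
        }
        ?>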


  • Modifying existing object attributes in Core Data after the fact

    - by glorifiedHacker
    In a previous question, I was looking for an alternative to modifying how "no date" was being stored in the date attribute of my NSManagedObject subclass. Previously, I had assigned nil to that attribute when a user didn't assign a date. In order to address sorting issues when using NSFetchedResultsController, I have decided to assign [NSDate distantFuture] to the date attribute when a user doesn't assign a date. However, given that this app is already in the wild, I need to update the Core Data store so that any existing nil date values are changed to [NSDate distantFuture].

    What is the best way to make this change? The first thing that comes to mind is to fetch all of the objects in the store into an array, iterate through them, and change any nil values that are found. This could be limited to a one-time event by checking against a user defaults key that indicates whether this upgrade has been performed. Is there a way that I can do this with Core Data versioning instead? Or another method that doesn't involve me writing throw-away code?
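    Model versioning on its own doesn't cover this case: a lightweight migration can rename or retype attributes but cannot compute "nil becomes distantFuture" row by row (that would need a full custom migration policy, which is heavier than the problem deserves). The one-time fetch-and-fix pass guarded by a defaults key is a reasonable shape; a sketch, with entity and attribute names assumed:

        if (![[NSUserDefaults standardUserDefaults] boolForKey:@"DidMigrateNilDates"]) {
            NSFetchRequest *request = [[NSFetchRequest alloc] init];
            [request setEntity:[NSEntityDescription entityForName:@"Event"
                                           inManagedObjectContext:context]];
            // Only touch the rows that actually need fixing:
            [request setPredicate:[NSPredicate predicateWithFormat:@"date == nil"]];

            NSError *error = nil;
            NSArray *matches = [context executeFetchRequest:request error:&error];
            for (NSManagedObject *object in matches) {
                [object setValue:[NSDate distantFuture] forKey:@"date"];
            }
            [context save:&error];
            [request release];

            [[NSUserDefaults standardUserDefaults] setBool:YES forKey:@"DidMigrateNilDates"];
        }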


  • Finding Common Byte Sequences in MS SQL TEXT Column

    - by regex
    Hello All,

    Short Desc: I'm curious to see if I can use SQL Analysis Services or some other MS SQL service to mine some data for me that will show commonalities between SQL TEXT fields in a dataset.

    Long Desc: I am looking at a subset of data that consists of about 10,000 rows of TEXT blobs which are used as a notes column in issue tracking (ticketing) software. I would like to use something out of the box (without having to build something) that might be able to parse through all of the rows and find commonly used byte sequences in the "Notes" column. In other words, I want to find commonly used phrases (two to three word phrases, so 9 - 20 character sections of the TEXT blob). This will help me better determine if associates' notes contain similar phrases (troubleshooting techniques) that we could standardize in our troubleshooting process flow.

    Closing Note: I'd really rather not build an application to do this, as my method would probably not be the most efficient way to do it. Hopefully all this makes sense. Please let me know in the comments if anything needs clarification. Thanks in advance for your help.


  • Sparse (Pseudo) Infinite Grid Data Structure for Web Game

    - by Ming
    I'm considering trying to make a game that takes place on an essentially infinite grid. The grid is very sparse:

      • Certain small regions are of relatively high density.
      • There are relatively few isolated nonempty cells.
      • The amount of the grid in use is too large to implement naively, but probably smallish by "big data" standards (I'm not trying to map the Internet or anything like that).
      • This needs to be easy to persist.

    Here are the operations I may want to perform (reasonably efficiently) on this grid:

      • Ask for some small rectangular region of cells and all their contents (a player's current neighborhood)
      • Set individual cells or blit small regions (the player is making a move)
      • Ask for the rough shape or outline/silhouette of some larger rectangular regions (a world map or region preview)
      • Find some regions with approximately a given density (player spawning location)
      • Approximate shortest path through gaps of at most some small constant empty spaces per hop (it's OK to be a bad approximation often, but not OK to keep heading the wrong direction searching)
      • Approximate convex hull for a region

    Here's the catch: I want to do this in a web app. That is, I would prefer to use existing data storage (perhaps in the form of a relational database) and relatively little external dependency (preferably avoiding the need for a persistent process). Guys, what advice can you give me on actually implementing this? How would you do this if the web-app restrictions weren't in place? How would you modify that if they were? Thanks a lot, everyone!
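    For the storage itself, a sparse grid maps cleanly onto one row per nonempty cell with a composite primary key, which serves the neighborhood and blit operations as plain range queries; the coarser operations (silhouettes, density) are usually answered from a second, chunk-level summary table maintained on writes. A sketch (generic SQL; the contents type and chunk size are assumptions):

        -- One row per nonempty cell; the composite key doubles as the spatial index.
        CREATE TABLE cells (
            x        INT NOT NULL,
            y        INT NOT NULL,
            contents VARCHAR(255) NOT NULL,   -- whatever a cell holds
            PRIMARY KEY (x, y)
        );

        -- "Player's current neighborhood" is a bounded range scan:
        SELECT x, y, contents
        FROM cells
        WHERE x BETWEEN 100 AND 131
          AND y BETWEEN 40 AND 71;

        -- A chunk summary (e.g. 32x32 blocks) supports outlines and density
        -- queries without touching individual cells:
        CREATE TABLE chunks (
            cx         INT NOT NULL,
            cy         INT NOT NULL,
            population INT NOT NULL DEFAULT 0,  -- nonempty cells in this block
            PRIMARY KEY (cx, cy)
        );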


  • Filter large amounts of data in a table w/ jQuery

    - by Bry4n
    I work for a transit agency and I have large amounts of data (mostly times), and I need a way to filter the data using two textboxes (To and From). I found jQuery quicksearch, but it seems to only work with one textbox. If anyone has any ideas via jQuery or some other client-side library, that would be fantastic. Ideal example:

        To: [Textbox] From: [Textbox]

        <table>
          <tr>
            <td>69th street</td><td>5:00pm</td><td>5:06pm</td><td>5:10pm</td><td>5:20pm</td>
          </tr>
          <tr>
            <td>Millbourne</td><td>5:09pm</td><td>5:15pm</td><td>5:20pm</td><td>5:25pm</td>
          </tr>
          <tr>
            <td>Spring Garden</td><td>6:00pm</td><td>6:15pm</td><td>6:20pm</td><td>6:25pm</td>
          </tr>
        </table>

    So if I start typing one of the stations into the To: textbox, it either filters dynamically like quicksearch or I have to press a button (either way works), and the same goes for the From: textbox. Lastly, it shows me the To: station and all its times on the left and the From: station and all its times on the right.
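    A quicksearch-style filter generalizes to two boxes by writing the row filter by hand: on every keystroke, show a row if its station cell matches either box. A sketch (assumes the table above gets id="schedule" and the inputs get ids "to" and "from"):

        // Show rows whose first cell matches the To box or the From box;
        // with both boxes empty, show everything.
        function filterRows() {
            var to   = $('#to').val().toLowerCase(),
                from = $('#from').val().toLowerCase();

            $('#schedule tr').each(function () {
                var station = $(this).find('td:first').text().toLowerCase();
                var visible = (!to && !from) ||
                              (to   && station.indexOf(to)   !== -1) ||
                              (from && station.indexOf(from) !== -1);
                $(this).toggle(!!visible);
            });
        }

        $('#to, #from').keyup(filterRows);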


  • Prevent two users from editing the same data

    - by Industrial
    Hi everyone, I have seen a feature in different web applications, including Wordpress (not sure?), that warns a user if he/she opens an article/post/page/whatever from the database while someone else is editing the same data simultaneously. I would like to implement the same feature in my own application and I have given this a bit of thought. Is the following example a good practice for how to do this? It goes a little something like this:

    1) User A enters the editing page for the mysterious article X. The database table Events is queried to make sure that no one else is editing the same page at the moment, which no one is just then. A token is then randomly generated and inserted into the Events table.

    2) User B also wants to make updates to article X. Since User A is already editing the article, the Events table is queried and looks like this:

        ------------------------------------------------------------
        | timestamp   | owner   | origin    | token      |
        ------------------------------------------------------------
        | 1273226321  | User A  | article-x | uniqueid## |
        ------------------------------------------------------------

    3) The timestamp is checked. If it's valid and less than, say, 100 seconds old, a message appears and the user cannot make any changes to the requested article X: "Warning: User A is currently working with this article. In the meantime, editing cannot be done. Please do something else with your life."

    4) If User A decides to go on and save his changes, the token is posted along with all the other data to update the database, and triggers a query to delete the row with token uniqueid##. If he decides to do something else instead of committing his changes, article X will still become available for editing by User B after 100 seconds.

    Let me know what you think about this approach! Wish everyone a great weekend!
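    One refinement worth making: steps 1 and 2 check and insert in separate queries, so two users arriving at nearly the same moment can both pass the check. Making the expiry check and the acquisition a single statement closes most of that race; a MySQL-flavored sketch using the columns above (token value is illustrative):

        -- Drop stale locks, then acquire only if no live lock remains; the
        -- check and the insert happen inside one INSERT ... SELECT statement.
        DELETE FROM Events
        WHERE origin = 'article-x' AND timestamp < UNIX_TIMESTAMP() - 100;

        INSERT INTO Events (timestamp, owner, origin, token)
        SELECT UNIX_TIMESTAMP(), 'User B', 'article-x', 'uniqueid42'
        FROM DUAL
        WHERE NOT EXISTS (SELECT 1 FROM Events WHERE origin = 'article-x');

        -- 1 affected row: lock acquired; 0 rows: someone else is editing.
        -- A UNIQUE index on origin guards the remaining sliver of a race.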


  • Performing calculations by subsets of data in R

    - by Vivi
    I want to perform calculations for each company number in the column PERMNO of my data frame, the summary of which can be seen here:

        > summary(companydataRETS)
             PERMNO           RET
         Min.   :10000   Min.   :-0.971698
         1st Qu.:32716   1st Qu.:-0.011905
         Median :61735   Median : 0.000000
         Mean   :56788   Mean   : 0.000799
         3rd Qu.:80280   3rd Qu.: 0.010989
         Max.   :93436   Max.   :19.000000

    My solution so far was to create a variable with all possible company numbers:

        compns <- companydataRETS[!duplicated(companydataRETS[,"PERMNO"]),"PERMNO"]

    and then use a foreach loop with parallel computing, which calls my function get.rho(), which in turn performs the desired calculations:

        rhos <- foreach (i=1:length(compns), .combine=rbind) %dopar%
            get.rho(subset(companydataRETS[,"RET"], companydataRETS$PERMNO == compns[i]))

    I tested it for a subset of my data and it all works. The problem is that I have 72 million observations, and even after leaving the computer working overnight, it still didn't finish. I am new to R, so I imagine my code structure can be improved upon and there is a better (quicker, less computationally intensive) way to perform this same task (perhaps using apply or with, both of which I don't understand). Any suggestions?
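    The dominant cost in the loop is that subset() re-scans all 72 million rows once per company, so the work grows with (rows x companies). Any grouped-apply formulation scans once; data.table is one widely used option (this sketch assumes get.rho accepts a numeric vector of returns and yields a scalar):

        library(data.table)

        dt <- as.data.table(companydataRETS)
        setkey(dt, PERMNO)                     # one sort instead of ~n full scans

        # One pass over the data, applying get.rho to each company's returns:
        rhos <- dt[, .(rho = get.rho(RET)), by = PERMNO]

        # Base-R equivalent with no extra packages (slower, but still one pass):
        # rhos <- tapply(companydataRETS$RET, companydataRETS$PERMNO, get.rho)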


  • Parsing Chunk of Data into Hash of Array With Perl

    - by neversaint
    I have data that looks like this:

        #info
        #info2
        1:SRX004541
        Submitter: UT-MGS, UT-MGS
        Study: Glossina morsitans transcript sequencing project(SRP000741)
        Sample: Glossina morsitans(SRS002835)
        Instrument: Illumina Genome Analyzer
        Total: 1 run, 8.3M spots, 299.9M bases
        Run #1: SRR016086, 8330172 spots, 299886192 bases

        2:SRX004540
        Submitter: UT-MGS
        Study: Anopheles stephensi transcript sequencing project(SRP000747)
        Sample: Anopheles stephensi(SRS002864)
        Instrument: Solexa 1G Genome Analyzer
        Total: 1 run, 8.4M spots, 401M bases
        Run #1: SRR017875, 8354743 spots, 401027664 bases

        3:SRX002521
        Submitter: UT-MGS
        Study: Massive transcriptional start site mapping of human cells under hypoxic conditions.(SRP000403)
        Sample: Human DLD-1 tissue culture cell line(SRS001843)
        Instrument: Solexa 1G Genome Analyzer
        Total: 6 runs, 27.1M spots, 977M bases
        Run #1: SRR013356, 4801519 spots, 172854684 bases
        Run #2: SRR013357, 3603355 spots, 129720780 bases
        Run #3: SRR013358, 3459692 spots, 124548912 bases
        Run #4: SRR013360, 5219342 spots, 187896312 bases
        Run #5: SRR013361, 5140152 spots, 185045472 bases
        Run #6: SRR013370, 4916054 spots, 176977944 bases

    What I want to do is create a hash of arrays, with the first line of each chunk as the key and the SRR ids from the lines beginning with "Run" as the members of its array:

        $VAR = { 'SRX004541' => ['SRR016086'],
                 # etc.
               };

    But my construct below doesn't work, and there must be a better way to do it:

        use Data::Dumper;

        my %bighash;
        my $head = "";
        my @temp = ();

        while ( <> ) {
            chomp;
            next if (/^\#/);
            if ( /^\d{1,2}:(\w+)/ ) {
                print "$1\n";
                $head = $1;
            }
            elsif (/^Run \#\d+: (\w+),.*/) {
                print "\t$1\n";
                push @temp, $1;
            }
            elsif (/^$/) {
                push @{$bighash{$head}}, [@temp];
                @temp = ();
            }
        }

        print Dumper \%bighash;
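    Two things keep the construct above from producing the wanted structure: push @{$bighash{$head}}, [@temp] adds an extra level of nesting (each value becomes an array of arrays), and the final chunk is lost whenever the input does not end in a blank line. A corrected sketch:

        use strict;
        use warnings;
        use Data::Dumper;

        my %bighash;
        my $head = "";
        my @temp;

        while (<>) {
            chomp;
            next if /^#/;                       # skip the #info comment lines
            if (/^\d{1,2}:(\w+)/) {
                $head = $1;
            }
            elsif (/^Run \#\d+: (\w+),/) {
                push @temp, $1;
            }
            elsif (/^$/ && $head) {             # blank line ends a chunk
                $bighash{$head} = [@temp];      # assign directly: no extra nesting
                @temp = ();
                $head = "";
            }
        }
        $bighash{$head} = [@temp] if $head;     # flush the last chunk if there is
                                                # no trailing blank line

        print Dumper(\%bighash);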

