Search Results

Search found 12511 results on 501 pages for 'itunes store'.

Page 177/501 | < Previous Page | 173 174 175 176 177 178 179 180 181 182 183 184  | Next Page >

  • SQL 2008: Using separate tables for each datatype to return single row

    - by Thomas C
    Hi all. I thought I'd be flexible this time around and let the users decide what contact information they wish to store in their database. In theory it would appear as a single row containing, for instance: name, adress, zipcode, Category X, Listitems A. Example FieldType table defining the datatypes available to a user:
      FieldTypeID, FieldTypeName, TableName
      1, "Integer", "tblContactInt"
      2, "String50", "tblContactStr50"
      ...
    A user then defines his fields in the FieldDefinition table:
      FieldDefinitionID, FieldTypeID, FieldDefinitionName
      11, 2, "Name"
      12, 2, "Adress"
      13, 1, "Age"
    Finally we store the actual contact data in separate tables depending on its datatype. The master table only contains the ContactID:
      tblContact: ContactID
      21
      22
      tblContactStr50: ContactStr50ID, ContactID, FieldDefinitionID, ContactStr50Value
      31, 21, 11, "Person A"
      32, 21, 12, "Adress of person A"
      33, 22, 11, "Person B"
      tblContactInt: ContactIntID, ContactID, FieldDefinitionID, ContactIntValue
      41, 22, 13, 27
    Question: is it possible to return the content of these tables in two rows like this?
      ContactID, Name, Adress, Age
      21, "Person A", "Adress of person A", NULL
      22, "Person B", NULL, 27
    I have looked into using COALESCE and temp tables, wondering if this is at all possible. Even if it is, maybe I'm only adding complexity and sacrificing performance for the benefit in data storage and user-defined fields. What do you think? Best Regards /Thomas C

    Read the article

  • OutOfMemoryException

    - by Andrew
    I have an application that is pretty memory hungry. It holds a large amount of data in some big arrays. I have recently been noticing the occasional OutOfMemoryException. These OutOfMemoryExceptions are occurring long before my application (ASP.NET) has used up the 800 MB available to it. I have tracked the issue down to the area of code where the array is resized. The array contains a structure that is 74 bytes in size (I know that you shouldn't create structs that are bigger than 16 bytes, but this application is a port from a VB6 application). I have tried changing the struct to a class and this appears to have fixed the problem for now. I think the reason that changing to a class solves the problem is that when using a struct and the array is resized, a contiguous segment of memory large enough to store the new array (e.g. (currentArraySize + increaseBySize) * 74 bytes) cannot be found. This leads to the OutOfMemoryException. This isn't the case with a class, as each element of the array only needs 8 bytes to store a pointer to the new object. Is my thinking correct here?
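
    To make the arithmetic in that last paragraph concrete, here is a minimal sketch (the 74-byte struct and the array sizes are hypothetical stand-ins, not taken from the original application): resizing an array of structs must find one contiguous block for the whole payload, whereas an array of class instances only needs a contiguous block of references.

      // Hypothetical 74-byte value type standing in for the original struct.
      [System.Runtime.InteropServices.StructLayout(
          System.Runtime.InteropServices.LayoutKind.Sequential, Size = 74)]
      struct Record { }

      class RecordClass { /* same fields, but each instance lives separately on the heap */ }

      static void Grow()
      {
          int current = 1000000;
          int increase = 500000;

          // Struct array: needs one contiguous allocation of
          // (current + increase) * 74 bytes (roughly 106 MB here).
          var byValue = new Record[current];
          System.Array.Resize(ref byValue, current + increase);

          // Class array: needs one contiguous allocation of references only
          // ((current + increase) * IntPtr.Size); the instances can go wherever the heap has room.
          var byReference = new RecordClass[current];
          System.Array.Resize(ref byReference, current + increase);
      }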

    Read the article

  • Simple CalendarStore query puts application into infinite loop!?

    - by Frank R.
    Hi, I've been looking at adding iCal support to my new application and everything seemed just fine and worked on my Mac OS X 10.6 Snow Leopard development machine without a hitch. Now it looks like, depending on what is in your calendar, the very simple query below:
      - (NSArray*) fetchCalendarEventsForNext50Minutes {
          NSLog(@"fetchCalendarEventsForNext50Minutes");
          NSTimeInterval start = [NSDate timeIntervalSinceReferenceDate];
          NSDate* startDate = [[NSDate alloc] init];
          NSDate* endDate = [startDate addTimeInterval: 50.0 * 60.0];
          NSPredicate *eventsForTheNext50Minutes = [CalCalendarStore eventPredicateWithStartDate:startDate endDate:endDate calendars:[[CalCalendarStore defaultCalendarStore] calendars]];
          // Fetch all events for the next 50 minutes
          NSArray *events = [[CalCalendarStore defaultCalendarStore] eventsWithPredicate: eventsForTheNext50Minutes];
          NSLog( @"fetch took: %f seconds", [NSDate timeIntervalSinceReferenceDate] - start );
          return events;
      }
    produces a beachball even with quite limited events in the calendar store. Am I missing something crucial here? The code snippet is pretty much exactly from the documentation:
      // Create a predicate to fetch all events for this year
      NSInteger year = [[NSCalendarDate date] yearOfCommonEra];
      NSDate *startDate = [[NSCalendarDate dateWithYear:year month:1 day:1 hour:0 minute:0 second:0 timeZone:nil] retain];
      NSDate *endDate = [[NSCalendarDate dateWithYear:year month:12 day:31 hour:23 minute:59 second:59 timeZone:nil] retain];
      NSPredicate *eventsForThisYear = [CalCalendarStore eventPredicateWithStartDate:startDate endDate:endDate calendars:[[CalCalendarStore defaultCalendarStore] calendars]];
      // Fetch all events for this year
      NSArray *events = [[CalCalendarStore defaultCalendarStore] eventsWithPredicate:eventsForThisYear];
    It looks like it has something to do with the recurrence rules, but as far as I can see there are no other ways of fetching events from the calendar store anyway. Has anybody else come across this? Best regards, Frank

    Read the article

  • Need some advice on Core Data modeling strategy

    - by Andy
    I'm working on an iPhone app and need a little advice on modeling the Core Data schema. My idea is a utility that allows the user to speed-dial their contacts using user-created rules based on the time of day. In other words, I would tell the app that my wife is commuting from 6am to 7am, at work from 7am to 4pm, commuting from 4pm to 5pm, and home from 5pm to 6am, Monday through Friday. Then, when I tap her name in my app, it would select the number to dial based on the current day and time. I have the user interface nearly complete (thanks in no small part to help I've received here), but now I've got some questions regarding the persistent store. The user can select start and stop times in 5-minute increments. This means there are 2,016 possible "time slots" in a week (7 days * 24 hours * 12 5-minute intervals per hour). I see a few options for setting this up.
    Option #1: One array of time slots, with 2,016 entries. Each entry would be a dictionary containing a contact identifier and an associated phone number to dial. I think this means I'd need a "Contact" entity to store the contact information, and a "TimeSlot" entity for each of the 2,016 possible time slots.
    Option #2: Each Contact has its own array of time slots, each with 2,016 entries. Each array entry would simply be a string indicating which phone number to dial.
    Option #3: Each Contact has a dictionary of time slots. An entry would only be added to the dictionary for time slots with an active rule. If a search for, say, time slot 1,299 (Friday 12:15pm) didn't find a key @"1299" in the dictionary, then a default number would be dialed instead.
    I'm not sure any of these is the "right" way or the "best" way. I'm not even sure I need to use Core Data to manage it; maybe just saving arrays would be simpler. Any input you can offer would be appreciated.

    Read the article

  • QueryReadStore loads JSON into DataGrid, but JsonRestStore does not (from the same source)

    - by labratmatt
    I'm building a Dojo DataGrid from JSON data provided by my REST interface. The DataGrid loads the data fine using a QueryReadStore, but doesn't seem to work with the same data piped into a JsonRestStore. I'm using the following Dojo libs with Dojo 1.4.1:
      dojo.require("dojox.data.JsonRestStore");
      dojo.require("dojox.grid.DataGrid");
      dojo.require("dojox.data.QueryReadStore");
      dojo.require("dojo.parser");
    I declare my stores in the following manner:
      var storeJRS = new dojox.data.JsonRestStore({target:"api/collaborations.php/1", idAttribute: 'items[].id'});
      var storeQRS = new dojox.data.QueryReadStore({url:"api/collaborations.php/1", requestMethod:"get"});
    I create my grid layout like this:
      var gridLayout = [
        new dojox.grid.cells.RowIndex({ name: "Row #", width: 5, styles: "text-align: left;" }),
        { name: "Name", field: "name", styles: "text-align:right;", width:20 },
        { name: "Description", field: "description", width:30 }
      ];
    I create my DataGrid as follows. The above works, but if I use QueryReadStore as my store, the grid is created with the headers (Name, Description), but it isn't populated with any rows:
      <div dojoType="dojox.grid.DataGrid" jsid="grid3" store="storeQRS" structure="gridLayout" style="height:500px; width:1000px;"></div>
    Using FireBug, I can see that QueryReadStore is getting my JSON data from my REST interface. It looks like the following:
      {"numRows":6,"items":[{"name":"My Super Cool Collab","description":"This is for all the super cool people in the super cool group","id":1},{"name":"My Other Super Cool","description":"This is for all the other super cool people","id":3},{"name":"This is another coll","description":"This is just some other collab","id":4},{"name":"some new collab","description":"this is a new collab","id":5},{"name":"yet another new coll","description":"uh huh","id":6},{"name":"asdf","description":"asdf","id":7}]}
    Any ideas? Thanks.

    Read the article

  • How to architect Rails site that can be edited while running?

    - by Chris Kimpton
    Hi, I am writing a Rails app that "scrapes/navigates" some other websites and webservices for content. I am using Mechanize and Savon to do the heavy lifting. But given the dynamic nature of the web, I'd like to make my calls to these editable by the admin users of the site - rather than requiring me to release a new version of the site. The actual scraping thread happens async to the website, using the daemons gem. My requirements are:
      - Thinking that the scraping/webservice-calling code is quite simple, the easiest route is to make the whole class editable by the admins.
      - Keep a history of the scraping code - so that we can fairly easily revert if we introduce a problem.
      - Initially use the code from the file system, but as soon as that's been edited and stored somewhere, use that code instead.
    I am thinking my options are:
      - Store the code in the db (with a history table for the old versions).
      - Store the code in a private git repo somewhere and access that for the history/latest versions.
    I am thinking the git route might be easiest, given its raison d'etre is to track file history... But perhaps there is a gem/plugin that does all this for me, out of the box? Thanks in advance for any tips/advice. ~chris

    Read the article

  • How to create Dictionary object in AppDelegate to access objectkey in another ViewController

    - by TechFusion
    Hello, I have created a Window-Based application with a tab bar controller as the root controller. My objective is to store Text Field data values from one tab bar VC so that they are accessible and editable by other VCs, and also retrievable when the application starts. I am looking to use an NSDictionary in the AppDelegate so that I can access the stored data values with keys.
      //TestAppDelegate.h
      @interface TestAppDelegate : NSObject <UIApplicationDelegate> {
          .....
          ....
          NSDictionary *Data;
      }
      @property (nonatomic, retain) NSDictionary *Data;

      //TestAppDelegate.m
      #import "TestAppDelegate.h"

      NSString *kNamekey = @"Namekey";
      NSString *kUserIDkey = @"UserIDkey";
      NSString *kPasswordkey = @"Passwordkey";

      @implementation TestAppDelegate
      @synthesize Data;

      -(void)applicationDidFinishLaunching:(UIApplication *)application {
          NSString *NameDefault;
          NSString *UserIDdefault;
          NSString *Passworddefault;
          NSString *testvalue = [[NSUserDefaults standardUserDefaults] stringForKey:kNamekey];
          if (testvalue == nil) {
              NSDictionary *appDefaults = [NSDictionary dictionaryWithObjectsAndKeys:
                  NameDefault, kNamekey,
                  UserIDdefault, kUserIDkey,
                  Passworddefault, kPasswordkey,
                  nil];
              self.Data = appDefaults;
              [[NSUserDefaults standardUserDefaults] registerDefaults:appDefaults];
              [[NSUserDefaults standardUserDefaults] synchronize];
          } else {
          }
      }
    Here ViewController is the view controller of a Navigation Controller which is attached to one tab bar. I have attached a xib file to the ViewController.
      //ViewController.h
      @interface ViewController : UIViewController {
          IBOutlet UITextField *Name;
          IBOutlet UITextField *UserId;
          IBOutlet UITextField *Password;
      }
      @property (retain, nonatomic) IBOutlet UITextField *Name;
      @property (retain, nonatomic) IBOutlet UITextField *UserId;
      @property (retain, nonatomic) IBOutlet UITextField *Password;
      -(IBAction)Save:(id)sender;
      @end
    Here in ViewController.m I want to store the Text Field input data with the keys which are defined in the AppDelegate. I have linked the Save bar button with the action. What I am not getting is how to access the NSDictionary with objects and keys in the ViewController.
      //ViewController.m
      -(IBAction)Save:(id)sender {
          TestAppDelegate *appDelegate = (TestAppDelegate*)[[UIApplication sharedApplication] delegate];
          // Here, how do I access kNamekey to set the object (Name.text) value with the key?
      }
    Please guide me about this. Thanks,

    Read the article

  • Databinding combobox selected item to settings

    - by Tuukka
    I store user-specified settings using application settings properties and databinding. It has been working fine, until I wanted to let the user select a font in a combobox. The databinding between the user settings and the combobox is not working. I want to store the font family name.
      App.xaml:
      <Application.Resources>
        <ResourceDictionary>
          <properties:Settings x:Key="Settings" />
        </ResourceDictionary>
      </Application.Resources>
      Window.xaml:
      <ComboBox Name="Families"
                ItemsSource="{x:Static Fonts.SystemFontFamilies}"
                <!-- This line -->
                SelectedItem="{Binding Source={StaticResource Settings}, Path=Default.Font, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}"
                Margin="57,122,199,118">
        <ComboBox.ItemTemplate>
          <DataTemplate>
            <TextBlock Text="{Binding}" FontFamily="{Binding}"/>
          </DataTemplate>
        </ComboBox.ItemTemplate>
      </ComboBox>
    Settings entry: Name = font, Type = String, Scope = User, Value = Arial

    Read the article

  • storing/retrieving data for graph with long continuous stretches

    - by james
    I have a large 2-dimensional data set which I would like to graph. The graph is displayed in a browser and the data is retrieved via ajax. Long stretches of this graph will be continuous - e.g., for x=0 through x=1000, y=9, then for x=1001 through x=1100, y=80, etc. The approach I'm considering is to send (from the server) and store (in the browser) only the points where the data changes. So for the example above, I would say data[0] = 9, then data[1001] = 80. Then, given x=999 for example, retrieving data[999] would actually look up data[0]. The problem that arises is finding a dictionary-like data structure which behaves like this. The approach I'm considering is to store the data in a traditional dictionary object, then also maintain a sorted array of keys for that object. When given x=999, it would look at the mid-point of this array, determine whether the nearest lower key is left or right of that midpoint, then repeat with the correct subsection, etc. Does anyone have thoughts on this problem/approach?
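
    The binary search over change points described above is straightforward to sketch. The following is a language-agnostic illustration written in C# (the same idea maps onto a plain JavaScript object plus a sorted key array in the browser); the class and method names are made up for the example.

      using System.Collections.Generic;

      // Sparse series: keep a value only at the x positions where it changes.
      class SparseSeries
      {
          private readonly SortedList<long, double> changes = new SortedList<long, double>();

          public void Set(long x, double y) { changes[x] = y; }

          // Value at any x = value of the nearest change point at or below x
          // (found with a binary search over the sorted keys).
          public double ValueAt(long x)
          {
              IList<long> keys = changes.Keys;
              int lo = 0, hi = keys.Count - 1, best = 0;
              while (lo <= hi)
              {
                  int mid = lo + (hi - lo) / 2;
                  if (keys[mid] <= x) { best = mid; lo = mid + 1; }
                  else { hi = mid - 1; }
              }
              return changes.Values[best];
          }
      }

      // Usage matching the example in the question:
      //   var s = new SparseSeries(); s.Set(0, 9); s.Set(1001, 80);
      //   s.ValueAt(999)  -> 9
      //   s.ValueAt(1050) -> 80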

    Read the article

  • Autologin for web application

    - by Maulin
    We want an AutoLogin feature to allow users to log in directly to our web application using a link. What is the best way to achieve this? We have the following approaches in mind.
    1) Store the user credentials (username/password) in a cookie and send the cookie for authentication, e.g. http://www.mysite.com/AutoLogin (here the username/password are passed in the cookie). Or pass the user credentials in the link URL: http://www.mysite.com/AutoLogin?userid=<userid>&password=<password>
    2) Generate a random token and store the token and the user IP in a server-side database. When the user logs in using the link, validate the token and the user IP on the server, e.g. http://www.mysite.com/AutoLogin?token=<token>
    The problem with the 1st approach is that if a hacker copies the link/cookie from the user's machine to another machine, he can log in. The problem with the 2nd approach is that the user IP will be the same for all users of the same organization behind a proxy. Which one is better from a security perspective? If there is a better solution other than those mentioned above, please let us know.
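
    For reference, here is a minimal sketch of the token-based variant (approach 2) without the IP check. Everything here - the class name, the 15-minute expiry, the in-memory dictionary standing in for the server-side table - is an assumption for illustration, not part of the original question. Only a hash of the token is kept server-side, and each token is single-use.

      using System;
      using System.Collections.Generic;
      using System.Security.Cryptography;
      using System.Text;

      static class AutoLoginTokens
      {
          // Stands in for the server-side database table: token hash -> (user id, expiry).
          private static readonly Dictionary<string, (string UserId, DateTime Expires)> store =
              new Dictionary<string, (string, DateTime)>();

          public static string Issue(string userId)
          {
              byte[] raw = new byte[32];
              using (var rng = RandomNumberGenerator.Create())
                  rng.GetBytes(raw);
              string token = Convert.ToBase64String(raw);
              store[Hash(token)] = (userId, DateTime.UtcNow.AddMinutes(15));
              return token;   // goes into the link, e.g. https://www.mysite.com/AutoLogin?token=...
          }

          // Returns the user id if the token is known, unexpired and not yet used; otherwise null.
          public static string Validate(string token)
          {
              string key = Hash(token);
              if (store.TryGetValue(key, out var entry) && entry.Expires > DateTime.UtcNow)
              {
                  store.Remove(key);   // single use
                  return entry.UserId;
              }
              return null;
          }

          private static string Hash(string token)
          {
              using (var sha = SHA256.Create())
                  return Convert.ToBase64String(sha.ComputeHash(Encoding.UTF8.GetBytes(token)));
          }
      }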

    Read the article

  • Ruby - encrypted_strings

    - by Tom Andersen
    A bit of a Ruby newbie here - should be an easy question: I want to use the encrypted_strings gem to create a password-encrypted string (from http://rdoc.info/projects/pluginaweek/encrypted_strings). Question is: everything works fine, but how come I don't need the password to decrypt the string? Say I want to store the string somewhere for a while, like the session. Is the password also stored with it? (which would seem very strange?). And no, I'm not planning on using 'secret-key' or any similar hack as a password. I am planning on dynamically generating a class variable @@password using a uuid, which I don't store other than in memory, and can change from one running of the program to the next. Symmetric:
      >> password = 'shhhh'
      => "shhhh"
      >> crypted_password = password.encrypt(:symmetric, :password => 'secret_key')
      => "qSg8vOo6QfU=\n"
      >> crypted_password.class
      => String
      >> crypted_password == 'shhhh'
      => true
      >> password = crypted_password.decrypt
      => "shhhh"

    Read the article

  • PHP OOP: Providing Domain Entities with "Identity"

    - by sunwukung
    Bit of an abstract problem here. I'm experimenting with the Domain Model pattern, and barring my other tussles with dependencies - I need some advice on generating Identity for use in an Identity Map. In most examples for the Data Mapper pattern I've seen (including the one outlined in this book: http://apress.com/book/view/9781590599099) - the user appears to manually set the identity for a given Domain Object using a setter:
      $UserMapper = new UserMapper;
      //returns a fully formed user object from record sets
      $User = $UserMapper->find(1);
      //returns an empty object with appropriate properties for completion
      $UserBlank = $UserMapper->get();
      $UserBlank->setId();
      $UserBlank->setOtherProperties();
    Now, I don't know if I'm reading the examples wrong - but in the first $User object, the $id property is retrieved from the data store (I'm assuming $id represents a row id). In the latter case, however, how can you set the $id for an object if it has not yet acquired one from the data store? The problem is generating a valid "identity" for the object so that it can be maintained via an Identity Map - so generating an arbitrary integer doesn't solve it. My current thinking is to nominate different fields for identity (i.e. email) and demanding their presence in generating blank Domain Objects. Alternatively, demanding all objects be fully formed, and using all properties as their identity...hardly efficient. (Or alternatively, dump the Domain Model concept and return to DBAL/DAO/Transaction Scripts...which is seeming increasingly elegant compared to the ORM implementations I've seen...)

    Read the article

  • Fastest PNG decoder for .NET

    - by sboisse
    Our web server needs to process many compositions of large images together before sending the results to web clients. This process is performance critical because the server can receive several thousands of requests per hour. Right now our solution loads PNG files (around 1MB each) from the HD and sends them to the video card so the composition is done on the GPU. We first tried loading our images using the PNG decoder exposed by the XNA API. We saw the performance was not too good. To understand if the problem was loading from the HD or the decoding of the PNG, we modified that by loading the file into a memory stream, and then sending that memory stream to the .NET PNG decoder. The difference in performance between using XNA and using the System.Windows.Media.Imaging.PngBitmapDecoder class is not significant. We roughly get the same levels of performance. Our benchmarks show the following performance results:
      Load images from disk: 37.76ms (1%)
      Decode PNGs: 2816.97ms (77%)
      Load images on Video Hardware: 196.67ms (5%)
      Composition: 87.80ms (2%)
      Get composition result from Video Hardware: 166.21ms (5%)
      Encode to PNG: 318.13ms (9%)
      Store to disk: 3.96ms (0%)
      Clean up: 53.00ms (1%)
      Total: 3680.50ms (100%)
    From these results we see that the slowest part is decoding the PNGs. So we are wondering if there is a PNG decoder we could use that would allow us to reduce the PNG decoding time. We also considered keeping the images uncompressed on the hard disk, but then each image would be 10MB in size instead of 1MB, and since there are several tens of thousands of these images stored on the hard disk, it is not possible to store them all without compression.
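
    Since decoding dominates the benchmark above, one thing worth measuring - a sketch, not part of the original post; the file paths and degree of parallelism are placeholders - is simply decoding several PNGs concurrently with the same System.Windows.Media.Imaging.PngBitmapDecoder, so the 77% cost is spread across cores while a faster single-threaded decoder is evaluated separately.

      using System.IO;
      using System.Threading.Tasks;
      using System.Windows.Media.Imaging;

      static BitmapFrame[] DecodeAll(string[] paths)
      {
          var frames = new BitmapFrame[paths.Length];
          Parallel.For(0, paths.Length, i =>
          {
              using (var stream = new MemoryStream(File.ReadAllBytes(paths[i])))
              {
                  // OnLoad forces the decode to happen here, so the stream can be disposed.
                  var decoder = new PngBitmapDecoder(
                      stream, BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.OnLoad);
                  frames[i] = decoder.Frames[0];
                  frames[i].Freeze();   // freeze so the frame can be used from other threads
              }
          });
          return frames;
      }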

    Read the article

  • Can't connect to HTTPS using X509 client certificate

    - by wows
    Hi - I'm new to cryptography and I'm a bit stuck: I'm trying to connect (from my development environment) to a web service using HTTPS. The web service requires a client certificate - which I think I've installed correctly. They have supplied me with a .PFX file. In Windows 7, I double-clicked the file to install it into my Current User - Personal certificate store. I then exported an X509 Base-64 encoded .cer file from the certificate entry in the store. It didn't have a private key associated with it. Then, in my app, I'm attempting to connect to the service like this:
      var certificate = X509Certificate.CreateFromCertFile("xyz.cer");
      var serviceUrl = "https://xyz";
      var request = (HttpWebRequest) WebRequest.Create(serviceUrl);
      request.ClientCertificates.Add(certificate);
      request.Method = WebRequestMethods.Http.Post;
      request.ContentType = "application/x-www-form-urlencoded";
    I get a 502 Connection failed when I connect. Is there anything you can see wrong with this method? Our production environment seems to work with a similar configuration, but it's running Windows Server 2003. Thanks!
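
    One detail worth noting when reading this: the exported .cer has no private key, and client-certificate authentication generally needs one. A minimal sketch of loading the supplied .PFX directly instead (the file name and password below are placeholders for whatever was actually supplied) would look like this:

      using System.Net;
      using System.Security.Cryptography.X509Certificates;

      // Load the .PFX, which carries the private key, rather than the exported .cer, which does not.
      // "xyz.pfx" and "pfx-password" are placeholders.
      var certificate = new X509Certificate2("xyz.pfx", "pfx-password");
      var request = (HttpWebRequest)WebRequest.Create("https://xyz");
      request.ClientCertificates.Add(certificate);
      request.Method = WebRequestMethods.Http.Post;
      request.ContentType = "application/x-www-form-urlencoded";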

    Read the article

  • Converting non-delimited text into name/value pairs in Delphi

    - by robsoft
    I've got a text file that arrives at my application as many lines of the following form:
      <row amount="192.00" store="10" transaction_date="2009-10-22T12:08:49.640" comp_name="blah " comp_ref="C65551253E7A4589A54D7CCD468D8AFA" name="Accrington "/>
    and I'd like to turn this 'row' into a series of name/value pairs in a given TStringList (there could be dozens of these <row>s in the file, so eventually I will want to iterate through the file, breaking each row into name/value pairs in turn). The problem I've got is that the data isn't obviously delimited (technically, I suppose it's space delimited). Now if it wasn't for the fact that some of the values contain leading or trailing spaces, I could probably make a few reasonable assumptions and code something to break a row up based on spaces. But as the values themselves may or may not contain spaces, I don't see an obvious way to do this. Delphi's TStringList.CommaText doesn't help, and I've tried playing around with Delimiter, but I get caught out by the spaces inside the values each time. Does anyone have a clever Delphi technique for turning the sample above into something resembling this?
      amount="192.00"
      store="10"
      transaction_date="2009-10-22T12:08:49.640"
      comp_name="blah "
      comp_ref="C65551253E7A4589A54D7CCD468D8AFA"
      name="Accrington "
    Unfortunately, as is usually the case with this kind of thing, I don't have any control over the format of the data to begin with - I can't go back and 'make' it comma delimited at source, for instance. Although I guess I could probably write some code to turn it into comma delimited, I would rather find a nice way to work with what I have. This would be in Delphi 2007, if it makes any difference.

    Read the article

  • How can I randomly iterate through a large Range?

    - by void
    I would like to randomly iterate through a range. Each value will be visited only once and all values will eventually be visited. For example:
      class Array
        def shuffle
          ret = dup
          j = length
          i = 0
          while j > 1
            r = i + rand(j)
            ret[i], ret[r] = ret[r], ret[i]
            i += 1
            j -= 1
          end
          ret
        end
      end
      (0..9).to_a.shuffle.each{|x| f(x)}
    where f(x) is some function that operates on each value. A Fisher-Yates shuffle is used to efficiently provide random ordering. My problem is that shuffle needs to operate on an array, which is not cool because I am working with astronomically large numbers. Ruby will quickly consume a large amount of RAM trying to create a monstrous array. Imagine replacing (0..9) with (0..99**99). This is also why the following code will not work:
      tried = {} # store previous attempts
      bigint = 99**99
      bigint.times {
        x = rand(bigint)
        redo if tried[x]
        tried[x] = true
        f(x) # some function
      }
    This code is very naive and quickly runs out of memory as tried obtains more entries. What sort of algorithm can accomplish what I am trying to do?
    [Edit1]: Why do I want to do this? I'm trying to exhaust the search space of a hash algorithm for an N-length input string looking for partial collisions. Each number I generate is equivalent to a unique input string, entropy and all. Basically, I'm "counting" using a custom alphabet.
    [Edit2]: This means that f(x) in the above examples is a method that generates a hash and compares it to a constant, target hash for partial collisions. I do not need to store the value of x after I call f(x), so memory should remain constant over time.
    [Edit3/4/5/6]: Further clarification/fixes.
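
    One memory-constant idea (not from the question itself, and with much weaker randomness than a true shuffle) is to walk the range with a stride that is coprime to its size: every value in [0, n) is then visited exactly once. A sketch, written in C# with BigInteger so it copes with numbers the size of 99**99:

      using System;
      using System.Numerics;

      // Visit every value in [0, n) exactly once using O(1) memory: x_i = (start + i * step) mod n,
      // where gcd(step, n) = 1 guarantees a full cycle. The ordering is far from uniformly random,
      // but no value repeats and none is skipped.
      static class FullCycleWalk
      {
          public static void Visit(BigInteger n, Action<BigInteger> f)
          {
              var rng = new Random();
              BigInteger start = RandomBelow(n, rng);
              BigInteger step;
              do { step = RandomBelow(n, rng); } while (BigInteger.GreatestCommonDivisor(step, n) != 1);

              BigInteger x = start;
              for (BigInteger i = 0; i < n; i++)
              {
                  f(x);
                  x = (x + step) % n;
              }
          }

          static BigInteger RandomBelow(BigInteger n, Random rng)
          {
              byte[] bytes = n.ToByteArray();
              BigInteger r;
              do
              {
                  rng.NextBytes(bytes);
                  bytes[bytes.Length - 1] &= 0x7F;   // keep the value non-negative
                  r = new BigInteger(bytes);
              } while (r < 1 || r >= n);
              return r;
          }
      }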

    Read the article

  • CouchDB, HDFS, HBase or which is right for my situation?

    - by Lucas
    Hello all, this question is regarding data storage systems such as CouchDB, HDFS and HBase, and specifically which is right for my situation. I am looking at making a simple and customized Document Management System for my organization. Basically, we need the ability to store some Word documents, PDFs and other similar files. I also want to store metadata about these files (e.g., author, dates, etc). Usage permissions would also be handy, but that can probably be built using metadata. I would also need the ability to full-text index. The ability to version, while not required, would be extremely useful. I would like the ability to simply add hardware to expand the resources of the system, and the system must support Network Attached Storage over the CIFS or NFS protocol(s). I have read about CouchDB, HDFS and HBase. My preferred programming language is C# as all of my end-users will be running Windows machines and I will want to make both web and WinForms client implementations. My question is: which solution best fits my needs? Based on my research it appears that CouchDB (utilizing CouchDB-Lounge and CouchDB-Lucene) perfectly fits my needs. However, I am worried that since I have worked with CouchDB, I might be overlooking something useful for my needs in HDFS or HBase or something similar due to a bias. Any and all opinions are welcome as I am looking for community input; I really do not want to make the wrong choice at the start of my project. Please ask if you need more information. I thank you all for your time, input and assistance.

    Read the article

  • Getting a users Facebook profile url

    - by Greg Pabst
    I am creating a registry site so similar people can find each other easily. I don't want to use Facebook Connect as the primary log-in method or use Facebook to store their information. I'll be creating a database on my end to store that info. For security reasons I won't be displaying the user's address, phone number or email address, so I wanted to provide the next best way for people to connect with each other; this is where Facebook comes in. Normally I would just ask them to type their Facebook URL in a text box, but I don't think most people know what their URL is, which is why I think I need to use Facebook Connect. So here is my idea: when the user signs up there is a check box that, when checked, signifies they are allowing people to find them on Facebook. I assume once they click the register button that a Facebook Connect popup will show up asking for permission to access their Facebook account. When they "allow" it, then I can get their profile URL. All I need is their Facebook profile URL; I don't want any other Facebook features or information. Is Facebook Connect the best thing to use for this scenario? Is there an easier way? Several months ago on the Facebook Connect site there used to be examples of doing this, but all the documentation has been rearranged and changed and I can't seem to find the information. Any help you can provide would be great!

    Read the article

  • How to create database records for a financial year

    - by David A Gibson
    Hello, I need to create entities (specifically contracts) in a database table that are associated with a financial year. These contracts will be applied to projects. Any contract variation will be recorded by creating a new contract record for the same financial year, but the original will remain associated with the project as history. The projects can last several years, and so at any one time a project will have a live contract record for each year as well as any number of historic contracts for that year. All of which is incidental, but I'm trying to provide some context. If the contracts were for the calendar year it would be easy - I'd just store the year, either as a date field with the 1st of January or just an integer. However, the contracts run for financial years and I don't know how to approach this. I don't want a separate table containing the financial years as I don't want the users to have to maintain this. I don't want to store the financial year as a string "2009/2010" as this is not ideal for sorting/extracting the data. Any ideas will be helpful. My best so far is to have starting and ending year in 2 columns and just "KNOW" that the start is April of the starting year, etc. Thanks
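
    As a small illustration of the "starting year plus a known April boundary" idea mentioned at the end, here is a hedged sketch; the 1 April - 31 March boundaries are an assumption, since the post never states which country's financial year applies.

      using System;

      // Store only the starting year of the financial year (an int) and derive everything else.
      static class FinancialYear
      {
          public static int StartYearOf(DateTime date) =>
              date.Month >= 4 ? date.Year : date.Year - 1;

          public static DateTime StartOf(int startYear) => new DateTime(startYear, 4, 1);
          public static DateTime EndOf(int startYear) => new DateTime(startYear + 1, 3, 31);

          // For display only: 2009 -> "2009/2010"
          public static string Label(int startYear) => $"{startYear}/{startYear + 1}";
      }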

    Read the article

  • Discover periodic patterns in a large data-set

    - by Miner
    I have a large sequence of tuples on disk in the form (t1, k1) (t2, k2) ... (tn, kn), where ti is a monotonically increasing timestamp and ki is a key (assume a fixed-length string if needed). Neither ti nor ki are guaranteed to be unique. However, the number of unique tis and kis is huge (millions). n itself is very large (100 million+) and the size of k (approx 500 bytes) makes it impossible to store everything in memory. I would like to find periodic occurrences of keys in this sequence. For example, if I have the sequence
      (1, a) (2, b) (3, c) (4, b) (5, a) (6, b) (7, d) (8, b) (9, a) (10, b)
    the algorithm should emit (a, 4) and (b, 2). That is, a occurs with a period of 4 and b occurs with a period of 2. If I build a hash of all keys and store the average of the difference between consecutive timestamps of each key and a std deviation of the same, I might be able to make a pass, and report only the ones that have an acceptable std deviation (ideally, 0). However, it requires one bucket per unique key, whereas in practice I might have very few really periodic patterns. Any better ways?
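
    For what it's worth, here is a sketch of the single pass the question already outlines - one small record per distinct key, with a running mean/variance of the gaps (Welford's update) - so the memory cost the question worries about (a bucket per unique key) is visible explicitly. The class and method names are made up for the example.

      using System;
      using System.Collections.Generic;

      // One pass over (timestamp, key) pairs, keeping per distinct key: the last timestamp,
      // the number of gaps, and a running mean/variance of the gaps. Keys whose gap variance
      // is (near) zero are periodic, with period = mean gap.
      class PeriodDetector
      {
          class Stats { public long Last; public long Count; public double Mean; public double M2; }

          readonly Dictionary<string, Stats> perKey = new Dictionary<string, Stats>();

          public void Observe(long t, string key)
          {
              if (!perKey.TryGetValue(key, out var s))
              {
                  perKey[key] = new Stats { Last = t };
                  return;
              }
              double gap = t - s.Last;
              s.Last = t;
              s.Count++;
              double delta = gap - s.Mean;
              s.Mean += delta / s.Count;
              s.M2 += delta * (gap - s.Mean);
          }

          // On the example sequence this yields (a, 4) and (b, 2).
          public IEnumerable<(string Key, double Period)> Periodic(double tolerance = 0.0)
          {
              foreach (var kv in perKey)
              {
                  var s = kv.Value;
                  if (s.Count >= 2 && (s.M2 / s.Count) <= tolerance)
                      yield return (kv.Key, s.Mean);
              }
          }
      }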

    Read the article

  • How to scan an array for certain information

    - by Andrew Martin
    I've been doing an MSc Software Development conversion course, the main language of which is Java, since the end of September. We have our first assessed practical coming up and I was hoping for some guidance. We have to create an array that will store 100 integers (all of which are between 1 and 10), which are generated by a random number generator, and then print out ten numbers of this array per line. For the second part, we need to scan these integers, count up how often each number appears and store the results in a second array. I've done the first bit okay, but I'm confused about how to do the second. I have been looking through the Scanner class to see if it has any methods which I could use, but I don't see any. Could anyone point me in the right direction - not the answer, but perhaps which library it comes from? Code so far:
      import java.util.Random;

      public class Practical4_Assessed {
          public static void main(String[] args) {
              Random numberGenerator = new Random();
              int[] arrayOfGenerator = new int[100];
              for (int countOfGenerator = 0; countOfGenerator < 100; countOfGenerator++)
                  arrayOfGenerator[countOfGenerator] = numberGenerator.nextInt(10);

              int countOfNumbersOnLine = 0;
              for (int countOfOutput = 0; countOfOutput < 100; countOfOutput++) {
                  if (countOfNumbersOnLine == 10) {
                      System.out.println("");
                      countOfNumbersOnLine = 0;
                      countOfOutput--;
                  } else {
                      System.out.print(arrayOfGenerator[countOfOutput] + " ");
                      countOfNumbersOnLine++;
                  }
              }
          }
      }
    Thanks, Andrew
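
    As a side note for readers: the counting part needs no scanner or library call at all - a second array indexed by the generated value is enough. A small sketch follows (written in C# to match the other examples added to this page; the two lines of counting logic carry over directly to Java, and the array names are made up for the example).

      // counts[v] holds how many times the value v appeared in the generated array.
      int[] generated = new int[100];          // filled with values 0..9 elsewhere
      int[] counts = new int[10];
      foreach (int value in generated)
          counts[value]++;

      for (int value = 0; value < counts.Length; value++)
          System.Console.WriteLine(value + " appeared " + counts[value] + " times");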

    Read the article

  • How to html encode the output of an MVC view?

    - by jessegavin
    I am building a web app which will generate lots of different code templates (HTML tables, XML documents, SQL scripts) that I want to render on screen as encoded HTML so that people can copy and paste those templates. I would like to be able to use asp.net mvc views to generate the code for these templates (rather than, say, using a StringBuilder). Is there a way using asp.net mvc to store the results of a rendered view into a string? Something like the following perhaps?
      public ContentResult HtmlTable(string format)
      {
          var m = new MyViewModel();
          m.MyDataElements = _myDataRepo.GetData();
          // Somehow render the view and store it as a string?
          // Not sure how to achieve this part.
          var viewHtml = View(m);
          var htmlEncodedView = Server.HtmlEncode(viewHtml);
          return Content(htmlEncodedView);
      }
    NOTE: My original question mentioned NHaml views specifically, but I realized that this wasn't view engine specific. So if you see answers related to NHaml, that's why.
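
    A commonly used shape for this - sketched here rather than taken from the question - is a helper inside the controller that asks the registered view engines for the view and renders it into a StringWriter. The view name passed in is whatever template should be rendered before HTML-encoding, and because it goes through ViewEngines it is not tied to a particular engine (WebForms, NHaml, etc.); treat it as an outline rather than a drop-in implementation.

      using System.IO;
      using System.Web.Mvc;

      // Lives inside the controller, alongside the HtmlTable action above.
      protected string RenderViewToString(string viewName, object model)
      {
          ViewData.Model = model;
          using (var writer = new StringWriter())
          {
              ViewEngineResult result = ViewEngines.Engines.FindPartialView(ControllerContext, viewName);
              var viewContext = new ViewContext(ControllerContext, result.View, ViewData, TempData, writer);
              result.View.Render(viewContext, writer);
              return writer.ToString();
          }
      }

      // Usage, continuing the example above:
      //   var viewHtml = RenderViewToString("HtmlTable", m);
      //   return Content(Server.HtmlEncode(viewHtml));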

    Read the article

  • How to combine twill and python into one code that could be run on "Google App Engine"?

    - by brilliant
    Hello everybody!!! I have installed twill on my computer (having previously installed Python 2.5) and have been using it recently. Python is installed on disk C on my computer: C:\Python25. The twill folder ("twill-0.9") is located here: E:\tmp\twill-0.9. Here is a code that I've been using in twill:
      go "some website's sign-in page URL"
      formvalue 2 userid "my login"
      formvalue 2 pass "my password"
      submit
      go "URL of some other page from that website"
      save_html result.txt
    This code helps me to log in to one website, in which I have an account, record the HTML code of some other page of that website (that I can access only after logging in), and store it in a file named "result.txt" (of course, before using this code I firstly need to replace "my login" with my real login, "my password" with my real password, "some website's sign-in page URL" and "URL of some other page from that website" with real URLs of that website, and number 2 with the number of the form on that website that is used as a sign-in form on that website's log-in page). This code I store in a "test.twill" file that is located in my "twill-0.9" folder: E:\tmp\twill-0.9\test.twill. I run this file from my command prompt:
      python twill-sh test.twill
    Now, I have also installed the "Google App Engine SDK" from "Google App Engine" and have also been using it for a while. For example, I've been using this code:
      import hashlib
      m = hashlib.md5()
      m.update("Nobody inspects")
      m.update(" the spammish repetition ")
      print m.hexdigest()
    This code helps me transform the phrase "Nobody inspects the spammish repetition" into an md5 digest. Now, how can I put these two pieces of code together into one Python script that I could run on "Google App Engine"? Let's say I want my code to log in to a website from "Google App Engine", go to another page on that website, record its HTML code (that's what my twill code does) and then transform this HTML code into its md5 digest (that's what my second code does). So, how can I combine those two codes into one Python code? I guess it should be done somehow by importing twill, but how can it be done? Can a Python code - the one that is being run by "Google App Engine" - import twill from somewhere on the internet? Or, perhaps, twill is already installed on "Google App Engine"?

    Read the article

  • Google App Engine - Uploading blobs and authentication

    - by Keyur
    (I tried asking this on the GAE forums but didn't get an answer, so am trying it here.) Currently, to upload blobs, the App Engine blobstore service creates a unique one-time URL that a user can post blobs to. My requirement is that I only want authenticated / authorized users to post blobs in my application. I can achieve this currently if the page that includes the multipart form to upload blobs is in my application. However, I am looking at providing a "REST API" for my users to upload their blobs. While it is true that the one-time nature of the upload URL mitigates the chances of rogue use, it's still possible. I was wondering if there is anyone on the App Engine team here that can consider a feature where developers can register an upload listener (or if there is already a way, I'll be all ears). A standard servlet filter could also potentially do the job. This will give us an opportunity to authenticate / validate / decorate requests before the request gets forwarded to the blobstore service. Thanks, Keyur

    Read the article

  • Why is a CoreData forceFetch required after a delete on the iPad but not the iPhone?

    - by alyoshak
    When the following code is run on the iPhone, the count of fetched objects after the delete is one less than before the delete. But on the iPad the count remains the same. This inconsistency was causing a crash on the iPad because elsewhere in the code, soon after the delete, fetchedObjects is called and the calling code, trusting the count, attempts access to the just-deleted object's properties, resulting in an NSObjectInaccessibleException error (see below). A fix has been to use the commented-out call to performFetch, which when executed makes the second call to fetchedObjects yield the same result as on the iPhone without it. My question is: why is the iPad producing different results than the iPhone? This is the second of these differences that I've discovered and posted recently.
      -(NSError*)deleteObject:(NSManagedObject*)mo;
      {
          NSLog(@"\n\nNum objects in store before delete: %i\n\n", [[self.fetchedResultsController fetchedObjects] count]);
          [self.managedObjectContext deleteObject:mo];
          // Save the context.
          NSError *error = nil;
          if (![self.managedObjectContext save:&error]) {
          }
          // [self.fetchedResultsController performFetch:&error]; // force a fetch
          NSLog(@"\n\nNum objects in store after delete (and save): %i\n\n", [[self.fetchedResultsController fetchedObjects] count]);
          return error;
      }
    (The full NSObjectInaccessibleException is: "Terminating app due to uncaught exception 'NSObjectInaccessibleException', reason: 'CoreData could not fulfill a fault for '0x1dcf90 <x-coredata://DC02B10D-555A-4AB8-8BC4-F419C4982794/Blueprint/p")

    Read the article
