Search Results

Search found 69140 results on 2766 pages for 'design time'.


  • Which Computer Organization & Architecture book is good for me?

    - by claws
    I'm always interested in learning the inner workings of things. I started with C programming, then learnt operating systems (from Stallings), then linkers & loaders, and then assembly language. After reading these I now want to go into a little more depth: computer architecture. I feel that makes everything clear. As per the SO archives these are the two good books: Computer Architecture: A Quantitative Approach, 4th Edition, and Computer Organization and Design, Fourth Edition, by David A. Patterson and John L. Hennessy. But I've browsed through the contents of these books and found that they don't exactly meet my needs. I want to learn more about caches, the Memory Management Unit, and the mapping between virtual memory & physical memory. I'm in no way interested in other ISAs like MIPS etc.; I'm an IA32 and x86-64 fan and I want to stick to them. I'm not a hardware developer, so I don't want details like circuit diagrams or how the L1, L2 & L3 caches are implemented. I want to know about parallel processing technologies like HyperThreading at the architecture level, but again I don't want to design them. I liked the table of contents of Computer Architecture: A Quantitative Approach, 4th Edition, but Quantitative Approach? Seriously? I want to know the details of current technologies, and I don't want to spend time reading 200 pages of outdated technologies (I experienced this while learning ASM).

    Read the article

  • How to maintain a job history using Quartz scheduler

    - by rwwilden
    I'd like to maintain a history of jobs that were scheduled by a Quartz scheduler containing the following properties: 'start time', 'end time', 'success', 'error'. There are two interfaces available for this: ITriggerListener and IJobListener (I'm using the C# naming convention for interfaces because I'm using Quartz.NET but the same question could be asked for the Java version). IJobListener has a JobToBeExecuted and a JobWasExecuted method. The latter provides a JobExecutionException so that you know when something went wrong. However, there is no way to correlate JobToBeExecuted and JobWasExecuted. Suppose my job runs for ten minutes. I start it at t0 and t0+2 (so they overlap). I get two calls to JobToBeExecuted and insert two start times into my history table. When both jobs finish at t1 and t1+2 I get two calls to JobWasExecuted. How do I know what database record to update in each call (to store an end time with its corresponding start time)? ITriggerListener has another problem. There is no way to get any errors inside the TriggerComplete method when a job failed. How do I get the desired behavior?
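
    One way to correlate the two callbacks, sketched below for the Java API (the Quartz.NET listener interface is analogous), is to insert the history row in jobToBeExecuted, stash its key in the JobExecutionContext, and look it up again in jobWasExecuted; the same context instance is handed to both callbacks for a given execution. HistoryDao is a hypothetical data-access helper, not part of Quartz.

        import org.quartz.JobExecutionContext;
        import org.quartz.JobExecutionException;
        import org.quartz.JobListener;

        interface HistoryDao {  // hypothetical persistence helper for the history table
            long insertStart(java.util.Date start);
            void updateEnd(long id, java.util.Date end, boolean success, String error);
        }

        public class HistoryJobListener implements JobListener {
            private static final String KEY = "historyRecordId";
            private final HistoryDao dao;

            public HistoryJobListener(HistoryDao dao) { this.dao = dao; }

            public String getName() { return "historyListener"; }

            public void jobToBeExecuted(JobExecutionContext context) {
                // Insert the start row and remember its primary key in this execution's context.
                context.put(KEY, dao.insertStart(new java.util.Date()));
            }

            public void jobWasExecuted(JobExecutionContext context, JobExecutionException ex) {
                Long id = (Long) context.get(KEY);
                dao.updateEnd(id, new java.util.Date(), ex == null, ex == null ? null : ex.getMessage());
            }

            public void jobExecutionVetoed(JobExecutionContext context) { }
        }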

    Read the article

  • C++ JSON parser

    - by pollux
    Dear reader, I'm working on a twitter client which uses the twitter streaming json api. Twitter advices JSON as XML version is deprecated. I'm looking for a good JSON parser which can parse the json data below. I'm receiving this JSON which I want to be able to read/parse using a JSON parser. { "in_reply_to_status_id": null, "text": "Home-plate umpire Crawford gets stung http://tinyurl.com/27ujc86", "favorited": false, "coordinates": null, "in_reply_to_user_id": null, "source": "<a href=\"http://apiwiki.twitter.com/\" rel=\"nofollow\">API</a>", "geo": null, "created_at": "Fri Jun 18 15:12:06 +0000 2010", "place": null, "user": { "profile_text_color": "333333", "screen_name": "HostingViral", "time_zone": "Pacific Time (US & Canada)", "url": "http://bit.ly/1Way7P", "profile_link_color": "228235", "profile_background_image_url": "http://s.twimg.com/a/1276654401/images/themes/theme14/bg.gif", "description": "Full time Internet Marketer - Helping other reach their Goals\r\nhttp://wavemarker.com", "statuses_count": 1944, "profile_sidebar_fill_color": "c7b7c7", "profile_background_tile": true, "contributors_enabled": false, "lang": "en", "notifications": null, "created_at": "Wed Dec 30 07:50:52 +0000 2009", "profile_sidebar_border_color": "120412", "following": null, "geo_enabled": false, "followers_count": 2485, "protected": false, "friends_count": 2495, "location": "Working at Home", "name": "Johnathan Thomas", "verified": false, "profile_background_color": "131516", "profile_image_url": "http://a1.twimg.com/profile_images/600114776/nessykalvo421_normal.jpg", "id": 100439873, "utc_offset": -28800, "favourites_count": 0 }, "in_reply_to_screen_name": null, "id": 16477056501, "contributors": null, "truncated": false } *This is the raw string (above it beautified) * {"in_reply_to_status_id":null,"text":"Home-plate umpire Crawford gets stung http://tinyurl.com/27ujc86","favorited":false,"coordinates":null,"in_reply_to_user_id":null,"source":"<a href=\"http://apiwiki.twitter.com/\" rel=\"nofollow\">API</a>","geo":null,"created_at":"Fri Jun 18 15:12:06 +0000 2010","place":null,"user":{"profile_text_color":"333333","screen_name":"HostingViral","time_zone":"Pacific Time (US & Canada)","url":"http://bit.ly/1Way7P","profile_link_color":"228235","profile_background_image_url":"http://s.twimg.com/a/1276654401/images/themes/theme14/bg.gif","description":"Full time Internet Marketer - Helping other reach their Goals\r\nhttp://wavemarker.com","statuses_count":1944,"profile_sidebar_fill_color":"c7b7c7","profile_background_tile":true,"contributors_enabled":false,"lang":"en","notifications":null,"created_at":"Wed Dec 30 07:50:52 +0000 2009","profile_sidebar_border_color":"120412","following":null,"geo_enabled":false,"followers_count":2485,"protected":false,"friends_count":2495,"location":"Working at Home","name":"Johnathan Thomas","verified":false,"profile_background_color":"131516","profile_image_url":"http://a1.twimg.com/profile_images/600114776/nessykalvo421_normal.jpg","id":100439873,"utc_offset":-28800,"favourites_count":0},"in_reply_to_screen_name":null,"id":16477056501,"contributors":null,"truncated":false} I've tried multiple JSON parsers from json.org though I've tried 4 now and can't find one which can parse above json. Kind regards, Pollux
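
    For what it's worth, here is a minimal sketch using JsonCpp (one commonly suggested C++ parser) and its classic Json::Reader interface, with a shortened stand-in for the tweet payload; whether it digests this exact feed is worth verifying, and note that the status id (16477056501) does not fit in 32 bits, which is worth keeping in mind when deciding how to read it.

        #include <json/json.h>   // JsonCpp; the header path can differ between installs
        #include <iostream>
        #include <string>

        int main() {
            // Shortened stand-in for the streamed tweet object.
            std::string raw = "{\"text\":\"Home-plate umpire Crawford gets stung\","
                              "\"geo\":null,\"user\":{\"screen_name\":\"HostingViral\"}}";

            Json::Value root;
            Json::Reader reader;
            if (!reader.parse(raw, root)) {
                std::cerr << "parse failed" << std::endl;
                return 1;
            }

            // Fields such as "geo" and "coordinates" may be JSON null, so test before reading.
            std::cout << root["text"].asString() << std::endl;
            std::cout << root["user"]["screen_name"].asString() << std::endl;
            std::cout << (root["geo"].isNull() ? "no geo" : "has geo") << std::endl;
            return 0;
        }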

    Read the article

  • Why is "Fixup" needed for Persistence Ignorant POCO's in EF 4?

    - by Eric J.
    One of the much-anticipated features of Entity Framework 4 is the ability to use POCO (Plain Old CLR Objects) in a Persistence Ignorant manner (i.e. they don't "know" that they are being persisted with Entity Framework vs. some other mechanism). I'm trying to wrap my head around why it's necessary to perform association fixups and use FixupCollection in my "plain" business object. That requirement seems to imply that the business object can't be completely ignorant of the persistence mechanism after all (in fact the word "fixup" sounds like something needs to be fixed/altered to work with the chosen persistence mechanism). Specifically I'm referring to the Association Fixup region that's generated by the ADO.NET POCO Entity Generator, e.g.: #region Association Fixup private void FixupImportFile(ImportFile previousValue) { if (previousValue != null && previousValue.Participants.Contains(this)) { previousValue.Participants.Remove(this); } if (ImportFile != null) { if (!ImportFile.Participants.Contains(this)) { ImportFile.Participants.Add(this); } if (ImportFileId != ImportFile.Id) { ImportFileId = ImportFile.Id; } } } #endregion as well as the use of FixupCollection. Other common persistence-ignorant ORMs don't have similar restrictions. Is this due to fundamental design decisions in EF? Is some level of non-ignorance here to stay even in later versions of EF? Is there a clever way to hide this persistence dependency from the POCO developer? How does this work out in practice, end-to-end? For example, I understand support was only recently added for ObservableCollection (which is needed for Silverlight and WPF). Are there gotchas in other software layers from the design requirements of EF-compatible POCO objects?

    Read the article

  • Natural language processing - Ideas for beginner's projects

    - by Microkernel
    Hi guys, I am a beginner in NLP and NLTK. I am very interested in NLP and hence joined a weekend course on AI at a local institution, which requires me to do a project to complete the course, and I decided to do it in NLP. The problem is, the instructor is not good at all for this course (according to me she is just a charlatan, or may not be very interested in teaching as this is her last batch here, after which the institute is going to send her out). So I am stuck in a situation where I have to finish this project in a month to one and a half months, but as a novice in the field I am finding it very difficult to comprehend the things required to decide on a project. (Also, as I am working full time, I am not finding enough time to dedicate to this.) I considered using the NLTK toolkit in Python for the project for the following reasons: (1) Python is famous for ease of use, rapid prototyping and a very active community (considering the very short span of time I have, and as I am a C programmer by profession, I need a language that I can learn fast and that is simple to use). (2) NLTK has good reviews, extensive documentation and a very active community. So the problem is what project I should take up, so that I can learn something and will be able to finish the project in time. (I know almost nothing in NLP, I don't even know what exactly a corpus is... :( ) So, please suggest some topics that I should consider for the project. Regards, MicroKernel :)
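
    To give a feel for the scale of a starter project, here is a minimal sketch of a first corpus task in NLTK (word frequencies over one of the corpora bundled with NLTK); the corpus choice is just an example, not something from the question.

        import nltk
        nltk.download('brown')               # fetch one of NLTK's bundled corpora
        from nltk.corpus import brown
        from nltk import FreqDist

        # A corpus is simply a body of text; Brown is ~1M words of categorized American English.
        words = [w.lower() for w in brown.words(categories='news') if w.isalpha()]
        freq = FreqDist(words)
        print(freq.most_common(20))          # the 20 most frequent words in the news category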

    Read the article

  • How should I handle persistence in a Java MUD? OptimisticLockException handling

    - by Chase
    I'm re-implementing a old BBS MUD game in Java with permission from the original developers. Currently I'm using Java EE 6 with EJB Session facades for the game logic and JPA for the persistence. A big reason I picked session beans is JTA. I'm more experienced with web apps in which if you get an OptimisticLockException you just catch it and tell the user their data is stale and they need to re-apply/re-submit. Responding with "try again" all the time in a multi-user game would make for a horrible experience. Given that I'd expect several people to be targeting a single monster during a fight I think the chance of an OptimisticLockException would be high. My view code, the part presenting a telnet CLI, is the EJB client. Should I be catching the PersistenceExceptions and TransactionRolledbackLocalExceptions and just retrying? How do you decide when to stop? Should I switch to pessimistic locking? Is persisting after every user command overkill? Should I be loading the entire world in RAM and dumping the state every couple of minutes? Do I make my session facade a EJB 3.1 singleton which would function as a choke point and therefore eliminating the need to do any type of JPA locking? EJB 3.1 singletons function as a multiple reader/single writer design (you annotate the methods as readers and writers). Basically, what is the best design and java persistence API for highly concurrent data changes in an application where it is not acceptable to present resubmit/retry prompts to the user?
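
    On the catch-and-retry option, here is a minimal sketch of a bounded retry loop around a JPA unit of work; it uses a hand-rolled resource-local transaction purely to stay self-contained, the attempt limit is an arbitrary placeholder, and inside a container the optimistic failure may arrive wrapped (for example in a RollbackException), which is why the catch covers both.

        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.OptimisticLockException;
        import javax.persistence.RollbackException;

        public final class OptimisticRetry {
            private static final int MAX_ATTEMPTS = 5;      // arbitrary cut-off before giving up

            public interface UnitOfWork { void run(EntityManager em); }

            public static void execute(EntityManagerFactory emf, UnitOfWork work) {
                for (int attempt = 1; ; attempt++) {
                    EntityManager em = emf.createEntityManager();
                    try {
                        em.getTransaction().begin();
                        work.run(em);                        // e.g. load the monster, subtract hit points
                        em.getTransaction().commit();        // the @Version check happens here
                        return;
                    } catch (OptimisticLockException | RollbackException e) {
                        if (em.getTransaction().isActive()) em.getTransaction().rollback();
                        if (attempt == MAX_ATTEMPTS) throw e;  // give up and let the caller decide
                    } finally {
                        em.close();
                    }
                }
            }
        }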

    Read the article

  • Prepopulating inlines based on the parent model in the Django Admin

    - by Alasdair
    I have two models, Event and Series, where each Event belongs to a Series. Most of the time, an Event's start_time is the same as its Series' default_time. Here's a stripped-down version of the models:

        # models.py
        class Series(models.Model):
            name = models.CharField(max_length=50)
            default_time = models.TimeField()

        class Event(models.Model):
            name = models.CharField(max_length=50)
            date = models.DateField()
            start_time = models.TimeField()
            series = models.ForeignKey(Series)

    I use inlines in the admin application, so that I can edit all the Events for a Series at once. If a series has already been created, I want to prepopulate the start_time for each inline Event with the Series' default_time. So far, I have created a model admin form for Event, and used the initial option to prepopulate the time field with a fixed time:

        # admin.py
        ...
        import datetime

        class EventInlineAdminForm(forms.ModelForm):
            start_time = forms.TimeField(initial=datetime.time(18, 30, 00))
            class Meta:
                model = Event

        class EventInline(admin.TabularInline):
            form = EventInlineAdminForm
            model = Event

        class SeriesAdmin(admin.ModelAdmin):
            inlines = [EventInline,]

    I am not sure how to proceed from here. Is it possible to extend the code so that the initial value for the start_time field is the Series' default_time?
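
    One approach that is sometimes suggested, sketched here on the assumptions that get_formset receives the parent Series as obj when editing an existing one and that mutating the generated form's base_fields per request is acceptable, is to set the initial value there instead of hard-coding it:

        class EventInline(admin.TabularInline):
            model = Event

            def get_formset(self, request, obj=None, **kwargs):
                # obj is the parent Series when editing an existing one, None when adding.
                formset = super(EventInline, self).get_formset(request, obj, **kwargs)
                if obj is not None:
                    formset.form.base_fields['start_time'].initial = obj.default_time
                return formset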

    Read the article

  • Postback event not firing on FIRST button click..

    - by ClarkeyBoy
    Hi, I have a form which accepts two arguments. The first one is mode - this is either view, new or edit. If it is new then the second argument is type - this is either range, collection or design. When set to new, and the type is valid, a new instance of that type is created and the data from the form is added to it. The item (range, collection or design) then validates the data. If any of the data is invalid then it throws an error, and this error is displayed at the top of the form telling the user why it is invalid. A variable, _Databind, is set to false so that it does not change the data input by the user (in the form fields). The button used to submit the form is called btnSave, and is created in the HTML source. The click event is wired up in the form of Protected Sub Blah(sender, e) Handles btnSave.Click. Strangely, whenever I edit an item that already exists the form submits fine the first time - the click event is fired. However, when in "new" mode I have to click the button twice to fire the event. It also blanks out all the form fields on the first click. I have even put a Response.Write("Hello World") line at the start of the click event - this is not output on the first click when adding a new item either. It is output on first load when the mode is set to edit, however. Does anyone have any ideas as to what is causing it to behave this way? Thanks in advance for any help. Regards, Richard

    Read the article

  • Bitwise Interval Arithmetic

    - by KennyTM
    I've recently read an interesting thread on the D newsgroup, which basically asks: given two signed integers a ∈ [amin, amax] and b ∈ [bmin, bmax], what is the tightest interval of a | b? I'm wondering whether interval arithmetic can be applied to the bitwise operators in general (assuming infinite bits). Bitwise-NOT and the shifts are trivial, since they just correspond to -1 − x and 2^n · x. But bitwise-AND/OR are a lot trickier, due to the mix of bitwise and arithmetic properties. Is there a polynomial-time algorithm to compute the intervals of bitwise-AND/OR? Note: assume all bitwise operations run in linear time (in the number of bits), and that testing/setting a bit takes constant time. The brute-force algorithm runs in exponential time. Because ~(a | b) = ~a & ~b and a ^ b = (a | b) & ~(a & b), solving the bitwise-AND and -NOT problems implies bitwise-OR and -XOR are done. Although the content of that thread suggests min{a | b} = max(amin, bmin), that is not the tightest bound; just consider [2, 3] | [8, 9] = [10, 11].
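
    There is, in fact, a linear-time (in the bit width) answer for the unsigned case: the bound-propagation routines in Hacker's Delight (Warren) compute tight OR bounds bit by bit, and the AND/XOR bounds follow from the identities above. A sketch is below; signed intervals are usually handled by splitting each interval where the sign changes and combining the pieces, which this sketch does not do.

        #include <stdio.h>

        /* Tightest lower bound of x | y for x in [a, b], y in [c, d] (unsigned intervals).
           Adapted from the bound-propagation routines in Hacker's Delight. */
        unsigned minOR(unsigned a, unsigned b, unsigned c, unsigned d) {
            unsigned m = 0x80000000u, temp;
            while (m != 0) {
                if (~a & c & m) {        /* c contributes this bit anyway; rounding a up to it clears a's lower bits */
                    temp = (a | m) & -m;
                    if (temp <= b) { a = temp; break; }
                } else if (a & ~c & m) {
                    temp = (c | m) & -m;
                    if (temp <= d) { c = temp; break; }
                }
                m >>= 1;
            }
            return a | c;
        }

        /* Tightest upper bound of x | y for x in [a, b], y in [c, d] (unsigned intervals). */
        unsigned maxOR(unsigned a, unsigned b, unsigned c, unsigned d) {
            unsigned m = 0x80000000u, temp;
            while (m != 0) {
                if (b & d & m) {         /* both have this bit; dropping it in one operand frees all lower bits */
                    temp = (b - m) | (m - 1);
                    if (temp >= a) { b = temp; break; }
                    temp = (d - m) | (m - 1);
                    if (temp >= c) { d = temp; break; }
                }
                m >>= 1;
            }
            return b | d;
        }

        int main(void) {
            /* The example from the question: [2, 3] | [8, 9] should give [10, 11]. */
            printf("[%u, %u]\n", minOR(2, 3, 8, 9), maxOR(2, 3, 8, 9));
            return 0;
        }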

    Read the article

  • Robot Simulation in Java

    - by Eddy Freeman
    Hi guys, I am doing a project concerning robot simulation and I need help. I have to simulate the activities of a robot in a warehouse. I am using Mindstorms robots and Lego bricks for the warehouse. The point is that I have to show all the activities of the robot on a Java GUI: whenever the robot is moving, users have to see a moving object on the GUI which represents the robot, and when the roads/rails/crossings of the warehouse change, that must also change on the screen. In short, I have to visualize whatever the robot is doing in the warehouse, and everything must happen in real time. I am asking which Java libraries I can use to do this visualization in real time, and whether someone can also point me to any site with good information. All suggestions are welcome. Thanks for your help.

    Read the article

  • python - tkinter - update label from variable

    - by Tom
    I wrote a python script that does some stuff to generate and then keep changing some text stored as a string variable. This works, and I can print the string each time it gets changed. Problems have arisen while trying to display that output in a GUI (just as a basic label) using tkinter. I can get the label to display the string for the first time... but it never updates. This is really the first time I have tried to use tkinter, so it's likely I'm making a foolish error. What I've got looks logical to me, but I'm evidently going wrong somewhere! from tkinter import * outputText = 'Ready' counter = int(0) root = Tk() root.maxsize(400, 400) var = StringVar() l = Label(root, textvariable=var, anchor=NW, justify=LEFT, wraplength=398) l.pack() var.set(outputText) while True: counter = counter + 1 #do some stuff that generates string as variable 'result' outputText = result #do some more stuff that generates new string as variable 'result' outputText = result #do some more stuff that generates new string as variable 'result' outputText = result if counter == 5: break root.mainloop() I also tried: from tkinter import * outputText = 'Ready' counter = int(0) root = Tk() root.maxsize(400, 400) var = StringVar() l = Label(root, textvariable=var, anchor=NW, justify=LEFT, wraplength=398) l.pack() var.set(outputText) while True: counter = counter + 1 #do some stuff that generates string as variable 'result' outputText = result var.set(outputText) #do some more stuff that generates new string as variable 'result' outputText = result var.set(outputText) #do some more stuff that generates new string as variable 'result' outputText = result var.set(outputText) if counter == 5: break root.mainloop() In both cases, the label will show 'Ready' but won't update to change that to the strings as they're generated later. After a fair bit of googling and looking through answers on this site, I thought the solution might be to use update_idletasks - I tried putting that in after each time the variable was changed, but it didn't help. It also seems possible I am meant to be using trace and callback somehow to make all this work...but I can't get my head around how that works (I tried, but didn't manage to make anything that even looked like it would do something, let alone actually worked). I'm still very new to both python and especially tkinter, so, any help much appreciated but please spell it out for me if you can :)
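
    A minimal sketch of the usual fix: let mainloop() run and schedule the work with root.after() instead of looping before mainloop() ever starts, so the GUI gets a chance to redraw between updates; the generated 'result' below is a stand-in for whatever the real script produces.

        from tkinter import *

        root = Tk()
        root.maxsize(400, 400)

        var = StringVar()
        Label(root, textvariable=var, anchor=NW, justify=LEFT, wraplength=398).pack()
        var.set('Ready')

        counter = 0

        def do_step():
            global counter
            counter += 1
            result = 'step %d done' % counter   # stand-in for the real work
            var.set(result)                     # the label refreshes on the next redraw
            if counter < 5:
                root.after(1000, do_step)       # run again in one second without blocking the GUI

        root.after(1000, do_step)
        root.mainloop()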

    Read the article

  • Actionscript: NetStream stutters after buffering.

    - by meandmycode
    Using NetStream to stream content from http, I've noticed that esp with certain exported h264's, if the player encounters an empty buffer, it will stop and buffer to the requested length (as expected). However once the buffer is full, the playback doesn't resume, but instead jumps ahead, as such- instantly playing the buffered duration in a brief moment, and thusly triggering an empty buffer again.. this will then continue over and over. Presumably when the netstream pauses to buffer, the playhead position continues, and the player is attempting to snap to that position on resume- however given it could take 5 seconds to build a 2 second buffer- it ends up with a useless buffer again.. (this is an assumption) I've attempted to work around this by listening for an empty buffer netstatus event, pausing the stream, and at the same time setting up a loop to check the current buffer length vs the requested buffer length.. and resuming once the buffer length is greater than or equal to the requested buffer.. however this causes problems when there isn't enough of the video remaining.. for example, a 10 second buffer with only 5 seconds remaining, the loop just sits there waiting for a buffer length of 10 seconds when theres only 5 left... You would think that you could simply check which was smaller, the time left or the requested buffer length.. however the times flash gives are not accurate.. If you add the net streams current time index, plus the buffered time, the total is not the entire duration of the movie (when at the end).. it is close but not the same. This brings me back to the original problem, and if there is another way to fix this, clearly flash knows when the buffer is ready, so how can i get flash pause when it buffers, and resume once the buffer is ready? currently it doesn't.. it pauses and then once the buffer is full- it plays the entire buffered content in about .1 of a second. Thanks in advance, Stephen.

    Read the article

  • How to include multiple tables programmaticaly into a Sweave document using R

    - by PaulHurleyuk
    Hello, I want to have a sweave document that will include a variable number of tables in. I thought the example below would work, but it doesn't. I want to loop over the list foo and print each element as it's own table. % \documentclass[a4paper]{article} \usepackage[OT1]{fontenc} \usepackage{longtable} \usepackage{geometry} \usepackage{Sweave} \geometry{left=1.25in, right=1.25in, top=1in, bottom=1in} \listfiles \begin{document} <<label=start, echo=FALSE, include=FALSE>>= startt<-proc.time()[3] library(RODBC) library(psych) library(xtable) library(plyr) library(ggplot2) options(width=80) #Produce some example data, here I'm creating some dummy dataframes and putting them in a list foo<-list() foo[[1]]<-data.frame(GRP=c(rep("AA",10), rep("Aa",10), rep("aa",10)), X1=rnorm(30), X2=rnorm(30,5,2)) foo[[2]]<-data.frame(GRP=c(rep("BB",10), rep("bB",10), rep("BB",10)), X1=rnorm(30), X2=rnorm(30,5,2)) foo[[3]]<-data.frame(GRP=c(rep("CC",12), rep("cc",18)), X1=rnorm(30), X2=rnorm(30,5,2)) foo[[4]]<-data.frame(GRP=c(rep("DD",10), rep("Dd",10), rep("dd",10)), X1=rnorm(30), X2=rnorm(30,5,2)) @ \title{Docuemnt to test putting a variable number of tables into a sweave Document} \author{"Paul Hurley"} \maketitle \section{Text} This document was created on \today, with \Sexpr{print(version$version.string)} running on a \Sexpr{print(version$platform)} platform. It took approx \input{time} sec to process. <<label=test, echo=FALSE, results=tex>>= cat("Foo") @ that was a test, so is this <<label=table1test, echo=FALSE, results=tex>>= print(xtable(foo[[1]])) @ \newpage \subsection{Tables} <<label=Tables, echo=FALSE, results=tex>>= for(i in seq(foo)){ cat("\n") cat(paste("Table_",i,sep="")) cat("\n") print(xtable(foo[[i]])) cat("\n") } #cat("<<label=endofTables>>= ") @ <<label=bye, include=FALSE, echo=FALSE>>= endt<-proc.time()[3] elapsedtime<-as.numeric(endt-startt) @ <<label=elapsed, include=FALSE, echo=FALSE>>= fileConn<-file("time.tex", "wt") writeLines(as.character(elapsedtime), fileConn) close(fileConn) @ \end{document} Here, the table1test chunk works as expected, and produced a table based on the dataframe in foo[[1]], however the loop only produces Table(underscore)1.... Any ideas what I'm doing wrong ?

    Read the article

  • API Wrapper Architecture Best Practice

    - by Adam Taylor
    Hi, So I'm writing a Perl wrapper module around a REST webservice and I'm hoping to have some advice on how best to architect the module. I've been looking at a couple of different Perl modules for inspiration. Flickr::Simple2 - so this is basically one big file with methods wrapping around the different methods in the Flickr API, e.g. getPhotos() etc. Flickr::API - this is a sub-class of another module (LWP) for making HTTP requests. So basically it just allows you to make calls through the module, using LWP, that go to the correct API method/URL without defining any wrapper methods itself. (That's explained pretty poorly - but basically it has a method that takes an argument (a API method name) and constructs the correct API call). e.g request() / response(). An alternative design would be like the first described, but less monolithic, with separate classes for separate "areas" of the API. I'd like to follow modern/best practice Perl methods so I'm using Dist::Zilla to build the module and Moose for the OO stuff but I'd appreciate some input on how to actually design/architect my wrapper. Guides/tutorials or pointers to other well designed modules would be appreciated. Cheers

    Read the article

  • Iteration speed of int vs long

    - by jqno
    I have the following two programs: long startTime = System.currentTimeMillis(); for (int i = 0; i < N; i++); long endTime = System.currentTimeMillis(); System.out.println("Elapsed time: " + (endTime - startTime) + " msecs"); and long startTime = System.currentTimeMillis(); for (long i = 0; i < N; i++); long endTime = System.currentTimeMillis(); System.out.println("Elapsed time: " + (endTime - startTime) + " msecs"); Note: the only difference is the type of the loop variable (int and long). When I run this, the first program consistently prints between 0 and 16 msecs, regardless of the value of N. The second takes a lot longer. For N == Integer.MAX_VALUE, it runs in about 1800 msecs on my machine. The run time appears to be more or less linear in N. So why is this? I suppose the JIT-compiler optimizes the int loop to death. And for good reason, because obviously it doesn't do anything. But why doesn't it do so for the long loop as well? A colleague thought we might be measuring the JIT compiler doing its work in the long loop, but since the run time seems to be linear in N, this probably isn't the case.

    Read the article

  • How to switch users in a smooth way in a Point-Of-Sale system?

    - by Sanoj
    I am designing a Point-Of-Sale system for a small shop. The shop has just one Point-Of-Sale terminal, but there are often one to three users (sellers) in the shop. Each user has their own account in the system, so they log in and log out very often. How should I design the login/logout system in a good way? At the moment the users don't use passwords, because it takes too long to type a password each time they log in. The platform is Windows Vista, but I would like to support Windows 7 too. We use Active Directory on the network. The system is developed in Java/Swing for the moment, but I'm thinking about changing to C#.NET/WPF. I am considering a smartcard solution, but I don't know if that fits my situation. It would be more secure (which I like), but I don't know if it would be easy to implement and smooth to use, i.e. can I have the POS system running in the background or started very quickly when the users switch? Are smartcard solutions very expensive? (My customers are small shops.) Is it preferable to use .NET or Java for a smartcard solution? What other options do I have besides passwords/no passwords/smartcards? Is there any good solution using smartcards for this purpose? I would like suggested solutions both for C#.NET/WPF and Java/Swing platforms, and both for Active Directory setups and setups that use only one user profile in Windows. How is this problem solved in similar products? I have only seen password solutions, but they are clumsy.

    Read the article

  • Limit the number of service calls in a RESTful application

    - by Slavo
    Imagine some kind of a banking application, with a screen to create accounts. Each Account has a Currency and a Bank as a property, Currency being a separate class, as well as Bank. The code might look something like this: public class Account { public Currency Currency { get; set; } public Bank Bank { get; set; } } public class Currency { public string Code { get; set; } public string Name { get; set; } } public class Bank { public string Name { get; set; } public string Country { get; set; } } According to the REST design principles, each resource in the application should have its own service, and each service should have methods that map nicely to the HTTP verbs. So in our case, we have an AccountService, CurrencyService and BankService. In the screen for creating an account, we have some UI to select the bank from a list of banks, and to select a currency from a list of currencies. Imagine it is a web application, and those lists are dropdowns. This means that one dropdown is populated from the CurrencyService and one from the BankService. What this means is that when we open the screen for creating an account, we need to make two service calls to two different services. If that screen is not by itself on a page, there might be more service calls from the same page, impacting the performance. Is this normal in such an application? If not, how can it be avoided? How can the design be modified without going away from REST?

    Read the article

  • Date since 1600 to NSDate?

    - by Steven Fisher
    I have a date that's stored as a number of days since January 1, 1600 that I need to deal with. This is a legacy date format that I need to read many, many times in my application. Previously, I'd been creating a calendar, empty date components and root date like this:

        self.gregorian = [[[NSCalendar alloc] initWithCalendarIdentifier: NSGregorianCalendar] autorelease];
        id rootComponents = [[[NSDateComponents alloc] init] autorelease];
        [rootComponents setYear: 1600];
        [rootComponents setMonth: 1];
        [rootComponents setDay: 1];
        self.rootDate = [gregorian dateFromComponents: rootComponents];
        self.offset = [[[NSDateComponents alloc] init] autorelease];

    Then, to convert the integer later to a date, I use this:

        [offset setDay: theLegacyDate];
        id eventDate = [gregorian dateByAddingComponents: offset toDate: rootDate options: 0];

    (I never change any values in offset anywhere else.) The problem is I'm getting a different time for rootDate on iOS vs. Mac OS X. On Mac OS X, I'm getting midnight. On iOS, I'm getting 8:12:28. (So far, it seems to be consistent about this.) When I add my number of days later, the weird time stays.

        OS       | legacyDate | rootDate                  | eventDate
        ======== | ========== | ========================= | =========================
        Mac OS X | 143671     | 1600-01-01 00:00:00 -0800 | 1993-05-11 00:00:00 -0700
        iOS      | 143671     | 1600-01-01 08:12:28 +0000 | 1993-05-11 07:12:28 +0000

    In the previous release of my product, I didn't care about the time; now I do. Why the weird time on iOS, and what should I do about it? (I'm assuming the hour difference is DST.) I've tried setting the hour, minute and second of rootComponents to 0. This has no impact. If I set them to something other than 0, it adds them to 8:12:28. I've been wondering if this has something to do with leap seconds or other cumulative clock changes. Or is this entirely the wrong approach to use on iOS?
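
    The odd offset suggests the two platforms resolve the year-1600 date in different time zones (possibly applying historical local mean time rather than a modern offset), so one thing worth trying (a sketch, not a confirmed fix for this exact case) is pinning the calendar to a fixed zone such as GMT before building rootDate:

        self.gregorian = [[[NSCalendar alloc] initWithCalendarIdentifier: NSGregorianCalendar] autorelease];
        // Pin the calendar to GMT so both platforms compute the 1600 epoch the same way.
        [self.gregorian setTimeZone: [NSTimeZone timeZoneForSecondsFromGMT: 0]];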

    Read the article

  • Maintaining a Python web application: heavier vs lighter framework?

    - by Tiberiu Ana
    Five+ years from now, you are hired to support and extend a data-centric web application written in Python that hasn't been kept up to date. Would you rather prefer it was written in the current version of Django/Pylons at the time, using the available standard components, or kept minimal with something like CherryPy/web.py and a few library dependencies? Heavy framework Advantages: standard approach to application design and structure, as encouraged by framework; less application code to worry about. Disadvantages: requires learning the framework to understand how things work; broken things in old version of framework difficult to fix; upgrading to new version potentially difficult due to changing APIs; finding relevant documentation/help potentially difficult due to changing APIs. Light framework Advantages: most application code is directly "visible"; only needed features are implemented; architecture should be simpler to understand; less need to upgrade external dependencies; easier to upgrade external dependencies. Disadvantages: some reinventing the wheel; non-standard design and structure (with the associated unique issues and bugs). I will update the list with any helpful answers.

    Read the article

  • how to export bind and keyframe bone poses from blender to use in OpenGL

    - by SaldaVonSchwartz
    EDIT: I decided to reformulate the question in much simpler terms to see if someone can give me a hand with this. Basically, I'm exporting meshes, skeletons and actions from blender into an engine of sorts that I'm working on. But I'm getting the animations wrong. I can tell the basic motion paths are being followed but there's always an axis of translation or rotation which is wrong. I think the problem is most likely not in my engine code (OpenGL-based) but rather in either my misunderstanding of some part of the theory behind skeletal animation / skinning or the way I am exporting the appropriate joint matrices from blender in my exporter script. I'll explain the theory, the engine animation system and my blender export script, hoping someone might catch the error in either or all of these. The theory: (I'm using column-major ordering since that's what I use in the engine cause it's OpenGL-based) Assume I have a mesh made up of a single vertex v, along with a transformation matrix M which takes the vertex v from the mesh's local space to world space. That is, if I was to render the mesh without a skeleton, the final position would be gl_Position = ProjectionMatrix * M * v. Now assume I have a skeleton with a single joint j in bind / rest pose. j is actually another matrix. A transform from j's local space to its parent space which I'll denote Bj. if j was part of a joint hierarchy in the skeleton, Bj would take from j space to j-1 space (that is to its parent space). However, in this example j is the only joint, so Bj takes from j space to world space, like M does for v. Now further assume I have a a set of frames, each with a second transform Cj, which works the same as Bj only that for a different, arbitrary spatial configuration of join j. Cj still takes vertices from j space to world space but j is rotated and/or translated and/or scaled. Given the above, in order to skin vertex v at keyframe n. I need to: take v from world space to joint j space modify j (while v stays fixed in j space and is thus taken along in the transformation) take v back from the modified j space to world space So the mathematical implementation of the above would be: v' = Cj * Bj^-1 * v. Actually, I have one doubt here.. I said the mesh to which v belongs has a transform M which takes from model space to world space. And I've also read in a couple textbooks that it needs to be transformed from model space to joint space. But I also said in 1 that v needs to be transformed from world to joint space. So basically I'm not sure if I need to do v' = Cj * Bj^-1 * v or v' = Cj * Bj^-1 * M * v. Right now my implementation multiples v' by M and not v. But I've tried changing this and it just screws things up in a different way cause there's something else wrong. Finally, If we wanted to skin a vertex to a joint j1 which in turn is a child of a joint j0, Bj1 would be Bj0 * Bj1 and Cj1 would be Cj0 * Cj1. But Since skinning is defined as v' = Cj * Bj^-1 * v , Bj1^-1 would be the reverse concatenation of the inverses making up the original product. That is, v' = Cj0 * Cj1 * Bj1^-1 * Bj0^-1 * v Now on to the implementation (Blender side): Assume the following mesh made up of 1 cube, whose vertices are bound to a single joint in a single-joint skeleton: Assume also there's a 60-frame, 3-keyframe animation at 60 fps. The animation essentially is: keyframe 0: the joint is in bind / rest pose (the way you see it in the image). 
keyframe 30: the joint translates up (+z in blender) some amount and at the same time rotates pi/4 rad clockwise. keyframe 59: the joint goes back to the same configuration it was in keyframe 0. My first source of confusion on the blender side is its coordinate system (as opposed to OpenGL's default) and the different matrices accessible through the python api. Right now, this is what my export script does about translating blender's coordinate system to OpenGL's standard system: # World transform: Blender -> OpenGL worldTransform = Matrix().Identity(4) worldTransform *= Matrix.Scale(-1, 4, (0,0,1)) worldTransform *= Matrix.Rotation(radians(90), 4, "X") # Mesh (local) transform matrix file.write('Mesh Transform:\n') localTransform = mesh.matrix_local.copy() localTransform = worldTransform * localTransform for col in localTransform.col: file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3])) file.write('\n') So if you will, my "world" matrix is basically the act of changing blenders coordinate system to the default GL one with +y up, +x right and -z into the viewing volume. Then I also premultiply (in the sense that it's done by the time we reach the engine, not in the sense of post or pre in terms of matrix multiplication order) the mesh matrix M so that I don't need to multiply it again once per draw call in the engine. About the possible matrices to extract from Blender joints (bones in Blender parlance), I'm doing the following: For joint bind poses: def DFSJointTraversal(file, skeleton, jointList): for joint in jointList: bindPoseJoint = skeleton.data.bones[joint.name] bindPoseTransform = bindPoseJoint.matrix_local.inverted() file.write('Joint ' + joint.name + ' Transform {\n') translationV = bindPoseTransform.to_translation() rotationQ = bindPoseTransform.to_3x3().to_quaternion() scaleV = bindPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) DFSJointTraversal(file, skeleton, joint.children) file.write('}\n') Note that I'm actually grabbing the inverse of what I think is the bind pose transform Bj. This is so I don't need to invert it in the engine. Also note I went for matrix_local, assuming this is Bj. The other option is plain "matrix", which as far as I can tell is the same only that not homogeneous. For joint current / keyframe poses: for kfIndex in keyframes: bpy.context.scene.frame_set(kfIndex) file.write('keyframe: {:d}\n'.format(int(kfIndex))) for i in range(0, len(skeleton.data.bones)): file.write('joint: {:d}\n'.format(i)) currentPoseJoint = skeleton.pose.bones[i] currentPoseTransform = currentPoseJoint.matrix translationV = currentPoseTransform.to_translation() rotationQ = currentPoseTransform.to_3x3().to_quaternion() scaleV = currentPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) file.write('\n') Note that here I go for skeleton.pose.bones instead of data.bones and that I have a choice of 3 matrices: matrix, matrix_basis and matrix_channel. 
From the descriptions in the python API docs I'm not super clear which one I should choose, though I think it's the plain matrix. Also note I do not invert the matrix in this case. The implementation (Engine / OpenGL side): My animation subsystem does the following on each update (I'm omitting parts of the update loop where it's figured out which objects need update and time is hardcoded here for simplicity): static double time = 0; time = fmod((time + elapsedTime),1.); uint16_t LERPKeyframeNumber = 60 * time; uint16_t lkeyframeNumber = 0; uint16_t lkeyframeIndex = 0; uint16_t rkeyframeNumber = 0; uint16_t rkeyframeIndex = 0; for (int i = 0; i < aClip.keyframesCount; i++) { uint16_t keyframeNumber = aClip.keyframes[i].number; if (keyframeNumber <= LERPKeyframeNumber) { lkeyframeIndex = i; lkeyframeNumber = keyframeNumber; } else { rkeyframeIndex = i; rkeyframeNumber = keyframeNumber; break; } } double lTime = lkeyframeNumber / 60.; double rTime = rkeyframeNumber / 60.; double blendFactor = (time - lTime) / (rTime - lTime); GLKMatrix4 bindPosePalette[aSkeleton.jointsCount]; GLKMatrix4 currentPosePalette[aSkeleton.jointsCount]; for (int i = 0; i < aSkeleton.jointsCount; i++) { F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.joints[i]; F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.joints[i]; GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor); GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor); GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor); GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation); currentTransform = GLKMatrix4TranslateWithVector3(currentTransform, LERPTranslation); currentTransform = GLKMatrix4ScaleWithVector3(currentTransform, LERPScaling); GLKMatrix4 inverseBindTransform = GLKMatrix4MakeWithQuaternion(aSkeleton.joints[i].inverseBindTransform.q); inverseBindTransform = GLKMatrix4TranslateWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.t); inverseBindTransform = GLKMatrix4ScaleWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.s); if (aSkeleton.joints[i].parentIndex == -1) { bindPosePalette[i] = inverseBindTransform; currentPosePalette[i] = currentTransform; } else { bindPosePalette[i] = GLKMatrix4Multiply(inverseBindTransform, bindPosePalette[aSkeleton.joints[i].parentIndex]); currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex], currentTransform); } aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]); } Finally, this is my vertex shader: #version 100 uniform mat4 modelMatrix; uniform mat3 normalMatrix; uniform mat4 projectionMatrix; uniform mat4 skinningPalette[6]; uniform lowp float skinningEnabled; attribute vec4 position; attribute vec3 normal; attribute vec2 tCoordinates; attribute vec4 jointsWeights; attribute vec4 jointsIndices; varying highp vec2 tCoordinatesVarying; varying highp float lIntensity; void main() { tCoordinatesVarying = tCoordinates; vec4 skinnedVertexPosition = vec4(0.); for (int i = 0; i < 4; i++) { skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position; } vec4 skinnedNormal = vec4(0.); for (int i = 0; i < 4; i++) { skinnedNormal += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * vec4(normal, 0.); } vec4 finalPosition = mix(position, skinnedVertexPosition, skinningEnabled); vec4 finalNormal = mix(vec4(normal, 0.), skinnedNormal, 
skinningEnabled); vec3 eyeNormal = normalize(normalMatrix * finalNormal.xyz); vec3 lightPosition = vec3(0., 0., 2.); lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition))); gl_Position = projectionMatrix * modelMatrix * finalPosition; } The result is that the animation displays wrong in terms of orientation. That is, instead of bobbing up and down it bobs in and out (along what I think is the Z axis according to my transform in the export clip). And the rotation angle is counterclockwise instead of clockwise. If I try with a more than one joint, then it's almost as if the second joint rotates in it's own different coordinate space and does not follow 100% its parent's transform. Which I assume it should from my animation subsystem which I assume in turn follows the theory I explained for the case of more than one joint. Any thoughts?

    Read the article

  • Using the MVVM Light Toolkit to make Blendable applications

    - by Dave
    A while ago, I posted a question regarding switching between a Blend-authored GUI and a Visual Studio-authored one. I got it to work okay by adding my Blend project to my VS2008 project and then changing the Startup Application and recompiling. This would result in two applications that had completely different GUIs, yet used the exact same ViewModel and Model code. I was pretty happy with that. Now that I've learned about Laurent Bugnion's MVVM Light Toolkit, I would really like to leverage his efforts to make this process of supporting multiple GUIs for the same backend code possible. The question is, does the toolkit facilitate this, or am I stuck doing it my previous way? I've watched his video from MIX10 and have read some of the articles about it online. However, I've yet to see anything that indicates there is a clean way to let a user dynamically switch GUIs on the fly by loading a different DLL. There are MVVM templates for VS2008 and Blend 3, but am I supposed to create both types of projects for my application and then reference specific files from my VS2008 solution? UPDATE I re-read some information on Laurent's site, and it seems I had forgotten that the whole point of the template was to allow the same solution to be opened in VS2008 and Blend. So anyhow, with this new perspective it looks like the templates are actually intended for a single GUI, most likely designed entirely in Blend (with the convenience of debugging through VS2008), which can then use two different ViewModels -- one for design-time, and one for runtime. So it seems to me that the answer to my question is that I want to use a combination of my previous solution along with the MVVM Light Toolkit. The former will allow me to make multiple, distinct GUIs around my core code, while the latter will make designing fancy GUIs in Blend easier through the use of a design-time ViewModel. Can anyone comment on this?

    Read the article

  • How to make a GRANT persist for a table that's being dropped and re-created?

    - by Eli Courtwright
    I'm on a fairly new project where we're still modifying the design of our Oracle 11g database tables. As such, we drop and re-create our tables fairly often to make sure that our table creation scripts work as expected whenever we make a change. Our database consists of 2 schemas. One schema has some tables with INSERT triggers which cause the data to sometimes be copied into tables in our second schema. This requires us to log into the database with an admin account such as sysdba and GRANT access to the first schema to the necessary tables on the second schema, e.g. GRANT ALL ON schema_two.SomeTable TO schema_one; Our problem is that every time we make a change to our database design and want to drop and re-create our database tables, the access we GRANT-ed to schema_one went away when the table was dropped. Thus, this creates another annoying step wherein we must log in with an admin account to re-GRANT the access every time one of these tables is dropped and re-created. This isn't a huge deal, but I'd love to eliminate as many steps as possible from our development and testing procedures. Is there any way to GRANT access to a table in such a way that the GRANT-ed permissions survive a table being dropped and then re-created? And if this isn't possible, then is there a better way to go about this?
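
    Object grants cannot outlive the object they are granted on, so the usual workaround (a sketch, assuming schema_two owns the tables) is to fold the GRANT into the same re-creation script and run it as the owning schema, which can grant on its own objects without any DBA account:

        -- Run as SCHEMA_TWO, the table owner; no SYSDBA login required.
        DROP TABLE SomeTable;

        CREATE TABLE SomeTable (
            id NUMBER PRIMARY KEY
            -- ... remaining columns ...
        );

        GRANT ALL ON SomeTable TO schema_one;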

    Read the article

  • Scaling Literate Programming?

    - by Tetha
    Greetings. I have been looking at Literate Programming a bit now, and I do like the idea behind it: you basically write a little paper about your code, writing down the design decisions, the code that will probably surround the module, the inner workings of the module, assumptions and conclusions resulting from the design decisions, and potential extensions; all of this can be written down in a nice way using TeX. Granted, the first point: it is documentation. It must be kept up to date, but that should not be too bad, because your change should have a justification and you can write that down. However, how does Literate Programming scale to a larger degree? Overall, Literate Programming is still just text. Very human-readable text, of course, but still text, and thus it is hard to follow for large systems. For example, I reworked large parts of my compiler to use and some magic to chain compile steps together, because some "x.register_follower(y); y.register_follower(z); y.register_follower(a);..." got really unwieldy, and changing that to x y z a made it a bit better, even though this is at its breaking point, too. So, how does Literate Programming scale to larger systems? Does anyone try to do that? My thought would be to use LP to specify components that communicate with each other using event streams and to chain all of these together using a subset of graphviz. This would be a fairly natural extension to LP, as you can extract documentation -- a dataflow diagram -- from the net and also generate code from it really well. What do you think of it? -- Tetha.

    Read the article

  • Best fit curve for trend line

    - by Dave Jarvis
    Problem Constraints Size of the data set, but not the data itself, is known. Data set grows by one data point at a time. Trend line is graphed one data point at a time (using a spline/Bezier curve). Graphs The collage below shows data sets with reasonably accurate trend lines: The graphs are: Upper-left. By hour, with ~24 data points. Upper-right. By day for one year, with ~365 data points. Lower-left. By week for one year, with ~52 data points. Lower-right. By month for one year, with ~12 data points. User Inputs The user can select: the type of time series (hourly, daily, monthly, quarterly, annual); and the start and end dates for the time series. For example, the user could select a daily report for 30 days in June. Trend Weight To calculate the window size (i.e., the number of data points to average when calculating the trend line), the following expression is used: data points / trend weight Where data points is derived from user inputs and trend weight is 6.4. Even though a trend weight of 6.4 produces good fits, it is rather arbitrary, and might not be appropriate for different user inputs. Question How should trend weight be calculated given the constraints of this problem?

    Read the article

  • PHP set timeout for script with system call, set_time_limit not working

    - by tehalive
    I have a command-line PHP script that runs a wget request using each member of an array with foreach. This wget request can sometimes take a long time so I want to be able to set a timeout for killing the script if it goes past 15 seconds for example. I have PHP safemode disabled and tried set_time_limit(15) early in the script, however it continues indefinitely. Update: Thanks to Dor for pointing out this is because set_time_limit() does not respect system() calls. So I was trying to find other ways to kill the script after 15 seconds of execution. However, I'm not sure if it's possible to check the time a script has been running while it's in the middle of a wget request at the same time (a do while loop did not work). Maybe fork a process with a timer and set it to kill the parent after a set amount of time? Thanks for any tips! Update: Below is my relevant code. $url is passed from the command-line and is an array of multiple URLs (sorry for not posting this initially): foreach( $url as $key => $value){ $wget = "wget -r -H -nd -l 999 $value"; system($wget); }
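
    Since set_time_limit() does not count time spent inside system() calls, one workaround (a sketch; --timeout and --tries are wget's own options, nothing PHP-specific, and 15 seconds is the figure from the question) is to let wget enforce the limit itself:

        <?php
        foreach ($url as $key => $value) {
            // wget's own --timeout bounds each network operation; --tries=1 stops endless retries.
            // Prefixing the command with the coreutils "timeout" utility would cap total runtime instead.
            $wget = "wget -r -H -nd -l 999 --timeout=15 --tries=1 " . escapeshellarg($value);
            system($wget);
        }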

    Read the article
