Search Results

Search found 26993 results on 1080 pages for 'multiple insert'.

Page 444/1080 | < Previous Page | 440 441 442 443 444 445 446 447 448 449 450 451  | Next Page >

  • Are there any design patterns specifically useful for game development?

    - by Baelnorn
    This question's been bugging me for a long time. I've always wondered how game developers solve certain problems or situations that are quite common in certain genres. For example, how would one implement the quests of a typical role-playing game (e.g. BG or TES)? Or how would you implement weapons with multiple stacking effects in a first-person shooter (e.g. the Shrink-gun or Freezer from DN3D)? How would you implement multiple-choice options with a possibly intricate decision tree leading to several different outcomes (e.g. the mission trees in WC)? Are there any examples or other resources for that? Blogs? Books? Source code?

    Read the article

  • Is it possible to have an out-of-process COM server where a separate O/S process is used for each object?

    - by Tom Williams
    I have a legacy C++ "solution engine" that I have already wrapped as an in-process COM object for use by client applications that only require a single "solution engine". However, I now have a client application that requires multiple "solution engines". Unfortunately, the underlying legacy code has enough global data, singletons and threading horrors that, given available resources, it isn't possible to have multiple instances of it in-process simultaneously. What I am hoping is that some kind soul can tell me of some COM magic where, with the flip of a couple of registry settings, it is possible to have a separate out-of-process COM server (a separate operating system process) for each instance of the COM object requested. Am I in luck?

    Read the article

  • restful way for deleting a bunch of items

    - by keisimone
    In the wiki article on REST it is indicated that if you use DELETE on http://example.com/resources, it means you are deleting the entire collection, and if you use DELETE on http://example.com/resources/7HOU57Y, it means you are deleting that element. I am doing a WEBSITE, note NOT a WEB SERVICE. I have a list that has one checkbox for each item on the list. Once I select multiple items for deletion, I will allow users to press a button called DELETE SELECTION. If the user presses the button, a JS dialog box will pop up asking the user to confirm the deletion; if the user confirms, all the items are deleted. So how should I cater for deleting multiple items in a RESTful way? Note: currently, for DELETE in a webpage, I use a FORM tag with POST as the action but include a _method field with the value DELETE, since this is what others on SO indicated is how to do a RESTful delete for a webpage.

    Read the article

  • Parameter passing Vs Table Valued Parameters Vs XML to SQL 2008 from .Net Application

    - by Harryboy
    As we are working on an ASP.NET project, there are three ways one can update data in the database when multiple rows need to be inserted or updated. Let's assume we need to update employee education details (which could be 1, 3, 5 or 10 records). Methods to update the data:
    1. Pass values as parameters (the traditional approach); if there are 10 records then 10 round trips are required.
    2. Pass the data as XML and write logic inside your stored procedure to get that data from the XML and update the table (only a single round trip required).
    3. Use table-valued parameters (only a single round trip required).
    Note: the data is available as a List, so I need to convert it to XML or some other format if I need to pass it. There are a number of places in the entire application where we need to update data in bulk (or multiple records). I just need your suggestions on: which method will be faster (please mention if there are some other overheads); any manageability or testability concerns with any approach; any other bottleneck or issue with any of the approaches (serialization/deserialization concerns or limits on the size of the data being passed); and any other method you would suggest for the same operations. Thanks
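    For illustration, here is a minimal T-SQL sketch of option 3 (a table-valued parameter). It assumes SQL Server 2008 or later, and every object name in it (EducationDetailType, EmployeeEducation, usp_UpsertEducation) is a hypothetical placeholder rather than anything taken from the question.

      -- A user-defined table type describes the shape of the list being passed in.
      CREATE TYPE dbo.EducationDetailType AS TABLE
      (
          EmployeeId  INT          NOT NULL,
          Degree      VARCHAR(100) NOT NULL,
          YearAwarded SMALLINT     NULL
      );
      GO
      -- The whole list arrives as one READONLY parameter, i.e. a single round trip.
      CREATE PROCEDURE dbo.usp_UpsertEducation
          @Details dbo.EducationDetailType READONLY
      AS
      BEGIN
          SET NOCOUNT ON;
          MERGE dbo.EmployeeEducation AS target
          USING @Details AS source
              ON  target.EmployeeId = source.EmployeeId
              AND target.Degree     = source.Degree
          WHEN MATCHED THEN
              UPDATE SET YearAwarded = source.YearAwarded
          WHEN NOT MATCHED THEN
              INSERT (EmployeeId, Degree, YearAwarded)
              VALUES (source.EmployeeId, source.Degree, source.YearAwarded);
      END;

    On the .NET side the List would be bound to @Details as a single SqlDbType.Structured parameter (for example via a DataTable), which is what keeps the whole batch to one round trip.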

    Read the article

  • Are your personal insecurities screwing up your internal communications?

    - by Lucy Boyes
    I do some internal comms as part of my job. Quite a lot of it involves talking to people about stuff. I’m spending the next couple of weeks talking to lots of people about internal comms itself, because we haven’t done a lot of audience/user feedback gathering, and it turns out that if you talk to people about how they feel and what they think, you get some pretty interesting insights (and an idea of what to do next that isn’t just based on guesswork and generalising from self). Three things keep coming up from talking to people about what we suck at  in terms of internal comms. And, as far as I can tell, they’re all examples where personal insecurity on the part of the person doing the communicating makes the experience much worse for the people on the receiving end. 1. Spending time telling people how you’re going to do something, not what you’re doing and why Imagine you’ve got to give an update to a lot of people who don’t work in your area or department but do have an interest in what you’re doing (either because they want to know because they’re curious or because they need to know because it’s going to affect their work too). You don’t want to look bad at your job. You want to make them think you’ve got it covered – ideally because you do*. And you want to reassure them that there’s lots of exciting work going on in your area to make [insert thing of choice] happen to [insert thing of choice] so that [insert group of people] will be happy. That’s great! You’re doing a good job and you want to tell people about it. This is good comms stuff right here. However, you’re slightly afraid you might secretly be stupid or lazy or incompetent. And you’re exponentially more afraid that the people you’re talking to might think you’re stupid or lazy or incompetent. Or pointless. Or not-adding-value. Or whatever the thing that’s the worst possible thing to be in your company is. So you open by mentioning all the stuff you’re going to do, spending five minutes or so making sure that everyone knows that you’re DOING lots of STUFF. And the you talk for the rest of the time about HOW you’re going to do the stuff, because that way everyone will know that you’ve thought about this really hard and done tons of planning and had lots of great ideas about process and that you’ve got this one down. That’s the stuff you’ve got to say, right? To prove you’re not fundamentally worthless as a human being? Well, maybe. But probably not. See, the people who need to know how you’re going to do the stuff are the people doing the stuff. And those are the people in your area who you’ve (hopefully-please-for-the-love-of-everything-holy) already talked to in depth about how you’re going to do the thing (because else how could they help do it?). They are the only people who need to know the how**. It’s the difference between strategy and tactics. The people outside of your bubble of stuff-doing need to know the strategy – what it is that you’re doing, why, where you’re going with it, etc. The people on the ground with you need the strategy and the tactics, because else they won’t know how to do the stuff. But the outside people don’t really need the tactics at all. Don’t bother with the how unless your audience needs it. They probably don’t. It might make you feel better about yourself, but it’s much more likely that Bob and Jane are thinking about how long this meeting has gone on for already than how personally impressive and definitely-not-an-idiot you are for knowing how you’re going to do some work. 
Feeling marginally better about yourself (but, let’s face it, still insecure as heck) is not worth the cost, which in this case is the alienation of your audience. 2. Talking for too long about stuff This is kinda the same problem as the previous problem, only much less specific, and I’ve more or less covered why it’s bad already. Basic motivation: to make people think you’re not an idiot. What you do: talk for a very long time about what you’re doing so as to make it sound like you know what you’re doing and lots about it. What your audience wants: the shortest meaningful update. Some of this is a kill your darlings problem – the stuff you’re doing that seems really nifty to you seems really nifty to you, and thus you want to share it with everyone to show that you’re a smart person who thinks up nifty things to do. The downside to this is that it’s mostly only interesting to you – if other people don’t need to know, they likely also don’t care. Think about how you feel when someone is talking a lot to you about a lot of stuff that they’re doing which is at best tangentially interesting and/or relevant. You’re probably not thinking that they’re really smart and clearly know what they’re doing (unless they’re talking a lot and being really engaging about it, which is not the same as talking a lot). You’re probably thinking about something totally unrelated to the thing they’re talking about. Or the fact that you’re bored. You might even – and this is the opposite of what they’re hoping to achieve by talking a lot about stuff – be thinking they’re kind of an idiot. There’s another huge advantage to paring down what you’re trying to say to the barest possible points – it clarifies your thinking. The lightning talk format, as well as other formats which limit the time and/or number of slides you have to say a thing, are really good for doing this. It’s incredibly likely that your audience in this case (the people who need to know some things about your thing but not all the things about your thing) will get everything they need to know from five minutes of you talking about it, especially if trying to condense ALL THE THINGS into a five-minute talk has helped you get clear in your own mind what you’re doing, what you’re trying to say about what you’re doing and why you’re doing it. The bonus of this is that by being clear in your thoughts and in what you say, and in not taking up lots of people’s time to tell them stuff they don’t really need to know, you actually come across as much, much smarter than the person who talks for half an hour or more about things that are semi-relevant at best. 3. Waiting until you’ve got every detail sorted before announcing a big change to the people affected by it This is the worst crime on the list. It’s also human nature. Announcing uncertainty – that something important is going to happen (big reorganisation, product getting canned, etc.) but you’re not quite sure what or when or how yet – is scary. There are risks to it. Uncertainty makes people anxious. It might even paralyse them. You can’t run a business while you’re figuring out what to do if you’ve paralysed everyone with fear over what the future might bring. And you’re scared that they might think you’re not the right person to be in charge of [thing] if you don’t even know what you’re doing with it. Best not to say anything until you know exactly what’s going to happen and you can reassure them all, right? Nope. 
The people who are going to be affected by whatever it is that you don’t quite know all the details of yet aren’t stupid***. You wouldn’t have hired them if they were. They know something’s up because you’ve got your guilty face on and you keep pulling people into meeting rooms and looking vaguely worried. Here’s the deal: it’s a lot less stressful for everyone (including you) if you’re up front from the beginning. We took this approach during a recent company-wide reorganisation and got really positive feedback. People would much, much rather be told that something is going to happen but you’re not entirely sure what it is yet than have you wait until it’s all fixed up and then fait accompli the heck out of them. They will tell you this themselves if you ask them. And here’s why: by waiting until you know exactly what’s going on to communicate, you remove any agency that the people that the thing is going to happen to might otherwise have had. I know you’re scared that they might get scared – and that’s natural and kind of admirable – but it’s also patronising and infantilising. Ask someone whether they’d rather work on a project which has an openly uncertain future from the beginning, or one where everything’s great until it gets shut down with no forewarning, and very few people are going to tell you they’d prefer the latter. Uncertainty is humanising. It’s you admitting that you don’t have all the answers, which is great, because no one does. It allows you to be consultative – you can actually ask other people what they think and how they feel and what they’d like to do and what they think you should do, and they’ll thank you for it and feel listened to and respected as people and colleagues. Which is a really good reason to start talking to them about what’s going on as soon as you know something’s going on yourself. All of the above assumes you actually care about talking to the people who work with you and for you, and that you’d like to do the right thing by them. If that’s not the case, you can cheerfully disregard the advice here, but if it is, you might want to think about the ways above – and the inevitable countless other ways – that making internal communication about you and not about your audience could actually be doing the people you’re trying to communicate with a huge disservice. So take a deep breath and talk. For five minutes or so. About the important things. Not the other things. As soon as you possibly can. And you’ll be fine.   *Of course you do. You’re good at your job. Don’t worry. **This might not always be true, but it is most of the time. Other people who need to know the how will either be people who you’ve already identified as needing-to-know and thus part of the same set as the people in you’re area you’ve already discussed this with, or else they’ll ask you. But don’t bring this stuff up unless someone asks for it, because most of the people in the audience really don’t care and you’re wasting their time. ***I mean, they might be. But let’s give them the benefit of the doubt and assume they’re not.

    Read the article

  • Multitenant shared user account?

    - by jpartogi
    Dear all, based on your experience, which is the route to go for a multi-tenant user login?
    1. One user login per account, which means that if one user has access to multiple accounts, there will be redundant records in the database.
    2. One user login for all the accounts she has privileges to, which means one user record has access to multiple accounts if she has privileges to those accounts.
    From your experience, which one is better and why? I was thinking of choosing the latter, but I don't know whether it will cause security issues or reduce flexibility. Thank you for sharing your experience.
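    For reference, a minimal SQL sketch of the second option, where a single user record is linked to many accounts through a join table; all table and column names here are invented for illustration, since the question doesn't describe an actual schema.

      CREATE TABLE users (
          user_id  INT          NOT NULL PRIMARY KEY,
          login    VARCHAR(50)  NOT NULL UNIQUE,
          pw_hash  VARCHAR(100) NOT NULL
      );
      CREATE TABLE accounts (
          account_id INT          NOT NULL PRIMARY KEY,
          name       VARCHAR(100) NOT NULL
      );
      -- One row per privilege grant: the same user_id can appear under many accounts
      -- without duplicating the user record itself.
      CREATE TABLE account_users (
          account_id INT         NOT NULL REFERENCES accounts (account_id),
          user_id    INT         NOT NULL REFERENCES users (user_id),
          role       VARCHAR(20) NOT NULL DEFAULT 'member',
          PRIMARY KEY (account_id, user_id)
      );

    The flip side, and probably the security concern the asker has in mind, is that every query in the application then has to filter by account_id so that one tenant's data never leaks into another tenant's view.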

    Read the article

  • declaring constraint to consider prog logic

    - by shantanuo
    I can open a trip only once but can close it multiple times. I cannot declare Trip_No + Status as the primary key, since there can be multiple entries while closing the trip. Is there any way to ensure that a trip number is opened only once? For example, there should not be a second row with "Open" status for trip no. 3, since it is already there in the following table.
    Trip No | Status
    1       | Open
    1       | Close
    1       | Close
    2       | Open
    2       | Close
    3       | Open
    3       | Close
    3       | Close
    3       | Close
    3       | Close
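    The question doesn't say which database engine is in use, but as a sketch of the kind of declarative rule being asked about: on SQL Server 2008+ a filtered unique index can allow any number of 'Close' rows per trip while permitting at most one 'Open' row. The table and column names below (Trips, TripNo, Status) are placeholders for the asker's actual schema.

      CREATE UNIQUE NONCLUSTERED INDEX UX_Trips_OneOpenPerTrip
          ON dbo.Trips (TripNo)
          WHERE Status = 'Open';   -- uniqueness is enforced only over 'Open' rows

    PostgreSQL expresses the same idea as a partial unique index (CREATE UNIQUE INDEX ... WHERE status = 'Open'); engines without either feature usually fall back to a trigger or to keeping the "open" event in a separate table keyed by trip number.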

    Read the article

  • Using Thrift to mix development languages

    - by christopher-mccann
    I am currently developing an application which will require multiple different development languages. I want to use PHP as the final piece of the puzzle - the physical web page construction. This PHP web app will need to contact multiple web services which could be coded in anything from Java to Erlang to Python. Each of these web services will be implemented with an API. My plan is to use Thrift to allow this mix to work. Is this the correct approach or am I mixing up what the whole point of Thrift is?

    Read the article

  • Select options value and text to JSON format

    - by uzay95
    How can I convert a select element's options to JSON text?
      <select id="mySelect" multiple="multiple">
        <option value="1">First</option>
        <option value="2">Second</option>
        <option value="3">Third</option>
        <option value="4">Fourth</option>
      </select>
    I want to set the options belonging to this select (which is not runat=server) to the … and I want to split the string into an array.

    Read the article

  • Database Migration Scripts: Getting from place A to place B

    - by Phil Factor
    We’ll be looking at a typical database ‘migration’ script which uses an unusual technique to migrate existing ‘de-normalised’ data into a more correct form. So, the book-distribution business that uses the PUBS database has gradually grown organically, and has slipped into ‘de-normalisation’ habits. What’s this? A new column with a list of tags or ‘types’ assigned to books. Because books aren’t really in just one category, someone has ‘cured’ the mismatch between the database and the business requirements. This is fine, but it is now proving difficult for their new website that allows searches by tags. Any request for a history book really has to look in the entire list of associated tags rather than the ‘Type’ field that only keeps the primary tag. We have other problems. The TypeList column has duplicates in there which will be affecting the reporting, and there is the danger of mis-spellings getting in there. The reporting system can’t be persuaded to do reports based on the tags, and the database developers are complaining about the unCoddly things going on in their database. In your version of PUBS, this extra column doesn’t exist, so we’ve added it and put in 10,000 titles using SQL Data Generator.

      /* So how do we refactor this database? Firstly, we create a table of all the tags. */
      IF OBJECT_ID('TagName') IS NULL OR OBJECT_ID('TagTitle') IS NULL
      BEGIN
        CREATE TABLE TagName
          (TagName_ID INT IDENTITY(1,1) PRIMARY KEY,
           Tag VARCHAR(20) NOT NULL UNIQUE)

        /* ...and we insert into it all the tags from the list (remembering to take out any leading spaces) */
        INSERT INTO TagName (Tag)
          SELECT DISTINCT LTRIM(x.y.value('.', 'Varchar(80)')) AS [Tag]
          FROM (SELECT Title_ID,
                       CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>') AS XMLkeywords
                FROM dbo.titles)g
          CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x ( y )

        /* we can then use this table to provide a table that relates tags to articles */
        CREATE TABLE TagTitle
          (TagTitle_ID INT IDENTITY(1, 1),
           [title_id] [dbo].[tid] NOT NULL REFERENCES titles (Title_ID),
           TagName_ID INT NOT NULL REFERENCES TagName (Tagname_ID)
           CONSTRAINT [PK_TagTitle]
             PRIMARY KEY CLUSTERED ([title_id] ASC, TagName_ID)
             ON [PRIMARY])

        CREATE NONCLUSTERED INDEX idxTagName_ID ON TagTitle (TagName_ID)
          INCLUDE (TagTitle_ID,title_id)

        /* ...and it is easy to fill this with the tags for each title ... */
        INSERT INTO TagTitle (Title_ID, TagName_ID)
          SELECT DISTINCT Title_ID, TagName_ID
          FROM (SELECT Title_ID,
                       CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>') AS XMLkeywords
                FROM dbo.titles)g
          CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x ( y )
          INNER JOIN TagName ON TagName.Tag=LTRIM(x.y.value('.', 'Varchar(80)'))
      END

      /* That’s all there was to it. Now we can select all titles that have the military tag, just to try things out */
      SELECT Title FROM titles
        INNER JOIN TagTitle ON titles.title_ID=TagTitle.Title_ID
        INNER JOIN Tagname ON Tagname.TagName_ID=TagTitle.TagName_ID
        WHERE tagname.tag='Military'

      /* and see the top ten most popular tags for titles */
      SELECT Tag, COUNT(*) FROM titles
        INNER JOIN TagTitle ON titles.title_ID=TagTitle.Title_ID
        INNER JOIN Tagname ON Tagname.TagName_ID=TagTitle.TagName_ID
        GROUP BY Tag ORDER BY COUNT(*) DESC

      /* and if you still want your list of tags for each title, then here they are */
      SELECT title_ID, title, STUFF(
        (SELECT ','+tagname.tag FROM titles thisTitle
           INNER JOIN TagTitle ON titles.title_ID=TagTitle.Title_ID
           INNER JOIN Tagname ON Tagname.TagName_ID=TagTitle.TagName_ID
         WHERE ThisTitle.title_id=titles.title_ID
         FOR XML PATH(''), TYPE).value('.', 'varchar(max)')
        ,1,1,'')
      FROM titles
      ORDER BY title_ID

    So we’ve refactored our PUBS database without pain. We’ve even put in a check to prevent it being re-run once the new tables are created. Here is the diagram of the new tag relationship. We’ve done both the DDL to create the tables and their associated components, and the DML to put the data in them. I could have also included the script to remove the de-normalised TypeList column, but I’d do a whole lot of tests first before doing that. Yes, I’ve left out the assertion tests too, which should check the edge cases and make sure the result is what you’d expect. One thing I can’t quite figure out is how to deal with an ordered list using this simple XML-based technique. We can ensure that, if we have to produce a list of tags, we can get the primary ‘type’ to be first in the list, but what if the entire order is significant? Thank goodness it isn’t in this case. If it were, we might have to revisit a string-splitter function that returns the ordinal position of each component in the sequence. You’ll see immediately that we can create a synchronisation script for deployment from a comparison tool such as SQL Compare, to change the schema (DDL). On the other hand, no tool could do the DML to stuff the data into the new table, since there is no way that any tool will be able to work out where the data should go. We used some pretty hairy code to deal with a slightly untypical problem. We would have to do this migration by hand, and it has to go into source control as a batch. If most of your database changes are to be deployed by an automated process, then there must be a way of over-riding this part of the data synchronisation process to do this part of the process: taking the part of the script that fills the tables, checking that the tables have not already been filled, and executing it as part of the transaction. Of course, you might prefer the approach I’ve taken with the script of creating the tables in the same batch as the data conversion process, and then using the presence of the tables to prevent the script from being re-run. The problem with scripting a refactoring change to a database is that it has to work both ways. If we install the new system and then have to roll back the changes, several books may have been added, or had their tags changed, in the meantime. Yes, you have to script any rollback! These have to be mercilessly tested, and put in source control, just in case of the rollback of a deployment after it has been in place for any length of time. I’ve shown you how to do this with the part of the script…

      /* and if you still want your list of tags for each title, then here they are */
      SELECT title_ID, title, STUFF(
        (SELECT ','+tagname.tag FROM titles thisTitle
           INNER JOIN TagTitle ON titles.title_ID=TagTitle.Title_ID
           INNER JOIN Tagname ON Tagname.TagName_ID=TagTitle.TagName_ID
         WHERE ThisTitle.title_id=titles.title_ID
         FOR XML PATH(''), TYPE).value('.', 'varchar(max)')
        ,1,1,'')
      FROM titles
      ORDER BY title_ID

    …which would be turned into an UPDATE … FROM script:

      UPDATE titles SET typelist = ThisTagList
      FROM (SELECT title_ID, title, STUFF(
              (SELECT ','+tagname.tag FROM titles thisTitle
                 INNER JOIN TagTitle ON titles.title_ID=TagTitle.Title_ID
                 INNER JOIN Tagname ON Tagname.TagName_ID=TagTitle.TagName_ID
               WHERE ThisTitle.title_id=titles.title_ID
               ORDER BY CASE WHEN tagname.tag=titles.[type] THEN 1 ELSE 0 END DESC
               FOR XML PATH(''), TYPE).value('.', 'varchar(max)')
              ,1,1,'') AS ThisTagList
            FROM titles)f
      INNER JOIN Titles ON f.title_ID=Titles.title_ID

    You’ll notice that it isn’t quite a round trip because the tags are in a different order, though we’ve managed to make sure that the primary tag is the first one, as originally. So, we’ve improved the database for the poor book distributors using PUBS. It is not a major deal, but you’ve got to be prepared to provide a migration script that will go both forwards and backwards. Ideally, database refactoring scripts should be able to go from any version to any other. Schema synchronization scripts can do this pretty easily, but no data synchronisation scripts can deal with serious refactoring jobs without the developers being able to specify how to deal with cases like this.

    Read the article

  • [C#] How to consume a web service that adheres to the Event-based Asynchronous Pattern?

    - by codemonkie
    I am following the example from http://msdn.microsoft.com/en-us/library/8wy069k1.aspx to consume a web service implemented (by a 3rd party) using the Event-based Asynchronous Pattern. However, my program needs to make multiple calls to DoStuffAsync() and hence will get back as many DoStuffCompleted events. I chose the overload which takes an extra parameter - Object userState - to distinguish them. My first question is: is it valid to cast a GUID to Object as below, where the GUID is used to generate a unique task ID? Object userState = Guid.NewGuid(); Secondly, do I need to spawn off a new thread for each DoStuffAsync() call, since I am calling it multiple times? Also, it would be nice to have some online examples or tutorials on this subject. (I've been googling for it the whole day and didn't get much back.) Many thanks

    Read the article

  • PHP - Loop through a recordset and fire an event every n rows

    - by Luciano
    I'm looking for the right logic to loop through a recordset and fire an event every n rows. Searching on Google I've found some discussion of similar situations, but it seems the solutions don't fit my needs. Let's say I have a recordset of 22 rows. I want to loop through each row and launch a function on the 4th, the 8th, the 12th and so on... Using the modulus operator as shown in this answer, if($i % 4 == 0), I get the event fired every 4 rows, but 22 is not a multiple of 4, so the event fires up to the 20th row and then nothing. Maybe I need a division that also accounts for the rows left over in 'excess'? Since the recordset will be between 50 and 200 rows, I think it's not necessary to run multiple queries of 4 rows each, am I wrong? Thanks in advance!

    Read the article

  • Hardware-specific questions

    - by overflow
    I'm good at programming, yet I feel like I don't know enough about the architecture of the hardware I'm working on. What does the Northbridge on the mainboard do? What does the L2 cache of my processor do? Can Windows XP use multiple processors? Not in terms of concrete multitasking in all programs, but using the capacity of all cores if needed instead of always only one core. How can my processor/mainboard interact with multiple kinds of graphics/sound cards?

    Read the article

  • Does the PHP MVC framework Agavi implement CRUD in a way compliant with REST?

    - by txwikinger
    The Agavi framework uses the PUT request for creating and POST for updating information. Usually in REST this is used the other way around (often with POST adding information and PUT replacing the whole data record). If I understand it correctly, the important issue is that PUT must be idempotent, while POST does not have this requirement. Therefore, I wonder how creating a new record can be idempotent (i.e. multiple requests do not lead to multiple creations of a record), in particular when the ORM usually uses an id as the primary key and the id of a new record would not be known to the client (since it is auto-created in the database), and hence cannot be part of the request. How does Agavi maintain the requirement of idempotence for the PUT request in light of this? Thanks.
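    As background to the idempotency question: the usual way a "create via PUT" stays idempotent, in any framework, is to let the client supply the record's identifier (a natural key or a UUID) rather than relying on a database-generated id. A minimal sketch, with invented table and column names and no claim about how Agavi itself does it:

      -- The client chooses article_uuid before the request, so PUT /articles/{article_uuid}
      -- can be repeated safely: re-sending the same request targets the same row.
      CREATE TABLE articles (
          article_uuid CHAR(36)      NOT NULL PRIMARY KEY,
          title        VARCHAR(200)  NOT NULL,
          body         VARCHAR(8000) NOT NULL
      );

    Handling the PUT as an "insert or update" keyed on article_uuid then leaves exactly one row no matter how many times the request is replayed.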

    Read the article

  • Recommendations on a WPF Docking Library

    - by Brian Stewart
    We are implementing an application that needs dockable windows, similar to Visual Studio 2005/2008, but with multiple "docking sites", unlike VS's single one. Does anyone have a recommendation on a good library for this - either OSS or commercial? I am aware that Infragistics has one, as well as Divelement's SandDock and WPF-Dock from DevComponents, as well as ActiPro's Docking & MDI product. There is also one on CodeProject. Has anyone used any of these libraries? Was the experience good or bad? If you have experience with one of them, does it support multiple "docking sites"?

    Read the article

  • google docs + web app

    - by King
    Hi guys, I am trying to create a web app to share docs with all editor features (just like Google Docs). My main requirements for this app are as follows:
    1. Should have all editor features (can be done using the OpenOffice API, Google Docs API, or Microsoft Office Web Apps API).
    2. Should be shared between multiple users, editable by multiple users, and support other sync features (can be done using the Google Docs API or Microsoft Office Web Apps).
    3. Can save the document created and edited to my own/custom server address. (Which API can support this??? I know OpenOffice can support this.)
    Guys, can you please suggest one API which can be used to do all of the above? Also please point out if I am underestimating any API above regarding any functionality that I think is not supported. Thanks, King

    Read the article

  • GRASP controller: does it really need a UI to exist?

    - by dbones
    I have a domain model which can be in multiple states, and if these states go out of a given range the domain should automatically react. For example, I have a Car which is made of multiple things that have measurements: the Engine - rev counter and temperature; the Fuel Tank - capacity. Is it plausible to have a CarStateController which observes the engine and the tank, and if these states go out of range (i.e. the engine temperature goes above range) turns the engine fan on? There is no UI (you could argue it would show a light on the dashboard, but for this case it does not). Is this a valid use of the GRASP controller pattern? If not, what is this CarStateController called? Or have I completely missed the point, and should this be the State pattern?

    Read the article

  • logging problems for swing applications

    - by mengmenger
    What kind of logging framework or API should be used for Swing applications which are used by multiple users on Unix? Is it possible to log all verbose output/exceptions in one file per day, or even one file per user per day? The same user can open the application in multiple instances. I also have another solution, which is to save the exceptions into a database. But if I miss the exceptions, those will not be saved in the DB. Does anybody have better solutions? Thank you very much!
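    If the database route mentioned above were taken, one row per logged event is the usual shape; a minimal sketch in T-SQL, with every name invented for illustration:

      CREATE TABLE app_log (
          log_id    BIGINT        IDENTITY(1,1) PRIMARY KEY,
          logged_at DATETIME      NOT NULL DEFAULT GETDATE(),
          unix_user VARCHAR(32)   NOT NULL,   -- which Unix user ran the application
          instance  VARCHAR(64)   NULL,       -- distinguishes multiple instances per user
          level     VARCHAR(10)   NOT NULL,   -- e.g. INFO, WARN, ERROR
          message   VARCHAR(4000) NOT NULL
      );

    "One file per user per day" then becomes a query (filter on unix_user and a date range) rather than a file-naming convention, though it does nothing for the asker's worry about events that never reach the database in the first place.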

    Read the article

  • Database storage for high sample rate data in web app

    - by Jim
    I've got multiple sensors feeding data to my web app. Each channel is 5 samples per second, and the data gets uploaded bundled together in 1-minute JSON messages (containing 300 samples). The data will be graphed using flot at multiple zoom levels, from 1 day down to 1 minute. I'm using Amazon SimpleDB and I'm currently storing the data in the 1-minute chunks that I receive it in. This works well for high zoom levels, but for full days there will simply be too many rows to retrieve. The idea I've currently got is that every hour I can crawl through the data, collect together 300 samples for the last hour, and store them in an hour Domain (table if you like). Does this sound like a reasonable solution? How have others implemented the same sort of system?
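    The roll-up being described is easier to see in relational terms, so here is a sketch of the same two-tier idea in SQL; the names are invented, and the actual store in the question is a pair of SimpleDB domains rather than tables.

      -- Raw data as received: one row per channel per minute, holding the 300-sample message.
      CREATE TABLE samples_minute (
          channel_id   INT           NOT NULL,
          minute_start DATETIME      NOT NULL,
          payload      VARCHAR(MAX)  NOT NULL,   -- the uploaded 300-sample JSON chunk
          PRIMARY KEY (channel_id, minute_start)
      );
      -- Hourly roll-up written by the hourly crawl: 300 downsampled points per hour,
      -- so a full day is 24 rows per channel instead of 1,440.
      CREATE TABLE samples_hour (
          channel_id INT           NOT NULL,
          hour_start DATETIME      NOT NULL,
          payload    VARCHAR(MAX)  NOT NULL,     -- 300 downsampled points covering the hour
          PRIMARY KEY (channel_id, hour_start)
      );

    The day-level zoom then reads only the hourly rows and the 1-minute zoom reads the raw rows, which is essentially the scheme the asker proposes.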

    Read the article

  • phing nested if conditions

    - by Matt1776
    Hello - I am having trouble understanding the Phing documentation regarding multiple conditions for a given <if> tag. It implies you cannot have multiple conditions unless you use the <and> tag, but there are no examples of how to use it. Consequently I nested two <if> tags; however, I feel silly doing this when I know there is a better way. Does anyone know how I can use the <and> tag to accomplish the following:
      <if><equals arg1="${deployment.host.type}" arg2="unrestricted" /><then>
        <if><equals arg1="${db.adapter}" arg2="PDO_MYSQL"/><then>
          <!-- Code Here -->
        </then></if>
      </then></if>

    Read the article

  • C++ threaded class design from non-threaded class

    - by macs
    I'm working on a library doing audio encoding/decoding. The encoder shall be able to use multiple cores (i.e. multiple threads, using the Boost library) if available. What I have right now is a class that performs all encoding-relevant operations. The next step I want to take is to make that class threaded, so I'm wondering how to do this. I thought about writing a thread class, creating n threads for n cores and then calling the encoder with the appropriate arguments. But maybe this is overkill and there is no need for another class, so I'm going to make use of the "user interface" for thread creation. I hope there are some suggestions.

    Read the article

  • Font choices in international scenarios: multilingual vs Unicode

    - by TravisO
    I have a website that will eventually display multiple languages. I notice the common fonts used in web CSS (e.g. Arial, Verdana, Times New Roman, Tahoma) and even the newer Vista/Office 2007/VS2008 fonts (Calibri, Cambria, Candara, Corbel, etc.) are significantly larger (~350 KB) than your average (US-only?) TTF font (~50 KB), so these fonts contain most or all of the major character sets that common languages (Spanish, French, German, etc.) use. My question is: would somebody confirm that the fonts listed above are acceptable for international use of the major (let's say top 8) spoken languages? If so, then I'm guessing the only purpose of Unicode fonts such as "Arial Unicode" (a massive 22 MB) is dealing with extremely niche dialects, eastern glyphs (Chinese, Japanese) and dead languages? I'm just looking for some confirmation from developers whose desktop apps/web apps render multiple languages and who have confirmed it visually. I'm already in the 99%-sure bin, but you know what they say about assumption.

    Read the article

  • Saving a Join Model

    - by Thorpe Obazee
    I've been reading the cookbook for a while now and still don't get how I'm supposed to do this. My original problem was this: a related Model isn't being validated. From RabidFire's comment:
    "If you want to count the number of Category models that a new Post is associated with (on save), then you need to do this in the beforeSave function as I've mentioned. As you've currently set up your models, you don't need to use the multiple rule anywhere. If you really, really want to validate against a list of Category IDs for some reason, then create a join model, and validate category_id with the multiple rule there."
    Now I have these models and they are validating. The problem now is that data isn't being saved in the join table:

      class Post extends AppModel {
          var $name = 'Post';
          var $hasMany = array(
              'CategoryPost' => array(
                  'className' => 'CategoryPost'
              )
          );
          var $belongsTo = array(
              'Page' => array(
                  'className' => 'Page'
              )
          );
      }

      class Category extends AppModel {
          var $name = 'Category';
          var $hasMany = array(
              'CategoryPost' => array(
                  'className' => 'CategoryPost'
              )
          );
      }

      class CategoryPost extends AppModel {
          var $name = 'CategoryPost';
          var $validate = array(
              'category_id' => array(
                  'rule' => array('multiple', array('in' => array(1, 2, 3, 4))),
                  'required' => FALSE,
                  'message' => 'Please select one, two or three options'
              )
          );
          var $belongsTo = array(
              'Post' => array(
                  'className' => 'Post'
              ),
              'Category' => array(
                  'className' => 'Category'
              )
          );
      }

    This is the new form:

      <div id="content-wrap">
      <div id="main">
          <h2>Add Post</h2>
          <?php echo $this->Session->flash();?>
          <div>
          <?php
              echo $this->Form->create('Post');
              echo $this->Form->input('Post.title');
              echo $this->Form->input('CategoryPost.category_id', array('multiple' => 'checkbox'));
              echo $this->Form->input('Post.body', array('rows' => '3'));
              echo $this->Form->input('Page.meta_keywords');
              echo $this->Form->input('Page.meta_description');
              echo $this->Form->end('Save Post');
          ?>
          </div>
      <!-- main ends -->
      </div>

    The data I am producing from the form is as follows:

      Array
      (
          [Post] => Array
              (
                  [title] => 1234
                  [body] => 1234
              )
          [CategoryPost] => Array
              (
                  [category_id] => Array
                      (
                          [0] => 1
                          [1] => 2
                      )
              )
          [Page] => Array
              (
                  [meta_keywords] => 1234
                  [meta_description] => 1234
                  [title] => 1234
                  [layout] => index
              )
      )

    UPDATE: controller action

      //Controller action
      function admin_add() {
          // pr(Debugger::trace());
          $this->set('categories', $this->Post->CategoryPost->Category->find('list'));
          if ( ! empty($this->data)) {
              $this->data['Page']['title'] = $this->data['Post']['title'];
              $this->data['Page']['layout'] = 'index';
              debug($this->data);
              if ($this->Post->saveAll($this->data)) {
                  $this->Session->setFlash('Your post has been saved', 'flash_good');
                  $this->redirect($this->here);
              }
          }
      }

    UPDATE #2: Should I just do this manually? The problem is that the join table doesn't have anything saved in it. Is there something I'm missing?
    UPDATE #3: RabidFire gave me a solution. I had already done this before and am quite surprised as to why it didn't work; hence my asking here. Here is why I think there is something wrong (I just don't know where).
    Post beforeSave:

      function beforeSave() {
          if (empty($this->id)) {
              $this->data[$this->name]['uri'] = $this->getUniqueUrl($this->data[$this->name]['title']);
          }
          if (isset($this->data['CategoryPost']['category_id']) && is_array($this->data['CategoryPost']['category_id'])) {
              echo 'test';
              $categoryPosts = array();
              foreach ($this->data['CategoryPost']['category_id'] as $categoryId) {
                  $categoryPost = array(
                      'category_id' => $categoryId
                  );
                  array_push($categoryPosts, $categoryPost);
              }
              $this->data['CategoryPost'] = $categoryPosts;
          }
          debug($this->data); // Gives RabidFire's correct array for saving.
          return true;
      }

    My Post action:

      function admin_add() {
          // pr(Debugger::trace());
          $this->set('categories', $this->Post->CategoryPost->Category->find('list'));
          if ( ! empty($this->data)) {
              $this->data['Page']['title'] = $this->data['Post']['title'];
              $this->data['Page']['layout'] = 'index';
              debug($this->data); // First debug is giving the correct array as above.
              if ($this->Post->saveAll($this->data)) {
                  debug($this->data); // STILL gives the above array, which shouldn't be the case because of the beforeSave in the Post model
                  // $this->Session->setFlash('Your post has been saved', 'flash_good');
                  // $this->redirect($this->here);
              }
          }
      }

    Read the article
