Search Results

Search found 6169 results on 247 pages for 'future proof'.

Page 209 of 247

  • How can I prevent text displacement for some foreign language fonts?

    - by weltraumpirat
    I have a multilingual project (currently 13 languages) which uses many different font variations of "Helvetica Neue", mostly bold, condensed and regular cuts from the LinoType Pro font set (which includes Western European characters), and the same for Cyrillic. We will probably add Chinese and Japanese variations in the future. I have set up the project to use different CSS stylesheets and to load the fonts separately for each version, depending on which language the user selects, so I can have different line heights, kerning and/or font sizes to keep everything looking like the original, even if the fonts look nothing alike.

    All of this works well, except for one problem: for some reason, all Cyrillic letters seem to be displaced. They appear 2-3 pixels below the correct baseline, and actually protrude across the textfield's bottom border, even when the field is set to autosize. When I use textfield.getCharBoundaries(), all values seem to be correct, even though they obviously aren't rendered correctly.

    To make everything look neat, I could of course manually move all problematic textfields up or down according to language and font size, but I was wondering if there was some way to prevent, or at least detect, this kind of displacement in order to handle the adjustments automatically - the Flash Player should have some sort of information on how things are rendered, shouldn't it? Have any of you had similar problems? Or better yet: a solution?


  • RPC command to initiate a software install

    - by ericmayo
    I was recently working with a product from Symantec called Norton EndPoint Protection. It consists of a server console application and a deployment application, and I would like to incorporate their deployment method into a future version of one of my products.

    The deployment application allows you to select computer workstations running Win2K, WinXP, or Win7. The selection of workstations is provided from either AD (Active Directory) or NT Domain (WINS/DNS NetBIOS lookup). From the list, one can click and choose the workstations to deploy the endpoint software to, which is Symantec's virus & spyware protection suite. Then, after selecting which workstations should receive the package, the software copies the setup.exe program to each workstation (presumably over the administrative share \\pcname\c$) and then commands the workstation to execute setup.exe, resulting in the workstation installing the software.

    I really like how their product works, but I'm not sure what they are doing to accomplish all the steps. I've not done any deep investigation into this, such as sniffing the network, etc., and wanted to check here to see if anyone is familiar with what I'm talking about and knows how it's accomplished, or has ideas how it could be accomplished. My thinking is that they are using the admin share to copy the software to the selected workstations and then issuing an RPC call to command the workstation to do the install. What's interesting is that the workstations do this without any of the logged-in users knowing what's going on until the very end, where a reboot is necessary. At that point, the user gets a pop-up asking to reboot now or later, etc. My hunch is that the setup.exe program is popping up this message.

    To the point: I'm looking to find out the mechanism by which one Windows-based machine can tell another to do some action or run some program. My programming language is C/C++. Any thoughts/suggestions appreciated.
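
    For what it's worth, a sketch of one way the "execute remotely" step can be done (my own illustration, not something Symantec confirms they use): copy the installer over the administrative share, then ask WMI on the target machine to start it via Win32_Process.Create. The snippet is C# (System.Management) for brevity; the same Win32_Process.Create call is reachable from C/C++ through the COM WMI interfaces (IWbemServices::ExecMethod). The machine name, paths and credentials are made up.

        // Copy the installer over the admin share, then start it remotely through WMI.
        File.Copy(@"C:\packages\setup.exe", @"\\WORKSTATION01\c$\temp\setup.exe", true);

        var options = new ConnectionOptions { Username = @"DOMAIN\admin", Password = "secret" };
        var scope = new ManagementScope(@"\\WORKSTATION01\root\cimv2", options);
        scope.Connect();

        using (var processClass = new ManagementClass(scope, new ManagementPath("Win32_Process"), null))
        {
            ManagementBaseObject args = processClass.GetMethodParameters("Create");
            args["CommandLine"] = @"C:\temp\setup.exe /quiet";
            ManagementBaseObject result = processClass.InvokeMethod("Create", args, null);
            uint returnCode = (uint)result["ReturnValue"];   // 0 means the process was started
        }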


  • Java extends classes - Share the extended class fields within the super class.

    - by Bastan
    Straight to the point... I have a class:

        public class P_Gen {
            protected String s;
            protected Object oP_Gen;

            public P_Gen(String str) {
                s = str;
                oP_Gen = new MyClass(this);
            }
        }

    Extended class:

        public class P extends P_Gen {
            protected Object oP;

            public P(String str) {
                super(str);
                oP = new aClass(str);
            }
        }

    MyClass:

        public class MyClass {
            protected Object oMC;

            public MyClass(P extendedObject) {
                oMC = oP.getSomething();
            }
        }

    I came to realize that MyClass can only be instantiated with (P_Gen thisObject) as opposed to (P extendedObject). The situation is that I have code-generated a bunch of classes like P_Gen. For each of them I have generated a class P which contains my P-specific custom methods and fields. When I regenerate my code in the future, P will not be overwritten, whereas P_Gen will.

    So here is what happened in my case: I realized that MyClass would benefit from the info stored in P in addition to only P_Gen. Would that be possible? I know it's not "realistic" Java, since another class that extends P_Gen might not have the same fields... but BY DESIGN, P_Gen will not be extended by anything but P, and that's where it kind of makes sense :-) at least in other programming languages ;-) In other languages, it seems like P_Gen.this === P.this; in other words, "this" becomes a combination of P and P_Gen. Is there a way to achieve this, knowing that P_Gen won't be extended by anything other than P?


  • application specific seed data population

    - by user339108
    Env: JBoss, (H2, MySQL, Postgres), JPA, Hibernate 3.3.x

        @Id
        @GeneratedValue(strategy = IDENTITY)
        private Integer key;

    Currently our primary keys are created using the above annotation. We expect to support a large number of users (~1 million), so what key type should be used? Should it be Integer or Long, or should I use the unsigned versions of the above declarations?

    We have a J2EE application which needs to be populated with some seed data on installation. On purchase, the customer creates his own data on top of the application. We just want to make sure that there is enough room to ship, modify or add data for future releases. What would be the best mechanism to support this? We had looked at starting all table identifiers from a certain id (say 1000), but this mandates modifying primary key generation to use table- or sequence-based generators, and we have around 100 tables. We are not sure if this is the right strategy. If we use a signed integer approach for the key, would it make sense to have the seed data start from 0 and go down (i.e. negative numbers), so that all customer-specific data will be at 0 and above (i.e. positive numbers)?


  • Factorial function - design and test.

    - by lukas
    I'm trying to nail down some interview questions, so I started with a simple one: design the factorial function. This function is a leaf (no dependencies - easily testable), so I made it static inside the helper class.

        public static class MathHelper
        {
            public static int Factorial(int n)
            {
                Debug.Assert(n >= 0);
                if (n < 0)
                {
                    throw new ArgumentException("n cannot be lower than 0");
                }

                Debug.Assert(n <= 12);
                if (n > 12)
                {
                    throw new OverflowException("Overflow occurs above 12 factorial");
                }

                // by definition
                if (n == 0)
                {
                    return 1;
                }

                int factorialOfN = 1;
                for (int i = 1; i <= n; ++i)
                {
                    //checked
                    //{
                    factorialOfN *= i;
                    //}
                }
                return factorialOfN;
            }
        }

    Testing:

        [TestMethod]
        [ExpectedException(typeof(OverflowException))]
        public void Overflow()
        {
            int temp = FactorialHelper.MathHelper.Factorial(40);
        }

        [TestMethod]
        public void ZeroTest()
        {
            int factorialOfZero = FactorialHelper.MathHelper.Factorial(0);
            Assert.AreEqual(1, factorialOfZero);
        }

        [TestMethod]
        public void FactorialOf5()
        {
            int factOf5 = FactorialHelper.MathHelper.Factorial(5);
            Assert.AreEqual(120, factOf5);
        }

        [TestMethod]
        [ExpectedException(typeof(ArgumentException))]
        public void NegativeTest()
        {
            int factOfMinus5 = FactorialHelper.MathHelper.Factorial(-5);
        }

    I have a few questions: Is it correct? (I hope so ;) ) Does it throw the right exceptions? Should I use a checked context, or is this trick (n > 12) ok? Is it better to use uint instead of checking for negative values? Future improvements: overloads for long, decimal, BigInteger, or maybe a generic method?

    Thank you
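
    For comparison, here is a small sketch of the checked-context alternative mentioned above (my own illustration, not part of the original question): with a long result and a checked block, the runtime throws OverflowException by itself, so no hand-maintained upper bound like 12 is needed (a long overflows at 21!, i.e. 21 factorial).

        public static long Factorial(long n)
        {
            if (n < 0)
            {
                throw new ArgumentException("n cannot be negative");
            }

            long result = 1;
            for (long i = 2; i <= n; ++i)
            {
                // checked makes the multiplication throw OverflowException
                // instead of silently wrapping around.
                checked
                {
                    result *= i;
                }
            }
            return result;
        }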


  • The "correct" way of using multilingual support

    - by Felipe Athayde
    I just began working with ASP.NET and I'm trying to bring with me some coding standards I find healthy. Among such standards are multilingual support and the use of resources for easily handling future changes. Back when I used to code desktop applications, every text had to be translated, so it was common practice to have language files for every language I wanted to offer to customers. In those files I would map every single text, from button labels to error messages.

    In ASP.NET, with the help of Visual Studio, I can use the IDE to generate such resource files (from Tools - Generate Local Resource), but then I would have to fill my webpages with labels - at least that is what I've learned from articles and tutorials. However, such an approach looks a bit odd and I'm tempted to guess it doesn't smell that good either. Now to the questions:

    1) Should I keep every single text in my website as a label and manage its contents in the resource files? It looks/feels odd, especially when considering a text with several paragraphs.

    2) Whenever I add/remove something, e.g. a button, in an aspx file, I would have to add it to the resource file as well, because generating the resource file again would simply override all my previous changes to it. That doesn't feel like reusable code at all to me.

    Any comments or suggestions on this one? Perhaps I got it all wrong from the tutorials, as it doesn't seem like a standardized matter - especially if it requires recompiling the entire application whenever some change has to be made.
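
    For reference, a minimal sketch of the global-resources approach that is often used for longer blocks of text (the resource class name "Strings", the key "WelcomeText" and the WelcomeLiteral control are placeholders I made up, not something from the question):

        // App_GlobalResources/Strings.resx (plus Strings.ru.resx, Strings.de.resx, ...)
        // is assumed to contain a "WelcomeText" entry.
        protected void Page_Load(object sender, EventArgs e)
        {
            // Code-behind lookup of a culture-specific string.
            WelcomeLiteral.Text = (string)GetGlobalResourceObject("Strings", "WelcomeText");
        }

    The declarative equivalent on the page is an expression such as <%$ Resources:Strings, WelcomeText %>, which does not depend on regenerated local resources and so sidesteps the overwrite problem from question 2.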


  • Develop multiple very similar projects at once

    - by Raveren
    I am developing a semi-complicated site that is available in several countries at once. Much effort has been put in to make the code bases as similar as possible to one another; ultimately only the config file and some representational data will differ between them. Each project has its own SVN repository which maps directly to a live test site. That part is handled by the IDE we use.

    Now I need to create some sort of system to keep all these projects in sync. The best theoretical solution so far is to create a local hook script that would fire on committing and:

    1. Merge the committed files from the project that is being committed into all other projects
    2. Optionally upload them to the live site, replacing previous files

    The first problem is that I don't know how I would do the merging - I guess it would be like applying an SVN patch or something. The second is that if I do not want to upload the changes to the live server, how would I go about syncing the live and local code bases (replace older files?).

    The reason I am posting this question, rather than going through the potentially huge trouble of solving these problems myself, is that I believe this is a pretty common situation, someone will already have a solution, and others may benefit from the answers in the future. Lastly, I'm on Windows 7, develop in PHP and use TortoiseSVN.


  • Please recommend the one SQL book for a developer without a lot of SQL experience.

    - by Hamish Grubijan
    I have too many hobbies outside of my profession, so I am hoping to read just one good book and get a tad better at SQL. My background: I took one boring, theoretical class in databases and was exposed to SQL professionally (in addition to several other languages and technologies) for a year and a half. I've done about 5 years of C#/Java stuff professionally. By "professionally" I mean doing it full-time while someone paid me more than $25/hr for it - not necessarily that I created masterpieces along the way :) I want to become better at SQL (the coding aspect; DBA work is not of particular importance to me right now). I am looking for one book to give me a solid foundation in it. When I needed to learn some C almost from scratch, I used (and loved) this book: http://www.amazon.com/Programming-Language-2nd-Brian-Kernighan/dp/0131103628 I am hoping to find one just like it for SQL. I am not doing web development now or in the near future, and I am looking for something that is hopefully not specific to any one sub-industry. Thanks in advance.


  • How do I organize C# classes that inherit from one another, but also have properties that inherit from one another?

    - by Chris
    I have an application that has a concept of a Venue, a place where events happen. A Venue is owned by a Company and has many VenueParts. So, it looks like this:

        public abstract class Venue
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public virtual Company Company { get; set; }
            public virtual ICollection<VenuePart> VenueParts { get; set; }
        }

    A Venue can be a GolfCourseVenue, which is a Venue that has a Slope and a specific kind of VenuePart called a HoleVenuePart:

        public class GolfCourseVenue : Venue
        {
            public string Slope { get; set; }
            public virtual ICollection<HoleVenuePart> Holes { get; set; }
        }

    In the future, there may also be other kinds of Venues that all inherit from Venue. They might add their own fields, and will always have VenueParts of their own specific type. My declarations above seem wrong, because now I have a GolfCourseVenue with two collections, when really it should just have one. I can't override it, because the type is different, right?

    When I run reports, I would like to refer to the classes generically, where I just spit out Venues and VenueParts. But when I render forms and such, I would like to be specific. I have a lot of relationships like this and am wondering what I am doing wrong. For example, I have an Order that has OrderItems, but also specific kinds of Orders that have specific kinds of OrderItems.
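
    One direction that is often suggested for this shape of problem - purely a sketch of my own, not something from the question - is to pull the element type of the parts collection up into a generic parameter, so each concrete venue exposes a single, correctly typed collection:

        public abstract class VenuePart
        {
            public int Id { get; set; }
        }

        public class HoleVenuePart : VenuePart
        {
            public int Par { get; set; }   // hypothetical property
        }

        public abstract class Venue<TPart> where TPart : VenuePart
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public virtual Company Company { get; set; }
            public virtual ICollection<TPart> Parts { get; set; }
        }

        public class GolfCourseVenue : Venue<HoleVenuePart>
        {
            public string Slope { get; set; }
        }

    The trade-off is that Venue<HoleVenuePart> and Venue<SomeOtherPart> no longer share a common base you can put in one list, so reporting code usually needs a non-generic interface (exposing, say, IEnumerable<VenuePart>) layered on top.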


  • Process data BEFORE a 301 Redirect?

    - by Jesse
    So, I've been working on a PHP link shortener (I know, just what the world needs). Basically, when the page loads, PHP determines where it needs to go and sends a 301 header to redirect the browser, like so:

        header("HTTP/1.1 301 Moved Permanently");
        header("Location: http://newsite.com");

    Now, I'm trying to add some tracking to my redirects and insert some custom analytics data into a MySQL table before the redirect happens. It works perfectly if I don't specify a redirect type and just use:

        header("Location: http://newsite.com");

    But of course, as soon as you add in the 301 header, nothing else gets processed. Actually, on the first request, it sends the data to MySQL, but on any subsequent requests there's no communication with the database. I assume it's a browser caching issue: once it's seen the 301, it decides there's no reason to request the page again in the future. But does anyone know if there's any way to get around this? I'd really like to keep it as a 301 for SEO purposes (I believe if you don't specify it sends a 404 by default?). I thought about using .htaccess to prepend a file to the page that will do the MySQL work, but with the 301, wouldn't that just get ignored as well? Anyway, I'm not sure if there's any solution other than using a different type of redirect, but I'm not ready to give up just yet. So, any suggestions would be much appreciated. Thanks!


  • Dealing with the update location for ClickOnce

    - by Assimilater
    I'm not sure how many people here are experts with Visual Studio, but I'd imagine a handful (not to raise expectations but to appeal to your egos :P). I'm working primarily in Visual Basic for now (though I hope to switch to C# in the near future and maybe a Java or web app). Basically I'm trying to create an update feature that will work similarly to how common programs such as Firefox or iTunes update automatically. There is supposed to be functionality provided for this in what is called ClickOnce. I carry out the following procedure and get the following error when trying to change the update URL of my program to a password-protected FTP location:

    1. Go to project properties
    2. Go to Publish
    3. Click Updates
    4. Click Browse
    5. Click FTP Site
    6. Under Server put: web###.opentransfer.com
    7. Under Port: 21
    8. Under Directory put: CMSOFT
    9. Passive mode is selected (which is what FileZilla tells me the server is accessed with)
    10. Anonymous User is unselected and a username and password are typed in
    11. Push OK
    12. Under Update location it shows: ftp://web###.opentransfer.com/CMSOFT
    13. I push OK

    I then see a message box titled "Microsoft Visual Basic 2010 Express" with an X icon:

        Publish.UpdateUrl: The string must be a fully qualified URL or UNC path, for example "http://www.microsoft.com/myapplication" or "\\server\myapplication".

    I've tried changing the directory to "CMSOFT/PQCM.exe" and the results are the same... hope this was descriptive enough.


  • what's the performance difference between int and varchar for primary keys

    - by user568576
    I need to create a primary key scheme for a system that will need peer-to-peer replication. So I'm planning to combine a unique system ID and a sequential number in some way to come up with unique IDs. I want to make sure I'll never run out of IDs, so I'm thinking about using a varchar field, since I could always add another character if I start running out. But I've read that integers are better optimized for this. So I have some questions...

    1) Are integers really better optimized? And if they are, how much of a performance difference is there between varchars and integers? I'm going to use Firebird for now, but I may switch later, or possibly support multiple databases. So I'm looking for generalizations, if that's possible.

    2) If integers are significantly better optimized, why is that? And is it likely that varchars will catch up in the future, so eventually it won't matter anyway?

    My varchar keys won't have any meaning, except for the unique system ID part. But I may want to obscure that somehow. Also, I plan to efficiently use all the bits of each character. I don't, for example, plan to encode the integer 123 as the character string "123". So I don't think varchars will require more space than integers.
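
    As a concrete illustration of the "system ID plus sequence" idea with an integer key - my own sketch, with arbitrary bit widths, not something from the question - the two parts can be packed into a single 64-bit value, which keeps the key an ordinary bigint while still guaranteeing uniqueness across peers:

        // 8 bits for the originating system, 48 bits for its local sequence:
        // up to 256 peers, about 281 trillion rows each, and the key stays a
        // positive 64-bit integer.
        public static long ComposeKey(byte systemId, long sequence)
        {
            if (sequence < 0 || sequence >= (1L << 48))
                throw new ArgumentOutOfRangeException("sequence");
            return ((long)systemId << 48) | sequence;
        }

        public static byte SystemIdOf(long key)
        {
            return (byte)(key >> 48);
        }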


  • "date_part('epoch', now() at time zone 'UTC')" not the same time as "now() at time zone 'UTC'" in po

    - by sirlark
    I'm writing a web-based front end to a database (PHP/PostgreSQL) in which I need to store various dates/times. The times are always meant to be entered on the client side in local time, and displayed in local time too. For storage purposes, I store all dates/times as integers (UNIX timestamps) normalised to UTC. One particular field has a restriction that the timestamp filled in is not allowed to be in the future, so I tried this with a database constraint:

        CONSTRAINT not_future CHECK (timestamp-300 <= date_part('epoch', now() at time zone 'UTC'))

    The -300 is to give 5 minutes' leeway in case of slightly desynchronised times between browser and server. The problem is, this constraint always fails when submitting the current time. I've done some testing and found the following in the PostgreSQL client:

        SELECT now()
        -- returns the correct local time

        SELECT date_part('epoch', now())
        -- returns a unix timestamp at UTC (tested by feeding the value into the date function
        -- in PHP, correcting for its compensation to my time zone)

        SELECT date_part('epoch', now() at time zone 'UTC')
        -- returns a unix timestamp at two time zone offsets west, e.g. I am at GMT+2, I get a GMT-2 timestamp

    I've obviously figured out that dropping the "at time zone 'UTC'" will solve my problem, but my question is: if 'epoch' is meant to return a unix timestamp, which AFAIK is always meant to be in UTC, why would the 'epoch' of a time already in UTC be corrected? Is this a bug, or am I missing something about the defined/normal behaviour here?


  • Toolbox/framework to construct lightweight public-facing web site

    - by aSteve
    I am aware of full-blown content management systems (CMS) such as SugarCRM and TikiWiki... where content is typically stored in a database... and edited through the same interface as it is published. While I like many of the features, such products are clearly aimed at enterprise-wide use rather than at public-facing sites. What I'd like to establish are potential alternatives that fill the space between a full-blown CMS and a hand-coded bespoke site.

    I like the way that I can add modules to my CMS, allowing me to quickly introduce new functionality, and I'd like an analogous feature in a system for public web content. Modules I know I'd like include moderated comments, a web-form-to-email gateway, and menus/tabs... in the future, perhaps mapping or diaries or RSS integration, etc. Where my requirements differ from a CMS: I don't need (or want) most content to be editable through the main site, and I do want to be able to preview how updates will be presented to the public rather than making live changes. For these purposes, in contrast to those where a typical CMS would be ideal, presentation is of paramount importance and trumps any desire to immediately disseminate information.

    I realise that this is a very high-level question (suggestions of additional tags welcome). I mentioned PHP only because, ideally, I'm looking for an open source solution and a PHP deployment is an easy option. What are my options?


  • CSS: Why is my floated <span> being displayed below an <a>nchor in IE6/7 but not IE8/FF

    - by gsquare567
    I'm getting this weird CSS bug in IE6/7 (but not in IE8 or Firefox): for some reason, my anchor and span, two inline elements which are on the same line, are being displayed on different lines. The span is floating to the right, too! Here's the HTML:

        <div class="sidebartextbg">
            <a href="journey.php" style="width:50%" title="Track past, present and future milestones during your employment">Journey</a>
            <span class="notificationNumber">2</span>
            <!-- JOURNEY COUNT: end -->
        </div>

    And here's the CSS:

        .sidebartextbg {
            background: url("../images/sidebartextbg.gif") repeat-x scroll 0 0 transparent;
            border-bottom: 1px solid #A3A88B;
            font-size: 14px;
            line-height: 18px;
            margin: 0 auto;
            padding: 5px 9px;
            width: 270px;
        }

        .notificationNumber {
            background: url("../images/oval_edges.gif") no-repeat scroll 0 0 transparent;
            color: #FFFFFF;
            float: right;
            padding: 0 7px;
            position: relative;
            text-align: center;
            width: 17px;
        }

    So: why would the floated span be displayed on the line below the anchor? Thanks!


  • How to build a dynamically resizable Flash player

    - by Leon
    Morning stackers! My question today isn't about any particular code, but about how to go about this the correct way from the start. I have a video player built to a static size (max 800x600) which I'll have to re-code every time I need it to be a different size. What I need it to do in the near future is dynamically resize itself and all the elements inside of it based on one width variable that it will receive either from HTML or XML. To me there are two ways to go about this:

    1. Start with the smallest size possible and resize upwards, though right now I'm unsure how the Flash movie will actually expand upwards.
    2. Start with the largest size possible (in this case 800x600) and size everything down.

    Option 1 seems to me to be the better way to go about this (a la YouTube), but option 2 also seems like it could be the easier way? A friend of mine mentioned that I should go with the larger size and have elements resize in each class, then fix to the upper left-hand corner. However, for the player to fit inside certain div columns on sites, blogs, whatever, he said there will have to be an HTML/CSS side to this... meaning that the div containing the resized Flash player will have to cover up the areas of the Flash movie that are not to be shown? Is it possible to put an 800x600 Flash movie into a div that is smaller than 800 pixels wide, and cover it up with another div?

    Anyway, my mission is to be able to have a dynamically sized player like this: Thoughts? Recommendations? Best practices for this before I start?


  • Speed comparison - Template specialization vs. Virtual Function vs. If-Statement

    - by Person
    Just to get it out of the way... "Premature optimization is the root of all evil", "make use of OOP", etc. I understand. I'm just looking for some advice regarding the speed of certain operations that I can store in my grey matter for future reference.

    Say you have an Animation class. An animation can be looped (plays over and over) or not looped (plays once), it may have unique frame times or not, etc. Let's say there are 3 of these "either/or" attributes. Note that any method of the Animation class will at most check for one of these (i.e. this isn't a case of a giant branch of if-else-if). Here are some options:

    1) Give it boolean members for the attributes given above, and use an if statement to check against them when playing the animation to perform the appropriate action. Problem: a conditional is checked every single time the animation is played.

    2) Make a base animation class, and derive other animation classes such as LoopedAnimation and AnimationUniqueFrames, etc. Problem: a vtable lookup upon every call to play the animation, given that you have something like a vector<Animation>. Also, making a separate class for every possible combination seems code-bloaty.

    3) Use template specialization, and specialize those functions that depend on those attributes, like template<bool looped, bool uniqueFrameTimes> class Animation. Problem: you couldn't just have a vector<Animation> for something's animations. Could also be bloaty.

    I'm wondering what kind of speed each of these options offers. I'm particularly interested in the 1st and 2nd options, because the 3rd doesn't allow one to iterate through a general container of Animations. In short, what is faster - a vtable fetch or a conditional?


  • What is a maintainable way of saving a "star rating" in a database?

    - by Montecristo
    I'll use the jQuery plugin for presenting the user with a nice interface. The request is to display 5 stars, up to a total score of 10 (2 points per star). For now I have thought about using 7/10 as the format for that value, but what if at some point in the future I receive a request like:

        We would like to give users more choice; let's increase the total score to 20 (so that each star contributes a maximum of 4 points).

    I'll end up with a table with mixed values in the "star rating" column: some will be like 7/10 while others will be like 14/20. Is it ok to have this difference in the database and deal with it in the logic layer to keep it consistent? Or is another way preferred, so that querying the table will not produce inconsistent results outside the application? Maybe floating point values could help me: is it better to store that value as a number less than or equal to one? Then in each of the two examples the resulting value stored in the database would be 0.7, as a number, not a varchar, which can also be queried outside the application. What do you think?
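
    A tiny sketch of that normalisation idea (my own illustration, in C#; the method names are made up): convert the raw score to a fraction of whatever maximum was in effect when it was captured, store the fraction, and re-project it onto the current display scale when reading. 7/10 and 14/20 both persist as 0.7, so the scale can change later without touching old rows.

        // Convert a raw score (e.g. 7 out of 10, or 14 out of 20) into a
        // scale-independent fraction between 0.0 and 1.0 for storage.
        public static decimal NormalizeRating(int score, int maxScore)
        {
            if (maxScore <= 0) throw new ArgumentOutOfRangeException("maxScore");
            if (score < 0 || score > maxScore) throw new ArgumentOutOfRangeException("score");
            return (decimal)score / maxScore;
        }

        // Re-project the stored fraction onto whatever scale the UI currently uses.
        public static int DenormalizeRating(decimal fraction, int maxScore)
        {
            return (int)Math.Round(fraction * maxScore);
        }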


  • CodeIgniter OAuth 2.0 database setup for users and access_tokens

    - by xref
    Per this question I am using CodeIgniter and OAuth 2 in an attempt to provide SSO for internal users of my webapp, ideally verifying them against their Google Apps account. No registrations or anything, just existing users. Using the CI oauth2 spark I'm getting back from Google an OAuth token similar to below:

        OAuth2_Token_Access Object
        (
            [access_token:protected] => dp83.AHSDj899sDHHD908DHFBDjidkd8989dDHhjjd
            [expires:protected] => 1349816820
            [refresh_token:protected] =>
            [uid:protected] =>
        )

    And using that token I can retrieve some user info from Google:

        [uid] => 3849450385394595
        [nickname] => this_guy
        [name] => This Guy
        [first_name] => This
        [last_name] => Guy
        [email] => [email protected]
        [location] =>
        [image] =>
        [description] =>
        [urls] => Array ( )

    Now to allow the 15 people or so who will be using the webapp currently to log in, do I need to create a users table in the mysql database with their email address as a key? Then compare the email which just came back from the Google OAuth request and see if it exists in my users table? What about the Google access_token, do I store that now along with the email which already existed in the users table?

    Related: How would I go about verifying the user automatically in the future against that access_token so they don't have to go through the whole OAuth approval process with Google again?


  • Can someone recommend a good tutorial on MySQL indexes, specifically when used in an order by clause

    - by Philip Brocoum
    I could try to post and explain the exact query I'm trying to run, but I'm going by the old adage of, "give a man a fish and he'll eat for a day, teach a man to fish and he'll eat for the rest of his life." SQL optimization seems to be very query-specific, and even if you could solve this one particular query for me, I'm going to have to write many more queries in the future, and I'd like to be educated on how indexes work in general. Still, here's a quick description of my current problem. I have a query that joins three tables and runs in 0.2 seconds flat. Awesome. I add an "order by" clause and it runs in 4 minutes and 30 seconds. Sucky. I denormalize one table so there is one fewer join, add indexes everywhere, and now the query runs in... 20 minutes. What the hell? Finally, I don't use a join at all, but rather a subquery with "where id in (...) order by" and now it runs in 1.5 seconds. Pretty decent. What in God's name is going on? I feel like if I actually understood what indexes were doing I could write some really good SQL. Anybody know some good tutorials? Thanks!


  • .NET Application with SQL Server CE Database

    - by blu
    I just started using SQL Server CE 3.5 in my WinForms application (C# in VS 2008 SP1). I've noticed a couple of interesting things I'd like some input on:

    1. Copying of the sdf file to bin

    My sdf file is located inside of an Infrastructure project that houses my repository implementations. When the application is first debugged, the sdf is copied to bin\debug. This is where all future reads/writes operate. At some point when this is deployed, the file will go into a data folder using ClickOnce, but during development where should I be putting this sdf? Is having it in the bin typical, or are there any other recommendations?

    2. Updating the sdf

    It appears that writing to the sdf file does not immediately update the database. I am using LINQ to SQL and am calling SubmitChanges, but on read the values are not returned. However, if I close the application and re-open it, the added value is there. Is there an additional flush step I need to take? What is causing this - file locking, buffering, something else?

    Update

    3. Unit Tests

    I have an MS Test project, and the sdf file is not being copied to the correct output directory. I have the settings:

        Build Action: Content
        Copy to Output Directory: Copy Always

    The message is:

        System.Data.SqlServerCe.SqlCeException: The database file cannot be found. Check the path to the database.

    I appreciate any guidance on these questions, thanks. If there is a tutorial other than what is on MSDN that you know about, that would be great too. Working with CE is proving to be a difficult task and I welcome any help I can find.
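
    On points 1 and 2, a sketch of one common arrangement (my own illustration; the MyAppDataContext and Customer types are placeholders, not from the question): pin the .sdf down with an explicit |DataDirectory| path so every read and write is guaranteed to hit the same copy of the file, rather than whichever copy happens to sit in the output folder, and use a short-lived DataContext per unit of work.

        // Point |DataDirectory| somewhere stable instead of bin\debug.
        AppDomain.CurrentDomain.SetData("DataDirectory",
            Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData), "MyApp"));

        const string connectionString = @"Data Source=|DataDirectory|\MyApp.sdf";

        using (var db = new MyAppDataContext(connectionString))
        {
            db.Customers.InsertOnSubmit(new Customer { Name = "Test" });
            db.SubmitChanges();
        }

        using (var db = new MyAppDataContext(connectionString))
        {
            int count = db.Customers.Count();   // sees the row inserted above
        }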


  • Simple HTML interface to XSD?

    - by Visage
    I'm writing an app that, at its heart, uses a hierarchical tree of nodes in XML. It looks like this:

        <node>
            <name>Node1</name>
            <Attribute1>Something</Attribute1>
            <Attribute2>SomethingElse</Attribute2>
            <child>Node2</child>
            <child>Node4</child>
            <child>Node7</child>
        </node>

    And so on (all child elements must refer to an existing node, though the node in question doesn't have to precede the first reference to it). For a simple structure like this, is there a simple tool to generate an HTML page that will allow a user to enter nodes and dynamically update a server-side XML file? I'm basically writing a tool that will use such a file, but the people whose job it is to create the file aren't especially techno-literate, so creating the XML by hand is a no-no. I could hand-crank one fairly quickly, but if I can get a tool to do it, even better (especially as the format may change in the future)....


  • Keeping user data persistent after validates_presence_of

    - by mathee
    I'm designing a question-and-answer Ruby on Rails application. After a user logs in, you can see a list of questions posed by other users. I have a link next to each of the questions to /answers/new?question_id=someNumber. That links to a page that displays the question (to remind the "answerer") above a standard form for submitting your answer. In order to display the question, I call @question = Question.find_by_id(params[:question_id]) and reference @question in the Haml view file:

        Question details
        %h2 #{h @question.title}
        #{h @question.description}
        %br/
        %br/
        %h1 Your answer
        - form_for(@answer) do |f|
          = f.error_messages
          %p
            = f.label :description, "Enter your response here"
            %br
            = f.text_area :description
          = f.hidden_field "question", :value => @question.id
          %p
            = f.submit 'Answer'

    The problem is that if I have validates_presence_of :description in Answer.rb, then when the page reloads I lose question_id if the user did not input anything into the description field, so I can't re-display the question the user is answering. How should I fix this? Is there a better way of storing the question the user is trying to answer, so that I can display it above the form for entering a new answer (and perhaps in other views in the future)? If you need to see more code, please let me know.


  • Wait between tasks with SingleThreadExecutor

    - by Lord.Quackstar
    I am trying to (simply) make a blocking task queue, where when a task is submitted the method waits until it has finished executing. The hard part, though, is the wait. Here's my 12:30 AM code that I think is overkill:

        public void sendMsg(final BotMessage msg) {
            try {
                Future task;
                synchronized (msgQueue) {
                    task = msgQueue.submit(new Runnable() {
                        public void run() {
                            sendRawLine("PRIVMSG " + msg.channel + " :" + msg.message);
                        }
                    });
                    // Add a separate wait so the next Runnable doesn't get executed yet,
                    // but the one above unblocks
                    msgQueue.submit(new Runnable() {
                        public void run() {
                            try {
                                Thread.sleep(Controller.msgWait);
                            } catch (InterruptedException e) {
                                log.error("Wait to send message interrupted", e);
                            }
                        }
                    });
                }
                // Block until done
                task.get();
            } catch (ExecutionException e) {
                log.error("Couldn't schedule send message to be executed", e);
            } catch (InterruptedException e) {
                log.error("Wait to send message interrupted", e);
            }
        }

    As you can see, there's a lot of extra code there just to make it wait 1.7 seconds between tasks. Is there an easier and cleaner solution out there, or is this it?


  • Run Reporting Service in local mode and generate columns automatically?

    - by grady
    Hi, I have a SQL query which I want to use with MS Reporting Services in my ASP.NET application. So I created a report in local mode (rdlc) and attached it to a ReportViewer. Since my query uses parameters, I created a stored procedure with exactly those parameters. In addition, I have some textboxes which are used for entering the parameters for the query, and a button to call the stored procedure and fill the dataset which is bound to the ReportViewer. This works: I press the button and, according to what I entered, the correct data is shown.

    Now my question: in the future I plan to have multiple reports (selected from a dropdown), and I wonder if I can somehow just call the correct stored procedure and have the report show whichever columns are SELECTed in that procedure.

    Example: I select report1 from the dropdown (the procedure for report 1 is called) and 5 columns are shown in the ReportViewer. I select report2 from the dropdown (the procedure for report 2 is called) and 8 columns are shown. Is that possible somehow? Thanks :-)
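
    For reference, a sketch of the runtime-binding half of this (my own illustration; ReportDropDown, ParamTextBox, GetReportData and the "DataSet1" name are made-up placeholders). The data and the .rdlc can be swapped per dropdown selection, but note that the column layout still comes from each .rdlc file, so fully automatic columns would mean generating the RDLC definition itself at runtime rather than just switching procedures.

        // Assumes a ReportViewer control named ReportViewer1 (Microsoft.Reporting.WebForms)
        // and a helper that runs the chosen stored procedure into a DataTable.
        protected void RunReportButton_Click(object sender, EventArgs e)
        {
            string reportName = ReportDropDown.SelectedValue;              // e.g. "report1"
            DataTable data = GetReportData(reportName, ParamTextBox.Text); // hypothetical helper

            ReportViewer1.ProcessingMode = ProcessingMode.Local;
            ReportViewer1.LocalReport.ReportPath = Server.MapPath("~/Reports/" + reportName + ".rdlc");
            ReportViewer1.LocalReport.DataSources.Clear();
            // "DataSet1" must match the dataset name defined inside that .rdlc.
            ReportViewer1.LocalReport.DataSources.Add(new ReportDataSource("DataSet1", data));
            ReportViewer1.LocalReport.Refresh();
        }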

