Search Results

Search found 18677 results on 748 pages for 'current'.


  • Communication between lexer and parser

    - by FredOverflow
    Every time I write a simple lexer and parser, I stumble upon the same question: how should the lexer and the parser communicate? I see four different approaches:

    1. The lexer eagerly converts the entire input string into a vector of tokens. Once this is done, the vector is fed to the parser, which converts it into a tree. This is by far the simplest solution to implement, but since all tokens are stored in memory, it wastes a lot of space.

    2. Each time the lexer finds a token, it invokes a function on the parser, passing the current token. In my experience, this only works if the parser can naturally be implemented as a state machine, like LALR parsers. By contrast, I don't think it would work at all for recursive descent parsers.

    3. Each time the parser needs a token, it asks the lexer for the next one. This is very easy to implement in C# thanks to the yield keyword, but quite hard in C++, which doesn't have it.

    4. The lexer and parser communicate through an asynchronous queue. This is commonly known as the "producer/consumer" pattern, and it should simplify the communication between the lexer and the parser a lot. Does it also outperform the other solutions on multicore machines? Or is lexing too trivial?

    Is my analysis sound? Are there other approaches I haven't thought of? What is used in real-world compilers? It would be really cool if compiler writers like Eric Lippert could shed some light on this issue.
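
    A minimal sketch of approach 3 (the parser pulls tokens) in C++, assuming hypothetical Token/Lexer names and a toy token set - no yield is needed because the lexer simply remembers its position between calls to next():

    ```cpp
    #include <cctype>
    #include <cstddef>
    #include <string>
    #include <utility>

    struct Token {
        enum Kind { Number, Identifier, End } kind;
        std::string text;
    };

    class Lexer {
    public:
        explicit Lexer(std::string input) : input_(std::move(input)), pos_(0) {}

        // The parser calls next() whenever it wants another token.
        Token next() {
            while (pos_ < input_.size() && std::isspace(static_cast<unsigned char>(input_[pos_])))
                ++pos_;
            if (pos_ >= input_.size())
                return Token{Token::End, ""};
            std::size_t start = pos_;
            if (std::isdigit(static_cast<unsigned char>(input_[pos_]))) {
                while (pos_ < input_.size() && std::isdigit(static_cast<unsigned char>(input_[pos_])))
                    ++pos_;
                return Token{Token::Number, input_.substr(start, pos_ - start)};
            }
            while (pos_ < input_.size() && !std::isspace(static_cast<unsigned char>(input_[pos_])))
                ++pos_;
            return Token{Token::Identifier, input_.substr(start, pos_ - start)};
        }

    private:
        std::string input_;
        std::size_t pos_;
    };

    // A recursive descent parser then calls lexer.next() whenever it needs a token.
    ```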

    Read the article

  • Enum exceeding the 65535 bytes limit of static initializer... what's best to do?

    - by Daniel Bleisteiner
    I've started a rather large Enum of so-called Descriptors that I want to use as a reference list in my model. But now I've run into a compiler/VM limit for the first time, and so I'm looking for the best way to handle this. Here is my error: "The code for the static initializer is exceeding the 65535 bytes limit". It is clear where this comes from - my Enum simply has far too many elements. But I need those elements - there is no way to reduce the set. Initially I planned to use a single Enum because I want to make sure that all elements within the Enum are unique. It is used in a Hibernate persistence context where the reference to the Enum is stored as a String value in the database, so it must be unique! The content of my Enum can be divided into several groups of elements that belong together. But splitting the Enum would remove the uniqueness guarantee I get at compile time. Or can this be achieved with multiple Enums in some way? My only current idea is to define some interface called Descriptor and code several Enums implementing it. That way I hope to be able to use the Hibernate Enum mapping as if it were a single Enum. But I'm not even sure this will work, and I lose the uniqueness guarantee. Any ideas how to handle this case?
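
    A minimal sketch of the interface idea (the enum names here are hypothetical). The compile-time uniqueness guarantee is indeed lost, but a start-up check can restore it at run time:

    ```java
    import java.util.HashSet;
    import java.util.Set;

    interface Descriptor {
        String name(); // every enum constant provides this via java.lang.Enum
    }

    enum ColorDescriptor implements Descriptor { RED, GREEN, BLUE }

    enum ShapeDescriptor implements Descriptor { CIRCLE, SQUARE }

    final class Descriptors {

        // Call once at start-up; throws if two groups reuse the same constant name.
        static void assertUnique(Descriptor[]... groups) {
            Set<String> seen = new HashSet<String>();
            for (Descriptor[] group : groups) {
                for (Descriptor d : group) {
                    if (!seen.add(d.name())) {
                        throw new IllegalStateException("Duplicate descriptor: " + d.name());
                    }
                }
            }
        }

        public static void main(String[] args) {
            assertUnique(ColorDescriptor.values(), ShapeDescriptor.values());
            System.out.println("All descriptor names are unique.");
        }
    }
    ```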

    Read the article

  • Designing small comparable objects

    - by Thomas Ahle
    Intro: Consider you have a list of key/value pairs: (0,a) (1,b) (2,c). You have a function that inserts a new value between two existing pairs, and you need to give it a key that keeps the order: (0,a) (0.5,z) (1,b) (2,c). Here the new key was chosen as the average of the keys of the bounding pairs. The problem is that your list may see millions of inserts. If these inserts all land close to each other, you may end up with keys such as 2^(-1000000), which are not easily storable in any standard or special-purpose number class. The problem: How can you design a system for generating keys that: gives the correct result (larger/smaller than) when compared to all the rest of the keys, and takes up only O(log n) memory (where n is the number of items in the list)? My tries: First I tried different number classes, like fractions and even polynomials, but I could always find examples where the key size would grow linearly with the number of inserts. Then I thought about saving pointers to a number of other keys, and saving the less-than/greater-than relationship, but that would always require at least O(sqrt(n)) memory and time for comparison. Extra info: Ideally the algorithm shouldn't break when pairs are deleted from the list.

    Read the article

  • Enumeration trouble: redeclared as different kind of symbol

    - by Matt
    Hello all. I am writing a program that is supposed to help me learn about enumeration data types in C++. The current trouble is that the compiler doesn't like my enum usage when trying to use the new data type as I would other data types. I am getting the error "redeclared as different kind of symbol" when compiling my trangleShape function. Take a look at the relevant code. Any insight is appreciated! Thanks! (All functions are their own .cpp files.) header file #ifndef HEADER_H_INCLUDED #define HEADER_H_INCLUDED #include <iostream> #include <iomanip> using namespace std; enum triangleType {noTriangle, scalene, isoceles, equilateral}; //prototypes void extern input(float&, float&, float&); triangleType extern triangleShape(float, float, float); /*void extern output (float, float, float);*/ void extern myLabel(const char *, const char *); #endif // HEADER_H_INCLUDED main function //8.1 main // this progam... #include "header.h" int main() { float sideLength1, sideLength2, sideLength3; char response; do //main loop { input (sideLength1, sideLength2, sideLength3); triangleShape (sideLength1, sideLength2, sideLength3); //output (sideLength1, sideLength2, sideLength3); cout << "\nAny more triangles to analyze? (y,n) "; cin >> response; } while (response == 'Y' || response == 'y'); myLabel ("8.1", "2/11/2011"); return 0; } triangleShape shape # include "header.h" triangleType triangleShape(sideLenght1, sideLength2, sideLength3) { triangleType triangle; return triangle; }
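
    For what it's worth, one likely culprit is the definition in the triangleShape .cpp file: its parameter list has names but no types (C++ requires both), and 'sideLenght1' is misspelled. A hedged sketch of a definition matching the prototype in header.h, with the classification logic filled in purely as an illustration:

    ```cpp
    // triangleShape.cpp - a sketch only; the classification rules are illustrative.
    #include "header.h"

    triangleType triangleShape(float side1, float side2, float side3)
    {
        // Reject lengths that cannot form a triangle at all.
        if (side1 <= 0 || side2 <= 0 || side3 <= 0 ||
            side1 + side2 <= side3 || side1 + side3 <= side2 || side2 + side3 <= side1)
            return noTriangle;

        if (side1 == side2 && side2 == side3)
            return equilateral;
        if (side1 == side2 || side2 == side3 || side1 == side3)
            return isoceles;
        return scalene;
    }
    ```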

    Read the article

  • MySQL "identify storage engine" statement

    - by sammysmall
    This IS NOT a homework question! While building my current student database project I realized that I may want to identify comprehensive information about a database design in the future. More so if I am fortunate enough to get a job in this field and were handed a database project: how could I break down certain elements for identification... In all of my previous designs I have been using MySQL Community Server (GPL) 5.1.42. I thought (duh) that I was using MyISAM, based on most of my textbook instruction and the MySQL 5.0 Reference Manual (13.1 The MyISAM Storage Engine), but from the use of "SHOW ENGINES" at the console I determined that this was in fact incorrect for this version... No problem - I figured out why they have "versions", the need to pay attention to what version is being used, and the need for a means to determine what I am about to mess up "if" I do not pay attention to detail... Q1. Specifically, what statement will identify the version used for someone else's initial database creation? (Since I created my own databases I know what version I used.) Q2. Specifically, what statement will identify the storage engine that the developer used when creating the database? (I specified a particular database in my collection, then tried SHOW ENGINE, which did not work, then tried to just get the metadata from one table in that database: mysql> SELECT duck_cust, table_type, engine FROM INFORMATION_SCHEMA.tables WHERE table_schema = 'tp' ORDER BY table_type ASC, table_name DESC; As this was not really what I wanted (and did not work), I am looking for some direction from the pros... Q3. (If you really have the inclination to continue helping) If I were to access a database from an earlier/later "version", are there backward/forward compatibility issues for maintaining/updating data between versions? Please and thank you in advance for your time and efforts! sammysmall
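
    For Q1 and Q2, a sketch of the statements I would reach for ('tp' and 'duck_cust' are the names from the question; note the server only reports the version it is currently running, not the version the database was originally created with):

    ```sql
    -- Q1: version of the server you are connected to
    SELECT VERSION();

    -- Q2: storage engine of every table in the 'tp' database
    SELECT TABLE_NAME, ENGINE
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_SCHEMA = 'tp';

    -- Q2: per-table detail, including the Engine column
    SHOW TABLE STATUS FROM tp LIKE 'duck_cust';
    ```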

    Read the article

  • Issue with Callback method and maintaining CultureInfo and ASP.Net HttpRuntime

    - by Little Larry Sellers
    Hi All, Here is my issue. I am working on an E-commerce solution that is deployed to multiple European countries. We persist all exceptions within the application to SQL Server, and I have found that there are records in the DB that have a DateTime in the future! We define the culture in the web.config, for example pt-PT, and the format expected is DD-MM-YYYY. After debugging I found that the issue with these 'future' records in the DB is caused by the Callback methods we use. For example, in our Caching architecture we use Callbacks, as such - CacheItemRemovedCallback ReloadCallBack = new CacheItemRemovedCallback(OnRefreshRequest); When I check the current thread's CultureInfo on these Callbacks, it is en-US instead of pt-PT, and the HttpContext is also null. If an exception occurs in the Callback, our exception manager reports it as MM-DD-YYYY and thus it is persisted to SQL Server incorrectly. Unfortunately, in the exception manager code, we use DateTime.Now, which is fine if it is not a callback. I can't change this code to be culture-specific due to it being shared across other verticals. So, why don't callbacks into ASP.Net maintain context? Is there any way to maintain it on this callback thread? What are the best practices here? Thanks.
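
    As a hedged sketch (not necessarily the only fix): cache-removal callbacks run on ThreadPool threads with no HttpContext, so the <globalization> culture from web.config is never applied to them; one option is to re-apply the site culture at the top of the callback before anything formats a date:

    ```csharp
    using System.Globalization;
    using System.Threading;
    using System.Web.Caching;

    public class CacheReloader
    {
        // Assumed to match the <globalization culture="pt-PT" /> setting in web.config.
        private static readonly CultureInfo SiteCulture = new CultureInfo("pt-PT");

        public void OnRefreshRequest(string key, object value, CacheItemRemovedReason reason)
        {
            // Callback threads default to the server's culture (en-US here), so set it explicitly.
            Thread.CurrentThread.CurrentCulture = SiteCulture;
            Thread.CurrentThread.CurrentUICulture = SiteCulture;

            // ... reload the cache entry; any DateTime formatting done from this point on
            // uses dd-MM-yyyy as expected.
        }
    }
    ```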

    Read the article

  • Producing a static HTML site from XML content

    - by Skilldrick
    I have a long document in XML from which I need to produce static HTML pages (for distribution via CD). I know (to varying degrees) JavaScript, PHP and Python. The current options I've considered are listed here: I'm not ruling out JavaScript, so one option would be to use ajax to dynamically load the XML content into HTML pages. Learn some basic XSLT and produce HTML to the correct spec this way. Produce the site with PHP (for example) and then generate a static site. Write a script (in Python for example) to convert the XML into HTML. This is similar to the XSLT option but without having to learn XSLT. Useful information: The XML will likely change at some point, so I'd like to be able to easily regenerate the site. I'll have to produce some kind of menu for jumping around the document (so I'll need to produce some kind of index of the content). I'd like to know if anyone has any better ideas that I haven't thought of. If not, I'd like you to tell me which of my options seems the most sensible. I think I know what I'm going to do, but I'd like a second opinion. Thanks.
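
    A minimal sketch of option 4 in Python (element names like 'chapter' and 'title' are placeholders for the real document structure); re-running the script regenerates the whole site, including a crude index page:

    ```python
    import xml.etree.ElementTree as ET
    from pathlib import Path

    def build_site(xml_path: str, out_dir: str) -> None:
        root = ET.parse(xml_path).getroot()
        out = Path(out_dir)
        out.mkdir(parents=True, exist_ok=True)

        index_links = []
        for i, chapter in enumerate(root.findall("chapter"), start=1):
            title = chapter.findtext("title", default="Chapter %d" % i)
            body_el = chapter.find("body")
            body = "" if body_el is None else "".join(
                ET.tostring(child, encoding="unicode") for child in body_el)
            page = ("<html><head><title>%s</title></head>"
                    "<body><h1>%s</h1>%s</body></html>" % (title, title, body))
            (out / ("chapter%d.html" % i)).write_text(page, encoding="utf-8")
            index_links.append('<li><a href="chapter%d.html">%s</a></li>' % (i, title))

        # Simple menu/index page linking to every generated chapter.
        index = "<html><body><ul>%s</ul></body></html>" % "".join(index_links)
        (out / "index.html").write_text(index, encoding="utf-8")

    if __name__ == "__main__":
        build_site("content.xml", "site")
    ```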

    Read the article

  • Easiest way to remove Keys from a 2D Array?

    - by dbemerlin
    Hi, I have an Array that looks like this: array( 0 => array( 'key1' => 'a', 'key2' => 'b', 'key3' => 'c' ), 1 => array( 'key1' => 'c', 'key2' => 'b', 'key3' => 'a' ), ... ) I need a function to get an array containing just a (variable) number of keys, i.e. reduce_array(array('key1', 'key3')); should return: array( 0 => array( 'key1' => 'a', 'key3' => 'c' ), 1 => array( 'key1' => 'c', 'key3' => 'a' ), ... ) What is the easiest way to do this? If possible without any additional helper function like array_filter or array_map, as my coworkers already complain about me using too many functions. The source array will always have the given keys, so there's no need to check for existence. Bonus points if the values are unique (the keys will always be related to each other, meaning that if key1 has value a then the other key(s) will always have value b). My current solution works but is quite clumsy (even the name is horrible, but I can't find a better one): function get_unique_values_from_array_by_keys(array $array, array $keys) { $result = array(); $found = array(); if (count($keys) > 0) { foreach ($array as $item) { if (in_array($item[$keys[0]], $found)) continue; array_push($found, $item[$keys[0]]); $result_item = array(); foreach ($keys as $key) { $result_item[$key] = $item[$key]; } array_push($result, $result_item); } } return $result; } Addition: PHP Version is 5.1.6.
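
    A hedged sketch that leans on array_intersect_key() (available since PHP 5.1.0, so it fits 5.1.6); the $seen check keeps the same uniqueness rule as the original, keyed on the first requested column:

    ```php
    <?php
    function reduce_array(array $rows, array $keys) {
        if (count($keys) == 0) {
            return array();
        }
        $wanted = array_flip($keys);   // 'key1' => 0, 'key3' => 1, ...
        $result = array();
        $seen   = array();
        foreach ($rows as $row) {
            $value = $row[$keys[0]];
            if (isset($seen[$value])) {
                continue;              // duplicate row (per the "related keys" property)
            }
            $seen[$value] = true;
            $result[] = array_intersect_key($row, $wanted);
        }
        return $result;
    }

    // $reduced = reduce_array($data, array('key1', 'key3'));
    ```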

    Read the article

  • Using child visitor in C#

    - by Thomas Matthews
    I am setting up a testing component and trying to keep it generic. I want to use a generic Visitor class, but not sure about using descendant classes. Example: public interface Interface_Test_Case { void execute(); void accept(Interface_Test_Visitor v); } public interface Interface_Test_Visitor { void visit(Interface_Test_Case tc); } public interface Interface_Read_Test_Case : Interface_Test_Case { uint read_value(); } public class USB_Read_Test : Interface_Read_Test_Case { void execute() { Console.WriteLine("Executing USB Read Test Case."); } void accept(Interface_Test_Visitor v) { Console.WriteLine("Accepting visitor."); } uint read_value() { Console.WriteLine("Reading value from USB"); return 0; } } public class USB_Read_Visitor : Interface_Test_Visitor { void visit(Interface_Test_Case tc) { Console.WriteLine("Not supported Test Case."); } void visit(Interface_Read_Test_Case rtc) { Console.WriteLine("Not supported Read Test Case."); } void visit(USB_Read_Test urt) { Console.WriteLine("Yay, visiting USB Read Test case."); } } // Code fragment USB_Read_Test test_case; USB_Read_Visitor visitor; test_case.accept(visitor); What are the rules the C# compiler uses to determine which of the methods in USB_Read_Visitor will be executed by the code fragment? I'm trying to factor out dependencies of my testing component. Unfortunately, my current Visitor class contains visit methods for classes not related to the testing component. Am I trying to achieve the impossible?
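
    For reference: as written, accept() never calls visit at all, and once it does, C# resolves the overload at compile time against the static (declared) type of the argument, so a parameter typed Interface_Test_Visitor always selects visit(Interface_Test_Case). The classic workaround is double dispatch, where accept() passes this (whose static type is the concrete class) to a visitor interface that declares one overload per concrete type. A hedged sketch with renamed types:

    ```csharp
    using System;

    public interface ITestCase
    {
        void Execute();
        void Accept(ITestVisitor visitor);
    }

    public interface ITestVisitor
    {
        // One overload per concrete test-case type the visitor supports.
        void Visit(UsbReadTest testCase);
    }

    public class UsbReadTest : ITestCase
    {
        public void Execute()
        {
            Console.WriteLine("Executing USB read test case.");
        }

        public void Accept(ITestVisitor visitor)
        {
            visitor.Visit(this); // static type of 'this' is UsbReadTest, so Visit(UsbReadTest) is chosen
        }

        public uint ReadValue()
        {
            Console.WriteLine("Reading value from USB.");
            return 0;
        }
    }

    public class UsbReadVisitor : ITestVisitor
    {
        public void Visit(UsbReadTest testCase)
        {
            Console.WriteLine("Visiting the USB read test case.");
        }
    }

    public static class Program
    {
        public static void Main()
        {
            ITestCase testCase = new UsbReadTest();
            testCase.Accept(new UsbReadVisitor());
        }
    }
    ```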

    Read the article

  • Using shared_ptr to implement RCU (read-copy-update)?

    - by yongsun
    I'm very interested in user-space RCU (read-copy-update) and am trying to simulate one via tr1::shared_ptr. Here is the code; as I'm really a newbie in concurrent programming, would some experts help me review it? The basic idea is that a reader calls get_reading_copy() to gain a pointer to the current protected data (let's say it's generation one, or G1). A writer calls get_updating_copy() to gain a copy of G1 (let's say it's G2), and only one writer is allowed to enter the critical section. After the updating is done, the writer calls update() to do a swap and make m_data_ptr point to the G2 data. The ongoing readers and the writer now hold the shared_ptr of G1, and either a reader or the writer will eventually deallocate the G1 data. Any new readers get the pointer to G2, and a new writer gets a copy of G2 (let's say G3). It's possible that G1 is not released yet, so multiple generations of data may co-exist. template <typename T> class rcu_protected { public: typedef T type; typedef std::tr1::shared_ptr<type> rcu_pointer; rcu_protected() : m_data_ptr (new type()) {} rcu_pointer get_reading_copy () { spin_until_eq (m_is_swapping, 0); return m_data_ptr; } rcu_pointer get_updating_copy () { spin_until_eq (m_is_swapping, 0); while (!CAS (m_is_writing, 0, 1)) {/* do sleep for back-off when exceeding maximum retry times */} rcu_pointer new_data_ptr(new type(*m_data_ptr)); // as spin_until_eq does not have memory barrier protection, // we need to place a read barrier to protect the loading of // new_data_ptr not to be re-ordered before its construction _ReadBarrier(); return new_data_ptr; } void update (rcu_pointer new_data_ptr) { while (!CAS (m_is_swapping, 0, 1)) {} m_data_ptr.swap (new_data_ptr); // as spin_until_eq does not have memory barrier protection, // we need to place a write barrier to protect the assignments of // m_is_writing/m_is_swapping be re-ordered before the swapping _WriteBarrier(); m_is_writing = 0; m_is_swapping = 0; } private: volatile long m_is_writing; volatile long m_is_swapping; rcu_pointer m_data_ptr; };
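
    Not a review of the barrier details, but a hedged alternative sketch: since C++11 the std::atomic_load / std::atomic_store overloads for shared_ptr (or std::atomic<std::shared_ptr<T>> in C++20) give the same read-copy-update shape without hand-rolled spin flags; readers never wait, and whichever reader or writer drops the last shared_ptr frees that generation:

    ```cpp
    #include <memory>
    #include <mutex>

    template <typename T>
    class rcu_protected {
    public:
        rcu_protected() : m_data_ptr(std::make_shared<T>()) {}

        // Readers: grab the current generation; never blocks on writers.
        std::shared_ptr<const T> get_reading_copy() const {
            return std::atomic_load(&m_data_ptr);
        }

        // Writers: copy, mutate the copy, publish. One writer at a time.
        template <typename Mutator>
        void update(Mutator mutate) {
            std::lock_guard<std::mutex> guard(m_writer_mutex);
            std::shared_ptr<T> next = std::make_shared<T>(*std::atomic_load(&m_data_ptr));
            mutate(*next);
            std::atomic_store(&m_data_ptr, next);
        }

    private:
        std::mutex m_writer_mutex;
        std::shared_ptr<T> m_data_ptr;
    };
    ```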

    Read the article

  • Stopping cookies being set from a domain (aka "cookieless domain") to increase site performance

    - by Django Reinhardt
    I was reading in Google's documentation about improving site speed. One of their recommendations is serving static content (images, css, js, etc.) from a "cookieless domain": Static content, such as images, JS and CSS files, don't need to be accompanied by cookies, as there is no user interaction with these resources. You can decrease request latency by serving static resources from a domain that doesn't serve cookies. Google then says that the best way to do this is to buy a new domain and set it to point to your current one: To reserve a cookieless domain for serving static content, register a new domain name and configure your DNS database with a CNAME record that points the new domain to your existing domain A record. Configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain. In your web pages, reference the domain name in the URLs for the static resources. This is pretty straight forward stuff, except for the bit where it says to "configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain". From what I've read, there's no setting in IIS that allows you to say "serve static resources", so how do I prevent ASP.NET from setting cookies on this new domain? At present, even if I'm just requesting a .jpg from the new domain, it sets a cookie on my browser, even though our application's cookies are set to our old domain. For example, ASP.NET sets an ".ASPXANONYMOUS" cookie that (as far as I'm aware) we're not telling it to do. Apologies if this is a real newb question, I'm new at this! Thanks.
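
    A hedged sketch of a web.config for the static-only site (which modules actually set cookies depends on the application, but .ASPXANONYMOUS specifically comes from the anonymous identification feature, so switching that off is the first step):

    ```xml
    <configuration>
      <system.web>
        <!-- .ASPXANONYMOUS is issued by this feature -->
        <anonymousIdentification enabled="false" />
        <!-- ASP.NET_SessionId -->
        <sessionState mode="Off" />
        <!-- .ASPXAUTH / forms cookies -->
        <authentication mode="None" />
        <!-- .ASPXROLES -->
        <roleManager enabled="false" />
      </system.web>
    </configuration>
    ```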

    Read the article

  • django dynamically deduce SITE_ID according to the domain

    - by dcrodjer
    I am trying to develop a site which will render multiple customized sites according to the domain name (subdomain, to be more precise). All my domain names are redirected to the ... So for each site there will be a corresponding model which defines how the site should look (SITE - SITE_SETTINGS). What will be the best way to utilize the django sites framework to get the SITE_ID of the current site from the domain name, instead of hard-coding it in the settings file (django sites documentation), and then run database queries and render the views accordingly? If using multiple settings files is my only option, can this (having the wsgi script handle the domain name) be done? Update: So finally, following luke's answer, what I will do is define a custom middleware which makes the views available with the important vars required according to the domain. And as far as sitemaps and comments are concerned, I will have to customize the sitemaps app and a custom sites model on which the other site models will be based. And since the comments system is based on the hard-coded sitemap ID, I can use it just as is on the models (models will already be filtered according to the site, based on my sites framework), though the permalink feature will have to be customized. So, a lot of customization. Please tell me if I am going wrong anywhere in this, because I have to ensure that the features of the project are optimized. Thanks!
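
    For anyone landing here, a hedged sketch of the middleware mentioned in the update (old-style middleware, matching Django of that era); it resolves the Site from the request's host instead of the hard-coded SITE_ID:

    ```python
    from django.contrib.sites.models import Site
    from django.http import HttpResponseNotFound


    class DynamicSiteMiddleware(object):
        """Attach the matching Site to each request based on the domain."""

        def process_request(self, request):
            host = request.get_host().split(':')[0]  # drop any port number
            try:
                request.site = Site.objects.get(domain=host)
            except Site.DoesNotExist:
                return HttpResponseNotFound('No site configured for %s' % host)
            # Views can now filter on request.site rather than settings.SITE_ID.
    ```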

    Read the article

  • c programming malloc question

    - by user535256
    Hello guys, just got a query regarding the C malloc() function. I am read()ing a number of bytes from a file to get the length of a filename, like ' read(file, &namelen, sizeof(unsigned char)); '. The variable namelen is of type unsigned char and was written into the file as that type (1 byte). Now namelen has the length of the filename, i.e. namelen=8 if the file name was 'data.txt', plus an extra \0 at the end; that is working fine. Now I have a structure recording file info, i.e. filename, file length, content size etc. struct fileinfo { char *name; ...... other variables like size etc }; struct fileinfo *files; Question: I want to make that files.name variable the size of namelen, i.e. 8, so I can successfully write the filename into it, like ' files[i].name = malloc(namelen) '. However, I don't want it to be malloc(sizeof(namelen)), as that would make it file.name[1], the size of its type unsigned char. I want it to be the value that's stored inside the variable namelen, i.e. 8, so file.name[8], so that data.txt can be read() from the file as 8 bytes and written straight into file.name[8]. Is there a way to do this? My current code is this and returns 4, not 8: files[i].name = malloc(namelen); //strlen(files[i].name) - returns 4 //perhaps something like malloc(sizeof(&namelen)) but does not work Thanks for any suggestions. I have tried the suggestions guys, but I now get a segmentation fault error using: printf("\nsizeofnamelen=%x\n",namelen); //gives 8 for data.txt files[i].name = malloc(namelen + 1); read(file, &files[i].name, namelen); int len=strlen(files[i].name); printf("\nnamelen=%d",len); printf("\nname=%s\n",files[i].name); When I try to open() the file with that files[i].name variable it won't open, so the data does not appear to be getting written by the read() into &files[i].name, and strlen() causes a segmentation error as well as trying to print the filename.
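
    A hedged sketch of the read path; the key problems are passing &files[i].name (the address of the pointer itself) to read(), and never storing a terminating '\0', which is why strlen() and open() misbehave afterwards:

    ```c
    #include <stdlib.h>
    #include <unistd.h>

    /* Returns a freshly malloc'd, NUL-terminated filename, or NULL on error. */
    static char *read_name(int file)
    {
        unsigned char namelen;
        char *name;

        if (read(file, &namelen, 1) != 1)          /* the stored 1-byte length */
            return NULL;

        name = malloc((size_t)namelen + 1);        /* namelen bytes plus the '\0' */
        if (name == NULL)
            return NULL;

        if (read(file, name, namelen) != (ssize_t)namelen) {   /* note: name, not &name */
            free(name);
            return NULL;
        }
        name[namelen] = '\0';                      /* now strlen()/open() behave */
        return name;
    }
    ```

    Then files[i].name = read_name(file); and strlen(files[i].name) reports 8 for "data.txt". If the writer also stored the trailing '\0' as part of the recorded length, adjust the byte count accordingly instead of adding the terminator by hand.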

    Read the article

  • Conditional execution of EventTriggers in Silverlight 3

    - by Jason
    I'm currently working on the UI of a Silverlight application and need to be able to change the visual state of a control to one of two possible states, based on its current state, when handling the same event trigger. For example: I have a control that sits partially in a clipping path; when I click the visible part of the control I want to change the state to "Visible", and if I click it again when it is in its "Visible" state I want to change to the "Hidden" state. Example XAML: <i:Interaction.Triggers> <i:EventTrigger EventName="MouseLeftButtonUp"> <ic:GoToStateAction StateName="Visible"/> <ic:GoToStateAction StateName="Hidden"/> </i:EventTrigger> </i:Interaction.Triggers> Where "i" is "System.Windows.Interactivity;assembly=System.Windows.Interactivity" and "ic" is "Microsoft.Expression.Interactivity.Core;assembly=Microsoft.Expression.Interactions". I'm currently working in Expression Blend 3 and would prefer a XAML-only solution, but am not opposed to coding this if it is completely necessary. I have tried recording a change in the target state name in Blend but this did not work. Any thoughts on this?
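
    If a code fallback is acceptable, a hedged code-behind sketch (the control type, field name and event wiring are hypothetical; the state names match the XAML above):

    ```csharp
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Input;

    public partial class SlidingPanel : UserControl
    {
        private bool _isVisible;

        public SlidingPanel()
        {
            InitializeComponent();
            MouseLeftButtonUp += OnMouseLeftButtonUp;
        }

        private void OnMouseLeftButtonUp(object sender, MouseButtonEventArgs e)
        {
            // Flip between the two states each time the control is clicked.
            _isVisible = !_isVisible;
            VisualStateManager.GoToState(this, _isVisible ? "Visible" : "Hidden", true);
        }
    }
    ```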

    Read the article

  • Is it better to use a relational database or document-based database for an app like Wufoo?

    - by mboyle
    I'm working on an application that's similar to Wufoo in that it allows our users to create their own databases and collect/present records with auto generated forms and views. Since every user is creating a different schema (one user might have a database of their baseball card collection, another might have a database of their recipes) our current approach is using MySQL to create separate databases for every user with its own tables. So in other words, the databases our MySQL server contains look like: main-web-app-db (our web app containing tables for users account info, billing, etc) user_1_db (baseball_cards_table) user_2_db (recipes_table) .... And so on. If a user wants to set up a new database to keep track of their DVD collection, we'd do a "create database ..." with "create table ...". If they enter some data in and then decide they want to change a column we'd do an "alter table ....". Now, the further along I get with building this out the more it seems like MySQL is poorly suited to handling this. 1) My first concern is that switching databases every request, first to our main app's database for authentication etc, and then to the user's personal database, is going to be inefficient. 2) The second concern I have is that there's going to be a limit to the number of databases a single MySQL server can host. Pretending for a moment this application had 500,000 user databases, is MySQL designed to operate this way? What if it were a million, or more? 3) Lastly, is this method going to be a nightmare to support and scale? I've never heard of MySQL being used in this way so I do worry about how this affects things like replication and other methods of scaling. To me, it seems like MySQL wasn't built to be used in this way but what do I know. I've been looking at document-based databases like MongoDB, CouchDB, and Redis as alternatives because it seems like a schema-less approach to this particular problem makes a lot of sense. Can anyone offer some advice on this?

    Read the article

  • Convert a Dynamic[] construct to a numerical list

    - by Leo Alekseyev
    I have been trying to put together something that allows me to extract points from a ListPlot in order to use them in further computations. My current approach is to select points with a Locator[]. This works fine for displaying points, but I cannot figure out how to extract numerical values from a construct with head Dynamic[]. Below is a self-contained example. By dragging the gray locator, you should be able to select points (indicated by the pink locator and stored in q, a list of two elements). This is the second line below the plot. Now I would like to pass q[[2]] to a function, or perhaps simply display it. However, Mathematica treats q as a single entity with head Dynamic, and thus taking the second part is impossible (hence the error message). Can anyone shed light on how to convert q into a regular list? EuclideanDistanceMod[p1_List, p2_List, fac_: {1, 1}] /; Length[p1] == Length[p2] := Plus @@ (fac.MapThread[Abs[#1 - #2]^2 &, {p1, p2}]) // Sqrt; test1 = {{1.`, 6.340196001221532`}, {1.`, 13.78779876355869`}, {1.045`, 6.2634018978377295`}, {1.045`, 13.754947081416544`}, {1.09`, 6.178367702583522`}, {1.09`, 13.72055251752498`}, {1.135`, 1.8183153704413153`}, {1.135`, 6.082497198000075`}, {1.135`, 13.684582525399742`}, {1.18`, 1.6809452373465104`}, {1.18`, 5.971583107298081`}, {1.18`, 13.646996905469383`}, {1.225`, 1.9480537697339537`}, {1.225`, 5.838386922625636`}, {1.225`, 13.607746407088161`}, {1.27`, 2.1183174369679234`}, {1.27`, 5.669799095595362`}, {1.27`, 13.566771130126131`}, {1.315`, 2.2572975468163463`}, {1.315`, 5.444014254828522`}, {1.315`, 13.523998701347882`}, {1.36`, 2.380307009155079`}, {1.36`, 5.153024664297602`}, {1.36`, 13.479342200528283`}, {1.405`, 2.4941312539733285`}, {1.405`, 4.861423833512566`}, {1.405`, 13.432697814928654`}, {1.45`, 2.6028066447609426`}, {1.45`, 4.619367407525507`}, {1.45`, 13.383942212133244`}}; DynamicModule[{p = {1.2, 10}, q = {1.3, 11}}, q := Dynamic@ First@test1[[ Ordering[{#, EuclideanDistanceMod[p, #, {1, .1}]} & /@ test1, 1, #1[[2]] < #2[[2]] &]]]; Grid[{{Show[{ListPlot[test1, Frame -> True, ImageSize -> 300], Graphics@Locator[Dynamic[p]], Graphics@ Locator[q, Appearance -> {Small}, Background -> Pink]}]}, {Dynamic@p}, {q},{q[[2]]}}]]

    Read the article

  • Have Microsoft changed how ASP.NET MVC deals with duplicate action method names?

    - by Jason Evans
    I might be missing something here, but in ASP.NET MVC 4, I can't get the following to work. Given the following controller: public class HomeController : Controller { public ActionResult Index() { return View(); } [HttpPost] public ActionResult Index(string order1, string order2) { return null; } } and its view: @{ ViewBag.Title = "Home"; } @using (Html.BeginForm()) { @Html.TextBox("order1")<br /> @Html.TextBox("order2") <input type="submit" value="Save"/> } When I start the app, all I get is this: The current request for action 'Index' on controller type 'HomeController' is ambiguous between the following action methods: System.Web.Mvc.ActionResult Index() on type ViewData.Controllers.HomeController System.Web.Mvc.ActionResult Index(System.String, System.String) on type ViewData.Controllers.HomeController Now, in ASP.NET MVC 3 the above works fine (I just tried it), so what's changed in ASP.NET MVC 4 to break this? OK, there could be a chance that I'm doing something silly here and not noticing it. EDIT: I notice that in the MVC 4 app, the Global.asax.cs file did not contain this: public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults ); } which the MVC 3 app does, by default. So I added the above to the MVC 4 app but it fails with the same error. Note that the MVC 3 app does work fine with the above route. I'm passing the "order" data via the Request.Form. EDIT: In the file RouteConfig.cs I can see RegisterRoutes is executed, with the following default route: routes.MapRoute( name: "Default", url: "{controller}/{action}/{id}", defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }); I still get the original error, regarding the ambiguity over which Index() method to call.
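
    One hedged guess at the cause (a guess only, since new MVC 4 projects reference additional assemblies alongside System.Web.Mvc): the action selector only disambiguates the two overloads via attributes it recognises, so if the [HttpPost] in play resolves to an attribute type from another referenced assembly rather than System.Web.Mvc.HttpPostAttribute, both actions remain candidates for GET and you get exactly this error. Fully qualifying the attribute makes the intent explicit and rules that out:

    ```csharp
    using System.Web.Mvc;

    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            return View();
        }

        [System.Web.Mvc.HttpPost]   // fully qualified, in case another HttpPostAttribute is in scope
        public ActionResult Index(string order1, string order2)
        {
            // ... use the posted order1/order2 values
            return View();
        }
    }
    ```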

    Read the article

  • HTML, CSS: overbar matching square root symbol

    - by Pindatjuh
    Is there a way in HTML and/or CSS to do the following, but done correctly: √¯¯¯¯¯¯φ·(2π−γ), such that there is an overbar above the expression which neatly aligns with the &radic;? I know there is the Unicode &macr;, which looks like the overbar I need (as used in the above example, though as you can see – it doesn't align well with the root symbol). The solution I'm looking for should work at least for one standard font, at most sizes, and in all modern browsers. I can't use images; I'd like a pure HTML4/CSS way, without client scripting. Here is my current code - thank you Matthew Jones (+1) for the text-decoration: overline! Still some problems: <div style="font-family: Georgia; font-size: 200%"> <span style="vertical-align: -15%;">&radic;</span><span style="text-decoration: overline;">&nbsp;x&nbsp;+&nbsp;1&nbsp;</span> </div> The line doesn't meet the &radic; because I lowered the radical by 15% with vertical-align (because the default placement is not nice). The line thickness doesn't match the thickness of the &radic;. Thanks!
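
    One hedged variation to experiment with: draw the bar as a border-top instead of a text-decoration, since a border's thickness and offset can be tuned to the font (the em values below are guesses to adjust per font and size):

    ```html
    <div style="font-family: Georgia; font-size: 200%;">
      <span style="vertical-align: -15%;">&radic;</span><span style="display: inline-block; border-top: 0.06em solid black; padding-top: 0.05em;">&nbsp;x&nbsp;+&nbsp;1&nbsp;</span>
    </div>
    ```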

    Read the article

  • jQuery/JavaScript Date form validation

    - by Victor Jackson
    I am using the jQuery date picker calendar in a form. Once submitted, the form passes params along via the URL to a third-party site. Everything works fine, except for one thing. If the value inserted into the date field by the datepicker calendar is subsequently deleted, or if the default date that is in the form on page load is deleted, and the form is submitted, I get the following error: "Conversion from string "" to type 'Date' is not valid." The solution to my problem is really simple: I want to validate the text field where the date is submitted and send out a default date (the current date) if the field is empty for any reason. The problem is I am terrible at JavaScript and cannot figure out how to do this. Here is the form code for my date field: [var('default_date' = date)] <input type="text" id="datepicker" name="txtdate" value="[$default_date]" onfocus="if (this.value == '[$default_date]') this.value = '';" onchange="form.BeginDate.value = this.value; form.EndDate.value = this.value;" /> <input type="hidden" name="BeginDate" value="[$default_date]"/> <input type="hidden" name="EndDate" value="[$default_date]"/>
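
    A hedged sketch of that validation (it assumes the datepicker's mm/dd/yyyy output; adjust the format string to whatever the third-party site expects):

    ```html
    <script type="text/javascript">
    $(function () {
        $('form').submit(function () {
            var field = $('#datepicker');
            if ($.trim(field.val()) === '') {
                // Fall back to today's date, formatted like the datepicker would format it.
                var today = $.datepicker.formatDate('mm/dd/yy', new Date());
                field.val(today);
                $('input[name="BeginDate"], input[name="EndDate"]').val(today);
            }
        });
    });
    </script>
    ```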

    Read the article

  • How to write automated tests for SQL queries?

    - by James
    The current system we are adopting at work is to write some extremely complex queries which perform multiple calculations and have multiple joins/sub-queries. I don't think I am experienced enough to say whether this is correct or not, so I am going along with it and trying to work within this system, as it has clear benefits. The problem we are having at the moment is that the person writing the queries makes a lot of mistakes and assumes everything is correct. We have now assigned a tester to analyse all of the queries, but this still proves extremely time-consuming and stressful. I would like to know how we could create an automated procedure (without specifically writing it in code if possible, as I can work out how to do that the long way) to take a set of 10+ different inputs, verify the output data and say whether the calculations are correct. I know I could set up specific data in the database, write a script in C# (the DB is SQL Server) and verify all the values coming out, but I would like to know what the official "standard" is, as my experience is lacking in this area and I would like to improve. I am happy to add more information if required; add a comment if necessary. Thank you. Edit: I am using C#.
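
    There are dedicated frameworks for this (tSQLt on the database side, or plain NUnit/MSTest from C#), but the usual shape is the same either way: load a known fixture, run the production query, and compare against values worked out by hand. A hedged NUnit sketch with placeholder names, connection string and numbers:

    ```csharp
    using System.Data.SqlClient;
    using NUnit.Framework;

    [TestFixture]
    public class OrderTotalsQueryTests
    {
        private const string ConnectionString =
            "Server=(local);Database=QueryTestFixture;Trusted_Connection=True;";

        // Order id and the total calculated by hand from the fixture data.
        [TestCase(1, 120.50)]
        [TestCase(2, 0.00)]
        public void Complex_query_returns_expected_total(int orderId, double expectedTotal)
        {
            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand(
                "SELECT Total FROM dbo.OrderTotalsView WHERE OrderId = @id", connection))
            {
                command.Parameters.AddWithValue("@id", orderId);
                connection.Open();

                var actual = (decimal)command.ExecuteScalar();

                Assert.AreEqual(expectedTotal, (double)actual, 0.001);
            }
        }
    }
    ```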

    Read the article

  • PHP classes totally forgotten something today - sorry

    - by russp
    Hi guys, really sorry about being "totally thick today" but I have forgotten how to do something simple - too much time not in PHP recently. I want to use the OpenSocial PHP API ($osapi). How do I print out the individual rows? (See, told you I was being thick today.) // The fields we will be fetching. if (isset($_GET['test']) && $_GET['test'] == 'plaxo') { // plaxo is a PortableContacts end-point so doesn't know about the OpenSocial specific fields $profile_fields = array(); } else { $profile_fields = array( 'aboutMe', 'displayName', 'bodyType', 'currentLocation', 'drinker', 'happiestWhen', 'lookingFor' ); } // The number of friends to fetch. $friend_count = 2; $batch = $osapi->newBatch(); // Fetch the current user. $self_request_params = array( 'userId' => $userId, // Person we are fetching. 'groupId' => '@self', // @self for one person. 'fields' => $profile_fields // Which profile fields to request. ); $batch->add($osapi->people->get($self_request_params), 'self'); // Fetch the friends of the user $friends_request_params = array( 'userId' => $userId, // Person whose friends we are fetching. 'groupId' => '@friends', // @friends for the Friends group. 'fields' => $profile_fields, // Which profile fields to request. 'count' => $friend_count // Max friends to fetch. ); $batch->add($osapi->people->get($friends_request_params), 'friends'); // Get supportedFields Request $batch->add($osapi->people->getSupportedFields(), 'supportedFields'); // Send the batch request. $result = $batch->execute(); Say I wanted to print out "aboutMe", what's the echo? Because echo $result['aboutMe'] doesn't work.
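
    Not authoritative on the osapi client's return shape (it has varied between versions), so the safest first step is to dump the batch result and then drill in; typically each batch key holds either an object with public fields or a nested array:

    ```php
    <?php
    $result = $batch->execute();

    // Inspect the real structure once, then remove this line.
    var_dump($result['self']);

    // Then, depending on what the dump shows, one of these usually does it:
    echo $result['self']->aboutMe;        // object-style result
    // echo $result['self']['aboutMe'];   // array-style result
    ```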

    Read the article

  • passenger won't spawn more than 6 instances despite passenger_max_pool_size = 30

    - by mrD
    I have some problems with Passenger + nginx and hope someone might be able to help me and point me in the right direction. I've set passenger_max_pool_size to 30, but Passenger never spawns more than 6 instances. I'm loading a web page that uses Ajax to load 30 sub-pages from the server, but because Passenger only spawns 6 instances they are queued. What confuses me is that "Waiting on global queue" is 0, but I can see in my browser that everything gets queued. When the first 6 Ajax requests are done, the next 6 start loading. What am I missing? :) This is the output from passenger-status (I had about 24 requests in the browser waiting for a response from the server when I checked this status): ----------- General information ----------- max = 30 count = 6 active = 6 inactive = 0 Waiting on global queue: 0 ----------- Domains ----------- /srv/rails/production/current: PID: 28428 Sessions: 1 Processed: 42 Uptime: 5m 43s PID: 28424 Sessions: 1 Processed: 23 Uptime: 5m 43s PID: 28422 Sessions: 1 Processed: 7 Uptime: 5m 43s PID: 28420 Sessions: 1 Processed: 22 Uptime: 6m 0s PID: 28426 Sessions: 1 Processed: 39 Uptime: 5m 43s PID: 28430 Sessions: 1 Processed: 7 Uptime: 5m 43s These are my Passenger-related settings in nginx.conf: http { passenger_root /opt/ruby/lib/ruby/gems/1.8/gems/passenger-2.2.11; passenger_ruby /opt/ruby/bin/ruby; passenger_max_pool_size 30;

    Read the article

  • Is sending a hashed password over the wire a security hole?

    - by Ubiquitous Che
    I've come across a system that is in use by a company that we are considering partnering with on a medium-sized (for us, not them) project. They have a web service that we will need to integrate with. My current understanding of proper username/password management is that the username may be stored as plaintext in the database. Every user should have a unique pseudo-random salt, which may also be stored in plaintext. The text of their password must be concatenated with the salt and then this combined string may be hashed and stored in the database in an nvarchar field. So long as passwords are submitted to the website (or web service) over plaintext, everything should be just lovely. Feel free to rip into my understanding as summarized above if I'm wrong. Anyway, back to the subject at hand. The WebService run by this potential partner doesn't accept username and password, which I had anticipated. Instead, it accepts two string fields named 'Username' and 'PasswordHash'. The 'PasswordHash' value that I have been given does indeed look like a hash, and not just a value for a mis-named password field. This is raising a red flag for me. I'm not sure why, but I feel uncomfortable sending a hashed password over the wire for some reason. Off the top of my head I can't think of a reason why this would be a bad thing... Technically, the hash is available on the database anyway. But it's making me nervous, and I'm not sure if there's a reason for this or if I'm just being paranoid.

    Read the article

  • Visual Studio / Blend... how do you organize that?

    - by TomTom
    First time doing more complex stuff in WPF. I am a little lost on the split between VS and Blend. It seems I am VERY limited with the editors in Visual Studio for editing controls - when customizing, for example, it seems I can enter a style in XML... but in Blend I can tell it to make a copy of the CURRENT style and use that as a starting point, which is definitely more convenient. I understand the "difference in focus", but it seems to me that I really need both tools to work, especially if the controls I do are: more complex, and not user controls "on purpose" (to allow more customization by programmers using the application). This means when I do a control, my approach would be: work on the backend as far as it will go without a front end (i.e. implement all methods needed etc., but they can be dummies); switch over to Blend (closing Visual Studio, as the project must be closed); put in the initial templating; switch over to Visual Studio (closing Blend); put the logic in and debug. This seems pretty counterintuitive. Am I missing something obvious here?

    Read the article

  • Java sockets: multiple client threads on same port on same machine?

    - by espcorrupt
    I am new to Socket programming in Java and was trying to understand whether the code below is a wrong thing to do. My question is: Can I have multiple clients, each on its own thread, trying to connect to a server instance in the same program, and expect the server to read and write data with isolation between clients? public class Client extends Thread { ... void run() { Socket socket = new Socket("localhost", 1234); doIO(socket); } } public class Server extends Thread { ... void run() { // serverSocket on "localhost", 1234 Socket clientSock = serverSocket.accept(); executor.execute(new ClientWorker(clientSock)); } } Now can I have multiple Client instances on different threads trying to connect on the same port of the current machine? For example, Server s = new Server("localhost", 1234); s.start(); Client[] c = new Client[10]; for (int i = 0; i < c.length; ++i) { c.start(); }
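
    Yes - many clients may connect to the same listening port, and each accept() returns a distinct Socket, so per-client I/O stays isolated as long as each ClientWorker only touches its own socket. A hedged sketch that also fixes two details missing from the fragment (the accept() loop and creating the Client objects before starting them):

    ```java
    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    class ClientWorker implements Runnable {
        private final Socket socket;

        ClientWorker(Socket socket) {
            this.socket = socket;
        }

        @Override
        public void run() {
            try {
                // Read/write on this socket only; other clients are unaffected.
                socket.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    class Server extends Thread {
        private final ExecutorService executor = Executors.newCachedThreadPool();

        @Override
        public void run() {
            try (ServerSocket serverSocket = new ServerSocket(1234)) {
                while (true) {                                  // keep accepting clients
                    Socket clientSock = serverSocket.accept();  // one Socket per client
                    executor.execute(new ClientWorker(clientSock));
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    class Client extends Thread {
        @Override
        public void run() {
            try (Socket socket = new Socket("localhost", 1234)) {
                // doIO(socket);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    public class SocketDemo {
        public static void main(String[] args) {
            new Server().start();
            Client[] clients = new Client[10];
            for (int i = 0; i < clients.length; i++) {
                clients[i] = new Client();   // the fragment starts threads it never created
                clients[i].start();
            }
        }
    }
    ```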

    Read the article
