Search Results

Search found 3246 results on 130 pages for 'planning'.

  • Is a red-black tree my ideal data structure?

    - by Hugo van der Sanden
    I have a collection of items (big rationals) that I'll be processing. In each case, processing will consist of removing the smallest item in the collection, doing some work, and then adding 0-2 new items (which will always be larger than the removed item). The collection will be initialised with one item, and work will continue until it is empty. I'm not sure what size the collection is likely to reach, but I'd expect it to be in the range of 1M-100M items. I will not need to locate any item other than the smallest. I'm currently planning to use a red-black tree, possibly tweaked to keep a pointer to the smallest item. However, I've never used one before, and I'm unsure whether my pattern of use fits its characteristics well.
    1) Is there a danger that the pattern of deletion from the left plus random insertion will affect performance, e.g. by requiring a significantly higher number of rotations than random deletion would? Or will delete and insert operations still be O(log n) with this pattern of use?
    2) Would some other data structure give me better performance, either because of the deletion pattern or by taking advantage of the fact that I only ever need to find the smallest item?
    Update: glad I asked, the binary heap is clearly a better solution for this case, and as promised it turned out to be very easy to implement. Hugo
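
    A minimal sketch of the binary min-heap the update settles on, written here in C# only for illustration (the question names no language, and the element type is left generic rather than committing to a big-rational type). It shows why the access pattern - pop the minimum, push 0-2 larger items - costs only O(log n) sift operations with no rebalancing:
        using System;
        using System.Collections.Generic;

        // Minimal binary min-heap over any comparable item type.
        // Both operations are O(log n) sifts; no rebalancing or rotations are needed.
        public class MinHeap<T> where T : IComparable<T>
        {
            private readonly List<T> items = new List<T>();

            public int Count { get { return items.Count; } }

            public void Push(T item)
            {
                items.Add(item);
                int i = items.Count - 1;
                while (i > 0)                                   // sift the new item up
                {
                    int parent = (i - 1) / 2;
                    if (items[i].CompareTo(items[parent]) >= 0) break;
                    Swap(i, parent);
                    i = parent;
                }
            }

            public T PopMin()                                   // caller checks Count > 0
            {
                T min = items[0];
                int last = items.Count - 1;
                items[0] = items[last];
                items.RemoveAt(last);
                int i = 0;
                while (true)                                    // sift the moved item down
                {
                    int left = 2 * i + 1, right = left + 1, smallest = i;
                    if (left < items.Count && items[left].CompareTo(items[smallest]) < 0) smallest = left;
                    if (right < items.Count && items[right].CompareTo(items[smallest]) < 0) smallest = right;
                    if (smallest == i) break;
                    Swap(i, smallest);
                    i = smallest;
                }
                return min;
            }

            private void Swap(int a, int b)
            {
                T tmp = items[a]; items[a] = items[b]; items[b] = tmp;
            }
        }
    The processing loop is then: seed the heap with the initial item and, while Count > 0, pop the minimum, do the work, and push the 0-2 larger results.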

  • Copying a directory that is version controlled

    - by ibz
    I am curious whether it is OK to copy a directory that is under version control and start working on both copies. I know the answer can differ from one VCS to another, but I intentionally don't specify any VCS since I am curious about the different cases. I was talking to a coworker recently about doing it in SVN. I think it should be OK, but I am still not 100% sure, since I don't know exactly what SVN stores in the working copy. However, if we talk about the DVCS world, things might be even less clear, since every working copy is a repository in itself. Being faced with doing this in bzr now, I decided to ask the question.
    Later edit: Some people asked why I would want to do that. Here is the whole story: In the case of SVN it was because, being out of the office, the connection to the SVN server was really slow, so my coworker and I decided to check out the sources only once and make a local copy. That's what we did and it worked OK, but I am still wondering whether it is guaranteed to work, or whether it just happened to. In the bzr case, I am planning to move the "main" repo to another server. So I was thinking of just copying it there and starting to consider that the main repo. I guess the safest is to make a clone, though.

  • Why is Lua considered a game language?

    - by Hoffmann
    I have been learning Lua for the past month and I'm absolutely in love with the language, but almost everything I see built with Lua is a game. I mean, the syntax is very simple, there is no fuss, there are no special-meaning characters that make code look like regex, it has all the good things about a scripting language, and it integrates painlessly with other languages like C, Java, etc. The only downside I have seen so far is the prototype-based object orientation that some people do not like (or the lack of built-in OO). I do not see how Ruby or Python are better, and surely not in performance ( http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=lua&lang2=python ). I was planning on writing a web app using Lua with the Kepler framework and JavaScript, but the lack of other projects that use Lua as a web language makes me feel a bit uneasy, since this is my first try at web development. Lua is considered a kids' language; most of you on stackoverflow probably only know the language because of the WoW addons. I can't really see why that is... http://lua-users.org/wiki/LuaVersusPython provides some insights on Lua versus Python, but it is clearly biased.

  • Advice on designing and building a distributed application to track vehicles

    - by dario-g
    I'm working on an application for tracking vehicles. There will be about 10k or more vehicles. Each will be sending ~250 bytes every minute. The data contain the GPS location and everything from the CAN bus (every value that we can read from the vehicle computer and dashboard). Data are sent over GSM/GPRS (using the UDP protocol). The estimated number of rows of this data per day is ~2000k. I see 3 main blocks:
    1. Multithreaded Socket Server (MSS) - I have it. The MSS stores received data in a queue (using NServiceBus).
    2. Rule Processor Server (RPS) - this is the core of the system. This block is responsible for parsing received data, storing it in the database, processing rules, and sending messages to the Notifier Server (which will send e-mails/SMS texts). Rule example: as I said, the received bytes will include the current speed. When the speed goes above 120, then: show an alert in the web application for specified users, send an e-mail, send an SMS text. (There can be more than one instance of the RPS.)
    3. Web application - allows reporting and defining rules by users, monitoring alerts, etc.
    I'm looking for advice on how to design the communication between the RPS and the web application. Some questions: Should the web application and the RPS have separate databases, or will one central database be enough? I have one domain model in the web application. If there is one central database, can I use the same model (objects) on the RPS? And how do I send changed rules to the RPS? I'm trying to decouple these blocks as much as possible. I'm planning to create a different instance of the application for each client (each client will have a separate database). One client will have 10k vehicles, others only 100 vehicles.
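
    One possible way to decouple the web application from the RPS is to share only a small contract assembly (readings plus a rule interface) and let the RPS evaluate whatever rules are currently active; rule edits made in the web app then only need to be persisted and reloaded, not shared as a domain model. A hedged C# sketch - every type, property and unit below is a hypothetical stand-in, not taken from the actual system:
        using System;
        using System.Collections.Generic;

        // Shared contract assembly: both the web app (which edits rules) and the RPS
        // (which evaluates them) reference these types, but never each other.
        public class VehicleReading
        {
            public int VehicleId { get; set; }
            public DateTime Timestamp { get; set; }
            public double Speed { get; set; }        // parsed from the CAN-bus payload
            public double Latitude { get; set; }
            public double Longitude { get; set; }
        }

        public interface IRule
        {
            // Returns a notification message when the rule fires, otherwise null.
            string Evaluate(VehicleReading reading);
        }

        // Example: the "speed above 120" rule that triggers alert / e-mail / SMS.
        public class SpeedLimitRule : IRule
        {
            public double Limit { get; set; }

            public string Evaluate(VehicleReading reading)
            {
                return reading.Speed > Limit
                    ? string.Format("Vehicle {0} exceeded {1} at {2:u}",
                                    reading.VehicleId, Limit, reading.Timestamp)
                    : null;
            }
        }

        // Inside the RPS: run every active rule over each parsed reading and pass any
        // resulting messages on to the notifier and the alerts table.
        public static class RuleProcessor
        {
            public static IEnumerable<string> Process(VehicleReading reading,
                                                      IEnumerable<IRule> activeRules)
            {
                foreach (IRule rule in activeRules)
                {
                    string message = rule.Evaluate(reading);
                    if (message != null)
                    {
                        yield return message;
                    }
                }
            }
        }
    Rule changes made in the web application then only need to be persisted to the database (or published as a message); each RPS instance reloads its active rule set instead of sharing the web app's domain model.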

  • Linking problems using libcurl with Visual C++ 2005: "unresolved external symbol __imp__curl_easy_setopt"

    - by user88595
    Hi, I am planning to use libcurl in my project. I downloaded the library source, built it, and integrated it in a small POC application. I am able to build and run the application without any issues with the generated libcurl.dll and libcurl_imp.lib files. Now when I integrate the same library in my project I am getting linker errors:
    6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_easy_setopt
    6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_easy_perform
    6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_easy_cleanup
    6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_global_init
    6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_easy_init
    I have researched and tried all manner of workarounds, like adding the CURL_STATICLIB definition, additional libraries, changing to /MT, even copying the libs to the release directory, but nothing seems to work. As far as I can see, the only difference between approach #1 and #2 in my steps is that #1 is a console application using libcurl.dll, while in my main project it is another DLL which is trying to link to libcurl.dll. Would that necessitate any change in approach? Can I use the same generated multi-threaded DLL /MD file for both (tried /MT also with no success)? Any other ideas? Following are the linker options.
    ------------------------------------------------- Working -------------------------------------------------
    /OUT:"C:\SampleFTP\Release\SampleFTP.exe" /INCREMENTAL:NO /NOLOGO /LIBPATH:"C:\SampleFTP\SampleFTP\Release" /MANIFEST /MANIFESTFILE:"Release\SampleFTP.exe.intermediate.manifest" /DEBUG /PDB:"c:\SampleFTP\release\SampleFTP.pdb" /SUBSYSTEM:CONSOLE /OPT:REF /OPT:ICF /LTCG /MACHINE:X86 /ERRORREPORT:PROMPT libcurl_imp.lib kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib
    ------------------------------------------------- Working -------------------------------------------------
    ------------------------------------------------- Not working -------------------------------------------------
    /OUT:".......\nt\Win32\Release/foo__tests.dll" /INCREMENTAL:NO /NOLOGO /LIBPATH:"C:\FullLibPath\libcurl_libs" /LIBPATH:"......\nt\Win32\Release" /DLL /MANIFEST /MANIFESTFILE:".\foo_tests\Win32\Release\foo_tests.dll.intermediate.manifest" /DEBUG /PDB:".......\nt\Win32\Release/foo_tests.pdb" /OPT:REF /OPT:ICF /LTCG /IMPLIB:".......\nt\Win32\Release/foo_tests.lib" /MACHINE:X86 /ERRORREPORT:PROMPT odbc32.lib odbccp32.lib util_process.lib wsock32.lib Version.lib libcurl_imp.lib kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib "......\nt\win32\release\otherlib1.lib" "......\nt\win32\release\otherlib2.lib"
    ------------------------------------------------- Not working -------------------------------------------------

  • .NET - getting form field key/value pairs?

    - by AverageJoe719
    Hi there, I've got a form with textboxes and I want to run some server-side validation code on the values after the form is submitted. I was planning to grab all the textbox controls on the page, add them to a list, then run a for-each loop that says: for each control in the list, query the database where fieldValidation.Name = Control.Name. This would return to me an object and an associated function where I could then input the Control.Value and actually perform the validation. My friend told me building the list is not necessary because all languages have a way to get key/value pairs from forms (he doesn't know .NET so he couldn't help me). I may be searching the wrong term here but I have not been able to find a result, or maybe I misunderstood my friend. Is there some kind of dictionary or key/value pair automatically generated upon form submission that contains the value that was submitted and... I guess also the control? Or am I just misunderstanding him? If he was just saying to populate a key/value pair based on the form submission, how does that help me in this situation over a list that contains the control? Thanks =)
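
    What the friend is most likely describing is the posted form collection, which ASP.NET exposes as Request.Form (a NameValueCollection of submitted field names to values). A minimal sketch of iterating it on postback; the validation lookup itself is left as a hypothetical helper:
        using System;
        using System.Web.UI;

        public partial class MyFormPage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                if (!IsPostBack) return;

                // Request.Form holds a name/value pair for every posted input,
                // keyed by the control's client name (e.g. "txtFirstName").
                foreach (string key in Request.Form.AllKeys)
                {
                    string submittedValue = Request.Form[key];

                    // Hypothetical helper: look up the validation rule stored for this
                    // field name (the fieldValidation table above) and run it.
                    // bool ok = LookupValidator(key).Validate(submittedValue);
                }
            }
        }
    Note that inside naming containers (master pages etc.) the posted key is the control's UniqueID, so matching a bare control name may need Request.Form[control.UniqueID] or a suffix match rather than an exact one.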

  • Good mobile oriented GWT widget library alternatives

    - by Michael Donohue
    I've been developing a travel planning site - tripgrep.com - which is built on App Engine, GWT and SmartGWT, among other technologies. It is still early days, and the site now works well in my development environment, which is either a Windows or Mac computer. However, I am frequently talking up the website to my friends when we are at a bar or other venue, so I am standing there while they try to access the site via an iPhone, Android or BlackBerry - I've witnessed all three. It has been painfully obvious that the browser-based frontend takes a long time to download on a mobile device. I am pretty sure this is because of the JavaScript download for SmartGWT. So, I would like to look at alternatives to SmartGWT. What I like about SmartGWT is that it has a reasonable look and feel out of the box - I don't need to learn any design or CSS, and it has an office-application look. This is considerably better than the GWT built-in widgets, which just get a blue border. The better look and feel is why I went with SmartGWT early on. However, the slow load times are killing me in these mobile demos. So now I want a fast-loading widget alternative that has a good look and feel out of the box. The features I care about are: tabs, good form layout, Google Maps API integration, and grid data viewing. If those are all available in a library that loads quickly on a mobile device, then that's the library I want.

  • protecting grails melody with grails filter

    - by batmannavneet
    I have an application where I am using Spring Security along with grails melody. I am planning to run grails melody in the production environment, but I don't want visitors to have access to it. How should I achieve that? I tried creating a filter in grails (just showing a sample of what I am trying, not the actual code):
        def filters = {
            allURIs(uri: '/**') {
                before = {
                    //...
                    if (request.forwardURI.indexOf("admin") != -1 || request.forwardURI.indexOf("monitoring") != -1) {
                        response.sendError 404
                        return false
                    }
                }
            }
        }
    But this doesn't work, as the request for "monitoring" doesn't hit this filter. I don't even want the user to know that such a URL exists, so I want to check in the filter that if "monitoring" is the URL, I show the 404 error page. That's also the reason why I don't want to protect this URL with Spring Security, as it would show an "access denied" page. Basically I want the URLs to exist but be invisible to users. I want access to be open to only certain IP addresses for these special URLs. On another note, is it possible to write a grails filter that acts before the Spring Security filter is hit? I want to be able to do some filtering before I forward requests to Spring Security. Writing a grails filter like the above doesn't help: the Spring Security filter gets hit first if I access a protected resource, and this filter doesn't get called. Thanks

  • how can I speed up insertion of many rows to a table via ADO.NET?

    - by jcollum
    I have a table that has 5 columns: AcctId (int), Address1 (varchar), Address2 (varchar), Person1 (varchar), Person2 (varchar). I'm generating random data to insert into this table via a C# console application. I've tried doing this random data insert via SQL Server and decided it was not a good solution -- SQL is not good at randomness on a per-row basis. Generating the random data -- 975k rows of it -- takes a minimal amount of time. It's in a List of custom objects. I need to take this random data and update many rows in the database with the new random data. I tried updating the rows one at a time, which was very slow because of the repeated searching of the List object in code. So I think the best approach is to put all the randomized data into a table in the database, then update all the other tables that use this data, i.e.
        UPDATE t SET t.Address1 = d.Address1
        FROM Table1 t
        INNER JOIN RandomizedData d ON d.AcctId = t.Acct_ID
    The database is very un-normalized, so this Acct data is sprinkled all over the place, and I've got no control over the normalization. So, having decided to insert all of the randomized data into a single table, I set out to create insert scripts:
        USE TheDatabase
        INSERT tmp_RandomizedData
        SELECT 1, '4392 EIGHTH AVE', '', 'JENNIFER CARTER', 'BARBARA CARTER' UNION ALL
        SELECT 2, '2168 MAIN ST', 'HNGR F', 'DANIEL HERNANDEZ', 'SUSAN MARTIN'
        -- etc., another 98 times... (FYI, this is not real data!)
    I'm building this INSERT script in batches of 100. It's taking on average 175 ms to run each insert. Does this seem like a long time? It's going to take about 35 minutes to run the whole insert. The table doesn't have a primary key or any indexes. I was planning on adding those after all the data is inserted (thinking that that would be faster). Is there a better way to do this?
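
    Since the 975k rows already sit in memory in a C# list, one alternative to batched INSERT scripts is SqlBulkCopy, which streams the rows into the staging table in one call. A sketch under stated assumptions - the RandomRow type and connection string are hypothetical, and the column list simply mirrors the table described above:
        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;

        // Hypothetical in-memory row type produced by the random-data generator.
        public class RandomRow
        {
            public int AcctId;
            public string Address1, Address2, Person1, Person2;
        }

        public static class RandomDataLoader
        {
            public static void BulkLoad(IEnumerable<RandomRow> rows, string connectionString)
            {
                // Stage the in-memory rows in a DataTable whose columns match tmp_RandomizedData.
                DataTable table = new DataTable();
                table.Columns.Add("AcctId", typeof(int));
                table.Columns.Add("Address1", typeof(string));
                table.Columns.Add("Address2", typeof(string));
                table.Columns.Add("Person1", typeof(string));
                table.Columns.Add("Person2", typeof(string));

                foreach (RandomRow r in rows)
                {
                    table.Rows.Add(r.AcctId, r.Address1, r.Address2, r.Person1, r.Person2);
                }

                using (SqlConnection connection = new SqlConnection(connectionString))
                {
                    connection.Open();
                    using (SqlBulkCopy bulk = new SqlBulkCopy(connection))
                    {
                        bulk.DestinationTableName = "tmp_RandomizedData";
                        bulk.BatchSize = 10000;   // commit in chunks rather than one huge transaction
                        bulk.WriteToServer(table);
                    }
                }
            }
        }
    Loading into a heap with no indexes and adding the key afterwards, as already planned, suits this approach; the single UPDATE ... JOIN then runs against the staged table.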

  • running .net application over a network

    - by Marlon
    Hello, I need some advice please. I need to enable a .NET application to run over a network share. The problem is that this will be on clients' network shares and so the path will not be identical. I've had a quick look at ClickOnce and the publish options in VS2008, but it wants a specific network share location - and I'm assuming this location gets stored somewhere when it does its thing. At the moment the job is being done with an old VB6 application, which gets around all these security issues, but that application is poorly written and almost impossible to maintain, so it really needs to go. Is it possible for the domain controller to be set up to allow this specific .NET application to execute? Any other options would be welcomed, as this little application is very business critical. I ought to say that the client networks are schools, and thus are often quite locked down, as are the client machines, so manually adding exceptions to each client machine is a big no-no. Marlon
    --Edit-- Apologies, I forgot to mention we're restricted to .NET 2.0 for the moment; we are planning to upgrade to 4.0 but that won't be immediate.

  • Can I have two separate projects, 1 WebForms and 1 ASP.NET MVC, to both point to the same domain?

    - by Hamman359
    Is it possible to set up two separate projects, 1 WebForms and 1 ASP.NET MVC, so that both point to the same domain? I.e. both point to different pages within www.somesite.com. Here's some background on the application and why I'm asking. This is a brownfield application that is currently 2.0 WebForms and is full of WebForm-y 'goodness' (i.e. ObjectDataSources, FormView controls, UpdatePanels, etc...). There are lots of other 'fun' things in the code base, like 600+ stored procedures and 200+ line methods in the business layer code that get data from the DB via stored proc, do some processing on the data, build an HTML string using string concatenation and then return that string to the UI layer. What we are planning on doing is developing new features in MVC and slowly converting the existing features over to MVC one at a time. As part of this transition, we will also be re-writing the layers below the UI to clean up the mess there and to do things like replace the stored procedures with NHibernate and introduce an IoC container. I know that you can run WebForms and MVC side-by-side in the same project; however, because we will be making wholesale changes to the way we do many things throughout our entire development stack, I'd like the new stuff to be a completely separate project within the solution. This should help serve as a very visual reminder that this is a different way of doing things than before, and make it easier to remove the old code as it is no longer needed. What I don't know is: is this even possible? Can two separate projects point to the same domain? Here's a quick example of what I'm thinking:
    www.somesite.com/orders.aspx?id=123 (Orders page from existing WebForms project)
    www.somesite.com/customer/987 (Customer page from new MVC project)
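
    For the side-by-side option mentioned above (one project hosting both), the usual routing arrangement is to let .aspx requests fall through to WebForms while everything else goes to MVC; a minimal Global.asax sketch is below, assuming the standard System.Web.Mvc/System.Web.Routing setup. Whether IIS will map two genuinely separate projects onto one host name is an IIS configuration question (virtual applications under one site), which this code does not answer.
        using System.Web.Mvc;
        using System.Web.Routing;

        public class MvcApplication : System.Web.HttpApplication
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                // Existing WebForms pages (e.g. /orders.aspx?id=123) bypass MVC routing.
                routes.IgnoreRoute("{resource}.aspx/{*pathInfo}");
                routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

                // New MVC features, e.g. /customer/987.
                routes.MapRoute(
                    "Default",
                    "{controller}/{action}/{id}",
                    new { controller = "Home", action = "Index", id = "" });
            }

            protected void Application_Start()
            {
                RegisterRoutes(RouteTable.Routes);
            }
        }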

  • Best strategy for moving data between physical tiers in ASP.net

    - by Pete Lunenfeld
    Building a new ASP.NET application, and planning to separate the DB, 'service' tier and Web/UI tier into separate physical layers. What is the best/easiest strategy to move serialized objects between the service tier and the UI tier? I was considering serializing POCOs into JSON, using simple ASP.NET pages to serve the middle tier. Meaning that the UI/Web tier will request data from a (hidden to the outside user) web server that will return a JSON string. This kind of JSON 'emitter' seems easily testable. It also seems easily compressible for efficiently moving data over the WAN between tiers. I know that some folks use .asmx web services for this kind of task, but that seems to carry excess overhead with SOAP, and the payload is not as human-readable (testable) as POCOs serialized as JSON. Others are using more complex technology like WCF, which we have never used. Does anyone have advice for choosing a method for moving data/objects between the data (DB) tier and the web (UI) tier over the WAN using .NET technologies? Thanks!!!
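
    The 'JSON emitter' idea can be prototyped with a plain HTTP handler plus the built-in JavaScriptSerializer (System.Web.Extensions) rather than full pages. A sketch with hypothetical type names and handler path:
        using System.Web;
        using System.Web.Script.Serialization;

        // Hypothetical POCO handed over by the service tier.
        public class CustomerDto
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // Middle-tier endpoint (e.g. registered as customers.ashx) that emits JSON
        // for the UI tier to consume over the internal link.
        public class CustomerJsonHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                // In the real application this object would come from the data tier.
                CustomerDto customer = new CustomerDto { Id = 987, Name = "Example" };

                JavaScriptSerializer serializer = new JavaScriptSerializer();
                context.Response.ContentType = "application/json";
                context.Response.Write(serializer.Serialize(customer));
            }
        }
    The UI tier can turn the string back into objects with serializer.Deserialize<CustomerDto>(json); enabling HTTP compression on the internal site covers the WAN-efficiency point.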

  • Have I taken a wrong path in programming by being excessively worried about code elegance and style?

    - by Ygam
    I am in a major slump right now. I am a BSIT graduate, but I only started actual programming less than a year ago. I have observed that I have the following attitude in programming:
    - I tend to be more of a purist, scorning inelegant approaches to solving problems using code
    - I tend to look at everything on a large scale, planning everything before I start coding, either in simple flowcharts or complex UML charts
    - I have a really strong impulse to refactor my code, even if I miss deadlines or prolong development times
    - I am obsessed with good directory structures, file naming conventions, and class, method, and variable naming conventions
    - I tend to always want to study something new, even, as I said, at the cost of missing deadlines
    - I tend to see software development as something to engineer, to architect; that is, seeing how things relate to each other and how blocks of code can interact (I am a huge fan of loose coupling), i.e. the OOP way of thinking
    - I tend to combine OOP and procedural coding whenever I see fit
    - I want my code to execute fast (thus the elegant approaches and refactoring)
    This bothers me because I see my colleagues doing much better the other way around (aside from the fact that they started programming in our first year of college). By the other way around I mean: they fire up their editors and start coding, and they get the job done much faster because they don't really have to look at how clean their code is or how elegant their algorithms are. They don't bother with OOP however big their projects are, they mostly use web APIs, piece them together and voila! Working code! Clients are happy, they get paid fast, at the expense of a really unmaintainable or hard-to-read code base that lacks structure and conventions, or slow execution of certain actions (the common reasoning against this being that internet connections are much faster these days and hardware is more powerful). The excuse I often receive is that clients don't care about how you write the code, but they do care about how long it takes you to deliver it. If it works then all is good. Now, was my "purist" approach the wrong way to start programming? Should I just dump these purist concepts and code the hell up, because I have seen it: clients don't really care how beautifully coded it is?

  • UI Design, in case of numerous situations

    - by The King
    Hi... I'm creating a web form, where there are around 12-15 input fields... You can have a look at the screen. The request is such that, depending on the data the user selects in the GridView and the DropDownList, the appropriate TextBoxes and CheckBoxes need to be displayed. Sometimes the conditions are very direct, like when the DDL value is "ABC", get only the paid amount from the user. Sometimes they are complex, like: if the DDL is "DEF" and the selected GPMS value is between 1000-2000, calculate the values of allowed, paid etc. (using some formula) and direct focus to the Page No field, leaving the other fields open in case the user wants to edit. There are around 10-15 conditions like this. As this was done through agile, conditions were being added as and when needed, and wherever it felt appropriate (DDL on-change event, GridView selection-changed event, etc.). After completion, I now see the code has become a big chunk and is growing unmanageably... Now, I'm planning to clean this up. From your experience, what do you think is the best way to handle this? There is a possibility of adding more conditions like this in future... Please let me know in case you need more information. I'm currently developing this app in C# .NET Windows Forms.
    Edit: Currently there are only three items (the DataGrid, the DDL, the OverrideAmt CheckBox) that change the way other fields behave... Almost all of the conditions fall between the two situations I mentioned... Mostly they belong to "enabling/disabling", "setting of values", and "changing focus", or any combination of these.
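
    One way to keep 10-15 such conditions manageable is to move them out of the individual event handlers into a single rule table that is re-evaluated whenever the grid, the DDL or the OverrideAmt checkbox changes. A Windows Forms sketch; the control names and the two sample rules are hypothetical stand-ins for the real conditions:
        using System;
        using System.Collections.Generic;

        // Central rule table: each rule pairs a condition over the current selections
        // with the action (enable/disable, set value, move focus) to apply when it matches.
        public class FieldRules
        {
            private class Rule
            {
                public Func<bool> Condition;
                public Action Apply;
            }

            private readonly List<Rule> rules = new List<Rule>();

            public void Add(Func<bool> condition, Action apply)
            {
                rules.Add(new Rule { Condition = condition, Apply = apply });
            }

            // Called from every relevant event (DDL change, grid selection change, checkbox click).
            public void Evaluate()
            {
                foreach (Rule rule in rules)
                {
                    if (rule.Condition())
                    {
                        rule.Apply();
                    }
                }
            }
        }

        // Wiring inside the form (all names hypothetical):
        //
        //   var rules = new FieldRules();
        //   rules.Add(() => ddlType.Text == "ABC",
        //             () => { txtPaid.Enabled = true; txtAllowed.Enabled = false; });
        //   rules.Add(() => ddlType.Text == "DEF" && SelectedGpms() >= 1000 && SelectedGpms() <= 2000,
        //             () => { CalculateAmounts(); txtPageNo.Focus(); });
        //
        //   ddlType.SelectedIndexChanged  += (s, e) => rules.Evaluate();
        //   grid.SelectionChanged         += (s, e) => rules.Evaluate();
        //   chkOverrideAmt.CheckedChanged += (s, e) => rules.Evaluate();
    Adding a new condition then means adding one entry to the table instead of touching several event handlers.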

  • dynamically created controls and responding to control events

    - by Dirk
    I'm working on an ASP.NET project in which the vast majority of the forms are generated dynamically at run time (form definitions are stored in a DB for customizability). Therefore, I have to dynamically create and add my controls to the Page every time OnLoad fires, regardless of IsPostBack. This has been working just fine and .NET takes care of managing ViewState for these controls.
        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            RenderDynamicControls();
        }

        private void RenderDynamicControls()
        {
            //1. call service layer to retrieve form definition
            //2. create and add controls to page container
        }
    I have a new requirement in which, if a user clicks on a given button (this button is created at design time), the page should be re-rendered in a slightly different way. So in addition to the code that executes in OnLoad (i.e. RenderDynamicControls()), I have this code:
        protected void MyButton_Click(object sender, EventArgs e)
        {
            RenderDynamicControlsALittleDifferently();
        }

        private void RenderDynamicControlsALittleDifferently()
        {
            //1. clear all controls from the page container added in RenderDynamicControls()
            //2. call service layer to retrieve form definition
            //3. create and add controls to page container
        }
    My question is: is this really the only way to accomplish what I'm after? It seems beyond hacky to effectively render the form twice simply to respond to a button click. I gather from my research that this is simply how the page lifecycle works in ASP.NET: namely, that OnLoad must fire on every postback before child events are invoked. Still, it's worthwhile to check with the SO community before having to drink the kool-aid. On a related note, once I get this feature completed, I'm planning on throwing an UpdatePanel on the page to perform the page updates via Ajax. Any code/advice that makes that transition easier would be much appreciated. Thanks
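
    A common variation on this is to keep a small 'render mode' flag in ViewState: OnLoad always rebuilds whichever variant was last rendered (which is what keeps ViewState and events working), and the click handler flips the flag and rebuilds once, so the double build happens only on the transition postback. A hedged sketch - the placeholder name and ViewState key are made up:
        using System;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        public class DynamicFormPage : Page
        {
            // In a real page this is the container declared in the markup.
            protected PlaceHolder placeHolder;

            // The render mode survives postbacks in ViewState, so OnLoad can rebuild
            // the same control tree that was rendered last time.
            private bool AlternateLayout
            {
                get { return (bool)(ViewState["AlternateLayout"] ?? false); }
                set { ViewState["AlternateLayout"] = value; }
            }

            protected override void OnLoad(EventArgs e)
            {
                base.OnLoad(e);
                BuildDynamicControls(AlternateLayout);
            }

            protected void MyButton_Click(object sender, EventArgs e)
            {
                // Flip the mode and rebuild once for this request; later postbacks
                // build the alternate layout straight away in OnLoad.
                AlternateLayout = true;
                placeHolder.Controls.Clear();
                BuildDynamicControls(AlternateLayout);
            }

            private void BuildDynamicControls(bool alternate)
            {
                // 1. call the service layer to retrieve the form definition
                // 2. create and add controls to the container, varying the layout
                //    when 'alternate' is true
            }
        }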

  • 1k of Program Space, 64 bytes of RAM. Is assembly an absolute must?

    - by Earlz
    (If you're lazy, see the bottom for the TL;DR.) Hello, I am planning to build a new (prototype) project dealing with physical computing. Basically, I have wires. These wires all need to have their voltage read at the same time. More than a few hundred microseconds difference between the readings of each wire will completely screw it up. The Arduino takes about 114 microseconds per reading, so the most I could read is 2 or 3 wires before the latency would skew the accuracy of the readings. So my plan is to have an Arduino as the "master" of an array of ATTinys. The Arduino is pretty cramped for space, but it's a massive playground compared to the tinys. An ATTiny13A has 1k of flash ROM (program space), 64 bytes of RAM, and 64 bytes of (not-durable and slow) EEPROM. (I'm choosing this for price as well as size.) The ATTinys in my system will not do much. Basically, all they will do is wait for a signal from the master, then read the voltage of 1 or 2 wires and store it in RAM (or possibly EEPROM if it's that cramped), and then send it to the master using only 1 wire for data (no room for more than that!). So far then, all I should have to do is implement trivial voltage-reading code (using the built-in ADC). But it's this communication bit I'm worried about. Do you think a communication protocol (using just 1 wire!) could even be implemented in such constraints?
    TL;DR: In less than 1k of program space and 64 bytes of RAM (and 64 bytes of EEPROM), do you think it is possible to implement a 1-wire communication protocol? Would I need to drop to assembly to make it fit? I know that my current Arduino programs linking to the Wiring library are over 8k, so I'm a bit concerned.

  • playing a dvd using C#

    - by user203212
    Hi, I have a DVD copied to my hard drive. It has a folder called VIDEO_TS. I am planning to run VLC player to play it. I was wondering how I can play this DVD using C#. I do not want to use an ActiveX control inside C#. All I need to do is run vlc.exe using a Process, which I have already done. But how do I select a specific file from the code so that it starts playing in the VLC player? My code is:
        System.Diagnostics.Process Proc = new System.Diagnostics.Process();
        Proc.StartInfo.FileName = @"C:\Program Files\VideoLAN\VLC\vlc.exe";
        Proc.StartInfo.Arguments = @"C:\Test\Legacy\VIDEO_TS\VIDEO_TS.BUP";
        Proc.Start();
    I am trying to send the file name as an argument to run it in vlc.exe, but it's not working; it just opens up the VLC player. I don't want the user to have to select the file manually.
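
    Two things commonly cause exactly this symptom: the path is passed without quotes (so a path containing spaces arrives as several arguments), and .BUP is only a backup copy rather than something VLC normally plays. A hedged variant that quotes the argument and targets the title-set .IFO instead; whether the .IFO or VLC's dvd:// MRL is the better target is an assumption to verify against the VLC documentation:
        using System.Diagnostics;

        class PlayDvd
        {
            static void Main()
            {
                Process proc = new Process();
                proc.StartInfo.FileName = @"C:\Program Files\VideoLAN\VLC\vlc.exe";

                // Quote the argument so a path containing spaces reaches VLC as one token,
                // and point at the title-set .IFO rather than the .BUP backup copy.
                proc.StartInfo.Arguments = "\"C:\\Test\\Legacy\\VIDEO_TS\\VIDEO_TS.IFO\"";

                // Alternative to try (assumption, check the VLC docs): the DVD MRL form,
                // e.g. proc.StartInfo.Arguments = "dvd:///C:/Test/Legacy/";

                proc.Start();
            }
        }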

  • Optimizing MySQL for ALTER TABLE of InnoDB

    - by schuilr
    Sometime soon we will need to make schema changes to our production database. We need to minimize downtime for this effort; however, the ALTER TABLE statements are going to run for quite a while. Our largest tables have 150 million records, and the largest table file is 50G. All tables are InnoDB, and it was set up as one big data file (instead of file-per-table). We're running MySQL 5.0.46 on an 8-core machine with 16G of memory and a RAID10 config. I have some experience with MySQL tuning, but it usually focuses on reads or writes from multiple clients. There is lots of information to be found on the Internet on that subject; however, there seems to be very little available on best practices for (temporarily) tuning your MySQL server to speed up ALTER TABLE on InnoDB tables, or for INSERT INTO .. SELECT FROM (we will probably use this instead of ALTER TABLE to have some more opportunities to speed things up a bit). The schema change we are planning to make is adding an integer column to all tables and making it the primary key, instead of the current primary key. We need to keep the 'old' column as well, so overwriting the existing values is not an option. What would be the ideal settings to get this task done as quickly as possible?

  • Google Maps Terms of Service - saving some data to a database

    - by R.M.
    I've read the terms of service, and, from what I understand, I'm not allowed to store any information I retrieve from the Google Maps API. Are there any exceptions to this? More to the point, I'm planning on building an application that shows the user several points of interest (restaurants, libraries, etc.) within a certain distance around a location he chooses (it can be in one city or more, depending on the distance he chooses). There are two problems:
    The first problem is that (at least for my country) the geocoder doesn't locate exact addresses; at best it only locates street names (and completely ignores street numbers) in larger cities. It is even worse for smaller rural areas. So the only way to accurately show the places on the map is by storing their coordinates in the database.
    The second problem seems to be with calculating distances. To show the points located below a certain distance from the user, I would have to use GDirections to get all the distances between the user's location and the other points, to see which ones to show. That would be really slow for the user (since I also have to add a small delay between requests), and it would also send a pretty large number of requests to Google. Would I be allowed to store those distances in a database? The users would not be able to access a list of all the stored information; they would only see the names of the places, and a map with some markers on it. Thank you.
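
    For the second problem, the straight-line (great-circle) distance between the stored coordinates can be computed locally with the haversine formula - no Google requests involved and nothing Google-derived to store - though it only approximates road distance, so GDirections would still be needed wherever true driving distance matters. A small sketch (not from the question) in C#:
        using System;

        public static class GeoDistance
        {
            // Great-circle distance between two latitude/longitude points, in kilometres.
            public static double Kilometres(double lat1, double lng1, double lat2, double lng2)
            {
                const double earthRadiusKm = 6371.0;
                double dLat = ToRadians(lat2 - lat1);
                double dLng = ToRadians(lng2 - lng1);

                double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2)
                         + Math.Cos(ToRadians(lat1)) * Math.Cos(ToRadians(lat2))
                         * Math.Sin(dLng / 2) * Math.Sin(dLng / 2);

                return earthRadiusKm * 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));
            }

            private static double ToRadians(double degrees)
            {
                return degrees * Math.PI / 180.0;
            }
        }

        // Usage: keep only points whose straight-line distance falls inside the chosen radius,
        // e.g. GeoDistance.Kilometres(userLat, userLng, p.Lat, p.Lng) <= radiusKm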

  • haml - if-else with different indentations

    - by egarcia
    Hi everyone, I'm trying to render a calendar with Rails and Haml. The dates used come from a variable called @dates. It is a Date range that contains the first and last days to be presented on the calendar. The first day is always a Sunday and the last one is always a Monday. I'm planning to render a typical calendar, with one column per weekday (Sunday is going to be the first day of the week), using an HTML table. So, I need to put a %tr followed by a %td on Sundays, but on the rest of the days I just need a %td. I'm having trouble modelling that in Haml. This seems to require different levels of indentation, and that's something it doesn't like. Here's my failed attempt:
        %table
          %tr
            %th= t('date.day_names')[0] # Sunday
            %th= t('date.day_names')[1]
            %th= t('date.day_names')[2]
            %th= t('date.day_names')[3]
            %th= t('date.day_names')[4]
            %th= t('date.day_names')[5]
            %th= t('date.day_names')[6] # Monday
          - @dates.each do |date|
            - if(date.wday == 0) # if date is sunday
              %tr
                %td=date.to_s
            - else
              %td=date.to_s
    This doesn't work the way I want. The %tds for the non-Sunday days appear outside of the %tr:
        <tr>
          <td>2010-04-24</td>
        </tr>
        <td>2010-04-25</td>
        <td>2010-04-26</td>
        <td>2010-04-27</td>
        <td>2010-04-28</td>
        <td>2010-04-29</td>
        <td>2010-04-30</td>
    I tried adding two more spaces to the else branch but then Haml complained about improper indentation. What's the best way to do this? Note: I'm not interested in rendering the calendar using unordered lists. Please don't suggest that.

  • Why can't I wrap the ServletRequest when trying to capture JSP Output

    - by Patrick Cornelissen
    I am trying to dispatch from a servlet request handler to the JSP processor and capture the content of it. I am providing wrapper instances for the ServletRequest and ServletResponse; they implement the corresponding HttpServletRequest/-Response interfaces, so they should be drop-in replacements. All methods currently delegate to the original ServletRequest object (I am planning to modify some of them soon). Additionally I have introduced some new methods. (If you want to see the code: http://code.google.com/p/gloudy/source/browse/trunk/gloudyPortal/src/java/org/gloudy/gloudlet/impl/RenderResponseImpl.java) The HttpServletResponse wrapper uses its own output streams to capture the output. When I call
        request.getRequestDispatcher("/WEB-INF/views/test.jsp").include(request, response);
    with my request and response wrappers, the method returns and no content has been captured. When I tried to pass the original request object it worked! But that's not what I need in the long run...
        request.getRequestDispatcher("/WEB-INF/views/test.jsp").include(request.getServletRequest(), response);
    This works. getServletRequest() returns the original request, given by the servlet container. Does anyone know why this is not working with my wrappers?

  • Mysql Database Question about Large Columns

    - by murat
    Hi, I have a table that has 100,000 rows, and soon it will be doubled. The size of the database is currently 5 GB, and most of that goes to one particular column, which is a text column holding PDF files. We expect to have a 20-30 GB, or maybe 50 GB, database after a couple of months, and this system will be used frequently. I have a couple of questions regarding this setup.
    1) We are using InnoDB for every table, including the users table, etc. Is it better to use MyISAM on the table where we store the text version of the PDF files (from a memory usage/performance perspective)?
    2) We use Sphinx for searching; however, the data must be retrieved for highlighting. Highlighting is done via the Sphinx API, but we still need to retrieve 10 rows in order to send them to Sphinx again. These 10 rows may allocate 50 MB of memory, which is quite large. So I am planning to split these PDF files into chunks of 5 pages in the database, so these 100,000 rows will become around 3-4 million rows, and a couple of months later, instead of having 300,000-350,000 rows, we'll have 10 million rows storing the text version of these PDF files. However, we will retrieve fewer pages, so instead of retrieving 400 pages to send to Sphinx for highlighting, we can retrieve 5 pages, and it will have a big impact on performance. Currently, when we search a term and retrieve PDF files that have more than 100 pages, the execution time is 0.3-0.35 seconds; however, if we retrieve PDF files that have fewer than 5 pages, the execution time drops to 0.06 seconds, and it also uses less memory. Do you think this is a good trade-off? We will have millions of rows instead of 100k-200k rows, but it will save memory and improve performance. Is this a good approach to solving the problem, and do you have any ideas how to overcome it? The text version of the data is used only for indexing and highlighting, so we are very flexible. Thanks,

  • generating images with php based on css3 gradient settings?

    - by thrice801
    Hi, does anyone know if there is a PHP library for this, or if there isn't, does anyone have input on how one would go about generating an image with PHP from basic HTML element input settings and CSS3 gradient parameters? To give an example of why this would be useful: I found a couple of days ago that laying out the wireframe for a webpage using basic CSS3 gradient styling speeds up my design and development time by, well, a lot. I can't design in Photoshop; I'll spend hours tweaking stupid little things that only lead to me tweaking more stupid little things to compensate for the earlier changes. So I had an epiphany to stop using Photoshop until the end, and just focus on the main styling of elements, text-shadow, and borders for highlights, and I feel I am able to make super clean, more focused layouts by being more restricted to the basics of graphic design with just CSS. Anyway, my plan is that after I have the layouts done, I'll recreate the graphics in Photoshop so that every browser is able to render the images. This won't take long, but as far as repeating it goes, if I could just do it with PHP, that would be incredible. For instance, take this style, which renders a clean-looking iPad/iPod-ish menu/button gradient:
    [code]
    background: -moz-linear-gradient(top, #808080, #454545 50%, #313131 51%, #333333);
    background: -webkit-gradient(linear, left top, left bottom, color-stop(0, #808080), color-stop(.5, #454545), color-stop(.5, #313131), to(#333333));
    [/code]
    Basically what I'm looking to do is take one or more inputs that control rendering the image as you see there. Take a width and a height input, and then render the image gradient accordingly, so I can save it, upload it and then use it in my designs. So yeah, I know PHP has some image generation capabilities but I don't know to what extent. Any input on the most effective way to go about doing this, or whether it already exists, would be appreciated!

  • SQLite for personal use

    - by ALife
    What are the applications for your personal use that need a small database like SQLite? I am thinking of trying a few popular databases, and SQLite is surely the first one I am planning to try, since I know next to nothing about databases except some simple programming years ago. I have learned that SQLite is good for personal use, but embarrassingly I do not see any application for it except maybe managing my list of phone numbers/contact info, which has probably a few hundred items. What's your experience? FYI, I use EndNote for my references and soft copies of books, and I feel iTunes' music/media management is OK since I am not a frequent user anyway. And others? I do lots of coding, but I just use some simple etags tools for that. And I pretty much use .txt files (sometimes in the asciidoc style) for my notes. I have quite a bunch of notes, but not that many either. So, really, what are your personal applications that need a small database instead of existing tools and plain text files?

  • Stuck on Object scope in Java

    - by ivor
    Hello, I'm working my way through an exercise to understand Java, and basically I need to merge the functionality of two classes into one app. I'm stuck on one area though - the referencing of objects across classes. What I have done is set up a GUI in one class (test1), and this has a text field in it, i.e. chatLine = new JTextField();
    In another class (test2), I was planning on leaving all the functionality in there and referencing the various GUI elements set up in test1 - like this: test1.chatLine. I understand this level of referencing; I tested it by setting up a test method in the test2 class:
        public static void testpass() {
            test1.testfield.setText("hello");
        }
    I'm trying to understand how to implement the more complex functionality in the test2 class though, specifically this existing code:
        test1.chatLine.addActionListener(new ActionAdapter() {
            public void actionPerformed(ActionEvent e) {
                String s = Game.chatLine.getText();
                if (!s.equals("")) {
                    appendToChatBox("OUTGOING: " + s + "\n");
                    Game.chatLine.selectAll();
                    // Send the string
                    sendString(s);
                }
            }
        });
    This is the bit I'm stuck on, since it's failing to compile. Can I add the ActionAdapter stuff to the GUI element that sits in test1, but do this from test2? I'm wondering if I'm trying to do something that's not possible. Hope this makes sense; I'm pretty confused over this - I'm trying to understand how the scope and referencing works. Ideally what I'm trying to achieve is one class that has all the main stuff in, the GUI etc., then all the related functionality in the other class, and target the first class's GUI elements with the results etc. Any thoughts greatly appreciated.
