Search Results

Search found 913 results on 37 pages for 'massive boisson'.

Page 29 of 37

  • How can I create an executable to run on a certain processor architecture (instead of a certain OS)?

    - by CrazyJugglerDrummer
    So I take my C++ program in Visual Studio, compile it, and it spits out a nice little EXE file. But EXEs will only run on Windows, and I hear a lot about how C/C++ compiles into assembly language, which runs directly on a processor. The EXE runs with the help of Windows, or I could have a program that makes an executable that runs on a Mac. But aren't I compiling C++ code into assembly language, which is processor specific? My insights: I'm guessing I'm probably not. I know there's an Intel C++ compiler, so would it make processor-specific assembly code? EXEs run on Windows, so they take advantage of tons of things already set up, from graphics packages to the massive .NET framework. A processor-specific executable would be literally starting from scratch, with just the instruction set of the processor. What file type would this executable be? We could be running Windows and open it, but then would control switch to the processor only? I assume this executable would be something like an operating system, in that it would have to be run before anything else was booted up, and would have only the processor's instruction set to "use".

    Read the article

  • C# vs C - Big performance difference

    - by John
    I'm finding massive performance differences between similar code in C and C#. The C code is:

        #include <stdio.h>
        #include <time.h>
        #include <math.h>

        main()
        {
            int i;
            double root;
            clock_t start = clock();
            for (i = 0; i <= 100000000; i++) {
                root = sqrt(i);
            }
            printf("Time elapsed: %f\n", ((double)clock() - start) / CLOCKS_PER_SEC);
        }

    And the C# (console app) is:

        using System;
        using System.Collections.Generic;
        using System.Text;

        namespace ConsoleApplication2
        {
            class Program
            {
                static void Main(string[] args)
                {
                    DateTime startTime = DateTime.Now;
                    double root;
                    for (int i = 0; i <= 100000000; i++)
                    {
                        root = Math.Sqrt(i);
                    }
                    TimeSpan runTime = DateTime.Now - startTime;
                    Console.WriteLine("Time elapsed: " + Convert.ToString(runTime.TotalMilliseconds/1000));
                }
            }
        }

    With the above code, the C# completes in 0.328125 seconds (release version) and the C takes 11.14 seconds to run. The C is being compiled to a Windows executable using MinGW. I've always been under the assumption that C/C++ was faster than, or at least comparable to, C#/.NET. What exactly is causing the C to run over 30 times slower? EDIT: It does appear that the C# optimizer was removing root, as it wasn't being used. I changed the root assignment to root += and printed out the total at the end. I've also compiled the C using cl.exe with the /O2 flag set for max speed. The results are now: 3.75 seconds for the C, 2.61 seconds for the C#. The C is still taking longer, but this is acceptable.

    Read the article

  • How can I get dtrace to run the traced command with non-root privileges?

    - by Gyom
    OS X lacks Linux's strace, but it has dtrace, which is supposed to be so much better. However, I miss the ability to do simple tracing on individual commands. For example, on Linux I can write strace -f gcc hello.c to capture all system calls, which gives me the list of all the filenames needed by the compiler to compile my program (the excellent memoize script is built upon this trick). I want to port memoize to the Mac, so I need some kind of strace. What I actually need is the list of files gcc reads and writes to, so what I need is more of a truss. Sure enough, I can say dtruss -f gcc hello.c and get somewhat the same functionality, but then the compiler is run with root privileges, which is obviously undesirable (apart from the massive security risk, one issue is that the a.out file is now owned by root :-). I then tried dtruss -f sudo -u myusername gcc hello.c, but this feels a bit wrong, and does not work anyway (I get no a.out file at all this time, not sure why). That long story is meant to motivate my original question: how do I get dtrace to run my command with normal user privileges, just like strace does on Linux? Edit: it seems that I'm not the only one wondering how to do this: question #1204256 is pretty much the same as mine (and has the same suboptimal sudo answer :-)

    Read the article

  • Displaying tree path of record in SQL Server 2005

    - by jskiles1
    An example of my tree table is ([id] is an identity):

        [id], [parent_id], [path]
        1, NULL, 1
        2, 1, 1-2
        3, 1, 1-3
        4, 3, 1-3-4

    My goal is to query quickly for multiple rows of this table and view the full path of the node from its root, through its superiors, down to itself. The ultimate question is, should I generate this path on inserts and maintain it in its own column, or generate this path on query to save disk space? I guess it depends on whether this table is write heavy or read heavy. I've been contemplating several approaches to using the "path" characteristic of this parent/child relationship and I just can't seem to settle on one. This "path" is simply for display purposes and serves absolutely no purpose other than that. Here is what I have done to implement this "path":

        1. AFTER INSERT TRIGGER - requires passing a NULL path to the insert and updating the path for the record at the inserted row's identity
        2. INSTEAD OF INSERT TRIGGER - does not require the insert to have a NULL path passed, but does require the trigger to insert with a NULL path and update the path for the record at SCOPE_IDENTITY()
        3. STORED PROCEDURE - requires all inserts into this table to be done through the stored procedure implementing the trigger logic
        4. VIEW - requires building the path in the view

    1 and 2 seem annoying if massive amounts of data are entered at once. 3 seems annoying because all inserts must go through the procedure in order to have a valid path populated. 1, 2, and 3 require maintaining a path column on the table. 4 removes all the limitations of the above, but requires the view to perform the path logic and requires use of the view if a path is to be displayed. I have successfully implemented all of the above approaches and I'm mainly looking for some advice. Am I way off the mark here, or are any of the above acceptable? Each has its advantages and disadvantages.

    Read the article

  • _dl_runtime_resolve -- When do the shared objects get loaded into memory?

    - by windfinder
    We have a message processing system with high performance demands. Recently we have noticed that the first message takes many times longer than subsequent messages. A bunch of transformation and message augmentation happens as a message goes through our system, much of it done by way of external libs. I just profiled this issue (using callgrind), comparing a "run" of just one message with a "run" of many messages (providing a baseline of comparison). The main difference I see is the function "do_lookup_x" taking up a huge amount of time. Looking at the various calls to this function, they all seem to be called by the common function _dl_runtime_resolve. Not sure what this function does, but to me this looks like the first time the various shared libraries are being used, and they are then being loaded into memory by the dynamic linker. Is this a correct assumption? That the binary will not load the shared libraries into memory until they are being prepped for use, therefore we will see a massive slowdown on the first message, but on none of the subsequent ones? How do we go about avoiding this? Note: we operate on the microsecond scale.

    Read the article

  • 1k of Program Space, 64 bytes of RAM. Is 1 wire communication possible?

    - by Earlz
    (If you're lazy, see the bottom for a TL;DR.) Hello, I am planning to build a new (prototype) project dealing with physical computing. Basically, I have wires. These wires all need to have their voltage read at the same time. More than a few hundred microseconds difference between the readings of each wire will completely screw it up. The Arduino takes about 114 microseconds per reading. So the most I could read is 2 or 3 wires before the latency would skew the accuracy of the readings. So my plan is to have an Arduino as the "master" of an array of ATTinys. The Arduino is pretty cramped for space, but it's a massive playground compared to the tinys. An ATTiny13A has 1k of flash ROM (program space), 64 bytes of RAM, and 64 bytes of (not-durable and slow) EEPROM. (I'm choosing this for price as well as size.) The ATTinys in my system will not do much. Basically, all they will do is wait for a signal from the master, then read the voltage of 1 or 2 wires and store it in RAM (or possibly EEPROM if it's that cramped), and then send it to the master using only 1 wire for data (no room for more than that!). So far then, all I should have to do is implement trivial voltage-reading code (using the built-in ADC). But it's this communication bit I'm worried about. Do you think a communication protocol (using just 1 wire!) could even be implemented in such constraints? TL;DR: In less than 1k of program space and 64 bytes of RAM (and 64 bytes of EEPROM), do you think it is possible to implement a 1-wire communication protocol? Would I need to drop to assembly to make it fit? I know that currently my Arduino programs linking to the Wiring library are over 8k, so I'm a bit concerned.

    Read the article

  • Java OO design confusion: how to handle actions modified by states modified by actions...

    - by Arvanem
    Hi folks, given an entity whose action is potentially modified by states (of the entity and other entities), in turn potentially modified by other actions (of the entity and other entities), what is the best way to code or design to handle the potential existence of the modifiers? Speaking metaphorically, I am coding a Java application representing a piano. As you know, a piano has keys (which, when pressed, emit sound) and pedals (which, when pressed, modify the keys' sounds). My base class structure is as follows: Entity (for keys and pedals), State (this holds each entity's states, e.g. a name such as "soft pedal", and a boolean "Pressed"), and Action (this holds each entity's actions, e.g. play sound when pressed, or modify others' sounds). By composition, the Entity class has a copy of each of State and Action inside it, e.g.:

        public class Entity {
            State entityState = new State();
            Action entityAction = new Action();
        }

    Thus I have coded a "C-Sharp" key entity. When I "press" that entity (set its "Pressed" state to true), its action plays a "C-Sharp" sound and then sets its "Pressed" state to false. At the same time, if the "C-Sharp" key entity is not "tuned", its sound deviates from "C-Sharp". Meanwhile I have coded a "soft pedal" entity. When that entity is "pressed", no sound plays, but its action is to make softer the sound of the "C-Sharp" and other key entities. I have also coded a "sustain pedal" entity. When that entity is "pressed", no sound plays, but its action is to enable reverberation of the sound of the "C-Sharp" and other key entities. Both the "soft" and "sustain" pedals can be pressed at the same time, with the result that key entities become both softened and reverberating. In short, I do not understand how to make this simultaneous series of states and actions modify each other in a sensible OO way. I am wary of coding a massive series of "if" statements or "switches". Thanks in advance for any help or links you can offer.
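    One way to avoid the if-chains (a design sketch in Python rather than Java, with class names of my own invention, not from the original post): treat each pedal as a first-class modifier object that transforms a sound, and let a key build its sound by passing it through every currently pressed pedal.

        # Sketch: pedals as composable sound modifiers instead of if-chains.
        class Sound:
            def __init__(self, pitch, volume=1.0, sustain=False):
                self.pitch, self.volume, self.sustain = pitch, volume, sustain

            def __repr__(self):
                return f"Sound({self.pitch!r}, vol={self.volume:.2f}, sustain={self.sustain})"

        class SoftPedal:
            pressed = False
            def apply(self, sound):
                sound.volume *= 0.5   # soften every note while pressed
                return sound

        class SustainPedal:
            pressed = False
            def apply(self, sound):
                sound.sustain = True  # let the note reverberate
                return sound

        class Key:
            def __init__(self, pitch, pedals):
                self.pitch, self.pedals = pitch, pedals

            def press(self):
                sound = Sound(self.pitch)
                for pedal in self.pedals:
                    if pedal.pressed:        # only active modifiers apply
                        sound = pedal.apply(sound)
                return sound

        pedals = [SoftPedal(), SustainPedal()]
        c_sharp = Key("C#", pedals)
        pedals[0].pressed = pedals[1].pressed = True
        print(c_sharp.press())  # Sound('C#', vol=0.50, sustain=True)

    The same shape ports to Java as a Modifier interface with an apply(Sound) method; adding a new pedal then means adding a class, not another branch.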

    Read the article

  • Backbone.js or Knockout.js as a web framework with jQuery Mobile

    - by Dan
    Without trying to cause a mass discussion, I would like some advice from fellow users of Stack Overflow. I am about to start building a mobile website that gets its data as JSON from a PHP REST API. I have looked into different mobile frameworks and feel that jQuery Mobile will work best for us, as we have knowledge of jQuery, even though it's a little large. Currently at work we are using jQuery for all our sites, and now that we are building a mobile website I need to think about JavaScript frameworks to move us onto a more MV* approach, which I understand the benefits of, and which will bring much-needed structure to this mobile site and future web applications. I have made a comparison table where I have managed to bring the selection down to 2: Backbone and Knockout. I have been looking around the web and it seems that there is more support for Backbone in general, and maybe even more support for Backbone with jQuery Mobile. http://jquerymobile.com/test/docs/pages/backbone-require.html One thing I have noticed, however, is that Backbone doesn't support view bindings (a declarative approach) whereas Knockout does - is this a massive bonus? One of the main reasons for using an MV* framework is to get more structure, so I would like to use the library that will integrate best with jQuery and especially jQuery Mobile. Neither of them seems to have that similar a syntax... Thanks

    Read the article

  • Testing a GUI-heavy WPF application

    - by Hamish Grubijan
    We (my colleagues and I) have a messy, mature, 12-year-old app that is GUI-based, and the current plan is to add new dialogs and other GUI in WPF, as well as replace some of the older dialogs with WPF. At the same time we wish to be able to test that monster - GUI automation in a maintainable way. Some challenges: The application is massive. It constantly gains new features. It is being changed around (bug fixes, patches). It has a back end, and a layer in between. The state of it can get out of whack if you beat it to death. What we want is: some tool that can automate testing of WPF; auto-discovery of what the inputs and outputs of a dialog are (an old test should still work if you add a label that does nothing; it should fail, however, if you remove a necessary text field). It would be very nice if the test suite was easy to maintain, and if it ran and did not break most of the time. Every new dialog should be created with testability in mind. At this point I do not know exactly what I want, so I am marking this as a community wiki. If having to test a huge GUI-based app rings a bell (even if not in WPF), then please share your good, bad and ugly experiences here.

    Read the article

  • Backdoor in OpenBSD: how is it that no developer saw it? And what about Linux and other open source OSes? [closed]

    - by user310291
    It has been claimed that a backdoor was implanted in OpenBSD: http://www.infoworld.com/d/developer-world/software-security-honesty-the-best-policy-285 OpenBSD is open source; how is it that nobody in the developer community saw it in the source code? And then, how can one trust all the other "open source" systems such as Linux? Of course OpenBSD is only one case; the point is not about OpenBSD, it is about open source in general. My question is not about OpenBSD per se, it's about inspection of OS source code, especially C/C++, since most OSes are written in these languages. Also, once the source is compiled, how can one be sure that the binary really reflects the source code? If a law required that a backdoor be implanted, and obliged people to deny that kind of action under the guise of security, how could you be sure that the system has not been corrupted by some tools? As said, there is a "nondisclosure agreement". My guess is that 99.99% of developers in the world are simply incapable of understanding OS source code and won't even bother to look at it. And above all, nobody wonders why a government would want such a massive backdoor, and of course they would pressure the media to deny it.

    Read the article

  • Free solution for automatic updates with .NET/C#?

    - by a2h
    Yes, from searching I can see this has been asked time and time again. Here's the backstory: I'm an individual hobbyist developer with zero budget. A program I've been developing has been in need of constant bugfixes, and my users and I are getting tired of having to manually update. Me, because my current solution sucks (and whenever I screw up an update, I get pitchforks):

        1. Manually FTP to my website
        2. Update a file "newest.txt" with the newest version
        3. Update index.html with a link to the newest version
        4. Hope for people to see the "there's an update" message
        5. Have them manually download the update

    Users, because, well, "Are you ever going to implement auto-update?" "Will there ever be an auto-update feature?" Over the past few hours I have looked into:

        http://windowsclient.net/articles/appupdater.aspx - I can't comprehend the documentation.
        http://www.codeproject.com/KB/vb/Auto_Update_Revisited.aspx - Doesn't appear to support anything other than working with files that aren't in use.
        http://wyday.com/wyupdate/ - wyBuild isn't free, and the file specification is simply too complex. Maybe if a company was paying me I could spend the time, but then I may as well pay for wyBuild.
        http://www.kineticjump.com/update/default.aspx - Ditto.
        ClickOnce - Workarounds for implementing launching on startup are massive, horrendous and not worth it for such a simple feature. Publishing is a pain; manual FTP and replacement of all files is required for servers without FrontPage Extensions.

    I'm pretty much ready to throw in the towel right now and strangle myself. And then I think about Sparkle...

    Read the article

  • Algorithm to detect repeating/similar strings in a corpus of data -- say email subjects, in Python

    - by RizwanK
    I'm downloading a long list of my email subject lines, with the intent of finding email lists that I was a member of years ago and would want to purge from my Gmail account (which is getting pretty slow). I'm specifically thinking of newsletters that often come from the same address and repeat the product/service/group's name in the subject. I'm aware that I could search/sort by the common occurrence of items from a particular email address (and I intend to), but I'd like to correlate that data with repeating subject lines... Now, many subject lines would fail a string match, but "Google Friends : Our latest news" and "Google Friends : What we're doing today" are more similar to each other than a random subject line, as are: "Virgin Airlines has a great sale today" and "Take a flight with Virgin Airlines". So - how can I start to automagically extract trends/examples of strings that may be more similar? Approaches I've considered and discarded ('because there must be some better way'): extracting all the possible substrings and ordering them by how often they show up, then manually selecting relevant ones; stripping off the first word or two and then counting the occurrence of each substring; comparing Levenshtein distance between entries; some sort of string similarity index... Most of these were rejected for massive inefficiency or the likelihood that a vast amount of manual intervention would be required. I guess I need some sort of fuzzy string matching? In the end, I can think of kludgy ways of doing this, but I'm looking for something more generic, so that it gets added to my set of tools rather than special-cased for this data set. After this, I'd be matching the occurrence of particular subject strings with 'From' addresses - I'm not sure if there's a good way of building a data structure that represents how likely/not two messages are to be part of the 'same email list', or of filtering all my email subjects/from addresses into pools of likely 'related' emails and not - but that's a problem to solve after this one. Any guidance would be appreciated.
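    As a hedged starting point (my suggestion, not from the original post): Python's standard-library difflib already gives a usable fuzzy-match score, so you can eyeball pairwise similarities before committing to anything fancier.

        # Sketch: score subject-line similarity with stdlib difflib.
        from difflib import SequenceMatcher

        def similarity(a: str, b: str) -> float:
            # Ratio in [0, 1]; 1.0 means the strings are identical.
            return SequenceMatcher(None, a.lower(), b.lower()).ratio()

        pairs = [
            ("Google Friends : Our latest news",
             "Google Friends : What we're doing today"),
            ("Virgin Airlines has a great sale today",
             "Take a flight with Virgin Airlines"),
            ("Google Friends : Our latest news",
             "Take a flight with Virgin Airlines"),
        ]
        for a, b in pairs:
            print(f"{similarity(a, b):.2f}  {a!r} / {b!r}")

    Related pairs should score noticeably higher than the unrelated one; the cutoff for "same list" is then a threshold you tune by hand. A token-level variant (Jaccard overlap of word sets) is less sensitive to word order, which matters for the Virgin Airlines example.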

    Read the article

  • What arguments to use to explain why a SQL DB is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from MS SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day and who knows how many updates... Me and a couple of others need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. We thought of making a simple console app that tests/times the same interactions between a flat file (stored on the network) and SQL over the network, doing large inserts, searches, updates etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:

        Security
        Concurrent access
        Performance with large amounts of data
        Amount of time to do such a massive rewrite/switch
        Lack of transactions
        PITA to map relational data to flat files
        NTFS doesn't support tons of files in a directory well

    I fear that this will be a great post on The Daily WTF someday if I can't stop it now.
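    For the demo itself, a timing harness along these lines makes the point (a Python/sqlite3 sketch of the idea, standing in for the C# console app and SQL Server described above): an indexed database lookup stays fast while a flat file forces a full scan every time.

        # Sketch: indexed DB lookup vs. linear scan of a flat file.
        import csv, random, sqlite3, time

        N = 1_000_000  # shrink this if the setup step is too slow
        rows = [(i, f"record-{i}") for i in range(N)]

        # Flat file: a CSV with no index -- every search is a full scan.
        with open("records.csv", "w", newline="") as f:
            csv.writer(f).writerows(rows)

        # Database: the same data, with a primary-key index for free.
        db = sqlite3.connect("records.db")
        db.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, name TEXT)")
        db.executemany("INSERT INTO records VALUES (?, ?)", rows)
        db.commit()

        target = random.randrange(N)

        start = time.perf_counter()
        with open("records.csv", newline="") as f:
            hit = next(r for r in csv.reader(f) if int(r[0]) == target)
        print(f"flat file scan:  {time.perf_counter() - start:.4f}s")

        start = time.perf_counter()
        hit = db.execute("SELECT name FROM records WHERE id = ?", (target,)).fetchone()
        print(f"indexed lookup:  {time.perf_counter() - start:.4f}s")

    Add a second process writing to the same CSV while the scan runs, and the concurrent-access argument makes itself.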

    Read the article

  • Python - Checking for membership inside nested dict

    - by victorhooi
    Heya, this is a follow-up question to this one: http://stackoverflow.com/questions/2901422/python-dictreader-skipping-rows-with-missing-columns Turns out I was being silly and using the wrong ID field. I'm using Python 3.x here. I have a dict of employees, indexed by a string, "directory_id". Each value is a nested dict with employee attributes (phone number, surname etc.). One of these values is a secondary ID, say "internal_id", and another is their manager, call it "manager_internal_id". The "internal_id" field is non-mandatory, and not every employee has one. (I've simplified the fields a little, both to make it easier to read and also for privacy/compliance reasons.) The issue here is that we index (key) each employee by their directory_id, but when we look up their manager, we need to find managers by their "internal_id". Before, when employees.keys() was a list of internal_ids, I was using a membership check on this. Now, the last part of my if statement won't work, since the internal_ids are part of the dict values instead of the keys:

        def lookup_supervisor(manager_internal_id, employees):
            if manager_internal_id is not None and manager_internal_id != "" and manager_internal_id in employees.keys():
                return (employees[manager_internal_id]['mail'],
                        employees[manager_internal_id]['givenName'],
                        employees[manager_internal_id]['sn'])
            else:
                return ('Supervisor Not Found', 'Supervisor Not Found', 'Supervisor Not Found')

    So the first question is, how do I check whether manager_internal_id is present in the dict's values? I've tried substituting employees.keys() with employees.values(), but that didn't work. Also, I'm hoping for something a little more efficient; I'm not sure if there's a way to get a subset of the values, specifically all the entries for employees[directory_id]['internal_id']. Hopefully there's some Pythonic way of doing this without using a massive heap of nested for/if loops. My second question is, how do I then cleanly return the required employee attributes (mail, givenName, surname etc.)? My for loop is iterating over each employee and calling lookup_supervisor. I'm feeling a bit stupid/stumped here.

        def tidy_data(employees):
            for directory_id, data in employees.items():
                # We really shouldn't be passing employees back and forth like this - hmm, classes?
                data['SupervisorEmail'], data['SupervisorFirstName'], data['SupervisorSurname'] = lookup_supervisor(data['manager_internal_id'], employees)

    Thanks in advance =), Victor
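    One hedged answer to the first question (a sketch assuming the dict shape described above): build a reverse index from internal_id to directory_id once, so every manager lookup becomes a single dict get instead of a scan over values.

        # Sketch: index employees by their optional internal_id once.
        NOT_FOUND = ('Supervisor Not Found',) * 3

        def build_internal_id_index(employees):
            index = {}
            for directory_id, data in employees.items():
                internal_id = data.get('internal_id')
                if internal_id:  # skip employees without an internal_id
                    index[internal_id] = directory_id
            return index

        def lookup_supervisor(manager_internal_id, employees, index):
            directory_id = index.get(manager_internal_id)
            if directory_id is None:
                return NOT_FOUND
            manager = employees[directory_id]
            return (manager['mail'], manager['givenName'], manager['sn'])

    This also covers the efficiency worry: the index is built in one pass, and the None and empty-string cases fall out naturally, since neither is ever a key in the index.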

    Read the article

  • MUD (game) design concept question about timed events.

    - by mudder
    I'm trying my hand at building a MUD (multiplayer interactive-fiction game). I'm in the design/conceptualizing phase and I've run into a problem that I can't come up with a solution for. I'm hoping some more experienced programmers will have some advice. Here's the problem as best I can explain it: when the player decides to perform an action, he sends a command to the server. The server then processes the command, determines whether or not the action can be performed, and either does it or responds with a reason why it could not be done. One reason that an action might fail is that the player is busy doing something else. For instance, if a player is mid-fight and has just swung a massive broadsword, it might take 3 seconds before he can repeat this action. If the player attempts to swing again too soon, the game will respond indicating that he must wait x seconds before doing that. Now, this I can probably design without much trouble. The problem I'm having is how I can replicate this behavior for AI creatures. All of the events that are being performed by the server ON ITS OWN, i.e. not as an immediate reaction to something a player has done, will have to be time sensitive. Some evil monster has cast a spell on you but must wait 30 seconds before doing it again... I think I'll probably be adding all these events to some kind of event queue, but how can I make that event queue time sensitive?
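    One common shape for this (sketched in Python here, since the post doesn't name a language; the names are mine): a min-heap of events keyed on their fire time, drained by the main server loop on every tick. Cooldowns are the same clock in reverse - store a busy-until timestamp per character and reject commands that arrive before it.

        # Sketch: a time-sensitive event queue using a min-heap of fire times.
        import heapq
        import time

        class EventQueue:
            def __init__(self):
                self._heap = []  # entries: (fire_at, seq, callback)
                self._seq = 0    # tie-breaker so callbacks are never compared

            def schedule(self, delay_seconds, callback):
                entry = (time.monotonic() + delay_seconds, self._seq, callback)
                self._seq += 1
                heapq.heappush(self._heap, entry)

            def run_due(self):
                # Call this from the main server loop on every tick.
                now = time.monotonic()
                while self._heap and self._heap[0][0] <= now:
                    _, _, callback = heapq.heappop(self._heap)
                    callback()

        events = EventQueue()

        def monster_casts():
            print("The evil monster casts a spell on you!")
            events.schedule(30.0, monster_casts)  # re-arm after the cooldown

        events.schedule(0.0, monster_casts)  # fires on the next tick
        events.run_due()

    The queue never blocks: the server loop keeps handling player commands as usual and just pops whatever is due each time around.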

    Read the article

  • Parsing Chunk of Data into Hash of Array With Perl

    - by neversaint
    I have data that looks like this:

        #info
        #info2
        1:SRX004541
        Submitter: UT-MGS, UT-MGS
        Study: Glossina morsitans transcript sequencing project(SRP000741)
        Sample: Glossina morsitans(SRS002835)
        Instrument: Illumina Genome Analyzer
        Total: 1 run, 8.3M spots, 299.9M bases
        Run #1: SRR016086, 8330172 spots, 299886192 bases

        2:SRX004540
        Submitter: UT-MGS
        Study: Anopheles stephensi transcript sequencing project(SRP000747)
        Sample: Anopheles stephensi(SRS002864)
        Instrument: Solexa 1G Genome Analyzer
        Total: 1 run, 8.4M spots, 401M bases
        Run #1: SRR017875, 8354743 spots, 401027664 bases

        3:SRX002521
        Submitter: UT-MGS
        Study: Massive transcriptional start site mapping of human cells under hypoxic conditions.(SRP000403)
        Sample: Human DLD-1 tissue culture cell line(SRS001843)
        Instrument: Solexa 1G Genome Analyzer
        Total: 6 runs, 27.1M spots, 977M bases
        Run #1: SRR013356, 4801519 spots, 172854684 bases
        Run #2: SRR013357, 3603355 spots, 129720780 bases
        Run #3: SRR013358, 3459692 spots, 124548912 bases
        Run #4: SRR013360, 5219342 spots, 187896312 bases
        Run #5: SRR013361, 5140152 spots, 185045472 bases
        Run #6: SRR013370, 4916054 spots, 176977944 bases

    What I want to do is create a hash of arrays, with the first line of each chunk as the key and the SR## parts of the lines matching "^Run" as its array members:

        $VAR = { 'SRX004541' => ['SRR016086'],
                 # etc
               }

    But my construct doesn't work - and there must be a better way to do it:

        use Data::Dumper;
        my %bighash;
        my $head = "";
        my @temp = ();
        while ( <> ) {
            chomp;
            next if (/^\#/);
            if ( /^\d{1,2}:(\w+)/ ) {
                print "$1\n";
                $head = $1;
            }
            elsif (/^Run \#\d+: (\w+),.*/) {
                print "\t$1\n";
                push @temp, $1;
            }
            elsif (/^$/) {
                push @{$bighash{$head}}, [@temp];
                @temp = ();
            }
        }
        print Dumper \%bighash;

    Read the article

  • Serialized NHibernate Configuration objects - detect out of date or rebuild on demand?

    - by fostandy
    I've been using serialized NHibernate Configuration objects (also discussed here and here) to speed up my application startup from about 8s to 1s. I also use Fluent NHibernate, so the path is more like: ClassMap class definitions in code -> fluent configuration -> XML -> NHibernate Configuration -> configuration serialized to disk. The problem with doing this is that one runs the risk of out-of-date mappings: if I change the mappings but forget to rebuild the serialized configuration, then I end up using the old mappings without realising it. This does not always result in an immediate and obvious error during testing, and several times the misbehaviour has been a real pain to detect and fix. Does anybody have any idea how I would be able to detect if my ClassMaps have changed, so that I could either issue an immediate warning/error or rebuild the configuration on demand? At the moment I am comparing timestamps on my compiled assembly against the serialized configuration. This will pick up mapping changes, but unfortunately it generates a massive false-positive rate, as ANY change to the code results in an out-of-date flag. I can't move the ClassMaps to another assembly, as they are tightly integrated into the business logic. This has been niggling me for a while, so I was wondering if anybody had any suggestions?
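    A hedged alternative to assembly timestamps (my suggestion, sketched in Python because the idea is language-agnostic; in C# the same shape could hash the ClassMap source files at build time): fingerprint only the mapping inputs, store the fingerprint next to the serialized configuration, and rebuild when it changes.

        # Sketch: rebuild a serialized config only when its inputs change.
        import hashlib
        import os

        def fingerprint(paths):
            h = hashlib.sha256()
            for path in sorted(paths):        # stable order => stable hash
                with open(path, "rb") as f:
                    h.update(f.read())
            return h.hexdigest()

        def load_or_rebuild(cache_path, mapping_files, rebuild):
            stamp_path = cache_path + ".sha256"
            current = fingerprint(mapping_files)
            if os.path.exists(cache_path) and os.path.exists(stamp_path):
                with open(stamp_path) as f:
                    fresh = f.read() == current
                if fresh:  # mappings untouched: reuse the serialized config
                    with open(cache_path, "rb") as f:
                        return f.read()
            blob = rebuild()  # expensive path: build and serialize the config
            with open(cache_path, "wb") as f:
                f.write(blob)
            with open(stamp_path, "w") as f:
                f.write(current)
            return blob

    Unrelated code changes no longer invalidate the cache, since only the listed mapping files feed the hash; the false positives shrink to edits that actually touch a ClassMap file.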

    Read the article

  • How do I do MongoDB console-style queries in PHP?

    - by Zoe Boles
    I'm trying to get a MongoDB query from the JavaScript console into my PHP app. What I'm trying to avoid is having to translate the query into the PHP "native driver"'s format... I don't want to hand-build arrays and hand-chain functions any more than I want to manually build an array of MySQL's internal query structure just to get data. I already have a string producing the exact content I want in the Mongo console: db.intake.find({"processed": {"$exists": "false"}}).sort({"insert_date": "1"}).limit(10); The question is, is there a way for me to hand this string, as is, to MongoDB and have it return a cursor with the dataset I request? Right now I'm at the "write your own parser, because it's not valid JSON, to kinda turn a subset of valid Mongo queries into the format the PHP native driver wants" stage, which isn't very fun. I don't want an ORM or a massive wrapper library; I just want to give a function my query string as it exists in the console and get an Iterator back that I can work with. I know there are a couple of PHP-based Mongo manager applications that apparently take console-style queries and handle them, but from an initial browse through their code, I'm not sure how they handle the translation. I absolutely love working with Mongo in the console, but I'm rapidly starting to loathe the thought of converting every query into the format the native driver wants...

    Read the article

  • How should I organize complex SQL views in Rails?

    - by Benjamin Oakes
    I manage a research database with Ruby on Rails. The data that is entered is primarily used by scientists who prefer to have all the relevant information for a study in one single massive table for use in their statistics software of choice. I'm currently presenting it as CSV, as it's very straightforward to do and compatible with the tools people want to use. I've written many views (the SQL kind, not the Rails HTML/ERB kind) to make the output they expect a reality. Some of these views are quite large and have a fair amount of complexity behind them. I wrote them in SQL because there are many calculations and comparisons that are more easily done with SQL. They're currently loaded into the database straight from a file named views.sql. To get the requested data, I do a select * from my_view;. The views.sql file is getting quite large. Part of the problem is that we're still figuring out what the data we collect means, so there's a lot of changes being made to the views all the time -- and a ton of them are being created. Many of them need to be repeatable. I've recently run into issues organizing and testing these views. Rails works great for user interface stuff and business logic, but I'm not aware of much existing structure for handling the reporting we require. Some options I've thought of: Should I move them into the most relevant models somehow? Several of the views interact with each other, which makes this situation more complex than just doing a single find_by_sql, so I don't know if they should only be part of the model. Perhaps they should be treated as a "view" in the MVC sense? (That is, they could be moved into app/views/ and live alongside the HTML, perhaps as files named something like my_view.csv.sql which return CSV.) How would you deal with a complex reporting problem like this?

    Read the article

  • Consolidate multiple site files into a single location

    - by seengee
    We have a custom PHP/MySQL CMS running on Linux/Apache that's rolled out to multiple sites (20+) on the same server. Each site uses exactly the same CMS files, with a few files for each site being customised. The customised files for each site are:

        /library/mysql_connect.php
        /public_html/css/*
        /public_html/ftparea/*
        /public_html/images/*

    There are also a couple of other random files inside /public_html/includes/ that are unique to each site. Other than this, each site on the server uses the exact same files, each site sitting within /home/username/. There is obviously a massive amount of replication here, as each time we want to deploy a system update we need to update each user account. Given that the common site files are all stored in SVN, it would make far more sense if we were able to simply commit to SVN and deploy to a single location direct from there. Unfortunately, making a major architecture change at this stage could be problematic. In my mind the ideal scenario would mean creating an account like /home/commonfiles/ and each site using these common files unless an account-specific file exists; for example, a request is made to /home/user/public_html/index.php, but as this file doesn't exist the request is then redirected to /home/commonfiles/public_html/index.php. I know that generally this approach is possible, similar to how Zend Framework (and probably others) redirects all requests that don't match a specific file to index.php. I'm just not sure how exactly to go about implementing it, and whether it's actually advisable. Would really welcome any input/ideas people have got. Thanks.

    Read the article

  • Including/Organizing HTML in a large JavaScript project

    - by Bill Zimmerman
    Hi, I've got a fairly large web app, with several mini applets on each page. These applets are almost always identical jQuery apps. I am looking for advice on how I should organize/include smaller parts of these jQuery apps within my larger project. For example, each app has several independent tabs. If possible, I would like to store each of the tabs as a separate .html file, because this makes development easier. My requirements are: 1) All of the HTML 'tabs' are loaded on the client's end when the page loads. I would like to avoid any delays from dynamically requesting the tab HTML. 2) If possible, I would like to minimize the raw data sent. For example, it would be preferable to send each tab 1 time, instead of sending each tab 10 times if there are ten applets on that page. Questions: 1) What are my options for 'including' the HTML files / JavaScript code? 2) Any tips for keeping my development simple in this situation? Surely there has to be a better way than just editing one massive HTML file when working with large pages.

    Read the article

  • How do I set up FirePHP version 1.0?

    - by jay
    I love FirePHP and I've been using it for a while, but they've put out this massive upgrade and I'm completely flummoxed trying to get it to work. I think I'm copying the "Quick Start" code (kind of guessing at whatever changes are necessary for my server configuration), but for some reason FirePHP's "primary" function, FirePHP::to(), isn't doing anything. Can anyone please help me figure out what I'm doing wrong? Thanks.

        <?php
        define('INSIGHT_IPS', '*');
        define('INSIGHT_AUTHKEYS', '290AA9215205F24E5104F48D61B60FFC');
        define('INSIGHT_PATHS', __DIR__);
        define('INSIGHT_SERVER_PATH', '/doc_root/hello_firephp2.php');
        set_include_path(get_include_path . ":/home8/jayharri/php/FirePHP/lib"); // path to FirePHP library
        require_once('FirePHP/Init.php');
        $inpector = FirePHP::to('page');
        var_dump($inspector);
        $console = $inspector->console();
        $console->log('hello firephp');
        ?>

    Output:

        NULL
        Fatal error: Call to a member function console() on a non-object in /home8/jayharri/public_html/if/doc_root/hello_firephp2.php on line 14

    Read the article

  • Binding ComboBoxes to enums... in Silverlight!

    - by Domenic
    So, the web, and Stack Overflow, have plenty of nice answers for how to bind a combobox to an enum property in WPF. But Silverlight is missing all of the features that make this possible :(. For example:

        1. You can't use a generic EnumDisplayer-style IValueConverter that accepts a type parameter, since Silverlight doesn't support x:Type.
        2. You can't use ObjectDataProvider, like in this approach, since it doesn't exist in Silverlight.
        3. You can't use a custom markup extension like in the comments on the link from #2, since markup extensions don't exist in Silverlight.
        4. You can't do a version of #1 using generics instead of Type properties of the object, since generics aren't supported in XAML (and the hacks to make them work all depend on markup extensions, not supported in Silverlight).

    Massive fail! As I see it, the only way to make this work is to either:

        1. Cheat and bind to a string property in my ViewModel, whose setter/getter does the conversion, loading values into the ComboBox using code-behind in the View.
        2. Make a custom IValueConverter for every enum I want to bind to.

    Are there any alternatives that are more generic, i.e. that don't involve writing the same code over and over for every enum I want? I suppose I could do solution #2 using a generic class accepting the enum as a type parameter, and then create new classes for every enum I want that are simply:

        class MyEnumConverter : GenericEnumConverter<MyEnum> {}

    What are your thoughts, guys?

    Read the article

  • How does OpenStack Swift handle concurrent RESTful API requests?

    - by Chen Xie
    I installed a Swift service and was trying to find out how well it handles concurrent requests. So I created a massive number of threads in Java and sent requests via the RESTful API. Not surprisingly, when the number of requests climbed, the program started to throw exceptions:

        Caused by: java.net.ConnectException: Connection timed out: connect
            at java.net.DualStackPlainSocketImpl.connect0(Native Method)
            at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:69)
            at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
            at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
            at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
            at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:157)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
            at java.net.Socket.connect(Socket.java:579)
            at java.net.Socket.connect(Socket.java:528)
            at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:378)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:473)
            at sun.net.www.http.HttpClient.<init>(HttpClient.java:203)

    But can anyone tell me how that timeout happened? I am curious how Swift handles those requests. Does it queue the requests, and when there are too many in the queue, do the ones that wait too long just get kicked out of the queue? If that holds, does it mean that it's an asynchronous mechanism for handling requests? Thanks.

    Read the article
