Search Results

Search found 9254 results on 371 pages for 'approach'.

Page 79/371 | < Previous Page | 75 76 77 78 79 80 81 82 83 84 85 86  | Next Page >

  • How to implement web cache: internal fragmentation VS external fragmentation

    - by Summer_More_More_Tea
    Hi there: I came up with this question while playing with the Firefox web cache: how does the browser cache a response in limited disk space (taking my configuration as an example, 50MB is the upper bound)? I think two approaches can be employed. One is to cache each response object whole, one by one, but this is inefficient and will introduce external fragmentation, so the total cache space may not be fully used. The second is to treat the total space (50MB) as one contiguous file and split it into fixed-length slots; incoming response objects are likewise treated as blocks of data with the same length as the slots. We can fill slots until the file runs out of space, then use some replacement algorithm to swap out old cached objects. The latter approach will of course bring in internal fragmentation, but in my opinion it is easier to implement and maintain than the first strategy. However, when I look in Firefox's Cache directory, I find it (maybe) uses a different method: a lot of variable-length files reside in that directory, and all of them are filled with non-displayable characters. I really want to know what mechanism a commercial browser, e.g. Firefox, employs to implement its web cache. Regards.
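    The fixed-slot idea described above can be sketched roughly as follows. This is only an illustration of the second approach, not how Firefox actually works; the slot size, the FIFO eviction policy, and all class names are assumptions.

        using System;
        using System.Collections.Generic;

        // The 50MB region is modeled as an array of equal-sized slots; an object spans one or
        // more slots, and whole objects are evicted FIFO when free slots run out.
        public class SlotCache
        {
            private const int SlotSize = 4096;                                  // assumed slot size
            private readonly byte[][] slots;
            private readonly Stack<int> freeSlots = new Stack<int>();
            private readonly Queue<string> evictionOrder = new Queue<string>();
            private readonly Dictionary<string, List<int>> index = new Dictionary<string, List<int>>();

            public SlotCache(int totalBytes)
            {
                slots = new byte[totalBytes / SlotSize][];
                for (int i = 0; i < slots.Length; i++) freeSlots.Push(i);
            }

            // A sketch: re-caching the same URL twice is not handled.
            public void Put(string url, byte[] response)
            {
                int needed = (response.Length + SlotSize - 1) / SlotSize;
                while (freeSlots.Count < needed && evictionOrder.Count > 0)
                    Evict(evictionOrder.Dequeue());                             // swap out old objects
                if (freeSlots.Count < needed) return;                           // bigger than the whole cache

                var used = new List<int>();
                for (int offset = 0; offset < response.Length; offset += SlotSize)
                {
                    int slot = freeSlots.Pop();
                    slots[slot] = new byte[SlotSize];                           // final slot is padded: internal fragmentation
                    Array.Copy(response, offset, slots[slot], 0, Math.Min(SlotSize, response.Length - offset));
                    used.Add(slot);
                }
                index[url] = used;
                evictionOrder.Enqueue(url);
            }

            private void Evict(string url)
            {
                List<int> usedSlots;
                if (!index.TryGetValue(url, out usedSlots)) return;             // already evicted
                foreach (int slot in usedSlots) { slots[slot] = null; freeSlots.Push(slot); }
                index.Remove(url);
            }
        }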

    Read the article

  • Passing array values in an HTTP request in .NET

    - by Zarjay
    What's the standard way of passing and processing an array in an HTTP request in .NET? I have a solution, but I don't know if it's the best approach. Here's my solution: <form action="myhandler.ashx" method="post"> <input type="checkbox" name="user" value="Aaron" /> <input type="checkbox" name="user" value="Bobby" /> <input type="checkbox" name="user" value="Jimmy" /> <input type="checkbox" name="user" value="Kelly" /> <input type="checkbox" name="user" value="Simon" /> <input type="checkbox" name="user" value="TJ" /> <input type="submit" value="Submit" /> </form> The ASHX handler receives the "user" parameter as a comma-delimited string. You can get the values easily by splitting the string: public void ProcessRequest(HttpContext context) { string[] users = context.Request.Form["user"].Split(','); } So, I already have an answer to my problem: assign multiple values to the same parameter name, assume the ASHX handler receives it as a comma-delimited string, and split the string. My question is whether or not this is how it's typically done in .NET. What's the standard practice for this? Is there a simpler way to grab the multiple values than assuming that the value is comma-delimited and calling Split() on it? Is this how arrays are typically passed in .NET, or is XML used instead? Does anyone have any insight on whether or not this is the best approach?
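    One thing worth checking before settling on Split(','): Request.Form is a NameValueCollection, and its GetValues method returns the repeated "user" values as a string array directly, which also avoids trouble if a submitted value ever contains a comma. A minimal sketch of the handler written that way (the handler class name is made up for illustration):

        using System.Web;

        public class UserHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                // GetValues returns each checked "user" value as its own element, or null when none were checked.
                string[] users = context.Request.Form.GetValues("user") ?? new string[0];

                context.Response.ContentType = "text/plain";
                context.Response.Write("Received " + users.Length + " users: " + string.Join(", ", users));
            }
        }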

    Read the article

  • ASP.NET retrieve Average CPU Usage

    - by Sam
    Last night I did a load test on a site. I found that one of my shared caches is a bottleneck. I'm using a ReaderWriterLockSlim to control the updates of the data. Unfortunately, at one point there are ~200 requests trying to update the data at approximately the same time. This also coincided with CPU usage spikes. The data being updated is in the ASP.NET Cache. What I'd like to do is this: if the CPU usage is around 75%, just skip the cache and hit the database on another machine. My problem is that I don't know how expensive it is to create a new performance counter to check the CPU usage. Also, I would probably like the average CPU usage over the last 2 or 3 seconds. However, I can't sit there and calculate the CPU time, as that would take longer than it currently takes to update the cache. Is there an easy way to get the average CPU usage? Are there any drawbacks to this? I'm also considering totaling the wait count for the lock and then, at a certain threshold, switching over to the database. The concern I had with this approach is that changing hardware might allow more locks with less of a strain on the system. Also, finding the right balance for the threshold would be cumbersome, and it doesn't take into account any other load on the machine. But it's a simple approach, and simple is 99% of the time better.
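    For the "how expensive is it" part, the usual pattern is to create the PerformanceCounter once (construction is the costly step) and sample it in the background, keeping a small rolling window that can be read cheaply per request. A rough sketch follows; the 3-sample window, the one-second interval and the class name are assumptions, and the worker-process identity may need permission to read performance counters:

        using System.Diagnostics;
        using System.Threading;

        public static class CpuMonitor
        {
            private static readonly PerformanceCounter Cpu =
                new PerformanceCounter("Processor", "% Processor Time", "_Total");
            private static readonly float[] Samples = new float[3];    // ~3 seconds of history
            private static readonly Timer SampleTimer;
            private static int next;

            static CpuMonitor()
            {
                Cpu.NextValue();                                        // the first reading is always 0
                SampleTimer = new Timer(_ => Samples[next++ % Samples.Length] = Cpu.NextValue(),
                                        null, 1000, 1000);              // sample once per second
            }

            // Cheap enough to call per request, e.g. if (CpuMonitor.Average > 75) skip the cache.
            // The window warms up over the first few seconds.
            public static float Average
            {
                get
                {
                    float sum = 0;
                    foreach (float s in Samples) sum += s;
                    return sum / Samples.Length;
                }
            }
        }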

    Read the article

  • Are projects with a high developer turnover rate really a bad thing?

    - by John
    I've inherited a lot of web projects that experienced high developer turnover rates. Sometimes these web projects are a horrible patchwork of band-aid solutions. Other times they can be somewhat maintainable mosaics of half-done features, each built with a different architectural style. Every time I inherit these projects, I wish the previous developers could explain to me why things got so bad. What puzzles me is the reaction of the owners (either a manager, a middleman company, or a client). They seem to think, "Well, if you leave, I'll just find another developer." Or they think, "Oh, it costs that much money to refactor the system? I know another developer who can do it at half the price. I'll hire him if I can't afford you." I'm guessing that the high developer turnover rate is related to the owner's mentality of "If you think it's a bad idea to build this, I'll just find another (possibly cheaper) developer to do what I want". For the owners, the approach seems to work because their business is thriving. Unfortunately, it's no fun for the developers that go AWOL 3-4 months after working with poor code, strict timelines, and little feedback. So my question is the following: are the following symptoms of a project really such a bad thing for business: a high developer turnover rate; poorly built technology - often a patchwork of different and inappropriately used architectural styles; and owners without a clear roadmap for their web project who request features on a whim? I've seen numerous businesses prosper while experiencing the symptoms above. So as a programmer, even though my instincts tell me the above points are terrible, I'm forced to take a step back and ask, "are things really that bad in the grand scheme of things?" If not, I will re-evaluate my approach to these projects.

    Read the article

  • Optimizing processing and management of large Java data arrays

    - by mikera
    I'm writing some pretty CPU-intensive, concurrent numerical code that will process large amounts of data stored in Java arrays (e.g. lots of double[100000]s). Some of the algorithms might run millions of times over several days, so getting maximum steady-state performance is a high priority. In essence, each algorithm is a Java object that has a method API something like: public double[] runMyAlgorithm(double[] inputData); or alternatively a reference could be passed to the array to store the output data: public void runMyAlgorithm(double[] inputData, double[] outputData); Given this requirement, I'm trying to determine the optimal strategy for allocating/managing array space. Frequently the algorithms will need large amounts of temporary storage space. They will also take large arrays as input and create large arrays as output. Among the options I am considering are: (1) always allocate new arrays as local variables whenever they are needed (e.g. new double[100000]) - probably the simplest approach, but it will produce a lot of garbage; (2) pre-allocate temporary arrays and store them as final fields in the algorithm object - the big downside is that only one thread could run the algorithm at any one time; (3) keep pre-allocated temporary arrays in ThreadLocal storage, so that a thread can use a fixed amount of temporary array space whenever it needs it - ThreadLocal would be required since multiple threads will be running the same algorithm simultaneously; (4) pass around lots of arrays as parameters (including the temporary arrays for the algorithm to use) - not good, since it makes the algorithm API extremely ugly if the caller has to be responsible for providing temporary array space; (5) allocate extremely large arrays (e.g. double[10000000]) but also provide the algorithm with offsets into the array so that different threads will use different areas of the array independently - this will obviously require some code to manage the offsets and allocation of the array ranges. Any thoughts on which approach would be best (and why)?
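    To make option (3) concrete, the per-thread scratch buffer pattern looks roughly like the sketch below. It is written in C# (System.Threading.ThreadLocal<T>) purely as an illustration of the shape of the idea; java.lang.ThreadLocal with an overridden initialValue() expresses the same thing, and the buffer size and the "work" inside the loop are placeholders.

        using System;
        using System.Threading;

        // Option (3): one pre-allocated scratch array per thread, reused across calls.
        public class MyAlgorithm
        {
            private static readonly ThreadLocal<double[]> Scratch =
                new ThreadLocal<double[]>(() => new double[100000]);    // buffer size is an assumption

            public void Run(double[] input, double[] output)
            {
                double[] tmp = Scratch.Value;               // no allocation on the steady-state path
                int n = Math.Min(Math.Min(input.Length, output.Length), tmp.Length);
                for (int i = 0; i < n; i++)
                {
                    tmp[i] = input[i] * input[i];           // placeholder "algorithm" work
                    output[i] = tmp[i] + input[i];
                }
            }
        }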

    Read the article

  • Web Grid, Client side Binding VS. Server side HTML generation

    - by Ron Harlev
    I'm working on replacing an existing web grid in an ASP.NET web application with a new implementation. The existing grid is powerful, but not flexible enough. It also brings with it all kinds of frameworks we don't like to have on our web pages. While looking into existing options, I noticed I can break the available solutions into two main approaches. The older approach is represented best by the ASP.NET GridView. This is a classic ASP.NET control that generates the needed HTML on the server, based on a given set of data. The newer approach depends on client-side rendering, mainly with jQuery. A good example would be jqGrid. Only the data is sent to the client (usually as JSON or XML). In the GridView case, if I want AJAX behavior, I would have to implement it with something like an update panel. Is there a definitive choice I should make? Is there a good chance of achieving the same snappy behavior I get with jqGrid (even with many records) with server-side rendered controls? Is there some hybrid implementation incorporating both approaches?
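    For reference, the client-side-rendering half of the comparison only needs the server to expose data; a grid like jqGrid then builds the HTML in the browser. A minimal sketch of such an endpoint follows; the handler name and the row fields are made up for illustration, and a real endpoint would page and sort on the server.

        using System.Web;
        using System.Web.Script.Serialization;

        public class GridDataHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                // Hypothetical rows; in practice these would come from the database or a cache.
                var rows = new[]
                {
                    new { Id = 1, Name = "First row" },
                    new { Id = 2, Name = "Second row" }
                };

                context.Response.ContentType = "application/json";
                context.Response.Write(new JavaScriptSerializer().Serialize(rows));
            }
        }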

    Read the article

  • Combining cache methods - memcache/disk based

    - by Industrial
    Hi! Here's the deal. We would have taken the complete static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most used data. Here are the two approaches we have thought of right now: (1) using memcache on all major queries and leaving it alone to do what it does best; (2) using memcache for the most commonly retrieved data, and combining it with a standard hard-drive-stored cache for further usage. The major advantage of only using memcache is of course the performance, but as users increase, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even with the theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to increase the nodes. What approach should we use? Is it stupid to compromise and combine the two methods? Should we instead focus on utilizing memcache alone and on upgrading the memory as the load increases with the number of users? Thanks a lot!
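    Whatever the stack, the combined option usually reduces to a read-through chain: memory tier first, disk tier second, database last, with hot items promoted back up. A sketch of that shape is below, written in C# only to illustrate the idea; on this stack the real implementation would be PHP talking to memcached, and all type names here are invented.

        using System;

        public interface ICache
        {
            string Get(string key);                 // returns null on a miss
            void Set(string key, string value);
        }

        // Read-through chain: memory tier, then disk tier, then the database.
        public class TieredCache
        {
            private readonly ICache memory;                         // memcached-style tier
            private readonly ICache disk;                           // hard-drive-stored tier
            private readonly Func<string, string> loadFromDatabase; // final fallback

            public TieredCache(ICache memory, ICache disk, Func<string, string> loadFromDatabase)
            {
                this.memory = memory;
                this.disk = disk;
                this.loadFromDatabase = loadFromDatabase;
            }

            public string Get(string key)
            {
                string value = memory.Get(key);
                if (value != null) return value;

                value = disk.Get(key);
                if (value == null)
                {
                    value = loadFromDatabase(key);
                    disk.Set(key, value);
                }
                memory.Set(key, value);                             // promote hot items to the memory tier
                return value;
            }
        }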

    Read the article

  • Aspect-Oriented Programming in the OOP world - breaking rules?

    - by Maksim Kondratyuk
    Hi all! When I worked on an ASP.NET MVC web site project, I investigated different approaches to validation. Some of them were DataAnnotations validation and the Validation Block. They use attributes for setting up validation rules, like this: [Required] public string Name {get;set;} I was confused about how this approach fits with the SRP (single responsibility principle) from the OOP world. Also, I don't like any business logic in business objects; I prefer the "poor business objects" model, but when I decorate my business objects with validation attributes for real requirements, they become ugly (lots of attributes, localization logic and so on). The idea behind attributes is really simple, but in my opinion the validation decoration should be separated from the object. I'm not sure whether separating the validation rules out into XML files or into other objects is the solution. Another bad side of AOP is the problem of unit testing such code. When I decorated some controller actions with custom attributes, for example to import/export TempData between actions or to initialize some required services, I couldn't write proper unit tests for those actions. Do you think that attributes don't break the SRP, or do you just disregard this and think it's the simplest, not-the-worst way? P.S. I read some similar articles and discussions and I just want to put things in proper order. P.P.S. Sorry for my "fluent" English :=)
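    One way to keep the rules out of the object, without attributes or XML, is a separate validator class per business object. The sketch below is hand-rolled for illustration (it is not DataAnnotations or any particular validation library), and the Customer type and the 50-character limit are assumptions:

        using System.Collections.Generic;

        // The business object stays free of validation attributes.
        public class Customer
        {
            public string Name { get; set; }
        }

        // Rules live in a separate class, so the object keeps a single responsibility and the
        // validator can be unit tested (and localized or swapped) on its own.
        public class CustomerValidator
        {
            public IEnumerable<string> Validate(Customer customer)
            {
                if (string.IsNullOrEmpty(customer.Name))
                    yield return "Name is required.";
                else if (customer.Name.Length > 50)
                    yield return "Name must be 50 characters or fewer.";
            }
        }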

    Read the article

  • Long running transactions with Spring and Hibernate?

    - by jimbokun
    The underlying problem I want to solve is running a task that generates several temporary tables in MySQL, which need to stay around long enough to fetch results from Java after they are created. Because of the size of the data involved, the task must be completed in batches. Each batch is a call to a stored procedure called through JDBC. The entire process can take half an hour or more for a large data set. To ensure access to the temporary tables, I run the entire task, start to finish, in a single Spring transaction with a TransactionCallbackWithoutResult. Otherwise, I could get a different connection that does not have access to the temporary tables (this would happen occasionally before I wrapped everything in a transaction). This worked fine in my development environment. However, in production I got the following exception: java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction This happened when a different task tried to access some of the same tables during the execution of my long running transaction. What confuses me is that the long running transaction only inserts or updates into temporary tables. All access to non-temporary tables are selects only. From what documentation I can find, the default Spring transaction isolation level should not cause MySQL to block in this case. So my first question, is this the right approach? Can I ensure that I repeatedly get the same connection through a Hibernate template without a long running transaction? If the long running transaction approach is the correct one, what should I check in terms of isolation levels? Is my understanding correct that the default isolation level in Spring/MySQL transactions should not lock tables that are only accessed through selects? What can I do to debug which tables are causing the conflict, and prevent those tables from being locked by the transaction?

    Read the article

  • WPF: Access Application Resources when not referencing Shell from App.xaml

    - by CF_Maintainer
    I am a beginner in WPF. My App.xaml looks like this: <Application x:Class="ContactManager.App" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"> <Application.Resources> <Color x:Key="lightBlueColor">#FF145E9D</Color> <SolidColorBrush x:Key="lightBlueBrush" Color="{StaticResource lightBlueColor}" /> </Application.Resources> I do not set the StartupUri since I want a presenter-first approach. I do the following in App.xaml.cs: protected override void OnStartup(StartupEventArgs e) { base.OnStartup(e); var appPresenter = new ApplicationPresenter( new Shell(), new ContactRepository()); appPresenter.LaunchView(); } I have a usercontrol called "SearchBar.xaml" which references "lightBlueBrush" as a StaticResource. When I try to open "Shell.xaml" in the designer, it tells me that Shell.xaml cannot be loaded at design time because it could not create an instance of type SearchBar. When I debugged devenv.exe using another Visual Studio instance, it told me that it does not have access to the brush I created in the application resources. If one is doing a presenter-first approach, how does one access resources? I had this working when the StartupUri was "Shell.xaml" and the startup event was not present. Any clues/ideas/suggestions? I am just trying to understand. Everything works as expected when I run the application, just not at design time.
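    One defensive pattern worth trying (a sketch, not necessarily the fix the designer needs): look the brush up through Application.Current and fall back to a default when it is not available, since Application.Current may be null, or may not be your App, at design time. Another route is moving the brushes into a ResourceDictionary that SearchBar merges explicitly. The helper below is illustrative:

        using System.Windows;
        using System.Windows.Media;

        public static class ResourceHelper
        {
            public static Brush LightBlueBrush
            {
                get
                {
                    // At design time Application.Current may be null (or not ContactManager.App),
                    // so don't assume the App.xaml resources have been loaded.
                    Application app = Application.Current;
                    Brush brush = app != null ? app.Resources["lightBlueBrush"] as Brush : null;
                    return brush ?? Brushes.LightBlue;          // fallback for the designer
                }
            }
        }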

    Read the article

  • Elegant way of parsing Data files for Simulation

    - by sc_ray
    I am working on a project where I need to read in a lot of data from .dat files and use the data to perform simulations. The data in my .dat file looks as follows: DeviceID InteractingDeviceID InteractionStartTime InteractionEndTime 1 2 1101 1105 The values 1, 2, 1101 and 1105 are tab-delimited, and the row means Device 1 interacted with Device 2 starting at 1101 ms and ending at 1105 ms. I have trace data sets that compile thousands of such interactions, and my job is to analyze them. The first step is to parse the file. The language of choice is C++. The approach I was thinking of taking was to read the file and, for every line read, create a Device object. This Device object will contain the property DeviceId and an array/vector of structs that lists all the devices the given DeviceId interacted with over the course of the simulation. The struct will contain the interacting device ID, interaction start time and interaction end time. I have a two-fold question here: Is my approach correct? If I am on the right track, how do I rapidly parse these tab-delimited data files and create Device objects without excessive memory overhead using C++? A push in the right direction will be much appreciated. Thanks

    Read the article

  • Do complex JOINs cause high coupling and maintenance problems?

    - by ashkan.kh.nazary
    Our project has ~40 tables with complex relations. A colleague believes in using long join queries, which forces me to learn about tables outside of my module, but I think I should not concern myself with tables not directly related to my module and should instead use data access functions (written by those responsible for other modules) when I need data from them. Let me clarify: I am responsible for the ContactVendor module, which enables customers to contact the vendor and start a conversation about some specific product. The Products module has its own complex tables and relations, with functions that encapsulate the details (for example i18n, activation, product availability etc ...). Now I need to show the product title of some product related to some conversation between the vendor and customers. I may either write a long query that retrieves the product info along with the conversation stuff in one shot (which forces me to learn about the Product tables) OR I may pass the relevant product_id to the get_product_info(int) function. The first approach is obviously demanding and introduces many bad practices and things I normally consider faults in programming. The problem with the second approach seems to be the countless mini queries these access functions cause, and performance loss is a concern when a loop tries to fetch product titles for 100 products using functions that each perform a separate query. So I'm stuck between "don't code to the implementation, code to the interface" and performance. What is the right way of doing things? UPDATE: I'm especially concerned about possible future modifications to those tables outside of my module. What if the Products module decides to change the way it does things, or for some reason modifies the schema? It means some other modules would break or malfunction until the change is integrated into them. The usual ripple effect problem.
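    A common middle path, sketched below, keeps the Products module's encapsulation but exposes a batch accessor, so a page of 100 conversations costs one query instead of 100. The question's own code looks PHP-like; the C# interface and every name here are purely illustrative.

        using System.Collections.Generic;
        using System.Linq;

        // The Products team would implement this with a single "WHERE product_id IN (...)" query.
        public interface IProductReader
        {
            IDictionary<int, string> GetProductTitles(IEnumerable<int> productIds);
        }

        public static class ConversationPage
        {
            public static IEnumerable<string> RenderTitles(IProductReader products, IList<int> productIds)
            {
                // One round trip per page, while ContactVendor still knows nothing about Product tables.
                IDictionary<int, string> titles = products.GetProductTitles(productIds);
                return productIds.Select(id => titles.ContainsKey(id) ? titles[id] : "(unknown product)");
            }
        }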

    Read the article

  • OO Objective-C design with XML parsing

    - by brainfsck
    Hi, I need to parse an XML record that represents a QuizQuestion. The "type" attribute tells the type of question. I then need to create an appropriate subclass of QuizQuestion based on the question type. The following code works ([auto]release statements omitted for clarity): QuizQuestion *question = [[QuizQuestion alloc] initWithXMLString:xml]; if( [ [question type] isEqualToString:@"multipleChoiceQuestion"] ) { [myQuestions addObject:[[MultipleChoiceQuizQuestion alloc] initWithXMLString:xml]; } //QuizQuestion.m -(id)initWithXMLString:(NSString*)xml { self.type = ...// parse "type" attribute from xml // parse the rest of the xml } //MultipleChoiceQuizQuestion.m -(id)initWithXMLString:(NSString*)xml { if( self= [super initWithXMLString:xml] ) { // multiple-choice stuff } } Of course, this means that the XML is parsed twice: once to find out the type of QuizQuestion, and once when the appropriate QuizQuestion is initialized. To prevent parsing the XML twice, I tried the following approach: // MultipleChoiceQuizQuestion.m -(id)initWithQuizRecord:(QuizQuestion*)record { self=record; // record has already parsed the "type" and other parameters // multiple-choice stuff } However, this fails due to the "self=record" assignment; whenever the MultipleChoiceQuizQuestion tries to call an instance-method, it tries to call the method on the QuizQuestion class instead. Can someone tell me the correct approach for parsing XML into the appropriate subclass when the parent class needs to be initialized to know which subclass is appropriate?

    Read the article

  • Backbone.js or Knockout.js as a web framework with jQuery Mobile

    - by Dan
    Without trying to cause a mass discussion, I would like some advice from fellow users of Stack Overflow. I am about to start building a mobile website that gets its data as JSON from a PHP REST API. I have looked into different mobile frameworks and feel that jQuery Mobile (JQM) will work best for us, as we already have jQuery knowledge, even though it is a little large. Currently at work we are using jQuery for all our sites, and now that we are building a mobile website I need to think about JavaScript frameworks to move us onto a more MV* approach, which I understand the benefits of and which will bring much-needed structure to this mobile site and to future web applications. I have made a comparison table where I have managed to bring the selection down to 2 - Backbone and Knockout. I have been looking around the web and it seems that there is more support for Backbone in general, and maybe even more support for Backbone with JQM: http://jquerymobile.com/test/docs/pages/backbone-require.html One thing I have noticed, however, is that Backbone doesn't support declarative view bindings whereas Knockout does - is this a massive bonus? One of the main reasons for us to use an MV* framework is to get more structure, so I would like to use the library that will integrate best with jQuery and especially jQuery Mobile. Neither of them seems to have particularly similar syntax... Thanks

    Read the article

  • How to connect to Oracle using JRuby & JDBC

    - by Rob
    First approach: bare metal require 'java' require 'rubygems' require "c:/ruby/jruby-1.2.0/lib/ojdbc14.jar" # should be redundant, but tried it anyway odriver = Java::JavaClass.for_name("oracle.jdbc.driver.OracleDriver") puts odriver.java_class url = "jdbc:oracle:thin:@myhost:1521:mydb" puts "About to connect..." con = java.sql.DriverManager.getConnection(url, "myuser", "mypassword"); if con puts " connection good" else puts " connection failed" end The result of the above is: sqltest.rb:4: cannot load Java class oracle.jdbc.driver.OracleDriver (NameError) Second approach: Active Record require 'rubygems' gem 'ActiveRecord-JDBC' require 'jdbc_adapter' require 'active_record' require 'active_record/version' require "c:/ruby/jruby-1.2.0/lib/ojdbc14.jar" # should be redundant... ActiveRecord::Base.establish_connection( :adapter => 'jdbc', :driver => 'oracle.jdbc.driver.OracleDriver', :url => 'jdbc:oracle:thin:@myhost:1521:mydb', :username=>'myuser', :password=>'mypassword' ) ActiveRecord::Base.connection.execute("SELECT * FROM mytable") The result of this is: C:/ruby/jruby-1.2.0/lib/ruby/gems/1.8/gems/activerecord-jdbc-adapter-0.9.1/lib/active_recordconnection_adapters/jdbc_adapter.rb:330:in `initialize': The driver encountered an error: cannot load Java class oracle.jdbc.driver.OracleDriver (RuntimeError) Essentially the same error no matter how I go about it. I'm using JRuby 1.2.0 and I have ojdbc14.jar in my JRuby lib directory Gems: ActiveRecord-JDBC (0.5) activerecord-jdbc-adapter (0.9.1) activerecord (2.2.2) What am I missing? Thanks,

    Read the article

  • With Google Website Optimizer's multivariate testing, can I vary multiple css classes on a single div?

    - by brahn
    I would like to use Google Website Optimizer (GWO)'s multivariate tests to test some different versions of a web page. I can change from version to version just by varying some class tags on a div, i.e. the different versions are of this form: <div id="testing" class="foo1 bar1">content</div> <div id="testing" class="foo1 bar2">content</div> <div id="testing" class="foo2 bar1">content</div> <div id="testing" class="foo2 bar2">content</div> Ideally, I would be able to use GWO section code in place of each class, and Google would just swap in the appropriate tags (foo1 or foo2, bar1 or bar2). However, naively doing this results in horribly malformed code because I would be trying to put <script> tags inside the div's class attribute: <div id="testing" class=" <script>utmx_section("foo-class")</script>foo1</noscript> <script>utmx_section("bar-class")</script>bar1</noscript> "> content </div> And indeed, the browser chokes all over it. My current best approach is just to use a different div for each variable in the test, as follows: <script>utmx_section("foo-class-div")</script> <div class="foo1"> </noscript> <script>utmx_section("bar-class-div")</script> <div class="bar1"> </noscript> content </div> </div> So testing multiple variables requires a layer of div-nesting per variable, and it all seems rather awkward. Is there a better approach that I could use in which I just vary the classes on a single div?

    Read the article

  • Retrieving object information with Doctrine

    - by ajsie
    I want to fetch information from the database using objects. I really like this approach because it is more OOP: $user = Doctrine_Core::getTable('User')->find(1); echo $user->Email['address']; echo $user->Phonenumbers[0]->phonenumber; rather than: $q = Doctrine_Query::create() ->from('User u') ->leftJoin('u.Email e') ->leftJoin('u.Phonenumbers p') ->where('u.id = ?', 1); $user = $q->fetchOne(); echo $user->Email['address']; echo $user->Phonenumbers[0]['phonenumber']; The problem is that the first one uses 3 queries (3 different tables), while the second one uses only 1 (and is therefore the recommended technique). But I feel that this destroys the object-oriented design, because an ORM is meant to give us an OOP approach so that we can focus on objects and not the relational database - and now they want us to go back to an SQL-like pattern. Isn't there a way to get information from multiple tables without using DQL? The above examples are taken from the Doctrine documentation.

    Read the article

  • Nested parsers in Happy / infinite loop?

    - by McManiaC
    I'm trying to write a parser for a simple markup language with Happy. Currently, I'm having some issues with infinite loops and nested elements. My markup language basically consists of two elements, one for "normal" text and one for bold/emphasized text. data Markup = MarkupText String | MarkupEmph [Markup] For example, a text like Foo *bar* should get parsed as [MarkupText "Foo ", MarkupEmph [MarkupText "bar"]]. Lexing of that example works fine, but parsing it results in an infinite loop - and I can't see why. This is my current approach: -- The main parser: Parsing a list of "Markup" Markups :: { [Markup] } : Markups Markup { $1 ++ [$2] } | Markup { [$1] } -- One single markup element Markup :: { Markup } : '*' Markups1 '*' { MarkupEmph $2 } | Markup1 { $1 } -- The nested list inside *..* Markups1 :: { [Markup] } : Markups1 Markup1 { $1 ++ [$2] } | Markup1 { [$1] } -- Markup which is always available: Markup1 :: { Markup } : String { MarkupText $1 } What's wrong with that approach? How could this be resolved? Update: Sorry. Lexing wasn't working as expected. The infinite loop was inside the lexer. Sorry. :) Update 2: On request, I'm using this as the lexer: lexer :: String -> [Token] lexer [] = [] lexer str@(c:cs) | c == '*' = TokenSymbol "*" : lexer cs -- ...more rules... | otherwise = TokenString val : lexer rest where (val, rest) = span isValidChar str isValidChar = (/= '*') The infinite recursion occurred because I had lexer str instead of lexer cs in that first rule for '*'. Didn't see it because my actual code was a bit more complex. :)

    Read the article

  • Formatting Parameters for Ajax POST request to Rails Controller - for jQuery-UI sortable list

    - by Hung Luu
    I'm using the jQuery-UI Sortable Connected Lists. I'm saving the order of the connected lists to a Rails server. My approach is to grab the list ID, column ID and index position of each list item. I want to then wrap this into an object that can be passed as a parameter back to the Rails Controller to be saved into the database. So ideally I'm looking to format the parameter like this: Parameters: {"Activity"=>[{id:1,column:2,position:1},{id:2,column:2,position:2} ,...]} How do I properly format my parameters to be passed in this Ajax POST request? Right now, with the approach below, I'm passing Parameters: {"undefined"=>""} This is my current jQuery code (CoffeeScript), which doesn't work: jQuery -> $('[id*="day"]').sortable( connectWith: ".day" placeholder: "ui-state-highlight" update: (event, ui) -> neworder = new Array() $('[id*="day"] > li').each -> column = $(this).attr("id") index = ui.item.index() + 1 id = $("#" + column + " li:nth-child(" + index + ") ").attr('id') passObject={} passObject.id = id passObject.column = column passObject.index = index neworder.push(passObject) alert neworder $.ajax url: "sort" type: "POST" data: neworder ).disableSelection() My apologies because this seems like a really amateur question, but I'm just getting started with programming jQuery and JavaScript.

    Read the article

  • Bespoke Development or Leverage SharePoint With Web Parts etc?

    - by Asim
    Hi all, We are currently in the process of drawing up a solution for an existing client, creating a number of eServices. The client currently has MOSS 2007. The proposed solution is to use MOSS as the launching pad for the eServices… The requirement involves drawing up several online forms which provide registration facilities as well as facilitating a workflow of some sort. I have been told that the proposed solution requires complex web forms. Most are complex forms with parent-child details that have multiple windows. The proposed solution is to do some bespoke development, building ASP.NET forms. These forms would be deployed under the _layouts folder of the current MOSS portal, inheriting the master page design of the current site. I have been told that this approach makes development and deployment simpler, as well as having 'complete integration' with MOSS. My questions are: Is this the best way to leverage SharePoint? It seems like the proposed solution is not leveraging MOSS at all! I thought perhaps utilizing Web Parts would be better, but I have been told that this is more complex and that developing smarter, more intuitive UIs is more difficult. Is this really the case? If not, what should be the recommended approach? We will be utilizing Ultimus as the workflow engine. However, I have been recommended K2 Workflows. Has anyone used both, or have any opinions on either? Many thanks in advance! Kind Regards,

    Read the article

  • PHP: What is an efficient way to parse a text file containing very long lines?

    - by Shaun
    I'm working on a parser in php which is designed to extract MySQL records out of a text file. A particular line might begin with a string corresponding to which table the records (rows) need to be inserted into, followed by the records themselves. The records are delimited by a backslash and the fields (columns) are separated by commas. For the sake of simplicity, let's assume that we have a table representing people in our database, with fields being First Name, Last Name, and Occupation. Thus, one line of the file might be as follows [People] = "\Han,Solo,Smuggler\Luke,Skywalker,Jedi..." Where the ellipses (...) could be additional people. One straightforward approach might be to use fgets() to extract a line from the file, and use preg_match() to extract the table name, records, and fields from that line. However, let's suppose that we have an awful lot of Star Wars characters to track. So many, in fact, that this line ends up being 200,000+ characters/bytes long. In such a case, taking the above approach to extract the database information seems a bit inefficient. You have to first read hundreds of thousands of characters into memory, then read back over those same characters to find regex matches. Is there a way, similar to the Java String next(String pattern) method of the Scanner class constructed using a file, that allows you to match patterns in-line while scanning through the file? The idea is that you don't have to scan through the same text twice (to read it from the file into a string, and then to match patterns) or store the text redundantly in memory (in both the file line string and the matched patterns). Would this even yield a significant increase in performance? It's hard to tell exactly what PHP or Java are doing behind the scenes.

    Read the article

  • Updating a local sqlite db that is used for local metadata & caching from a service?

    - by Pharaun
    I've searched through the site and haven't found a question/answer that quite answers my question; the closest one I found was: Syncing objects between two disparate systems best approach. Anyway, to begin: because there are no RSS feeds available, I'm screen-scraping a webpage. It does a fetch, then goes through the webpage to scrape out all of the information I'm interested in and dumps that information into a SQLite database, so that I can query the information at my leisure without repeatedly fetching from the website. However, I'm also storing various metadata on the data itself in the SQLite db, such as: have I looked at the data, is the data new/old, and a bookmark to a chunk of data (think of it as a collection of unrelated data, where the bookmark is just a pointer to where I am in processing/reading that data). So right now my problem is figuring out how to update the local SQLite database with new and/or changed data from the website in a manner that is effective and straightforward. Here's my current idea: (1) download the page itself; (2) create a temporary table for the parsed data to go into; (3) do a comparison between the official and the temporary table and copy updates and/or new information to the official table. This process seems kind of complicated because I would have to figure out how to determine whether the data in the temporary table is new, updated, or unchanged. So I am wondering if there isn't a better approach, or if anyone has any suggestions on how to architect/structure such a system?
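    Step (3) can often be expressed directly in SQL rather than row-by-row comparison code. A rough sketch follows, in C# over a generic IDbConnection so no particular driver is assumed; the items/temp_items table names, the id key and the title column are all invented, and INSERT OR IGNORE relies on id being a primary or unique key in SQLite.

        using System.Data;

        public static class ScrapeMerger
        {
            // Assumes an "items" table (with your metadata columns) and a "temp_items" staging
            // table, both keyed by id; "title" stands in for whatever scraped columns you keep.
            public static void MergeScrapedData(IDbConnection connection)
            {
                // Brand-new rows: copy them over with default metadata (seen = 0).
                Execute(connection,
                    "INSERT OR IGNORE INTO items (id, title, seen) " +
                    "SELECT id, title, 0 FROM temp_items;");

                // Changed rows: refresh the scraped columns but leave your metadata alone.
                Execute(connection,
                    "UPDATE items SET title = (SELECT t.title FROM temp_items t WHERE t.id = items.id) " +
                    "WHERE EXISTS (SELECT 1 FROM temp_items t " +
                    "              WHERE t.id = items.id AND t.title <> items.title);");
            }

            private static void Execute(IDbConnection connection, string sql)
            {
                using (IDbCommand command = connection.CreateCommand())
                {
                    command.CommandText = sql;
                    command.ExecuteNonQuery();
                }
            }
        }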

    Read the article

  • Loading enumerations from database

    - by Mosh
    Hello, I have a problem with mapping .NET enumerations to database tables. Imagine I have a table called Statuses with the following values: StatusID | Name 1 Draft 2 Ready ... ... In the application layer, I can either use a Repository to get all Statuses as an IList object. However, the problem with this approach is that I cannot reference a certain status in my business logic. For example, how can I implement something like this? if (myObject.Status is Ready) do this else if (myObject.Status is Draft) do that... Since the statuses are loaded dynamically, I cannot tell for sure what particular Status object in the List represents the Draft or Ready status. Alternatively, I could just use an enumeration like public enum Statuses { Draft, Ready }; Then I could easily use an enumeration in my business logic. if (myObject.Status == Statuses.Draft) // do something... However, the problem with this approach is that every time the user wants to modify the list of Statuses (adding a new status, or renaming an existing status) the application has to be re-compiled. We cannot load the statuses dynamically from the database. Has anyone else come across a similar situation? Any solutions/patterns? Cheers, Mosh
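    One compromise that is often used for this, sketched below under the assumption that the Statuses table gains a stable code column (e.g. "Draft"/"Ready") alongside its editable display name: keep a small enum for the statuses the code actually branches on, and map each loaded row to that enum by its code, so renames of the display text or purely cosmetic additions in the database don't force a recompile.

        using System;

        public enum KnownStatus { Unknown = 0, Draft, Ready }

        public class Status
        {
            public int StatusId { get; set; }
            public string Name { get; set; }        // editable display name loaded from the Statuses table

            // Stable code stored alongside the name (assumed column), e.g. "Draft" or "Ready".
            public KnownStatus Code { get; set; }

            public static KnownStatus ParseCode(string code)
            {
                KnownStatus value;
                return Enum.TryParse(code, true, out value) ? value : KnownStatus.Unknown;
            }
        }

        // Business logic then compares against the enum rather than a row id or display string:
        // if (myObject.Status.Code == KnownStatus.Draft) { /* ... */ }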

    Read the article

< Previous Page | 75 76 77 78 79 80 81 82 83 84 85 86  | Next Page >