Search Results

Search found 14874 results on 595 pages for 'mysql connector'.


  • Implementing Model-level caching

    - by Byron
    I was posting some comments in a related question about MVC caching, and some questions about actual implementation came up. How does one implement a Model-level cache that works transparently, without the developer needing to cache manually, yet still remains efficient?

    "I would keep my caching responsibilities firmly within the model. It is none of the controller's or view's business where the model is getting data. All they care about is that when data is requested, data is provided - this is how the MVC paradigm is supposed to work." (Source: post by Jarrod)

    The reason I am skeptical is that caching should usually not be done unless there is a real need, and shouldn't be done for things like search results. So somehow the Model itself has to know whether or not the SELECT statement being issued to it is worthy of being cached. Wouldn't the Model have to be astronomically smart, and/or store statistics of what is queried most often over a long period of time, in order to make that decision accurately? And wouldn't the overhead of all this make the caching useless anyway?

    Also, how would you uniquely identify one query from another (or, more accurately, one result set from another)? What about if you're using prepared statements, with only the parameters changing according to user input? Another poster said this: "I would suggest using the md5 hash of your query combined with a serialized version of your input arguments." This would require twice the number of serialization operations. I was under the impression that serialization was quite expensive, and for large inputs this might be even worse than just re-querying. And is the minuscule chance of collision worth worrying about?

    Conceptually, caching in the Model seems like a good idea to me, but in practice it seems the developer should have direct control over caching and should write it into the controller. Thoughts/ideas?

    Edit: I'm using PHP and MySQL, if that helps to narrow your focus.
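
    For illustration, here is a minimal sketch of the cache-key idea quoted above (md5 of the query text plus the serialized arguments), kept inside a model method. The class name, the APCu cache backend and the TTL are assumptions for the sketch, not part of the original post; only the key scheme comes from the discussion.

        <?php
        // Sketch only: a model method that caches SELECT results keyed by the
        // query text plus its bound parameters, so prepared statements with
        // different inputs do not collide.

        class ArticleModel
        {
            private PDO $db;

            public function __construct(PDO $db)
            {
                $this->db = $db;
            }

            public function fetchCached(string $sql, array $params = array(), int $ttl = 60): array
            {
                // One key per (query, arguments) pair.
                $key = 'model_cache_' . md5($sql . '|' . serialize($params));

                $hit = apcu_fetch($key, $success);
                if ($success) {
                    return $hit;                    // served from cache
                }

                $stmt = $this->db->prepare($sql);
                $stmt->execute($params);
                $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

                apcu_store($key, $rows, $ttl);      // cache for $ttl seconds
                return $rows;
            }
        }

        // Usage (assumes a configured PDO connection and the APCu extension):
        // $model = new ArticleModel($pdo);
        // $rows  = $model->fetchCached('SELECT * FROM articles WHERE author = ?', array('Byron'));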

    Read the article

  • Database relationships using phpMyAdmin (composite keys)

    - by Cool Hand Luke UK
    Hi, I hope this question is not me being dense. I am using phpMyAdmin to create a database. I have the following four tables (don't worry about the fact that Place and Price are optional - they just are):

    Person (Mandatory)
    Item (Mandatory)
    Place (Optional)
    Price (Optional)

    Item is the main table, and it will always have a Person linked. I know you do joins in MySQL to relate the tables. If I want to link the tables together I could use composite keys (using the ids from each table), but is this the most correct way to link them? It also means Item will have 5 ids, including its own. This can also cause null values (apparently a big no-no, which I can understand), because if Place and Price are optional and are not used on one entry in the Items table, I will have a null value there. Please help! Thanks in advance. I hope this makes sense.

    Read the article

  • How to estimate Tomcat server requirements?

    - by Daniil
    We have a brand new webapp that runs on Tomcat. So far, only one client is using it throughout the day; they run about 180 unique logins a day - not really a lot, IMO. Now we've managed to sell it to a brand new client who wants to roll it out to 50,000 clients. How many of them will log in at the same time - no idea. But I need to do the whole thing: allocate, create, configure and maintain. OK, the last one is simple (errrr). The application runs on Tomcat 5.5 on Gentoo (I'm thinking of upgrading to Tomcat 6) with MSSQL and MySQL behind it. I do realize that a more enterprise-ready application would be a better fit, but that's not an option at the moment. Since I've never done this before, I'm a bit lost. Can someone advise on how to go about estimating the equipment requirements for this client? Tomcat does have clustering, so that I can do. MS SQL - I'm sure they have something too. I'm thinking of sticking it behind LVS (which we already use for something else). Any help from people who deal with these details is greatly appreciated!

    Read the article

  • SQL Express Edition, SQL Compact Edition and SQLCMD for learning purposes

    - by Mil
    Hi, I want to learn SQL programming from some of the SQL tutorial sites I heard of here, but I need an environment for executing queries. I think I have both SQL CE and SQL EE installed on my computer, but I have some doubts about these DBMSs and I don't know exactly how to use the SQLCMD utility, so I hope someone here will have the time and the will to explain the following to me: Since running sqlcmd -S .\sqlexpress at the command prompt gives a "1" prompt, I assume I have SQL Express installed, but how can I be sure what I have installed on my machine, since I cannot find an "SQL Express Edition" entry in the installed programs list? Can I ship and use a database created with SQL EE (embedded?) with my C# (VC# Express) application? How can I use sqlcmd for learning SQL, that is, by issuing commands like CREATE, USE, SELECT...? Again, the emphasis is on learning SQL; I do not want to run scripts but use an interactive command prompt like with MySQL (since I want to learn SQL I would pretty much like to avoid graphical DBMS tools). Please tell me if you have some other advice as to what I should use to learn how to program in SQL, or whether I should stick with the above for now. Thanks in advance.

    Read the article

  • Codeigniter Pagination: Run the Query Twice?

    - by Frank
    I'm using CodeIgniter and the pagination class. This is such a basic question, but I need to make sure I'm not missing something. In order to get the config items necessary to paginate results coming from a MySQL database, it's basically necessary to run the query twice, is that right? In other words, you have to run the query to determine the total number of records before you can paginate. So I'm doing it like this - first a query to get the number of results:

        $this->db->where('something', $something);
        $query = $this->db->get('the_table_name');
        $num_rows = $query->num_rows();

    Then I'll have to do it again to get the results with the limit and offset. Something like:

        $this->db->where('something', $something);
        $this->db->limit($limit, $offset);
        $query = $this->db->get('the_table_name');
        if ($query->num_rows()) {
            foreach ($query->result_array() as $row) {
                ## get the results here
            }
        }

    I just wonder if I'm actually doing this right, in that the query always needs to be run twice? The queries I'm using are much more complex than what is shown above.
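
    For reference, a common variation is to run a cheaper COUNT first and the limited query second, rather than fetching the full result set twice. A rough sketch, assuming CodeIgniter's Active Record / query builder and that the pagination library and URL helper are loaded; the table, column and route names are placeholders taken from the question:

        // Sketch: count first, then fetch only the current page.

        // 1) Cheap count for the paginator (returns only a number, no rows).
        $this->db->where('something', $something);
        $total_rows = $this->db->count_all_results('the_table_name');

        // 2) Fetch just one page of rows.
        $this->db->where('something', $something);
        $this->db->limit($limit, $offset);
        $query = $this->db->get('the_table_name');

        // 3) Feed the count into the pagination config.
        $this->pagination->initialize(array(
            'base_url'   => site_url('items/index'),   // placeholder route
            'total_rows' => $total_rows,
            'per_page'   => $limit,
        ));

        foreach ($query->result_array() as $row) {
            // render the row
        }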

    Read the article

  • Changing the color of a dot depending on its value in a Morris.js graph

    - by Michal Lipa
    I'm rendering a graph with Morris.js, using data from a MySQL database via JSON. Everything works fine, but I would like to add one more feature to the graph: change the dot color if there is something in the "buy" action. My JSON:

        [{"longdate":"2014-08-20 18:20:01","price":"1620","action":"buy"},
         {"longdate":"2014-08-20 18:40:01","price":"1640","action":""},
         {"longdate":"2014-08-20 19:00:01","price":"1620","action":""}]

    So I would like to change the dot color for values with the "buy" action. My code for the graph:

        $.getJSON('results.json', function(day_data) {
            Morris.Line({
                element: 'graph',
                data: day_data,
                xkey: 'longdate',
                ykeys: ['price'],
                labels: ['Cena'],
                lineColors: lineColor,
                pointSize: 0,
                hoverCallback: function(index, options, content) {
                    var date = "<b><font color='black'>Data: "+day_data[index]['longdate']+"</font></b><br>";
                    var param1 = "<font color='"+lineColor[0]+"'>Cena - "+day_data[index]['price']+"</font><br>";
                    return date+param1;
                },
                xLabelFormat : function (x) {
                    return changeDateFormat(x);
                }
                /* My TRIAL
                if(action == 'buy'){
                    pointSize: 4,
                    lineColors: green,
                }
                */
            });
        });

    So my code doesn't work - how can I make this work?

    Read the article

  • Ajax long polling (comet) + PHP on lighttpd v1.4.22: multiple instances problem

    - by fibonacci
    Hi, I am new to this site, so I really hope I will provide all the necessary information regarding my question. I've been trying to create a "new message arrived" notification using long polling. Currently I am initiating the polling request from the window.onload event of each page on my site. On the server side I have an infinite loop:

        while (1) {
            if (NewMessageArrived($current_user)) break;
            sleep(10);
        }
        echo $newMessageCount;

    On the client side I have the following (simplified) AJAX functions:

        poll_new_messages() {
            xmlhttp = GetXmlHttpObject();
            //...
            xmlhttp.onreadystatechange = got_new_message_count;
            //...
            xmlhttp.send();
        }

        got_new_message_count() {
            if (xmlhttp.readyState == 4) {
                updateMessageCount(xmlhttp.responseText);
                //...
                poll_new_messages();
            }
        }

    The problem is that with each page load the above loop starts again. The result is multiple infinite loops for each user, which eventually make my server hang.

    * The NewMessageArrived() function queries the MySQL DB for new unread messages.
    * At the beginning of the PHP script I run session_start() in order to obtain the $current_user value.

    I am currently the only user of this site, so it is easy for me to debug this behavior by writing time() to a file inside the loop. What I see is that the file is being written more often than once in 10 seconds, but only after I go from page to page. Please let me know if any additional information might help. Thank you.
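
    As an aside, one common way to keep abandoned polls from piling up is to bound the server-side wait and let the client simply reconnect. A sketch of the wait loop described above with an added time limit and an explicit session_write_close(); NewMessageArrived() and $current_user come from the question (here assumed to return the unread count), while the 30-second cap and the session key are assumptions:

        <?php
        // Sketch: bounded long-poll endpoint. Instead of looping forever, give up
        // after $max_wait seconds and let the client-side poll_new_messages()
        // reconnect on its own.

        session_start();
        $current_user = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;

        // Release the session lock so this sleeping request does not block
        // other requests from the same user.
        session_write_close();

        $max_wait = 30;        // assumption: cap each poll at 30 seconds
        $started  = time();
        $count    = 0;

        while (time() - $started < $max_wait) {
            $count = NewMessageArrived($current_user);  // function from the question
            if ($count > 0) {
                break;
            }
            sleep(10);
        }

        echo $count;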

    Read the article

  • Bulk Insert of hundreds of millions of records

    - by Dave Jarvis
    What is the fastest way to insert 237 million records into a table that has rules (for distributing the data across 84 child tables)? First I tried inserts. No go. Then I tried inserts with BEGIN/COMMIT. Not nearly fast enough. Next, I tried COPY FROM, but then noticed the documentation states that the rules are ignored. (And it was having difficulties with the column order and date format - it said that '1984-07-1' was not a valid integer; true, but a bit unexpected.) Some example data:

        station_id,taken,amount,category_id,flag
        1,'1984-07-1',0,4,
        1,'1984-07-2',0,4,
        1,'1984-07-3',0,4,
        1,'1984-07-4',0,4,T

    Here is the table structure (with one rule included):

        CREATE TABLE climate.measurement (
            id bigserial NOT NULL,
            station_id integer NOT NULL,
            taken date NOT NULL,
            amount numeric(8,2) NOT NULL,
            category_id smallint NOT NULL,
            flag character varying(1) NOT NULL DEFAULT ' '::character varying
        ) WITH (
            OIDS=FALSE
        );
        ALTER TABLE climate.measurement OWNER TO postgres;

        CREATE OR REPLACE RULE i_measurement_01_001 AS
            ON INSERT TO climate.measurement
            WHERE date_part('month'::text, new.taken)::integer = 1
              AND new.category_id = 1
            DO INSTEAD
                INSERT INTO climate.measurement_01_001 (id, station_id, taken, amount, category_id, flag)
                VALUES (new.id, new.station_id, new.taken, new.amount, new.category_id, new.flag);

    I can generate the data into any format. I am looking for something that won't take four days. I originally had the data in MySQL (still do), but am hoping to get a performance increase by switching to PostgreSQL, and am eager to use its PL/R extensions for stats. I was also thinking about using: http://pgbulkload.projects.postgresql.org/ Any help, tips, or guidance would be greatly appreciated. Thank you!

    Read the article

  • Unable to render PHP files in the browser

    - by p1
    Hello, I am very new to PHP, and I am trying to develop a Facebook application using PHP. I am using Joyent as my hosting platform. Currently I am trying to write some simple scripts in PHP and then build on them, but I am unable to see any PHP files being rendered properly in my application. For example, I have a simple script called phpinfo.php. If I execute it in the terminal with php phpinfo.php, I can see all the configuration. However, if I try to access the same file as http://xxxxxx.facebook.joyent.us/phpinfo.php, I get:

        Internal Server Error
        The server encountered an internal error or misconfiguration and was unable to complete your request.

    Even if I rename this file to index.php it's still the same. However, I am able to access other HTML files [index.html] in the same location. These are some of my PHP settings:

        [fbkusoni:~/web/public] aafhe7vh$ php phpinfo.php | grep On
        allow_url_fopen = On = On
        auto_globals_jit = On = On
        enable_dl = On = On
        file_uploads = On = On
        ignore_repeated_errors = On = On
        ignore_repeated_source = On = On
        implicit_flush = On = On
        log_errors = On = On
        register_argc_argv = On = On
        report_memleaks = On = On
        y2k_compliance = On = On
        Multibyte regex (oniguruma) backtrack check = On
        mysql.allow_persistent = On = On
        session.bug_compat_warn = On = On
        session.use_cookies = On = On
        suhosin.cookie.cryptdocroot = On = On
        suhosin.cookie.cryptua = On = On
        suhosin.mt_srand.ignore = On = On
        suhosin.protectkey = On = On
        suhosin.server.encode = On = On
        suhosin.server.strip = On = On
        suhosin.session.cryptdocroot = On = On
        suhosin.session.cryptua = On = On
        suhosin.session.encrypt = On = On
        suhosin.srand.ignore = On = On
        suhosin.stealth = On = On

    The answer might be very naive, but I am just trying to get started and am looking for any suggestions regarding this, and also about using Joyent and CakePHP to develop Facebook applications. Thanks.

    Read the article

  • File Storage for Web Applications: Filesystem vs DB vs NoSQL engines

    - by El Yobo
    I have a web application that stores a lot of user-generated files. Currently these are all stored on the server filesystem, which has several downsides for me:

    - When we move "folders" (as defined by our application) we also have to move the files on disk (although this is more due to strange design decisions on the part of the original developers than a requirement of storing things on the filesystem).
    - It's hard to write tests for filesystem actions; I have a mock filesystem class that logs actions like move, delete etc. without performing them, which more or less does the job, but I don't have 100% confidence in the tests.
    - I will be adding some other jobs which need to access the files from other services to perform additional tasks (e.g. indexing in Solr, generating thumbnails, movie format conversion), so I need to get at the files remotely. Doing this over network shares seems dodgy...
    - Dealing with permissions on the filesystem has sometimes given us problems in the past, although now that we've moved to a pure Linux environment this should be less of an issue.

    What are the downsides of storing files as BLOBs in MySQL? I guess that it would massively increase the database size and reduce the effectiveness of caches, but are there other problems? Do the same problems exist with NoSQL systems like Cassandra? Does anyone have any other suggestions that might be appropriate?
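
    For context, a minimal sketch of the "files as BLOBs in MySQL" option being weighed above, using PDO; the table name, columns and function names are assumptions for illustration, not anything from the original post:

        <?php
        // Sketch: storing and retrieving an uploaded file as a BLOB via PDO.
        // Assumed table:
        //   CREATE TABLE stored_files (
        //     id INT AUTO_INCREMENT PRIMARY KEY,
        //     name VARCHAR(255),
        //     mime VARCHAR(100),
        //     contents LONGBLOB
        //   );

        function storeFile(PDO $db, string $path, string $mime): int
        {
            $stmt = $db->prepare(
                'INSERT INTO stored_files (name, mime, contents) VALUES (?, ?, ?)'
            );
            $stmt->bindValue(1, basename($path));
            $stmt->bindValue(2, $mime);
            $stmt->bindValue(3, file_get_contents($path), PDO::PARAM_LOB);
            $stmt->execute();

            return (int) $db->lastInsertId();
        }

        function sendFile(PDO $db, int $id): void
        {
            $stmt = $db->prepare('SELECT name, mime, contents FROM stored_files WHERE id = ?');
            $stmt->execute(array($id));
            $row = $stmt->fetch(PDO::FETCH_ASSOC);

            // Serve the stored bytes back to the browser.
            header('Content-Type: ' . $row['mime']);
            header('Content-Disposition: attachment; filename="' . $row['name'] . '"');
            echo $row['contents'];
        }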

    Read the article

  • Tips for fixing bad coding/dev habits?

    - by dfafa
    I want to become a better coder, so I have decided to sign up for a computing science program - maybe a formal education can assist me. I started working on smaller projects to learn, but currently I have really bad coding/dev habits which hinder my productivity as the codebase increases. I have highlighted them below, and perhaps someone could make suggestions (or redirect me to resources) for a more efficient method. Most of the stuff I made in the past were web apps.

    - I usually develop with PuTTY + nano... I just love the minimalist feel.
    - I use WinSCP and develop directly on my private web server... too lazy to do it on localhost and upload it later.
    - I don't use version control... which one do I need? Sometimes Ctrl+Z doesn't work well.
    - When I run out of ideas for naming variables, I use swear words instead.
    - I swear a lot when I get stuck... how do I deal with the anger issue?
    - My code looks ugly, with comments everywhere.
    - I would rather use procedural coding; I find "thinking" in OO difficult and time consuming.
    - I "write first, think later". I refactor code only if I am getting paid for it.
    - I dislike configuring Linux distros, Apache, MySQL, scaling, designing graphics and layouts.
    - I do not like writing tests.
    - I like working alone and do not like sharing code.
    - I have an econ degree.
    - I dislike reading other people's code; I would rather write it on my own.

    It seems my only true desire is to translate my ideas into a working prototype as fast as possible... it seems like I am very uninterested in the other details. Could it be that I am not cut out to be a coder after all? Is going back to study comp sci a bad idea?

    Read the article

  • Spring MVC + Hibernate encoding problem

    - by Bar
    I work on a Spring MVC + Hibernate application and use MySQL (ver. 5.0.51a) with the InnoDB engine. The problem appears when I submit a form with Cyrillic characters: as a result, the database contains senseless characters in an unknown encoding. All the JSP pages and the database (plus tables and fields) were created using UTF-8. The Hibernate config also contains a property which sets the encoding to UTF-8. I had solved this by creating a filter which encodes the request content with UTF-8. Exemplary code:

        ...
        encoding = "UTF-8";
        request.setCharacterEncoding(encoding);
        chain.doFilter(request, response);
        ...

    But it visibly slows down the app. The interesting thing is that executing the insert query directly from the app (i.e. running it from Eclipse as a Java Application) works perfectly. Any suggestions are welcome. TIA, Michael.

    Read the article

  • I need to get form data from multiple forms on one page using $_POST

    - by CDeanMartin
    My project is a menu that displays daily specials at a cafe. The Pointy-Haired Boss (PHB) needs to add/remove items from the menu on a daily basis, so I stored all dishes with MySQL and created a page which loads all menu items as buttons. When clicked, the button will UPDATE the item, turning it on or off. I need form data to detect which button was pressed, so my query knows which $menuItem to UPDATE. That is the purpose of the hidden fields.

        <html><head></head>
        <body>
        <?php
        include("getElement.php");
        $keys = array_keys($_POST);
        echo $keys[0];
        echo $keys[1];
        //if(isset($_POST["menuItem"])){
        //toggleItem($_POST["menuItem"]);
        //echo print_r(array_keys($_POST));}
        ?>
        <form name="b" action="scratchpad.php" method="post" >
            <input type="hidden" name="b" value="Cajun Gumbo"/>
            <input type="submit" style="color:blue" value="Cajun Gumbo" />
        </form>
        <form name="a" action="scratchpad.php" method="post" >
            <input type="hidden" name="a" value="Guacomole Burger"/>
            <input type="submit" style="color:blue" value="Guacomole Burger" />
        </form>
        </body>
        </html>

    Can I get $_POST to identify which button was pressed?
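
    For illustration, a minimal sketch of reading the hidden field on the receiving side. The form names, scratchpad.php and getElement.php come from the question; toggleItem() is assumed to be defined by the included file, as the commented-out code suggests, and the single shared "menuItem" field name is an assumption rather than the question's markup:

        <?php
        // scratchpad.php (sketch): work out which form was submitted by checking
        // which hidden field arrived. Giving every form's hidden field the same
        // name (e.g. "menuItem") keeps this to a single isset() check; the two
        // named fields below match the markup in the question.

        include 'getElement.php';   // from the question; assumed to provide toggleItem()

        if (isset($_POST['menuItem'])) {
            // Preferred: every form uses <input type="hidden" name="menuItem" value="...">
            toggleItem($_POST['menuItem']);
        } elseif (isset($_POST['a'])) {
            toggleItem($_POST['a']);      // the "Guacomole Burger" form
        } elseif (isset($_POST['b'])) {
            toggleItem($_POST['b']);      // the "Cajun Gumbo" form
        }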

    Read the article

  • PDO prepare silently fails

    - by Wabbitseason
    I'm experimenting with PHP's session_set_save_handler and I'd like to use a PDO connection to store session data. I have this function as a callback for write actions:

        function _write($id, $data) {
            logger('_WRITE ' . $id . ' ' . $data);
            try {
                $access = time();
                $sql = 'REPLACE INTO sessions SET id=:id, access=:access, data=:data';
                logger('This is the last line in this function that appears in the log.');
                $stmt = $GLOBALS['db']->prepare($sql);
                logger('This never gets logged! :(');
                $stmt->bindParam(':id', $id, PDO::PARAM_STR);
                $stmt->bindParam(':access', $access, PDO::PARAM_INT);
                $stmt->bindParam(':data', $data, PDO::PARAM_STR);
                $stmt->execute();
                $stmt->closeCursor();
                return true;
            } catch (PDOException $e) {
                logger('This is never executed.');
                logger($e->getTraceAsString());
            }
        }

    The first two log messages always show up, but the third one, right after $stmt = $GLOBALS['db']->prepare($sql), never makes it to the log file, and there's no trace of an exception either. The sessions db table remains empty. The log message from the _close callback is always present. Here's how I connect to the database:

        $db = new PDO('mysql:host=' . DBHOST . ';dbname=' . DBNAME, DBUSER, DBPASS);
        $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    I have PHP 5.2.10. I tried to simply run $GLOBALS['db']->exec($sql) with a "manually prepared" $sql content, but it still failed silently. The query itself is all right - I was able to execute it via the db console.
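
    For context, a sketch of how handlers like these are typically registered. The register_shutdown_function('session_write_close') line follows the caveat in the PHP manual that the write/close callbacks otherwise run during shutdown, after globals such as a PDO connection may already have been destroyed - one common reason a _write handler like the one above dies quietly. The remaining callbacks (_open, _close, _read, _destroy, _gc) are assumed to be defined alongside _write:

        <?php
        // Sketch: wiring the callbacks from the question into PHP's session machinery.

        $db = new PDO('mysql:host=' . DBHOST . ';dbname=' . DBNAME, DBUSER, DBPASS);
        $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        session_set_save_handler('_open', '_close', '_read', '_write', '_destroy', '_gc');

        // Make sure the session is written while $db still exists. Without this,
        // the write handler can run during shutdown after the PDO object has been
        // torn down, and the prepare() call fails without a catchable exception.
        register_shutdown_function('session_write_close');

        session_start();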

    Read the article

  • Where did Pylons' beautiful error handling go? Using Nginx + Paster + Flup#fcgi_thread

    - by Tony
    I need to run my development through nginx due to some complicated subdomain routing rules in my Pylons app that wouldn't be handled otherwise. I had been using lighttpd + paster + Flup#scgi_thread, and the nice error reporting by Pylons had been working fine in that environment. Yesterday I recompiled Python and MySQL for 64-bit and also switched to nginx + paster + Flup#fcgi_thread for my development environment. Everything is working great, but I miss the fancy error reports. This is what I get now, and it is a mess compared to what I got used to: http://drp.ly/Iygeg . And here are the Pylons/nginx configs.

    Pylons:

        [server:main]
        use = egg:Flup#fcgi_thread
        host = 0.0.0.0
        port = 6500

    Nginx:

        location / {
            #include /usr/local/nginx/conf/fastcgi.conf;
            fastcgi_param PATH_INFO $fastcgi_script_name;
            fastcgi_param REQUEST_METHOD $request_method;
            fastcgi_param QUERY_STRING $query_string;
            fastcgi_param CONTENT_TYPE $content_type;
            fastcgi_param CONTENT_LENGTH $content_length;
            fastcgi_param SERVER_ADDR $server_addr;
            fastcgi_param SERVER_PORT $server_port;
            fastcgi_param SERVER_NAME $server_name;
            fastcgi_param SERVER_PROTOCOL $server_protocol;
            fastcgi_param REMOTE_ADDR $remote_addr;
            fastcgi_pass_header Authorization;
            fastcgi_intercept_errors off;
            fastcgi_pass 127.0.0.1:6500;
        }

    Read the article

  • Database for Python Twisted

    - by Will
    There's an API for Twisted apps to talk to a database in a scalable way: twisted.enterprise.adbapi. The confusing thing is, which database to pick? The database will have a Twisted app that is mostly making inserts and updates and relatively few selects, and then other strictly read-only clients that access the database directly, making selects. (The read-only users are not necessarily selecting the data that the Twisted app is inserting; it's not as though the database is being used as a message queue.) My understanding - which I'd like corrected/advised on - is that:

    - Postgres is a great DB, but all the Python bindings - and there is a confusing maze of them - are abandonware.
    - There is psycopg2, but it makes a lot of noise about doing its own connection pooling and things; does this coexist gracefully/usefully/transparently with Twisted's async database connection pooling and such?
    - SQLite is a great database for little things, but if used in a multi-user way it does whole-database locking, so performance would suck in the usage pattern I envisage.
    - MySQL - after the Oracle takeover, who'd want to adopt it now or adopt a fork?

    Is there anything else out there?

    Read the article

  • How does a real-world login process happen in a Java web application?

    - by Nitesh Panchal
    Hello, I am very much confused regarding the login process that happens in a Java web application. I have read many tutorials regarding jdbcRealm and JAAS, but one thing I don't understand is why I should use them. Can't I simply check directly against my database of users, and once they successfully log in to the site, store some variable in the session as a flag? I would then check that session variable on all restricted pages (that is, keep a filter for the restricted resources' URL pattern); if the flag doesn't exist, simply redirect the user to the login page. Is this approach correct? Does this approach sound right? If yes, then why did all this JAAS and jdbcRealm stuff come into existence? Secondly, I am trying to completely implement SaaS (Software as a Service) in my web application, meaning everything is done through web services. If I use web services, is it possible to use jdbcRealm? If not, is it possible to use JAAS? If yes, then please show me some example which uses MySQL as a database and then authenticates and authorizes. I have even heard about Spring Security, but I am confused about that too, in the sense that I don't know how to use web services with Spring Security. Please help me; I am really very confused. I read Sun's tutorials but they only keep talking about theory. For programmers to understand a simple concept, they show a 100-page theory first before they finally come to one example.

    Read the article

  • Output reformatted text within a file included in a JSP

    - by javanix
    I have a few HTML files that I'd like to include via tags in my webapp. Within some of the files, I have pseudo-dynamic code - specially formatted bits of text that, at runtime, I'd like to be resolved to their respective bits of data in a MySQL table. For instance, the HTML file might include a line that says: "Welcome, [username]." I want this resolved to (via a logged-in user's data): "Welcome, [email protected]." This would be simple to do in a JSP file, but the requirements dictate that the files will be created by people who know basic HTML, but not JSP. Simple text tags like this should be easy enough for me to explain to them, however. I have the code set up to do resolutions like that for strings, but can anyone think of a way to do it across files? I don't actually need to modify the file on disk - just load the content, modify it, and output it within the containing JSP file. I've been playing around with trying to load the files into strings via the Apache Commons readFileToString, but I can't figure out how to load files from a specific folder within the webapp's content directory without hardcoding it and having to worry about it breaking if I deploy to a different system in the future.

    Read the article

  • Ultra-Portable Laptop or Tablet PC for Development and Sketching

    - by Nelson LaQuet
    I am a software developer who primarily writes in PHP, [X]HTML, CSS, Javascript, C# and C++. I use Eclipse for web development, Visual Studio 2008 for C++ and C# work, TortoiseSVN, a Subversion server for local repositories, SQL Server Express, Apache and MySQL. I also use Office 2007 for word processing and spreadsheets, and use Vista Ultimate 64 as my primary operating system. The only other things I do on my laptop are watch movies, surf the internet and listen to music. I currently have an Acer Aspire 5100 (1.4 GHz AMD Turion X2, 2 GB of RAM and a 15.4" screen). This thing does not cut it in performance or portability, and in addition, my DVD drive failed. And before anybody posts about Vista: I had XP Professional 32 on it for the last two years, and recently upgraded to Vista 64. It is actually faster (with Aero disabled) than XP, so it is not the OS that is causing the laptop to be slow. I usually sketch a lot, for explaining things, developing user interfaces and software architecture. Because of my requirements, I was thinking about a Lenovo X61 Tablet PC. It outperforms my current laptop, is significantly more portable, and... is a tablet. My question is: do any other software developers use this (or other tablets) for programming? Does it help to be able to sketch on the computer itself? And is it capable of being a good development machine? Will it handle the software listed above? If not, what is the best ultra-portable laptop that is good for programming? Or are ultra-portable laptops even good for programming? I could manage with my 15.4" screen, but I am spoiled by the two 19" monitors on my home desktop and my workstation at my job.

    Read the article

  • How to reuse results with a schema for end-of-day stock data

    - by Vishalrix
    I am creating a database schema to be used for technical analysis, like top volume gainers, top price gainers etc. I have checked answers to questions here, like the design question. Having taken the hint from boe100's answer there, I have a schema modeled pretty much on it, thusly:

        Symbol - char 6         // primary
        Date   - date           // primary
        Open   - decimal 18, 4
        High   - decimal 18, 4
        Low    - decimal 18, 4
        Close  - decimal 18, 4
        Volume - int

    Right now this table, containing end-of-day (EOD) data, will be about 3 million rows for 3 years. Later, when I get/need more data, it could be 20 million rows. The front end will be making requests like "give me the top price gainers on date X over Y days". That request is one of the simpler ones, and as such is not too costly, time-wise, I assume. But a request like "give me top volume gainers for the last 10 days, with the previous 100 days acting as baseline" could prove 10-100 times costlier. The result of such a request would be a float which signifies how many times the volume has grown, etc.

    One option I have is adding a column for each such result, and if the user asks for volume gain in 10 days over 20 days, that would require another table. The total number of such tables could easily cross 100, especially if I start using other results as tables, like MACD-10, MACD-100, each of which will require its own column. Is this a feasible solution?

    Another option is keeping the results in cached HTML files and presenting them to the user. I don't have much experience in web development, so to me it looks messy, but I could be wrong (of course!). Is that an option too?

    Let me add that I am/will be using mod_perl to present the response to the user, with much of the work on the MySQL database being done using Perl. I would like to have a response time of 1-2 seconds.

    Read the article

  • How to start with AJAX/JSON in Zend?

    - by Awan
    I am working as a developer (PHP, MySQL) on some projects in which AJAX and jQuery are already implemented, but now I want to learn how to implement the AJAX and jQuery stuff myself. Can anyone tell me the exact steps, with explanation? I have created a project in Zend. There is only one controller (IndexController) and two actions (a and b) in my project now, and I want to use AJAX in it, but I don't know how to start; I read some tutorials but was unable to completely understand them. I have index.phtml like this:

        <a href='index/a'>Load info A</a> <br/>
        <a href='index/b'>Load info B</a> <br />
        <div id=one>load first here</div>
        <div id=two>load second here</div>

    Here index is the controller in the links, and a and b are actions. Now I have four files like this:

        a1.phtml: I am a1
        a2.phtml: I am a2
        b1.phtml: I am b1
        b2.phtml: I am b2

    I think you have got my point. When the user clicks the first link (Load info A), then a1.phtml should be loaded into div one and a2.phtml should be loaded into div two. When the user clicks the second link (Load info B), then b1.phtml should be loaded into div one and b2.phtml should be loaded into div two. Could someone also tell me the purpose of JSON in this process and how to use it? Thanks
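
    Not a full answer, but a sketch of the kind of controller action this pattern usually calls for in Zend Framework 1: render the two fragments, disable the layout, and return them as JSON so the client-side click handler can drop them into #one and #two (this is where JSON fits in). IndexController, action a and the a1.phtml/a2.phtml view scripts come from the question; the JSON shape and the jQuery hint in the closing comment are assumptions:

        <?php
        // Sketch (Zend Framework 1): one action that returns both fragments as JSON.

        class IndexController extends Zend_Controller_Action
        {
            public function aAction()
            {
                // No layout and no automatic view rendering for an AJAX response.
                $this->_helper->layout()->disableLayout();
                $this->_helper->viewRenderer->setNoRender(true);

                $payload = array(
                    'one' => $this->view->render('index/a1.phtml'),
                    'two' => $this->view->render('index/a2.phtml'),
                );

                $this->getResponse()
                     ->setHeader('Content-Type', 'application/json')
                     ->setBody(json_encode($payload));
            }
        }

        // Client side (jQuery, roughly):
        //   $.getJSON('index/a', function (data) {
        //       $('#one').html(data.one);
        //       $('#two').html(data.two);
        //   });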

    Read the article

  • User Management: Managing users in user-defined "groups", database schema and logistics

    - by Kevin Brown
    I'm a noob, development-wise and logistically-wise. I'm developing a site that lets people take a test. My client wants a user with the role/privilege "admin" (a step below a super-admin) to be able to create users and only see/edit the users that they create. The users created in that "category" or group need some information that their superior provides. For example, I log in as a "manager"; I have the ability to invite people to take the test and to manage those people. Before adding those people, I will have filled out a short survey about myself. Right now, the users that are invited will be asked some of the same questions as the manager. I'd like to cut down on the redundancy by using the information the manager put into the database and applying it to the invited users. How do I set up my database to work with these criteria? I'm a little confused about how to do this! Let me know if I can add more details... (This is a MySQL and PHP app.)
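
    To make the "managers only see the users they created" part concrete, here is a minimal sketch that assumes a created_by column on the users table; every table, column and function name here is a placeholder for illustration, not part of the original question:

        <?php
        // Sketch: each user row records which manager created it, and the
        // manager's listing query filters on that column.
        // Assumed schema: users(id, email, role, created_by), with created_by
        // referencing users.id. $db is an existing PDO connection.

        function createInvitedUser(PDO $db, int $managerId, string $email): int
        {
            $stmt = $db->prepare(
                'INSERT INTO users (email, role, created_by) VALUES (?, ?, ?)'
            );
            $stmt->execute(array($email, 'invitee', $managerId));
            return (int) $db->lastInsertId();
        }

        function listManagedUsers(PDO $db, int $managerId): array
        {
            // A manager only ever sees the rows they created.
            $stmt = $db->prepare('SELECT id, email FROM users WHERE created_by = ?');
            $stmt->execute(array($managerId));
            return $stmt->fetchAll(PDO::FETCH_ASSOC);
        }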

    Read the article

  • Rails running multiple delayed_job - lock tables

    - by pepernik
    Hey. I use delayed_job for background processing. I have an 8-CPU server, MySQL, and I start 7 delayed_job processes:

        RAILS_ENV=production script/delayed_job -n 7 start

    Q1: I'm wondering, is it possible that 2 or more delayed_job processes start processing the same job (the same record/row in the delayed_jobs database table)? I checked the code of the delayed_job plugin but cannot find the lock directive the way I would expect it. I think each process should lock the database table before executing an UPDATE on the locked_by column. They lock the record simply by updating the locked_by field (UPDATE delayed_jobs SET locked_by...). Is that really enough? No locking needed? Why? I know that UPDATE has higher priority than SELECT, but I think this does not have the effect in this case. My understanding of the multi-threaded situation is:

        Process1: Get waiting job X. [OK]
        Process2: Get waiting job X. [OK]
        Process1: Update locked_by field. [OK]
        Process2: Update locked_by field. [OK]
        Process1: Get waiting job X. [Already processed]
        Process2: Get waiting job X. [Already processed]

    I think in some cases more than one worker can get the same information and start processing the same job.

    Q2: Is 7 delayed_jobs a good number for an 8-CPU server? Why yes/no? Thx 10x!

    Read the article

  • gcc, strict-aliasing, and horror stories

    - by Joseph Quinsey
    In http://stackoverflow.com/questions/2906365/gcc-strict-aliasing-and-casting-through-a-union I asked whether anyone had encountered problems with union punning through pointers. So far, the answer seems to be No. This question is broader: do you have any horror stories about gcc and strict aliasing?

    Background: Quoting from AndreyT's answer in http://stackoverflow.com/questions/2771023/c99-strict-aliasing-rules-in-c-gcc/2771041#2771041: "Strict aliasing rules are rooted in parts of the standard that were present in C and C++ since the beginning of [standardized] times. The clause that prohibits accessing an object of one type through an lvalue of another type is present in C89/90 (6.3) as well as in C++98 (3.10/15). ... It is just that not all compilers wanted (or dared) to enforce it or rely on it."

    Well, gcc is now daring to do so, with its -fstrict-aliasing switch. And this has caused some problems. See, for example, the excellent article http://davmac.wordpress.com/2009/10/ about a MySQL bug, and the equally excellent discussion in http://cellperformance.beyond3d.com/articles/2006/06/understanding-strict-aliasing.html. Some other less-relevant links:

        http://stackoverflow.com/questions/1225741/performance-impact-of-fno-strict-aliasing
        http://stackoverflow.com/questions/754929/strict-aliasing
        http://stackoverflow.com/questions/262379/when-is-char-safe-for-strict-pointer-aliasing
        http://stackoverflow.com/questions/725138/how-to-detect-strict-aliasing-at-compile-time

    So to repeat: do you have a horror story of your own? Problems not indicated by -Wstrict-aliasing would, of course, be preferred. And other C compilers are also welcome.

    Read the article
