Search Results

Search found 17501 results on 701 pages for 'stored functions'.


  • Setting javascript prototype function within object class declaration

    - by Tauren
    Normally, I've seen prototype functions declared outside the class definition, like this: function Container(param) { this.member = param; } Container.prototype.stamp = function (string) { return this.member + string; } var container1 = new Container('A'); alert(container1.member); alert(container1.stamp('X')); This code produces two alerts with the values "A" and "AX". I'd like to define the prototype function INSIDE of the class definition. Is there anything wrong with doing something like this? function Container(param) { this.member = param; if (!Container.prototype.stamp) { Container.prototype.stamp = function() { return this.member + string; } } } I was trying this so that I could access a private variable in the class. But I've discovered that if my prototype function references a private var, the value of the private var is always the value that was used when the prototype function was INITIALLY created, not the value in the object instance: Container = function(param) { this.member = param; var privateVar = param; if (!Container.prototype.stamp) { Container.prototype.stamp = function(string) { return privateVar + this.member + string; } } } var container1 = new Container('A'); var container2 = new Container('B'); alert(container1.stamp('X')); alert(container2.stamp('X')); This code produces two alerts with the values "AAX" and "ABX". I was hoping the output would be "AAX" and "BBX". I'm curious why this doesn't work, and if there is some other pattern that I could use instead.
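    A likely explanation: the prototype method is created only once, during the first new Container('A'), so its closure permanently captures that first instance's privateVar; every later instance shares the same function and the same captured variable. As a sketch (not from the original post), one pattern that keeps per-instance private state is to assign the method per instance instead of on the prototype:

    ```javascript
    // Sketch: per-instance method, so each closure captures its own privateVar.
    // The trade-off is one function object per instance instead of a single
    // shared prototype function.
    function Container(param) {
      this.member = param;
      var privateVar = param; // private to this particular instance

      this.stamp = function (string) {
        return privateVar + this.member + string;
      };
    }

    var container1 = new Container('A');
    var container2 = new Container('B');
    console.log(container1.stamp('X')); // "AAX"
    console.log(container2.stamp('X')); // "BBX"
    ```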


  • CakePHP: Custom Function in bootstrap that uses $ajax->link not working

    - by nekko
    Hello, I have two questions: (1) Is it best practice to create global custom functions in the bootstrap file? Is there a better place to store them? (2) I am unable to use the following line of code in my custom function located in my bootstrap.php file: $url = $ajax->link ( 'Delete', array ('controller' => 'events', 'action' => 'delete', 22 ), array ('update' => 'event' ), 'Do you want to delete this event?' ); echo $url; I receive the following error: Notice (8): Undefined variable: ajax [APP\config\bootstrap.php, line 271] Code } function testAjax () { $url = $ajax->link ( 'Delete', array ('controller' => 'events', 'action' => 'delete', 22 ), array ('update' => 'event' ), 'Do you want to delete this event?' ); testAjax - APP\config\bootstrap.php, line 271 include - APP\views\event\queue.ctp, line 19 View::_render() - CORE\cake\libs\view\view.php, line 649 View::render() - CORE\cake\libs\view\view.php, line 372 Controller::render() - CORE\cake\libs\controller\controller.php, line 766 Dispatcher::_invoke() - CORE\cake\dispatcher.php, line 211 Dispatcher::dispatch() - CORE\cake\dispatcher.php, line 181 [main] - APP\webroot\index.php, line 91 However, it works as intended if I place that same code in my view: <a onclick=" event.returnValue = false; return false;" id="link1656170149" href="/shout/events/delete/22">Delete</a> Please help :) Thanks in advance!!
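    One hedged suggestion, assuming CakePHP 1.x: helpers such as $ajax only exist in the view layer, which is why bootstrap.php reports an undefined variable. Instead of a global function, the link-building could live in a custom helper (the names below are illustrative, not from the question):

    ```php
    <?php
    // Hypothetical sketch: app/views/helpers/event_link.php (CakePHP 1.x).
    // Helpers are available in views and elements, never in bootstrap.php.
    class EventLinkHelper extends AppHelper {
        var $helpers = array('Ajax');

        function deleteLink($id) {
            return $this->Ajax->link(
                'Delete',
                array('controller' => 'events', 'action' => 'delete', $id),
                array('update' => 'event'),
                'Do you want to delete this event?'
            );
        }
    }
    ?>
    ```

    In a view, after adding 'EventLink' to the controller's $helpers array, this would be called as echo $eventLink->deleteLink(22);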


  • How to get the title and subtitle for a pin when we are implementing the MKAnnotation?

    - by wolverine
    I have implemented the MKAnnotation as below. I will put a lot of pins, and the information for each of those pins is stored in an array. Each member of this array is an object whose properties will give the values for the title and subtitle of the pin. Each object corresponds to a pin. But how can I display these values for a pin when I click it? @interface UserAnnotation : NSObject <MKAnnotation> { CLLocationCoordinate2D coordinate; NSString *title; NSString *subtitle; NSString *city; NSString *province; } @property (nonatomic, assign) CLLocationCoordinate2D coordinate; @property (nonatomic, retain) NSString *title; @property (nonatomic, retain) NSString *subtitle; @property (nonatomic, retain) NSString *city; @property (nonatomic, retain) NSString *province; -(id)initWithCoordinate:(CLLocationCoordinate2D)c; And the .m is @implementation UserAnnotation @synthesize coordinate, title, subtitle, city, province; - (NSString *)title { return title; } - (NSString *)subtitle { return subtitle; } - (NSString *)city { return city; } - (NSString *)province { return province; } -(id)initWithCoordinate:(CLLocationCoordinate2D)c { coordinate=c; NSLog(@"%f,%f",c.latitude,c.longitude); return self; } @end
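    Assuming the title and subtitle are set on each annotation when it is created from the array, the callout that displays them comes from the annotation view rather than the annotation itself. A sketch of the usual MKMapViewDelegate method (manual reference counting, matching the question's era):

    ```objc
    // Sketch: return a pin view with callouts enabled; tapping the pin then
    // shows the annotation's title/subtitle automatically.
    - (MKAnnotationView *)mapView:(MKMapView *)mapView
                viewForAnnotation:(id <MKAnnotation>)annotation {
        if ([annotation isKindOfClass:[MKUserLocation class]]) {
            return nil; // keep the default blue dot for the user's location
        }
        static NSString *reuseId = @"UserPin";
        MKPinAnnotationView *pin = (MKPinAnnotationView *)
            [mapView dequeueReusableAnnotationViewWithIdentifier:reuseId];
        if (pin == nil) {
            pin = [[[MKPinAnnotationView alloc] initWithAnnotation:annotation
                                                   reuseIdentifier:reuseId] autorelease];
            pin.canShowCallout = YES;
        } else {
            pin.annotation = annotation;
        }
        return pin;
    }
    ```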


  • I need to get form data from multiple forms on one page using $_POST

    - by CDeanMartin
    My project is a menu that displays daily specials at a cafe. The Pointy Haired Boss(PHB) needs to add/remove items from the menu on a daily basis, so I stored all dishes with MySQL, and created a page which will load all menu items as buttons. When clicked, the button will UPDATE the item, turning it on or off. I need form data to detect which button was pressed, so my query knows which $menuItem to UPDATE. That is the purpose of the hidden fields. <html><head></head> <body> <html><head></head> <body> <?php include("getElement.php"); $keys = array_keys($_POST); echo $keys[0]; echo $keys[1]; //if(isset($_POST["menuItem"])){ //toggleItem($_POST["menuItem"]); //echo print_r(array_keys($_POST));} ?> <form name="b" action="scratchpad.php" method="post" > <input type="hidden" name="b" value="Cajun Gumbo"/> <input type="submit" style="color:blue" value="Cajun Gumbo" /> </form> <form name="a" action="scratchpad.php" method="post" > <input type="hidden" name="a" value="Guacomole Burger"/> <input type="submit" style="color:blue" value="Guacomole Burger" /> </form> </body> </html> Can I get $_POST to identify which button was pressed?
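    One way to read this back, sketched under the assumption that every form's hidden field can share the same name: give each hidden input the name "menuItem" (which the commented-out code already expects) and vary only its value, so a single $_POST key identifies which button's form was submitted:

    ```php
    <?php
    // Sketch: same hidden-field name in every form, different value per dish.
    if (isset($_POST['menuItem'])) {
        toggleItem($_POST['menuItem']);   // e.g. "Cajun Gumbo" or "Guacomole Burger"
    }
    ?>
    <form action="scratchpad.php" method="post">
      <input type="hidden" name="menuItem" value="Cajun Gumbo"/>
      <input type="submit" style="color:blue" value="Cajun Gumbo"/>
    </form>
    <form action="scratchpad.php" method="post">
      <input type="hidden" name="menuItem" value="Guacomole Burger"/>
      <input type="submit" style="color:blue" value="Guacomole Burger"/>
    </form>
    ```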


  • Long running operations (threads) in a web (asp.net) environment

    - by rrejc
    I have an ASP.NET (MVC) web site. As part of its functionality I will have to support some long-running operations, for example: Initiated by a user: a user can upload an (XML) file to the server. On the server I need to extract the file, do some manipulation (insert into the db), etc. This can take from one minute to ten minutes (or even more - it depends on the file size). Of course I don't want to block the request while the import is running, but I want to redirect the user to a progress page where he will have a chance to watch the status and errors, or even cancel the import. This operation will not be used frequently, but it may happen that two users try to import data at the same time, so it would be nice to run the imports in parallel. At the beginning I was thinking of creating a new thread in IIS (in the controller action) and running the import there. But I am not sure if this is a good idea (creating worker threads on a web server). Should I use Windows services or some other approach? Initiated by the system: I will have to periodically update the Lucene index with new data, and I will have to send mass emails (in the future). Should I implement these as jobs in the site and run them via Quartz.NET, or should I also create a Windows service or something? What are the best practices when it comes to running site "jobs"? Thanks!
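    For the in-process option, a minimal hedged sketch (hypothetical names; not a recommendation over a Windows service or a Quartz.NET job, since IIS can recycle the application pool mid-import): track each import's progress in a shared dictionary keyed by a job id that the progress page polls.

    ```csharp
    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    // Hypothetical sketch of an in-process job registry for the progress page.
    public static class ImportJobs
    {
        private static readonly ConcurrentDictionary<Guid, int> Progress =
            new ConcurrentDictionary<Guid, int>();

        public static Guid Start(string uploadedFilePath)
        {
            Guid id = Guid.NewGuid();
            Progress[id] = 0;
            ThreadPool.QueueUserWorkItem(delegate
            {
                // Stand-in for the real extract/insert work; update as it goes.
                for (int pct = 0; pct <= 100; pct += 10)
                {
                    Progress[id] = pct;
                    Thread.Sleep(1000);
                }
            });
            return id; // hand this to the redirect so the progress page can poll it
        }

        public static int GetProgress(Guid id)
        {
            int pct;
            return Progress.TryGetValue(id, out pct) ? pct : -1;
        }
    }
    ```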


  • What is JasperReports' algorithm for using a data source?

    - by spderosso
    Hi, I have created my custom data source by implementing the interface JRDataSource. This interface looks like this: public interface JRDataSource { /** * Tries to position the cursor on the next element in the data source. * @return true if there is a next record, false otherwise * @throws JRException if any error occurs while trying to move to the next element */ public boolean next() throws JRException; /** * Gets the field value for the current position. * @return an object containing the field value. The object type must be the field object type. */ public Object getFieldValue(JRField jrField) throws JRException; } My question is the following: in what way does JasperReports call these functions to obtain the fields in the .jrxml? E.g.: if( next() ){ call getFieldValue for every field present in the page header while( next() ){ call getFieldValue for every field present in the detail part } call getFieldValue for every field present in the footer } The previous is just an example; in fact, I found out experimentally that it is actually not like that. So my question arose. Thanks!
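    For reference, a minimal list-backed implementation (a sketch, not taken from the question; JasperReports also ships JRMapCollectionDataSource, which does much the same). Roughly speaking, the fill process calls next() once per record and then getFieldValue() for the report's declared fields before evaluating the bands for that record; page headers and footers are evaluated against whichever record is current when that section is filled, which is why the observed order can look surprising.

    ```java
    import java.util.Iterator;
    import java.util.List;
    import java.util.Map;

    import net.sf.jasperreports.engine.JRDataSource;
    import net.sf.jasperreports.engine.JRException;
    import net.sf.jasperreports.engine.JRField;

    // Sketch: each map in the list is one record; field names map to keys.
    public class MapListDataSource implements JRDataSource {
        private final Iterator<Map<String, ?>> iterator;
        private Map<String, ?> current;

        public MapListDataSource(List<Map<String, ?>> rows) {
            this.iterator = rows.iterator();
        }

        public boolean next() throws JRException {
            if (!iterator.hasNext()) {
                return false;
            }
            current = iterator.next();
            return true;
        }

        public Object getFieldValue(JRField field) throws JRException {
            return current.get(field.getName());
        }
    }
    ```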


  • backbone.js Model.get() returns undefined, scope using coffeescript + coffee toaster?

    - by benipsen
    I'm writing an app using coffeescript with coffee toaster (an awesome NPM module for stitching) that builds my app.js file. Lots of my application classes and templates require info about the current user so I have an instance of class User (extends Backbone.Model) stored as a property of my main Application class (extends Backbone.Router). As part of the initialization routine I grab the user from the server (which takes care of authentication, roles, account switching etc.). Here's that coffeescript: @user = new models.User @user.fetch() console.log(@user) console.log(@user.get('email')) The first logging statement outputs the correct Backbone.Model attributes object in the console just as it should: User _changing: false _escapedAttributes: Object _pending: Object _previousAttributes: Object _silent: Object attributes: Object account: Object created_on: "1983-12-13 00:00:00" email: "[email protected]" icon: "0" id: "1" last_login: "2012-06-07 02:31:38" name: "Ben Ipsen" roles: Object __proto__: Object changed: Object cid: "c0" id: "1" __proto__: ctor app.js:228 However, the second returns undefined despite the model attributes clearly being there in the console when logged. And just to make things even more interesting, typing "window.app.user.get('email')" into the console manually returns the expected value of "[email protected]"... ? Just for reference, here's how the initialize method compiles into my app.js file: Application.prototype.initialize = function() { var isMobile; isMobile = navigator.userAgent.match(/(iPhone|iPod|iPad|Android|BlackBerry)/); this.helpers = new views.DOMHelpers().initialize().setup_viewport(isMobile); this.user = new models.User(); this.user.fetch(); console.log(this.user); console.log(this.user.get('email')); return this; }; I initialize the Application controller in my static HTML like so: jQuery(document).ready(function(){ window.app = new controllers.Application(); }); Suggestions please and thank you!
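    A likely cause, sketched here rather than taken from the question: fetch() is asynchronous, so get('email') runs before the server has responded; the first console.log only looks populated because the console reads the object lazily, after the response has arrived. Reading the attribute inside fetch's success callback (or after a change/sync event) avoids the race. A plain-JavaScript sketch matching the compiled output:

    ```javascript
    // Sketch: only read attributes once fetch() has completed.
    this.user = new models.User();
    var self = this;
    this.user.fetch({
      success: function (model, response) {
        console.log(model.get('email')); // defined here
        // continue any initialization that depends on the user, e.g.
        // self.trigger('user:ready', model);
      },
      error: function (model, response) {
        console.log('fetch failed', response);
      }
    });
    ```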


  • Compile time float packing/punning

    - by detly
    I'm writing C for the PIC32MX, compiled with Microchip's PIC32 C compiler (based on GCC 3.4). My problem is this: I have some reprogrammable numeric data that is stored either on EEPROM or in the program flash of the chip. This means that when I want to store a float, I have to do some type punning: typedef union { int intval; float floatval; } IntFloat; unsigned int float_as_int(float fval) { IntFloat intf; intf.floatval = fval; return intf.intval; } // Stores an int of data in whatever storage we're using void StoreInt(unsigned int data, unsigned int address); void StoreFPVal(float data, unsigned int address) { StoreInt(float_as_int(data), address); } I also include default values as an array of compile time constants. For (unsigned) integer values this is trivial, I just use the integer literal. For floats, though, I have to use this Python snippet to convert them to their word representation to include them in the array: import struct hex(struct.unpack("I", struct.pack("f", float_value))[0]) ...and so my array of defaults has these indecipherable values like: const unsigned int DEFAULTS[] = { 0x00000001, // Some default integer value, 1 0x3C83126F, // Some default float value, 0.005 } (These actually take the form of X macro constructs, but that doesn't make a difference here.) Commenting is nice, but is there a better way? It'd be great to be able to do something like: const unsigned int DEFAULTS[] = { 0x00000001, // Some default integer value, 1 COMPILE_TIME_CONVERT(0.005), // Some default float value, 0.005 } ...but I'm completely at a loss, and I don't even know if such a thing is possible. Notes Obviously "no, it isn't possible" is an acceptable answer if true. I'm not overly concerned about portability, so implementation defined behaviour is fine, undefined behaviour is not (I have the IDB appendix sitting in front of me). As far as I'm aware, this needs to be a compile time conversion, since DEFAULTS is in the global scope. Please correct me if I'm wrong about this.
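    One hedged option, assuming the element type of DEFAULTS can be changed from unsigned int to the IntFloat union already defined above: C99 designated initializers let the compiler lay down either member's bit pattern at compile time, which removes the Python step entirely.

    ```c
    #include <stdio.h>

    typedef union {
        unsigned int intval;
        float        floatval;
    } IntFloat;

    /* Sketch: designated initializers pick which member each element stores,
     * so the compiler emits the float's bit pattern with no runtime conversion. */
    static const IntFloat DEFAULTS[] = {
        { .intval   = 0x00000001u },  /* integer default, 1   */
        { .floatval = 0.005f      },  /* float default, 0.005 */
    };

    int main(void)
    {
        printf("%u\n", DEFAULTS[0].intval);
        /* Reading the other member is implementation-defined, exactly as in
         * the original float_as_int() punning. */
        printf("0x%08X\n", DEFAULTS[1].intval);
        return 0;
    }
    ```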


  • Is it safe to use random Unicode for complex delimiter sequences in strings?

    - by ccomet
    Question: In terms of program stability and ensuring that the system will actually operate, how safe is it to use chars like ¦, § or ‡ for complex delimiter sequences in strings? Can I reliable believe that I won't run into any issues in a program reading these incorrectly? I am working in a system, using C# code, in which I have to store a fairly complex set of information within a single string. The readability of this string is only necessary on the computer side, end-users should only ever see the information after it has been parsed by the appropriate methods. Because some of the data in these strings will be collections of variable size, I use different delimiters to identify what parts of the string correspond to a certain tier of organization. There are enough cases that the standard sets of ;, |, and similar ilk have been exhausted. I considered two-char delimiters, like ;# or ;|, but I felt that it would be very inefficient. There probably isn't that large of a performance difference in storing with one char versus two chars, but when I have the option of picking the smaller option, it just feels wrong to pick the larger one. So finally, I considered using the set of characters like the double dagger and section. They only take up one char, and they are definitely not going to show up in the actual text that I'll be storing, so they won't be confused for anything. But character encoding is finicky. While the visibility to the end user is meaningless (since they, in fact, won't see it), I became recently concerned about how the programs in the system will read it. The string is stored in one database, while a separate program is responsible for both encoding and decoding the string into different object types for the rest of the application to work with. And if something is expected to be written one way, is possibly written another, then maybe the whole system will fail and I can't really let that happen. So is it safe to use these kind of chars for background delimiters?
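    One hedged alternative worth weighing: the ASCII separator control characters (0x1C through 0x1F) were designed for exactly this tiered-delimiter job. They are single chars, never occur in normal text, and round-trip through UTF-8/UTF-16 without the encoding worries the typographic symbols raise. A small sketch:

    ```csharp
    // Sketch: ASCII separator control characters as tiered delimiters.
    const char GroupSep  = '\u001D';  // between collections
    const char RecordSep = '\u001E';  // between records
    const char UnitSep   = '\u001F';  // between fields

    string packed = string.Join(RecordSep.ToString(),
        "alice" + UnitSep + "42",
        "bob"   + UnitSep + "17");

    string[] records = packed.Split(RecordSep);
    string[] fields  = records[0].Split(UnitSep);  // ["alice", "42"]
    ```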


  • Weird problem with string function

    - by wrongusername
    I'm having a weird problem with the following function, which returns a string with all the characters in it after a certain point: string after(int after, string word) { char temp[word.size() - after]; cout << word.size() - after << endl; //output here is as expected for(int a = 0; a < (word.size() - after); a++) { cout << word[a + after]; //and so is this temp[a] = word[a + after]; cout << temp[a]; //and this } cout << endl << temp << endl; //but output here does not always match what I want string returnString = temp; return returnString; } The thing is, when the returned string is 7 chars or less, it works just as expected. When the returned string is 8 chars or more, then it starts spewing nonsense at the end of the expected output. For example, the lines cout << after(1, "12345678") << endl; cout << after(1, "123456789") << endl; gives an output of: 7 22334455667788 2345678 2345678 8 2233445566778899 23456789?,?D~ 23456789?,?D~ What can I do to fix this error, and are there any default C++ functions that can do this for me?
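    The likely culprit: temp is never null-terminated (and a variable-length array is not standard C++ in the first place), so once the copied tail is no longer followed by a zero byte by accident, printing it and constructing a string from it read past the end. A sketch of the same function using std::string::substr, which already does exactly this:

    ```cpp
    #include <iostream>
    #include <string>

    // Sketch: substr(pos) returns everything from index 'pos' to the end,
    // avoiding the unterminated char buffer entirely.
    std::string after(std::string::size_type pos, const std::string& word)
    {
        return pos < word.size() ? word.substr(pos) : std::string();
    }

    int main()
    {
        std::cout << after(1, "12345678")  << '\n';  // 2345678
        std::cout << after(1, "123456789") << '\n';  // 23456789
    }
    ```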


  • How can I access mainframe data with .Net applications and SQL Queries?

    - by orandov
    We have a large amount of data stored on an IBM mainframe using VSAM files. A lot of this data is dropped on the network every night in the form of text files to be processed and dumped into FoxPro and SQL Server databases. There are also many text files produced nightly by custom applications that get uploaded to the mainframe to keep everything in sync. Keeping everything in sync is very tricky, to say the least. We are not getting rid of the mainframe any time soon, and we would like to replace all the nightly batch processing with real-time access to the mainframe data. We would like to be able to: read data directly from the mainframe and produce reports based on it, possibly using SQL queries; read and write data from custom .NET applications. We are not looking for a new platform to interface with the mainframe like Information Builders offers. We don't want to build application modules or reports with new "Business Intelligence" tools. We already know how to generate reports and write custom applications using SQL, .NET, Visual Studio, etc. All we are looking for is some sort of adapter to connect to our mainframe data. Any ideas are appreciated.


  • Improving Performance on this Image Creation function

    - by Abs
    Hello all, I am making use of GD2 and the image functions to take in a string and then convert that into an image using different fonts at different sizes. The function I use is below. Currently, its pretty quick but not quick enough. The function gets called about 20 times per user and the images generated are always new ones (different) so caching isn't going to help! I was hoping to get some ideas on how to make this function faster. Maybe supply more RAM to the script running? Anything else that is specific to this PHP function? Anything else that I can do to tweak performance of this function? function generate_image($save_path, $text, $font_path, $font_size){ $font = $font_path; /* * I have simplifed the line below, its actually a function that works out the size of the box * that is need for each image as the image size is different based on font type, font size etc */ $measure = array('width' => 300, 'height'=> 120); if($measure['width'] > 900){ $measure['width'] = 900; } $im = imagecreatetruecolor($measure['width'], $measure['height']); $white = imagecolorallocate($im, 255, 255, 255); $black = imagecolorallocate($im, 0, 0, 0); imagefilledrectangle($im, 0, 0, $measure['width'], $measure['height'], $white); imagettftext($im, $font_size, 0, $measure['left'], $measure['top'], $black, $font, ' '.$text); if(imagepng($im, $save_path)){ $status = true; }else{ $status = false; } imagedestroy($im); return $status; } Thanks all for any help


  • jQuery class changing works, but doesn't affect another function like it should

    - by Qwibble
    So I have content boxes that close and expand when you click an arrow. The content box has two classes for telling whether it is open or closed (.box_open, .box_closed). A hover function is assigned to box_open so that when it is open and you hover over the header, the arrow appears. However, I don't want this to happen when the box is closed, as I want the arrow to remain visible when the box is closed. When the box closes, the box_open class is removed, but the function assigned to that class still works. Here's the jQuery code for the two functions. You can also see them in the head of the demo below. // Display Arrow on Box Header Hover $(".box_open").hover( function () { $(this).find('a').show(); }, function () { $(this).find('a').hide(); } ); // Open and Close Boxes: $(".box_header a").click( function () { $(this).parent().next('.box_border').stop().toggle(); $(this).parent().toggleClass("box_open"); $(this).parent().toggleClass("box_closed"); return false; } ); Can anyone take a look at what the problem is? Here's the demo url: Demo Url
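    .hover() binds only to the elements that carry .box_open at the moment the call runs; removing the class later does not unbind those handlers. One hedged fix is delegated binding, so the selector is re-checked on every event (jQuery 1.7+ shown; older versions would use .delegate() or .live() with the same selector):

    ```javascript
    // Sketch: delegate from the document so ".box_open" is re-evaluated on each
    // hover, picking up class changes made after page load.
    $(document).on('mouseenter', '.box_open', function () {
        $(this).find('a').show();
    });
    $(document).on('mouseleave', '.box_open', function () {
        $(this).find('a').hide();
    });

    // Open/close toggle, unchanged in behavior from the question:
    $('.box_header a').click(function () {
        $(this).parent().next('.box_border').stop().toggle();
        $(this).parent().toggleClass('box_open box_closed');
        return false;
    });
    ```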


  • Stubbing an ActsAs Rails Plugin

    - by Rabbott
    I need to create a plugin much like Authlogic (or even just add on to Authlogic), but due to requirements beyond my control I need my plugin to authenticate using SOAP. Basically the plugin would require that anyone accessing the controller (before_filter would be fine) would have to authenticate first. I have ZERO control over the login page, or the SOAP server, I am simply a client attempting to authenticate to the providers SOAP Web Service. Here is what happens.. before_filter realizes that no session[:credential] is set, and forwards the user to the url on the providers servers. The user enters their credentials, and once authenticated, the web service forwards the user to a URL that has been entered by their sysadmins, attaching a token to the url on its way back. I need to take that token, append it to some parameters stored in a local YAML file, and make the SOAP call to the providers server. If all goes as planned, I need to set session[:credential] to the result of the SOAP call, and forward the user to the root page. Subsequent calls to the before_filter will not make the SOAP call, because session[:credential] is set. Ideally I think this would be awesome to slap on top of Authlogic, but I'm not sure how to do this, So I started to create my own acts_as_soap_authentic plugin, which isn't causing errors, but doesn't do anything.. Anyone have any pointers, or tips as to how I can get the ball rolling here? It seems simple, but is proving not to be..
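    A minimal hedged sketch of the controller side of such a filter - the provider URL, token parameter and SoapAuth client below are placeholders, not part of the question:

    ```ruby
    # Sketch (hypothetical names): app/controllers/application_controller.rb
    class ApplicationController < ActionController::Base
      before_filter :require_soap_credential

      private

      def require_soap_credential
        return if session[:credential]
        redirect_to PROVIDER_LOGIN_URL   # provider redirects back with ?token=...
      end
    end

    # app/controllers/sessions_controller.rb -- the URL the provider returns to
    class SessionsController < ApplicationController
      skip_before_filter :require_soap_credential, :only => :callback

      def callback
        # SoapAuth stands in for whatever SOAP client wraps the provider's service
        session[:credential] = SoapAuth.verify(params[:token], SOAP_PARAMS_FROM_YAML)
        redirect_to root_path
      end
    end
    ```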


  • VSTO Word ContentControls, Y U No have Name property?

    - by System.Cats.Lol
    When you add a VSTO (not Word native) content control, you specify the name: controls.AddContentControl(wordRange, "foo", wdType); Where controls is the VSTO (extended) Document.Controls collection. You can later look up the control by name: ContentControl myContentControl = controls["foo"]; So why in the world is there no Name property for ContentControl? (or ContentControlBase, or any of the other derivatives). I'm implementing a wrapper class for the Document.Controls property that lets you add or iterate the content controls. When iterating the underlying Document.Controls, there's no way to look up the name of each control. (We need it to return an instance of our ContentControl wrapper). So currently I'm doing this in our ContentControls wrapper class: public IEnumerator<IContentControl> GetEnumerator() { System.Collections.IEnumerator en = this.wordControls.GetEnumerator(); while (en.MoveNext()) { // VSTO Document.Controls includes all managed controls, not just // VSTO ContentControls; return only those. if (en.Current is Microsoft.Office.Tools.Word.ContentControl) { // The control's name isn't stored with the control, only when it was added, // so use a placeholder name for the wrapper. yield return new ContentControl("Unknown", (Microsoft.Office.Tools.Word.ContentControl)en.Current); } } } I'd prefer to not have to resort to keeping a map of names-to-wrapper-objects in our ContentControls object. Can anyone tell me how to get the control's name (the name parameter that was passed to Controls.Add()?


  • Comprehending the Semantic Web and its methods, syntax, vocabularies and languages

    - by DreamCodeR
    Hi. I have just been introduced to the Semantic Web and its family of functions, but I have a hard time understanding some of it, which I was hoping someone could explain to me. As far as I've understood, RDF can be written in several syntaxes: RDF/XML, Turtle, etc. Now, I understand XML - how it is presented and how it can be parsed. However, some people write in the Turtle syntax, but how do they parse that information? I can't seem to find a single library for any language to "extract" the information written in the Turtle syntax into another form. The same goes for N3. How can it be used - executed, or something else? I seem to be able to understand RDFa: that it is a way to embed RDF into XHTML. For me that is a way to embed RDF into "something". But how can I compare that to Turtle, N3, or the like? Thanks in advance.
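    On the parsing question: Turtle and N3 are just alternative serializations of the same RDF triple model, so a general RDF library reads them into the same graph it would build from RDF/XML. As one hedged example, Python's rdflib:

    ```python
    # Sketch: parsing Turtle with rdflib; N3 works the same way with
    # format="n3", and the result is the same set of RDF triples either way.
    from rdflib import Graph

    turtle_data = """
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    <http://example.org/alice> foaf:name "Alice" .
    """

    g = Graph()
    g.parse(data=turtle_data, format="turtle")

    for subj, pred, obj in g:
        print(subj, pred, obj)

    print(g.serialize(format="xml"))  # the same triples re-serialized as RDF/XML
    ```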


  • Google Calendar like interface

    - by John Virgolino
    I need to write an application that essentially functions like a week view of a calendar: columns for the days and then rows for appointments, where the height of the appointment box visually represents time. In my case, I just don't want the time of day as the vertical axis; I just want hours or minutes. The Google AJAX approach is very clean and easy to use and would be perfect, I think, but my main experience is in ASP.NET and Windows Forms (.NET). I don't want to reinvent the wheel, but I find my mind is stuck on this problem and on the idea that I would have to create an interface from scratch for this. I have checked out the Infragistics product (used it for other projects) and read up a lot on the Google APIs, including their AJAX toolkit. I haven't done Java; however, learning a language is not my issue - it's learning the particulars that will help me reach my goal that I feel will take most of the time. Am I making a mountain out of a molehill? Is this really a lot easier than I think? This is starting to sound like a Dear Abby post - I'll stop now. Any advice or insight would be great! Thanks all!


  • .net IHTTPHandler Streaming SQL Binary Data

    - by Yisman
    Hello everybody, I am trying to implement an IHttpHandler for streaming files. Files may be tiny thumbnails or gigantic movies, and the binaries are stored in SQL Server. I have looked at a lot of code online, but something does not make sense: isn't streaming supposed to read the data piece by piece and move it over the wire? Most of the code seems to first read the whole field from MSSQL into memory and then use streaming only for the output writing. Wouldn't it be more efficient to actually stream from disk directly to HTTP byte by byte (or in buffered chunks)? Here's my code so far, but I can't figure out the correct combination of the SqlDataReader mode, the stream object and the writing system. Public Sub ProcessRequest(ByVal context As HttpContext) Implements IHttpHandler.ProcessRequest context.Response.BufferOutput = False Dim FileField=safeparam(context.Request.QueryString("FileField")) Dim FileTable=safeparam(context.Request.QueryString("FileTable")) Dim KeyField=safeparam(context.Request.QueryString("KeyField")) Dim FileKey=safeparam(context.Request.QueryString("FileKey")) Using connection As New SqlConnection(ConfigurationManager.ConnectionStrings("Main").ConnectionString) Using command As New SqlCommand("SELECT " & FileField & "Bytes," & FileField & "Type FROM " & FileTable & " WHERE " & KeyField & "=" & FileKey, connection) command.CommandType = Data.CommandType.Text End Using End Using End Sub Please be aware that this SQL command also returns the file extension (pdf, jpg, doc...) in the second field of the query. Thank you all very much.
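    A hedged sketch of the chunked approach, continuing from the command above: CommandBehavior.SequentialAccess plus GetBytes in a loop keeps only one buffer in memory at a time. Note that with SequentialAccess the columns must be read in order, so if the type column is needed for the response header it should be selected before the binary column.

    ```vb
    ' Sketch: stream the BLOB in chunks; nothing larger than the buffer is buffered.
    connection.Open()
    Using reader As SqlDataReader = command.ExecuteReader(CommandBehavior.SequentialAccess)
        If reader.Read() Then
            context.Response.ContentType = "application/octet-stream" ' or map from the type column
            Dim buffer(65535) As Byte
            Dim offset As Long = 0
            Dim bytesRead As Long = reader.GetBytes(0, offset, buffer, 0, buffer.Length)
            Do While bytesRead > 0
                context.Response.OutputStream.Write(buffer, 0, CInt(bytesRead))
                context.Response.Flush()
                offset += bytesRead
                bytesRead = reader.GetBytes(0, offset, buffer, 0, buffer.Length)
            Loop
        End If
    End Using
    ```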


  • Bilinear interpolation - DirectX vs. GDI+

    - by holtavolt
    I have a C# app for which I've written GDI+ code that uses Bitmap/TextureBrush rendering to present 2D images, which can have various image processing functions applied. This code is a new path in an application that mimics existing DX9 code, and they share a common library to perform all vector and matrix (e.g. ViewToWorld/WorldToView) operations. My test bed consists of DX9 output images that I compare against the output of the new GDI+ code. A simple test case that renders to a viewport that matches the Bitmap dimensions (i.e. no zoom or pan) does match pixel-perfect (no binary diff) - but as soon as the image is zoomed up (magnified), I get very minor differences in 5-10% of the pixels. The magnitude of the difference is 1 (occasionally 2)/256. I suspect this is due to interpolation differences. Question: For a DX9 ortho projection (and identity world space), with a camera perpendicular and centered on a textured quad, is it reasonable to expect DirectX.Direct3D.TextureFilter.Linear to generate identical output to a GDI+ TextureBrush filled rectangle/polygon when using the System.Drawing.Drawing2D.InterpolationMode.Bilinear setting? For this (magnification) case, the DX9 code is using this (MinFilter,MipFilter set similarly): Device.SetSamplerState(0, SamplerStageStates.MagFilter, (int)TextureFilter.Linear); and the GDI+ path is using: g.InterpolationMode = InterpolationMode.Bilinear; I thought that "Bilinear Interpolation" was a fairly specific filter definition, but then I noticed that there is another option in GDI+ for "HighQualityBilinear" (which I've tried, with no difference - which makes sense given the description of "added prefiltering for shrinking") Followup Question: Is it reasonable to expect pixel-perfect output matching between DirectX and GDI+ (assuming all external coordinates passed in are equal)? If not, why not? Finally, there are a number of other APIs I could be using (Direct2D, WPF, GDI, etc.) - and this question generally applies to comparing the output of "equivalent" bilinear interpolated output images across any two of these. Thanks!


  • WYSIWYG in Doxygen

    - by Adam Shiemke
    I'm working on a fairly large project written in C. The idea was to build a library of modular blocks that can be reused across several platforms. Each module is associated with a Word document in .docx format (a huge pain to diff/merge). In these docs, an interface section is specified, listing datatypes and publicly accessible functions. These were often inconsistent with the actual implementation in code, and wading through all this documentation was a pain. I've been working to switch to Doxygen to simplify document management. I haven't found a good way to embed the previously written documentation into the Doxygen output. I've copy-pasted it into sections and used modules to group the sources together, but the document sections look ugly in the comments (the output is pretty), and since Doxygen takes a while to parse through our code (about 30 mins), validating formatting is a pain. Is there some way to WYSIWYG large blocks of documentation into Doxygen? I feel this would increase the number of people documenting their code, and the quality of that documentation. I considered linking to HTML, but that splits out the documentation. I also considered putting it inline in HTML, but this also seems like a pain and would mean everyone needs a WYSIWYG HTML editor (or some HTML skills). Any ideas on how to make things easier and prettier? Thanks loads.


  • Combining cache methods - memcache/disk based

    - by Industrial
    Hi! Here's the deal. We would have taken the completely static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most used data. Here are the two approaches we have thought of right now: Using memcache on all major queries and leaving it alone to do what it does best. Using memcache for the most commonly retrieved data, and combining it with a standard hard-drive-stored cache for further usage. The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, despite the theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to increase the nodes. What approach should we use? Is it stupid to compromise and combine the two methods? Or should we instead focus on utilizing memcache alone and on upgrading the memory as the load increases with the number of users? Thanks a lot!
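    A minimal hedged sketch of the combined approach (the helper name and TTL are made up): check memcache first, fall back to a file cache, and promote disk hits back into memcache so hot keys stay in RAM.

    ```php
    <?php
    // Sketch (hypothetical helper): memcache first, disk second, source last.
    function cache_get($key, $loader, $ttl = 300) {
        global $memcache;                       // assumed connected Memcache instance
        $value = $memcache->get($key);
        if ($value !== false) {
            return $value;                      // hot: served from RAM
        }

        $file = sys_get_temp_dir() . '/cache_' . md5($key);
        if (is_file($file) && (time() - filemtime($file)) < $ttl) {
            $value = unserialize(file_get_contents($file));
            $memcache->set($key, $value, 0, $ttl);   // promote back into memcache
            return $value;
        }

        $value = $loader();                     // e.g. run the SQL query
        $memcache->set($key, $value, 0, $ttl);
        file_put_contents($file, serialize($value));
        return $value;
    }
    ```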


  • NULL-keys for key/value table

    - by user72185
    (Using Oracle) I have a table with key/value pairs like this: create table MESSAGE_INDEX ( KEY VARCHAR2(256) not null, VALUE VARCHAR2(4000) not null, MESSAGE_ID NUMBER not null ) I now want to find all the messages where key = 'someKey' and value is 'val1', 'val2' or 'val3' - OR value is null, in which case there will be no entry in the table at all. This is to save space; there would be a large number of keys with null values if I stored them all. I think this works: SELECT message_id FROM message_index idx WHERE ((key = 'someKey' AND value IN ('val1', 'val2', 'val3')) OR NOT EXISTS (SELECT 1 FROM message_index WHERE key = 'someKey' AND idx.message_id = message_id)) But it is extremely slow: it takes 8 seconds with 700K records in message_index, and there will be many more records and more search criteria when moving outside of my test environment. The primary key is key, value, message_id: add constraint PK_KEY_VALUE primary key (KEY, VALUE, MESSAGE_ID) And I added another index on message_id, to speed up searching for missing keys: create index IDX_MESSAGE_ID on MESSAGE_INDEX (MESSAGE_ID) I will be doing several of these key/value lookups in every search, not just one as shown above. So far I am doing them nested, where the output IDs of one level are the input to the next. E.g.: SELECT message_id from message_index WHERE (key/value compare) AND message_id IN ( SELECT ... and so on ) What can I do to speed this up?


  • Image drawing library for Haskell?

    - by absz
    I'm working on a Haskell program for playing spatial games: I have a graph of a bunch of "individuals" playing the Prisoner's Dilemma, but only with their immediate neighbors, and copying the strategies of the people who do best. I've reached a point where I need to draw an image of the world, and this is where I've hit problems. Two of the possible geometries are easy: if people have four or eight neighbors each, then I represent each one as a filled square (with color corresponding to strategy) and tile the plane with these. However, I also have a situation where people have six neighbors (hexagons) or three neighbors (triangles). My question, then, is: what's a good Haskell library for creating images and drawing shapes on them? I'd prefer that it create PNGs, but I'm not incredibly picky. I was originally using Graphics.GD, but it only exports bindings to functions for drawing points, lines, arcs, ellipses, and non-rotated rectangles, which is not sufficient for my purposes (unless I want to draw hexagons pixel by pixel*). I looked into using foreign import, but it's proving a bit of a hassle (partly because the polygon-drawing function requires an array of gdPoint structs), and given that my requirements may grow, it would be nice to use an in-Haskell solution and not have to muck about with the FFI (though if push comes to shove, I'm willing to do that). Any suggestions? * That is also an option, actually; any tips on how to do that would also be appreciated, though I think a library would be easier.


  • Database and logic layer for ASP.NET MVC application

    - by Ismail
    I'm going to start a new project which will be small initially but may grow big over the years. I'm strongly convinced that I'm going to use ASP.NET MVC with jQuery for the UI. I want to go for MySQL as the database for various reasons, but I'm worried about a few things. I have a good few years of experience working on SQL Server databases, and on one project I had a bad experience creating and managing stored procedures on a MySQL database. I'm totally new to LINQ, but I see that it is easier to use once you are familiar with it. The first thing is that accessing data should be easy. So I thought I should use LINQ to MySQL, but somewhere I read that this is not directly supported; however, the MySQL .NET connector adds support for Entity Framework. I don't know what the pros and cons of that are. I would love to implement the repository pattern, as it allows applying filters in the logic layer rather than in the data access layer. Will that be possible if I use Entity Framework? I'm not clear on how I should go about all this, or whether I should just forget everything and directly use LINQ to SQL on SQL Server. I'm also concerned about performance. Someone told me that if we use Entity Framework it fetches a lot of data and then filters it. Is that right? So, basically, my questions are: Is LINQ to MySQL possible? If yes, where can I get more details on it? What are the pros and cons of using Entity Framework with MySQL? Will it be easy to access data using Entity Framework with MySQL? Will I be able to implement the repository pattern, which allows applying filters in the logic layer rather than the data access layer (when I use Entity Framework with MySQL)? Does it fetch a huge amount of data from the database and then apply the filter to it? If that sounds like too many questions, then just letting me know what you would do (and why) in this situation, as someone experienced in this area, would answer my question.
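    On the "filter in the logic layer" point, a hedged sketch (the entity and property names are invented): if the repository exposes IQueryable<T>, the Where clauses added by the calling layer are still translated to SQL by Entity Framework, so the filtering happens in the database rather than after pulling everything into memory.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Data.Entity;
    using System.Linq;

    // Hypothetical Order entity used only for illustration.
    public class Order
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
        public DateTime CreatedOn { get; set; }
    }

    public interface IRepository<T> where T : class
    {
        IQueryable<T> Query();
    }

    public class EfRepository<T> : IRepository<T> where T : class
    {
        private readonly DbContext _context;
        public EfRepository(DbContext context) { _context = context; }
        public IQueryable<T> Query() { return _context.Set<T>(); }
    }

    public class OrderService
    {
        private readonly IRepository<Order> _orders;
        public OrderService(IRepository<Order> orders) { _orders = orders; }

        // Nothing is fetched until ToList(); the Where clause runs in the database.
        public List<Order> RecentLargeOrders()
        {
            DateTime cutoff = DateTime.Today.AddDays(-7);
            return _orders.Query()
                          .Where(o => o.Total > 1000 && o.CreatedOn > cutoff)
                          .ToList();
        }
    }
    ```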


  • How to change scope data in controller?

    - by Derooie
    I am a real newbie at Angular. I have created something which lets me retrieve and add items via Angular and get/put them in MongoDB; I use Express and Mongoose in the app. The question is: how can I modify the data in the controller before it reaches the DOM? In this example I have created a way to retrieve data, and I get it exactly as it is stored in MongoDB. What I would like is for the field where I store 1 or 0 in the database to be shown as text. So if Mongo has a value of 1 I get "the value in mongo is 1", and when the field has a value of 0 I get "the value is zero". (Just as an example - I'd like different texts, but it illustrates what I want.) I have posted my controller, HTML and current output below. Any help would be appreciated. Controller function getGuests($scope, $http) { $scope.formData = {}; $http.get('/api/guests') .success(function(data) { $scope.x = data; }) .error(function(data) { console.log('Error: ' + data); }); } HTML <div ng-controller="getGuests"> <div ng-repeat="guest in x"> {{ guest.voornaam }} {{ guest.aanwezig }} </div> </div> The current scope output (what I see in the HTML); I would like to change only the value of "aanwezig" when it is 0 or 1: firstname1 1 firstname2 0 Something else that would be great to learn is how I can run a specific MongoDB query at the push of a button and get that result.
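    One hedged option (the display strings below are placeholders): map the data in the success callback before putting it on the scope, so the view can bind to the derived field; a custom Angular filter would be the other common route.

    ```javascript
    // Sketch: derive a display string from the 0/1 flag before exposing the data.
    function getGuests($scope, $http) {
        $scope.formData = {};
        $http.get('/api/guests')
            .success(function (data) {
                data.forEach(function (guest) {
                    guest.aanwezigText = guest.aanwezig == 1
                        ? 'the value in mongo is 1'
                        : 'the value is zero';
                });
                $scope.x = data;
            })
            .error(function (data) {
                console.log('Error: ' + data);
            });
    }
    ```

    The template would then show {{ guest.aanwezigText }} instead of {{ guest.aanwezig }}.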

