Search Results

Search found 10384 results on 416 pages for 'plan cache'.

Page 346/416 | < Previous Page | 342 343 344 345 346 347 348 349 350 351 352 353  | Next Page >

  • SVN commit using cruise control

    - by pratap
    Hi all, can anyone tell me how to tell SVN, from the command line, that certain files are to be deleted from the repository? I am using CruiseControl to automate the svn commit process, but the execution of the svn commit command restores the files which I deleted from my working copy. The way I am doing it is:

    1. Delete some files in my working copy (so the number of files in my WC is less than the number in the repository).
    2. Execute the svn command using CruiseControl:

        <exec executable="svn.exe">
            <buildArgs>ci -m "test msg" --no-auth-cache --non-interactive</buildArgs>
            <buildTimeoutSeconds>1000</buildTimeoutSeconds>
        </exec>

    Result: the deleted files are restored in my WC. Can someone help me figure out where I have gone wrong, or whether I have to make some changes or configuration? Thank you all. Regards, Uday
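    A likely cause, judging from the description (an assumption, not something the question confirms): files removed with a plain OS delete are never scheduled for deletion in Subversion, so the commit ignores them and the next update brings them back. Scheduling the deletions with svn delete before the commit should fix it. A hypothetical pre-commit step for Windows (the for/findstr incantation is illustrative; verify it against your shell):

        REM mark every file that svn reports missing ("!") as deleted
        for /f "tokens=1*" %i in ('svn status ^| findstr "^!"') do svn delete "%j"

    After that, the same "ci" command will commit the deletions instead of ignoring them.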

    Read the article

  • How can I build something like Amazon S3 in Perl?

    - by Joel G
    I am looking to code a file storage application in Perl, similar to Amazon S3. I already have an Amazon S3 clone that I found online, called ParkPlace, but it is written in Ruby, is old, and isn't built for high loads. I am not really sure which modules and programs I should use, so I'd like some help picking them out. My requirements are listed below (yes, I know there are lots, but I could start simple and then add more once I get it going):

    - Easy API implementation for client-side apps (maybe REST?).
    - Centralized database server for the user DB (maybe PostgreSQL?).
    - Logging of all connections, bandwidth used - well, pretty much everything - to a centralized server (maybe PostgreSQL again?).
    - Easy server-side configuration (config file(s) stored on the servers).
    - Web-based control panel for admin(s) and user(s) to show logs (could work just by running queries against the databases).
    - Fast, high uptime, low memory usage.
    - Some sort of load distribution/load balancer (maybe DNS-based, or Pound, or Perlbal, or something else?).
    - Maybe a cache of some sort (memcached or Perlbal or something else?).

    Thanks in advance.
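    For the REST piece, a minimal sketch of the kind of thing that is possible with Mojolicious::Lite (every name here - the bucket/key URL scheme, the $STORE directory - is an assumption for illustration, not an endorsement of a specific design):

        # tiny GET/PUT object store sketch; assumes one directory per bucket
        use Mojolicious::Lite;
        use File::Spec;

        my $STORE = '/var/s3clone';    # hypothetical storage root

        get '/:bucket/:key' => sub {
            my $self = shift;
            my $path = File::Spec->catfile($STORE, $self->param('bucket'), $self->param('key'));
            return $self->render(text => 'Not Found', status => 404) unless -f $path;
            open my $fh, '<', $path or return $self->render(text => 'Error', status => 500);
            binmode $fh;
            my $content = do { local $/; <$fh> };
            $self->render(data => $content);
        };

        put '/:bucket/:key' => sub {
            my $self = shift;
            my $path = File::Spec->catfile($STORE, $self->param('bucket'), $self->param('key'));
            open my $fh, '>', $path or return $self->render(text => 'Error', status => 500);
            binmode $fh;
            print $fh $self->req->body;
            $self->render(text => 'OK');
        };

        app->start;

    Perlbal in front for load balancing and memcached for hot objects would slot in without changing this layer.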

    Read the article

  • Sharing large objects between ruby processes without a performance hit

    - by Gdeglin
    I have a Ruby hash that reaches approximately 10 megabytes if written to a file using Marshal.dump. After gzip compression it is approximately 500 kilobytes. Iterating through and altering this hash is very fast in Ruby (fractions of a millisecond). Even copying it is extremely fast. The problem is that I need to share the data in this hash between Ruby on Rails processes. In order to do this using the Rails cache (file_store or memcached) I need to Marshal.dump it first; however, this incurs a 1000 millisecond delay when serializing it and a 400 millisecond delay when deserializing it. Ideally I would want to be able to save and load this hash from each process in under 100 milliseconds.

    One idea is to spawn a new Ruby process to hold this hash and provide an API for the other processes to modify or process the data within it, but I want to avoid doing this unless I'm certain that there are no other ways to share this object quickly. Is there a way I can more directly share this hash between processes without needing to serialize or deserialize it?

    Here is the code I'm using to generate a structure similar to the one I'm working with:

        @a = []
        0.upto(500) do |r|
          @a[r] = []
          0.upto(10_000) do |c|
            if rand(10) == 0
              @a[r][c] = 1 # 10% chance of being 1
            else
              @a[r][c] = 0
            end
          end
        end

        @c = Marshal.dump(@a) # 1000 milliseconds
        Marshal.load(@c)      # 400 milliseconds
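    One direction worth measuring (a sketch, resting on the assumption that the values really are only ever 0 or 1): pack each row into a byte String instead of an Array of integers. Marshal treats a String as close to a raw memory copy, so dump and load times drop dramatically, while element access stays O(1):

        # same 500 x 10_000 matrix of 0/1 flags, one packed string per row
        @a = Array.new(501) do
          (0..10_000).map { rand(10) == 0 ? 1 : 0 }.pack('C*')
        end

        # read a cell:  @a[r][c]         (Ruby 1.8 returns the byte as a Fixnum)
        #               @a[r].getbyte(c) on Ruby 1.9+
        # write a cell: @a[r][c] = 1.chr, or @a[r].setbyte(c, 1) on 1.9+

        packed   = Marshal.dump(@a)  # far cheaper than dumping nested arrays
        restored = Marshal.load(packed)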

    Read the article

  • Check-for-modifications failure in continuous integration using VisualSVN Server and CruiseControl.NET

    - by harun123
    I am using CruiseControl.NET for continuous integration. I've created a repository for my project using VisualSVN Server (which uses Windows authentication). Both servers are hosted on the same system (OS: Microsoft Windows Server 2003 SP2). When I force-build the project using CruiseControl.NET, "Failed task(s): Svn: CheckForModifications" is shown as the message. When I checked the build report, it says the following:

        BUILD EXCEPTION
        Error Message: ThoughtWorks.CruiseControl.Core.CruiseControlException:
            Source control operation failed: svn: OPTIONS of
            'https://sp-ci.sbsnetwork.local:8443/svn/IntranetPortal/Source':
            Server certificate verification failed: issuer is not trusted
            (https://sp-ci.sbsnetwork.local:8443).
        Process command: C:\Program Files\VisualSVN Server\bin\svn.exe log
            <same URL as above> -r "{2010-04-29T08:35:26Z}:{2010-04-29T09:04:02Z}"
            --verbose --xml --username ccnetadmin --password cruise
            --non-interactive --no-auth-cache
        at ThoughtWorks.CruiseControl.Core.Sourcecontrol.ProcessSourceControl.Execute(ProcessInfo processInfo)
        at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Svn.GetModifications(IIntegrationResult from, IIntegrationResult to)
        at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModifications(ISourceControl sourceControl, IIntegrationResult lastBuild, IIntegrationResult thisBuild)
        at ThoughtWorks.CruiseControl.Core.IntegrationRunner.GetModifications(IIntegrationResult from, IIntegrationResult to)
        at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request)

    My sourcecontrol node in ccnet.config is as shown below:

        <sourcecontrol type="svn">
            <executable>C:\Program Files\VisualSVN Server\bin\svn.exe</executable>
            <trunkUrl>check out url</trunkUrl>
            <workingDirectory>C:\ProjectWorkingDirectories\IntranetPortal\Source</workingDirectory>
            <username>ccnetadmin</username>
            <password>cruise</password>
        </sourcecontrol>

    Can anyone suggest how to avoid this error?
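    The message points at VisualSVN Server's self-signed certificate, which the account running the CCNet service has never accepted. A sketch of the two usual remedies (both assume you can act as that service account): run one interactive svn command as it and accept the certificate permanently, or, on a Subversion 1.6+ client, add --trust-server-cert next to the --non-interactive the task already passes.

        REM run once as the user the CruiseControl.NET service runs under
        "C:\Program Files\VisualSVN Server\bin\svn.exe" list https://sp-ci.sbsnetwork.local:8443/svn/IntranetPortal/Source --username ccnetadmin

        REM at the certificate prompt, answer "p":
        REM (R)eject, accept (t)emporarily or accept (p)ermanently? p

    The accepted certificate lands in that account's auth cache, after which the non-interactive CCNet runs stop failing.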

    Read the article

  • Error while loading applet in web application

    - by pallavi
    I want to run my applet in a web application, but I get the error mentioned below. Please help me get out of this problem.

        Java Plug-in 1.7.0
        Using JRE version 1.7.0-ea-b116 Java HotSpot(TM) Client VM
        User home directory = C:\Users\HONEY
        (Java console key help omitted)

        java.lang.RuntimeException: java.lang.NoClassDefFoundError: mp3$1
            at sun.plugin2.applet.Plugin2Manager.createApplet(Unknown Source)
            at sun.plugin2.applet.Plugin2Manager$AppletExecutionRunnable.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)
        Caused by: java.lang.NoClassDefFoundError: mp3$1
            at mp3.<init>(mp3.java:93)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
            at java.lang.reflect.Constructor.newInstance(Unknown Source)
            at java.lang.Class.newInstance0(Unknown Source)
            at java.lang.Class.newInstance(Unknown Source)
            at sun.plugin2.applet.Plugin2Manager$12.run(Unknown Source)
            at java.awt.event.InvocationEvent.dispatch(Unknown Source)
            at java.awt.EventQueue.dispatchEvent(Unknown Source)
            at java.awt.EventDispatchThread.pumpOneEventForFilters(Unknown Source)
            at java.awt.EventDispatchThread.pumpEventsForFilter(Unknown Source)
            at java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source)
            at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
            at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
            at java.awt.EventDispatchThread.run(Unknown Source)
        Caused by: java.lang.ClassNotFoundException: mp3$1
            at sun.plugin2.applet.Applet2ClassLoader.findClass(Unknown Source)
            at sun.plugin2.applet.Plugin2ClassLoader.loadClass0(Unknown Source)
            at sun.plugin2.applet.Plugin2ClassLoader.loadClass(Unknown Source)
            at sun.plugin2.applet.Plugin2ClassLoader.loadClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            ... 16 more
        Caused by: java.io.IOException: open HTTP connection failed:
            http://viscous10.webng.com/mp3/mp3$1.class
            at sun.plugin2.applet.Applet2ClassLoader.getBytes(Unknown Source)
            at sun.plugin2.applet.Applet2ClassLoader.access$000(Unknown Source)
            at sun.plugin2.applet.Applet2ClassLoader$1.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            ... 21 more
        Exception: java.lang.RuntimeException: java.lang.NoClassDefFoundError: mp3$1

    But it happens only if I run an applet with events; with a simple applet it never occurred. Thanks.
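    The last "Caused by" explains the whole chain: the applet class loader is asking the web server for mp3$1.class, the anonymous inner class javac generates for an event listener, and the download fails. A simple applet has no inner classes, which is why it loads fine. A sketch of the fix (assuming the applet is deployed as loose .class files today): upload every mp3$*.class file next to mp3.class, or bundle them all in a JAR and point the archive attribute at it:

        # package the applet class together with its generated inner classes
        jar cf mp3.jar mp3.class 'mp3$1.class'

        <!-- hypothetical applet tag; adjust dimensions to your page -->
        <applet code="mp3.class" archive="mp3.jar" width="300" height="100"></applet>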

    Read the article

  • Creating an AJAX Searchable Database.

    - by Austin
    Currently I am using mysqli to parse a CSV file into a database; that step has been accomplished. My next step is to make this database searchable and automatically updated via jQuery.ajax(). Some people suggest that I print out the database in another page and access it externally. I'm quite new to jQuery + Ajax, so if anyone could point me in the right direction that would be greatly appreciated. I understand that the documentation on Ajax should be enough to tell me what I'm looking for, but it appears to talk only about retrieving data from an external file - what about from a MySQL database? The code so far stands:

        <head>
        <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js"></script>
        </head>
        <body>
        <input type="text" id="search" name="search" />
        <input type="submit" value="submit">
        <?php
        show_source(__FILE__);
        error_reporting(E_ALL);
        ini_set('display_errors', '1');

        $category = NULL;
        $mc = new Memcache;
        $mc->addServer('localhost', '11211');

        $sql = new mysqli('localhost', 'user', 'pword', 'db');
        $cache = $mc->get("updated_DB");

        $query = 'SELECT cat,name,web,kw FROM infoDB WHERE cat LIKE ? OR name LIKE ? OR web LIKE ? OR kw LIKE ?';
        $results = $sql->prepare($query);
        // note: this binds the SQL string itself as all four parameters,
        // not the user's search term
        $results->bind_param('ssss', $query, $query, $query, $query);
        $results->execute();
        $results->store_result();
        ?>
        </body>
        </html>
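    A sketch of the usual shape (names like search.php and the JSON field list are assumptions): the browser sends the search box contents to a small PHP endpoint with $.ajax(); the endpoint runs the prepared LIKE query against MySQL and echoes the rows as JSON; the success callback renders them. Client side, with jQuery 1.4:

        // query the server on every keystroke
        $('#search').keyup(function () {
            $.ajax({
                url: 'search.php',               // hypothetical endpoint
                data: { q: $(this).val() },
                dataType: 'json',
                success: function (rows) {
                    var html = '';
                    $.each(rows, function (i, row) { html += '<li>' + row.name + '</li>'; });
                    $('#results').html(html);    // assumes a <ul id="results">
                }
            });
        });

    And the server side, fixing the bind to use the wildcarded term:

        <?php // search.php - sketch of the server side
        $sql  = new mysqli('localhost', 'user', 'pword', 'db');
        $term = '%' . $_GET['q'] . '%';          // bind the search term, not the SQL
        $stmt = $sql->prepare('SELECT cat,name,web,kw FROM infoDB
                               WHERE cat LIKE ? OR name LIKE ? OR web LIKE ? OR kw LIKE ?');
        $stmt->bind_param('ssss', $term, $term, $term, $term);
        $stmt->execute();
        $stmt->bind_result($cat, $name, $web, $kw);
        $rows = array();
        while ($stmt->fetch()) {
            $rows[] = array('cat' => $cat, 'name' => $name, 'web' => $web, 'kw' => $kw);
        }
        echo json_encode($rows);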

    Read the article

  • Problem with libcurl cookie engine

    - by Seb Rose
    [Cross-posted from the libcurl mailing list]

    I have a single-threaded app (MSVC C++ 2005) built against a static libcurl 7.19.4. A test application connects to an in-house server and performs a bespoke authentication process that includes posting a couple of forms; when this succeeds it creates a new resource (POST) and then updates the resource (PUT) using If-Match.

    - I only use a single connection to libcurl (i.e. only one CURL*).
    - The cookie engine is enabled from the start using curl_easy_setopt(CURLOPT_COOKIEFILE, "").
    - The cookie cache is cleared at the end of the authentication process using curl_easy_setopt(CURLOPT_COOKIELIST, "SESS"). This is required by the authentication process.

    The next call, which completes a successful authentication, results in a couple of security cookies being returned from the server - they have no expiry date set. The server (and I) expect the security cookies to then be sent with all subsequent requests to the server. The problem is that sometimes they are sent and sometimes they aren't. I'm not a cURL expert, so I'm probably doing something wrong, but I can't figure out what. Running the test app in a loop shows a random distribution of correct cookie handling.

    As a workaround I've disabled the cookie engine and am doing basic manual cookie handling. Like this it works as expected, but I'd prefer to use the library if possible. Does anyone have any ideas? Thanks, Seb
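    One way to watch the cookie engine's state at each step of the handshake (a debugging sketch built on the stock CURLINFO_COOKIELIST call, available since 7.14.1; not a fix in itself): dump the in-memory jar before and after the POST/PUT pair and compare runs that behave with runs that don't.

        #include <stdio.h>
        #include <curl/curl.h>

        /* print every cookie the engine currently holds (Netscape jar format) */
        static void dump_cookies(CURL *curl)
        {
            struct curl_slist *cookies = NULL;
            if (curl_easy_getinfo(curl, CURLINFO_COOKIELIST, &cookies) == CURLE_OK) {
                for (struct curl_slist *c = cookies; c; c = c->next)
                    printf("cookie: %s\n", c->data);
                curl_slist_free_all(cookies);
            }
        }

    If the security cookies are present in the jar but absent from the requests, the mismatch is likely in domain/path matching rather than in the clearing logic.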

    Read the article

  • How to organize the work when project needs to be re-implemented due to poor code quality?

    - by Dmitriy Nagirnyak
    Hi, I have joined a very small team where the one main developer has been building the web app (.NET 4.0) for about 6 months. The project should be delivered within the next 2 months. After a first look at the code I can say that I would never allow it to go to production (things like catch { }, no tests at all, WebForms, etc.), so the code quality is incredibly low. My task is to improve that and still deliver the solution. My plan is to start with unit testing and reimplement most of the functionality in MVC2 (though reusing some of the existing code). I estimate that I will need about 6 weeks to catch up and be at the same functionality level that the application has reached in its 6 months of development.

    The problem is that the main developer who has been working on the project does not seem to be very professional or skillful (he seems to be just starting out in IT, and many basic things are unknown to him). It will take a significant amount of time and effort to teach him how to do proper testing and development and to apply some patterns. I am ready to take responsibility for reimplementing the application, but at the same time I don't want the main developer to sit idle; since he won't be able to contribute significantly to the reimplemented project at this stage, I am not sure what would be the best way to keep productivity high for both of us.

    Currently I think the following arrangement is good enough: he proceeds doing what he does until I catch up with him, and then we start working on the new codebase together. The problem is that this approach is not very productive, as one developer works on the reimplementation while the other proceeds as before, effectively doing similar tasks twice. Can you suggest how we could better organise the work together in order to be most efficient for the overall project? Thanks, Dmitriy.

    Read the article

  • Java design: too many getters

    - by dege
    After writing a few smaller programs while learning Java, I have been designing them with Model-View-Controller. Using MVC, I have a plethora of getter methods in the model for the view to use. It feels that while I gain from using MVC, for every new value added I have to add two new methods to the model, which quickly gets cluttered with getters and setters.

    So I was thinking: maybe I should use the notifyObservers method that takes an argument. But it wouldn't feel very smart to send every value by itself, so I figured: maybe I could send a kind of container with all the values, preferably only those that actually changed. What this would accomplish is that instead of having a whole lot of getter methods, I could have just one method in the model which puts all relevant values in the container. Then in the view I would have a method, called from update, which extracts the values from the container and assigns them to the correct fields.

    I have two questions concerning this. First: is this actually a viable way to do it - would you recommend doing something along these lines? Secondly: if I do use this plan and I don't want to keep sending fields that didn't actually change, how would I handle that without having if statements checking whether the value is null for every single value?
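    The "container of changed values" idea already exists in the JDK as java.beans.PropertyChangeSupport: the model fires one event per property that actually changed, carrying its name, old value, and new value, so the view never needs null checks for untouched fields. A minimal sketch (class and property names invented for illustration):

        import java.beans.PropertyChangeListener;
        import java.beans.PropertyChangeSupport;

        public class GameModel {
            private final PropertyChangeSupport changes = new PropertyChangeSupport(this);
            private int score;

            public void setScore(int newScore) {
                int old = this.score;
                this.score = newScore;
                // fires only when old != newScore, so unchanged values never reach the view
                changes.firePropertyChange("score", old, newScore);
            }

            public void addPropertyChangeListener(PropertyChangeListener l) {
                changes.addPropertyChangeListener(l);
            }
        }

    The view registers one listener and switches on the event's property name, replacing the pile of getters with a single notification path.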

    Read the article

  • jQuery: Traversing AJAX response in Chrome/Safari

    - by jitzo
    I'm trying to traverse an AJAX response which contains a remote web page (HTML output). My goal is to iterate through the 'script', 'link', and 'title' elements of the remote page, load them if necessary, and embed their contents in the current page. It's working great in FF/IE, but for some reason Chrome and Safari behave differently: when I run an .each() loop on the response, Chrome/Safari seem to omit everything that is inside the <head> section of the page. Here's my current code:

        $.ajax({
            url: 'remoteFile.php',
            cache: false,
            dataFilter: function(data) {
                console.log(data);
                /* The output contains the entire response, including the <head>
                   section - on all browsers, including Chrome/Safari */
                $(data).filter("link, script, title").each(function(i) {
                    console.log($(this));
                    /* IE/FF output all of the link/script/title elements; Chrome
                       only outputs those that are not in the <head> section */
                });
                console.log($(data));
                /* This also outputs the incomplete structure on Chrome/Safari */
                return data;
            },
            success: function(response) {}
        });

    I've been struggling with this problem for quite a while now. I've found some similar cases in Google searches, but no real solution. This happens on both jQuery 1.4.2 and jQuery 1.3.2. I really don't want to parse the response with .indexOf() and .substring() - it seems to me that it would be overkill for the client. Many thanks in advance!
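    A workaround sketch, based on the assumption that WebKit's fragment parser is discarding head-only elements when jQuery converts the string to nodes: cut the <head> markup out with a single regex (far cheaper than a full indexOf/substring scan) and parse just that fragment, where link/script/title become top-level and survive:

        dataFilter: function (data) {
            // extract the raw <head> contents before jQuery parses anything
            var m = data.match(/<head[^>]*>([\s\S]*?)<\/head>/i);
            var headHtml = m ? m[1] : '';
            $('<div>' + headHtml + '</div>').find('link, script, title').each(function () {
                console.log(this.tagName);
            });
            return data;
        }

    One caveat to verify in the target browsers: jQuery may strip or execute <script> tags while parsing a string, so script handling might still need the regex route.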

    Read the article

  • Atlas style map index for static google map

    - by Ben Holland
    Hello, I'm using a static Google map, but really this problem could apply to any maps project. I want to divide a map into multiple quadrants (of, say, 50x50 pixels), label the columns as A, B, C..., and label the rows as 1, 2, 3... Next I plan to do something like:

    1. Find the markers which are the farthest north, east, south, and west.
    2. Use that info to define the bounding boxes of each row and column.
    3. Classify each marker by its row and column (example: Marker 1 = [A,2]).

    A few requirements: I don't know the zoom level, because I let Google set the zoom level appropriately for me, and I would rather not use an algorithm that depends on a zoom level. I do, however, know the locations of all of the markers shown on the map. Here is an example of a map whose markers I would like to classify: static map example link. I found these, which look like a good start: Resource 1, Resource 2. But I think I'm still in need of some help getting started. Can anyone help write out some pseudo code or post a few more resources? I'm kind of in a rut at the moment. Thanks! Much appreciated of any help!
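    A sketch of steps 1-3 in JavaScript (pure arithmetic over the marker coordinates, so it is independent of zoom level; markers as {lat, lng} objects, and at least two distinct positions, are assumed):

        // classify markers into a cols x rows atlas grid over their bounding box
        function classifyMarkers(markers, cols, rows) {
            var lats = markers.map(function (m) { return m.lat; });
            var lngs = markers.map(function (m) { return m.lng; });
            var minLat = Math.min.apply(null, lats), maxLat = Math.max.apply(null, lats);
            var minLng = Math.min.apply(null, lngs), maxLng = Math.max.apply(null, lngs);

            return markers.map(function (m) {
                // fraction of the way across the box, clamped into the last cell
                var col = Math.min(cols - 1,
                    Math.floor((m.lng - minLng) / (maxLng - minLng) * cols));
                var row = Math.min(rows - 1,
                    Math.floor((maxLat - m.lat) / (maxLat - minLat) * rows));
                return String.fromCharCode(65 + col) + (row + 1); // e.g. "A2"
            });
        }

    To get 50x50-pixel cells, derive cols and rows from the requested image size (e.g. a 500x300 static map gives cols = 10, rows = 6).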

    Read the article

  • jQuery performance

    - by jAndy
    Hi folks, imagine you have to do a lot of DOM manipulation (in my case, a kind of dynamic list). Look at this example:

        var $buffer = $('<ul/>', {
            'class': '.custom-example',
            'css': {
                'position': 'absolute',
                'top': '500px'
            }
        });

        $.each(pages[pindex], function(i, v){
            $buffer.append(v);
        });

        $buffer.insertAfter($root);

    "pages" is an array which holds LI elements as jQuery objects; "$root" is a UL element. What happens after this code is that both ULs are animated (scrolling), and finally, within the callback of animate, this code is executed:

        $root.detach();
        $root = $buffer;
        $root.css('top', '0px');
        $buffer = null;

    This works very well; the only thing that annoys me is the performance. I do cache all the DOM elements I lay a hand on. Without looking too deeply into jQuery's source code, is there a chance that my performance issues are located there? Does jQuery use DocumentFragments to append things? If you create a new DOM element with var $el = $('<div/>'), it is only stored in memory at this point, isn't it?
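    To the two questions: jQuery does build a DocumentFragment when it parses an HTML string into nodes, and yes, an element created with $('<div/>') lives only in memory until it is inserted. The per-iteration .append() is the likelier cost here. A batching sketch (assuming, as described, that pages[pindex] holds jQuery-wrapped LIs): unwrap to plain DOM nodes and append them in one call, so the detached <ul> is touched once:

        // gather the raw DOM nodes, then append once
        var nodes = [];
        $.each(pages[pindex], function (i, v) {
            nodes.push(v[0]); // unwrap the jQuery object
        });
        $buffer.append($(nodes)); // single append of the whole set

    Since $buffer stays detached until insertAfter(), the browser does no reflow during the loop either way; the saving is in jQuery's per-call overhead.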

    Read the article

  • How to stop MVC caching the results of invoking an action method?

    - by Trey Carroll
    I am experiencing a problem with IE caching the results of an action method. Other articles I found were related to security and the [Authorize] attribute; this problem has nothing to do with security. This is a very simple "record a vote, grab the average, return the average and the number of votes" method. The only slightly interesting thing about it is that it is invoked via Ajax and returns a JSON object. I believe that it is the JSON object that is getting cached.

    When I run it from Firefox and watch the XHR traffic with Firebug, everything works perfectly. However, under IE 8 the "throbber" graphic never has time to show up, and the page elements displaying the "new" average and count, which are injected into the page with jQuery, are never different. I need a way to tell MVC never to cache this action method. This article seems to address the problem, but I cannot understand it: http://stackoverflow.com/questions/1441467/prevent-caching-of-attributes-in-asp-net-mvc-force-attribute-execution-every-tim

    I need a bit more context to understand how that solution extends AuthorizationAttribute. Please address your answer as if you were speaking to someone who lacks a deep understanding of MVC, even if that means replying with an article on some basics/prerequisites. Thanks, Trey Carroll
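    A sketch of the two standard fixes, either of which is usually enough on its own (the action name and shape below are invented): disable output caching on the server with the stock [OutputCache] attribute, and/or let jQuery cache-bust the request:

        // server side: forbid caching of this action's response
        [OutputCache(NoStore = true, Duration = 0, VaryByParam = "*")]
        public JsonResult RecordVote(int id, int rating)
        {
            // ... record the vote, recompute the average and count ...
            return Json(new { avg = 4.2, count = 17 });
        }

    On the client, passing cache: false to $.ajax() appends a timestamp parameter to GET requests, so IE sees a fresh URL every time and cannot serve the old JSON from its cache.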

    Read the article

  • How Does a COM Program Locate a .NET DLL Registered for COM Interop?

    - by Eric J.
    One customer wants to consume our .NET DLLs from VB6. They are designed to support reverse interop, and all works fine... except: there are two separate VB6 programs in two different directories, and it seems it's necessary to do one of the following:

    1. Copy the .NET DLL into both directories, or
    2. Install the .NET DLL in the GAC.

    This is the customer's observation, and it is also supported by the RegAsm documentation:

        After registering an assembly using Regasm.exe, you can install it in the
        global assembly cache so that it can be activated from any COM client. If
        the assembly is only going to be activated by a single application, you
        can place it in that application's directory.

    I'm confused on this point. First point of confusion: as far as I understand, the COM runtime locates the DLL using the ProgID/ClassID. When I look in the registry at the ClassID entry, I see the full path to the .NET DLL in the CodeBase key. Why is it that a COM program using the ProgID/ClassID doesn't locate the .NET DLL using the CodeBase? Second point of confusion: the GAC is specific to .NET; how is it involved in resolving COM references?
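    As I understand the binding rules (worth verifying against the Fusion log for this case): COM activation of a .NET class goes through mscoree.dll, which loads the assembly with ordinary Fusion probing - the client application's directory first, then the GAC - and treats the registry CodeBase value only as a fallback hint, one that is reliable mainly when registration used the /codebase switch. That is why each VB6 directory needs its own copy unless the assembly is GAC-installed. The two registrations look like this (MyInterop.dll is a placeholder):

        :: register and record an authoritative CodeBase path
        regasm MyInterop.dll /codebase

        :: or: register, then install into the GAC for machine-wide activation
        regasm MyInterop.dll
        gacutil /i MyInterop.dll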

    Read the article

  • RHEL 5.3 Kickstart - How to specify the location of an individual package in the Workstation folder?

    - by Ed
    I keep getting "package does not exist" errors during the install. I made a kickstart ISO to create an unattended install of a RHEL 5.3 build machine for C++ software releases. It pulls the kickstart config file from our internal web server. This is handy; it makes it easy to test and modify without having to make a new ISO. And I plan to check it in to version control if I can get it working. Anyway, the rpm packages are located in two folders on the disk; Client and Workstation. The packages install fine for the ones that are physically located under the Client folder. It cannot find those under the Workstation folder such as as doxygen and subversion complaining that packages do not exist. Is there a way to specify the individual package location? # ----------------------------------------------------------------------------- # P A C K A G E S # ----------------------------------------------------------------------------- %packages @gnome-desktop @core @base @base-x @printing @development-tools emacs kexec-tools fipscheck xorg-x11-server-Xnest xorg-x11-server-Xvfb #Packages Located in Workstation Folder *** Install can not find any of these ?? bison doxygen gcc-c++ subversion zlib-devel freetype-devel libxml2-devel Thanks in advance, -Ed

    Read the article

  • [NSFetchedResultsController sections] returns nil?

    - by Chris
    Hi everyone, I have been trying to resolve this for days at this stage, and I'm hoping you can help. I have two view controllers which query two different tables from the same database using Core Data. The first view controller is opened with the app and displays fine. The second is called from within the first, using a pretty standard fetch setup:

        - (NSFetchedResultsController *)fetchedClients {
            // Set up the fetched results controller if needed.
            if (fetchedClients == nil) {
                // Create the fetch request for the entity.
                NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
                // Edit the entity name as appropriate.
                NSEntityDescription *entity = [NSEntityDescription entityForName:@"Clients"
                                                          inManagedObjectContext:managedObjectContext];
                [fetchRequest setEntity:entity];

                // Edit the sort key as appropriate.
                NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"clientsName"
                                                                               ascending:YES];
                NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
                [fetchRequest setSortDescriptors:sortDescriptors];

                // Edit the section name key path and cache name if appropriate.
                // nil for section name key path means "no sections".
                NSFetchedResultsController *aFetchedResultsController =
                    [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest
                                                        managedObjectContext:managedObjectContext
                                                          sectionNameKeyPath:nil
                                                                   cacheName:@"Root"];
                aFetchedResultsController.delegate = self;
                self.fetchedClients = aFetchedResultsController;

                [aFetchedResultsController release];
                [fetchRequest release];
                [sortDescriptor release];
                [sortDescriptors release];
            }
            return fetchedClients;
        }

    When I call [self.fetchedClients sections], I get a nil (0x0) return. I have examined the database using an external application to ensure data exists in the "Clients" table. Can anyone think of a reason why [self.fetchedClients sections] would return nil? Many thanks for any help you can provide. Regards, Chris
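    One thing worth checking (a sketch of the standard pattern; the calling code isn't shown, so this is an assumption): -sections stays nil until -performFetch: has run. Building the controller lazily is not enough on its own; somewhere - typically viewDidLoad - the fetch has to be executed:

        NSError *error = nil;
        if (![[self fetchedClients] performFetch:&error]) {
            NSLog(@"Fetch failed: %@, %@", error, [error userInfo]);
        }
        // only after a successful performFetch: does
        // [self.fetchedClients sections] return non-nil

    A second thing to rule out: if both view controllers pass cacheName:@"Root", two controllers sharing one cache with different fetch requests can misbehave; giving each a distinct cache name (or nil) removes that variable.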

    Read the article

  • Fulltext search on many tables

    - by Rob
    I have three tables, all of which have a column with a fulltext index. The user will enter search terms into a single text box, and then all three tables will be searched. This is better explained with an example:

        documents
            doc_id
            name            FULLTEXT

        table2
            id
            doc_id
            a_field         FULLTEXT

        table3
            id
            doc_id
            another_field   FULLTEXT

    (I realise this looks stupid, but that's because I've removed all the other fields and tables to simplify it.) So basically I want to do a fulltext search on name, a_field, and another_field, and then show the results as a list of documents, preferably with what caused each document to be found - e.g. if another_field matched, I would display what another_field is.

    I began working on a system whereby three fulltext search queries are performed and the results inserted into a table with a structure like:

        search_results
            table_name
            row_id
            score

    (This could later be made to cache results for a few days, with e.g. a hash of the search terms.) This idea has two problems. The first is that the same document can appear in the search results up to three times, with different scores; instead, if the search term is matched in two tables, it should produce one result with a higher score. The second is that parsing the results is difficult: I want to display a list of documents, but I don't immediately know the doc_id without a join of some kind; however, the table to join to depends on the table_name column, and I'm not sure how to accomplish that.

    Wanting to search multiple related tables like this must be a common thing, so I guess what I'm asking is: am I approaching this in the right way? Can someone tell me the best way of doing it, please?
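    A sketch of a single query that sidesteps both problems (using the simplified table and column names above): normalize every match to a doc_id, a score, and a label saying where it matched, then aggregate, so a document matched in two tables comes back once with a combined score:

        SELECT doc_id,
               SUM(score)               AS total_score,
               GROUP_CONCAT(matched_in) AS matched_in
        FROM (
            SELECT doc_id, 'name' AS matched_in,
                   MATCH(name) AGAINST ('search terms') AS score
            FROM documents
            WHERE MATCH(name) AGAINST ('search terms')
          UNION ALL
            SELECT doc_id, 'a_field',
                   MATCH(a_field) AGAINST ('search terms')
            FROM table2
            WHERE MATCH(a_field) AGAINST ('search terms')
          UNION ALL
            SELECT doc_id, 'another_field',
                   MATCH(another_field) AGAINST ('search terms')
            FROM table3
            WHERE MATCH(another_field) AGAINST ('search terms')
        ) AS hits
        GROUP BY doc_id
        ORDER BY total_score DESC;

    The matched_in column answers "what caused this document to be found" without any table_name-dependent join.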

    Read the article

  • Google App Engine: Unit testing concurrent access to memcache

    - by Phuong Nguyen de ManCity fan
    Would you guys show me a way to simulate concurrent access to memcache on Google App Engine? I'm trying with LocalServiceTestHelper and threads, but haven't had any luck. Every time I try to access memcache from within a thread, I get this error:

        ApiProxy$CallNotFoundException: The API package 'memcache' or call 'Increment()' was not found

    I guess that the testing library of the GAE SDK tries to mimic the real environment and thus sets up the environment only for the thread running the test, so it cannot be seen by other threads. Here is a piece of code that reproduces the problem:

        package org.seamoo.cache.memcacheImpl;

        import org.testng.Assert;
        import org.testng.annotations.AfterMethod;
        import org.testng.annotations.BeforeMethod;
        import org.testng.annotations.Test;

        import com.google.appengine.api.memcache.MemcacheService;
        import com.google.appengine.api.memcache.MemcacheServiceFactory;
        import com.google.appengine.tools.development.testing.LocalMemcacheServiceTestConfig;
        import com.google.appengine.tools.development.testing.LocalServiceTestHelper;

        public class MemcacheTest {
            LocalServiceTestHelper helper;

            public MemcacheTest() {
                LocalMemcacheServiceTestConfig memcacheConfig = new LocalMemcacheServiceTestConfig();
                helper = new LocalServiceTestHelper(memcacheConfig);
            }

            @BeforeMethod
            public void setUp() {
                helper.setUp();
            }

            /** @see LocalServiceTest#tearDown() */
            @AfterMethod
            public void tearDown() {
                helper.tearDown();
            }

            @Test
            public void memcacheConcurrentAccess() throws InterruptedException {
                final MemcacheService service = MemcacheServiceFactory.getMemcacheService();
                Runnable runner = new Runnable() {
                    @Override
                    public void run() {
                        service.increment("test-key", 1L, 1L);
                        try {
                            Thread.sleep(200L);
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                        service.increment("test-key", 1L, 1L);
                    }
                };
                Thread t1 = new Thread(runner);
                Thread t2 = new Thread(runner);
                t1.start();
                t2.start();
                while (t1.isAlive()) {
                    Thread.sleep(100L);
                }
                Assert.assertEquals((Long) (service.get("test-key")), new Long(4L));
            }
        }
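    That diagnosis matches how the helper works: helper.setUp() binds the stub environment to the current thread only, so API calls from worker threads find no 'memcache' package. A sketch of the usual workaround (the capture-and-rebind pattern; verify it against your SDK version): capture the environment on the test thread and install it in each worker before touching memcache:

        import com.google.apphosting.api.ApiProxy;

        // inside the test method, before creating the threads:
        final ApiProxy.Environment env = ApiProxy.getCurrentEnvironment();
        Runnable runner = new Runnable() {
            @Override
            public void run() {
                // rebind the test environment to this worker thread
                ApiProxy.setEnvironmentForCurrentThread(env);
                service.increment("test-key", 1L, 1L);
                // ... rest of the original run() body unchanged ...
            }
        };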

    Read the article

  • What is the best way to test a database? (MySQL specific)

    - by justjoe
    Right now I'm testing something in a database - a WordPress database. I have to write, delete, and do other operations on it. As you know, it has an indexing mechanism that always gives every new post the next highest available ID. Please consider that this database is a copy of a database that is in use; it has been written to before, so when I finish my testing I need to make sure it is back in the same state.

    Right now, my only solution is making backups: when I finish one section of planned testing, I back up the database and start the next test on a fresh copy. Fortunately the database is small, so deleting, copying, and restoring it is easy. But I know this way of testing is only a partial solution: it forces me to create too many backup copies, and I don't know what I would do if the database were bigger - it would be a very long testing nightmare.

    So I wonder: is there any solution that works just like a rollback, locking the database and keeping new entries in some kind of cache, which I could then either discard or write into the database? I use MySQL and phpMyAdmin to develop a custom solution.

    EDIT: How do you effectively test a database when developing a PHP solution?
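    What you're describing - buffer the writes, then either discard or keep them - is exactly a transaction. A sketch, with one assumption called out loudly: the tables must use the InnoDB engine, because WordPress tables created as MyISAM accept ROLLBACK silently but undo nothing:

        START TRANSACTION;

        -- planned test writes
        INSERT INTO wp_posts (post_title, post_status) VALUES ('test post', 'draft');
        DELETE FROM wp_posts WHERE post_title = 'test post';

        -- discard everything and return to the pre-test state
        ROLLBACK;

    One caveat that matters for the ID concern: the AUTO_INCREMENT counter is not rolled back, so post IDs will still advance across test runs even though the rows vanish.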

    Read the article

  • Huge mysql table with Zend Framework

    - by Uffo
    I have a MySQL table with over 4 million rows. The problem is that SOME queries WORK and SOME DON'T - it depends on the search term. If the search term matches a big volume of data in the table, I get the following error:

        Fatal error: Allowed memory size of 1048576000 bytes exhausted (tried to
        allocate 75 bytes) in /home/****/public_html/Zend/Db/Statement/Pdo.php on line 290

    I currently have the Zend Framework metadata cache enabled and an index on all the fields of that table. The site is running on a dedicated server with 2 GB of RAM, and I've also raised the memory limit:

        ini_set("memory_limit", "1000M");

    Any other things that I can optimize? These are the types of query I'm currently using:

        $do = $this->select()
                   ->where('branche LIKE ?', '%' . mysql_escape_string($branche) . '%')
                   ->order('premium DESC');

        // for name
        if (empty($branche) && empty($plz)) {
            $do = $this->select("MATCH(`name`) AGAINST ('{$theString}') AS score")
                       ->where('MATCH(`name`) AGAINST( ? IN BOOLEAN MODE)', $theString)
                       ->order('premium DESC, score');
        }

    And a few others, but they are pretty much the same. Best regards
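    The exhaustion comes from materializing the whole matching result set in PHP at once; raising memory_limit only moves the cliff. A sketch of the usual fix (Zend_Db_Select's own limit(); the page numbers are illustrative):

        // fetch one page of 50 rows instead of every matching row
        $page     = 1;   // current page, e.g. from the request
        $pageSize = 50;
        $do = $this->select()
                   ->where('branche LIKE ?', '%' . $branche . '%')
                   ->order('premium DESC')
                   ->limit($pageSize, ($page - 1) * $pageSize); // count, offset

    Zend_Paginator can wrap the same select object and handle the page arithmetic, and the fulltext variant takes the identical ->limit() call.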

    Read the article

  • Flex-built SWFs no longer work: errors 2048, 2046, 2032

    - by Kevin
    I'm really confused by this problem, and I'm pretty new to Flex. Basically, anything I try to build with mxmlc fails to run now, giving me the above three errors depending on what I do. It was working 30 minutes ago, and I've spent that time trying to figure out what has changed. I redownloaded the Flex SDK, cleared my asset cache, and cleared Firefox's cache. (I'm using Linux.) Even if I compile with -static-link-runtime-shared-libraries=false, since it seems like #2048 is an RSL problem, it still refuses to run.

    Another strange thing: if I keep

        <policy-file-url>http://fpdownload.adobe.com/pub/swz/crossdomain.xml</policy-file-url>
        <rsl-url>textLayout_1.0.0.595.swz</rsl-url>

    in my flex-config file, then Firebug tells me that my SWF is trying to access a copy of that RSL in the app's folder, giving error 2032. And if I put the one I have in frameworks/rsls/ there, it gives me error 2046. I don't know how it could not be properly signed, unless Adobe magically changed a signature without updating the Flex SDK. Any help will be appreciated.
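    A quick way to take the RSL machinery out of the equation (a sketch; Main.mxml stands in for your application file): compile with static linking turned on, so the framework classes are baked into the SWF instead of being fetched as signed RSLs at runtime:

        mxmlc -static-link-runtime-shared-libraries=true Main.mxml

    If that SWF runs, the failure is confined to RSL resolution - the policy-file-url/rsl-url pairs in flex-config.xml and the .swz files they point at - rather than to your code or the player.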

    Read the article

  • App session cookie not being created in Rails, sporadically

    - by James
    Hi everyone, this is a sporadic issue for very few users; however, we haven't been able to replicate it. But I now have a Chrome instance (Mac) which is reproducing the error (for some unknown reason), and I hope not to restart it until I have this nailed!

    It's a Rails application, using memcached for the session store. While the bug manifests as the _app_session_id cookie not being created, our JavaScript-generated test cookie and app-generated language cookies are created successfully. This means that InvalidAuthenticityToken errors are thrown for every form submitted by those afflicted - people can't log into the app. The error occurs across all browsers - we've had reports for IE7 and Firefox (which most users use). Switching to another browser often fixes the issue (though not always), and standard clear-the-cache-and-cookies tactics do not.

    So now I have a Chrome instance that is having the same issue - in development, staging, and live environments (meaning both http and https). All other browsers are fine. I've restarted the servers and restarted memcached. I don't really want to restart Chrome, at the risk that the issue goes away with it (having said that, restarting hasn't worked for users). I've been tcpdumping the requests, and although I'll keep digging, I'd love it if anyone had any suggestions, places to start looking, anything. This is really painful ;) Thanks!
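    One narrowing check from the affected machine (a sketch; substitute your real login URL): fetch a page with curl and see whether the server is issuing the session cookie at all, which splits "the app never sends Set-Cookie" from "the browser refuses to store it":

        curl -is https://yourapp.example.com/login | grep -i '^Set-Cookie'

    If Set-Cookie: _app_session_id appears here but not in the browser, the next things to inspect are the cookie's domain and expiry attributes (a server clock far in the past can make a cookie expire on arrival).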

    Read the article

  • Synchronization of Nested Data Structures between Threads in Java

    - by Dominik
    I have a cache implementation like this:

        class X {
            private final Map<String, ConcurrentMap<String, String>> structure =
                new HashMap...();

            public String getValue(String context, String id) {
                // just assume for this example that there will always be an inner map
                final ConcurrentMap<String, String> innerStructure = structure.get(context);
                String value = innerStructure.get(id);
                if (value == null) {
                    synchronized (structure) {
                        // can I be sure that this inner map will represent the last
                        // updated state from any thread?
                        value = innerStructure.get(id);
                        if (value == null) {
                            value = getValueFromSomeSlowSource(id);
                            innerStructure.put(id, value);
                        }
                    }
                }
                return value;
            }
        }

    Is this implementation thread-safe? Can I be sure to get the last updated state from any thread inside the synchronized block? Would this behaviour change if I used a java.util.concurrent.ReentrantLock instead of a synchronized block, like this:

        ...
        if (lock.tryLock(3, SECONDS)) {
            try {
                value = innerStructure.get(id);
                if (value == null) {
                    value = getValueFromSomeSlowSource(id);
                    innerStructure.put(id, value);
                }
            } finally {
                lock.unlock();
            }
        }
        ...

    I know that final instance members are synchronized between threads, but is this also true for the objects held by these members? Maybe this is a dumb question, but I don't know how to test it to be sure that it works on every OS and every architecture.
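    For what it's worth, the inner maps are already ConcurrentMaps, so their own atomic operations can stand in for the outer synchronized block entirely - a sketch of just the changed method (same class shape as above):

        public String getValue(String context, String id) {
            final ConcurrentMap<String, String> inner = structure.get(context);
            String value = inner.get(id);
            if (value == null) {
                String fresh = getValueFromSomeSlowSource(id);
                // atomic: returns the previously mapped value if another thread
                // won the race, or null if our value was installed
                String prev = inner.putIfAbsent(id, fresh);
                value = (prev != null) ? prev : fresh;
            }
            return value;
        }

    The trade-off: two racing threads may both call getValueFromSomeSlowSource(), but only one result is kept, and all threads see a consistent value thanks to ConcurrentMap's own memory-visibility guarantees.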

    Read the article

  • CIE XYZ colorspace: is it RGBA or XYZA?

    - by Tronic
    I plan to write a painting program based on linear combinations of xy-plane points (0,1), (1,0), and (0,0). Such a system works identically to RGB, except that the primaries are not within the gamut but at the corners of a triangle that encloses the entire gamut, therefore allowing all colors to be reproduced. I have seen the three points referred to as X, Y, and Z (upper case) somewhere, but I cannot find the page anymore. My pixel format stores the intensity of each of those three components the same way RGB does, together with an alpha value. This allows using pretty much any image manipulation operation designed for RGBA without modifying the code.

    What is my format called? Is it XYZA, RGBA, or something else? Google doesn't seem to know of XYZA, and RGBA would get confused with sRGB + alpha (which I also need to use in the same program). Note that the primaries X, Y, and Z and their intensities have little to do with the x, y, and z coordinates (lower case) that are more commonly used.

    Read the article

  • Compile C++ files as Objective-C++ using a makefile

    - by Vikas
    I'm trying to compile .cpp files as Objective-C++ using a makefile, as a few of my .cpp files contain Objective-C code. I added -x objective-c++ as a compiler option and started getting "stray '\327' in program" errors (and lots of similar errors with different numbers after the backslash). There are around 200 errors, but when I change the encoding of the file from UTF-8 to UTF-16 the errors reduce to 23. Currently there is no Objective-C++ code in the .cpp file, but I plan to add some in the future. When I remove -x objective-c++ from the compiler options, everything compiles fine and the .out is generated. I would be grateful if someone could tell me why this is happening, and suggest a solution. Thanks in advance.

    An example of my makefile:

        MACHINE= $(shell uname -s)
        CFLAGS?=-w -framework CoreServices -framework ApplicationServices -framework CoreFoundation -framework CoreWLAN -framework Cocoa -framework Foundation

        ifeq ($(MACHINE),Darwin)
        CCLINK?= -lpthread
        else
        CCLINK?= -lpthread -lrt
        endif

        DEBUG?= -g -rdynamic -ggdb
        CCOPT= $(CFLAGS) $(ARCH) $(PROF)
        CC =g++ -x objective-c++
        AR = ar rcs

        #lib name
        SLIB_NAME=myapplib
        EXENAME = myapp.out
        OBJDIR = build
        OBJLIB := $(addprefix $(OBJDIR)/... all .o files)
        SS_OBJ := $(addprefix $(OBJDIR)/,myapp.o )

        vpath %.cpp path to my .cpp files
        INC = include files

        subsystem:
        	make all

        $(OBJLIB) : |$(OBJDIR)

        $(OBJDIR):
        	mkdir $(OBJDIR)

        $(OBJDIR)/%.o:%.cpp
        	$(CC) -c $(INC) $(CCOPT) $(DEBUG) $(CCLINK) $< -o $@

        all: $(OBJLIB) $(CLI_OBJ) $(SS_OBJ)
        	$(AR) lib$(SLIB_NAME).a $(OBJLIB)
        	$(CC) $(INC) $(CCOPT) $(SS_OBJ) $(DEBUG) $(CCLINK) -l$(SLIB_NAME) -L ./ -o $(OBJDIR)/$(EXENAME)

        clean:
        	rm -rf $(OBJDIR)/*

        dep:
        	$(CC) -MM *.cpp
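    A sketch of one way to let the two dialects coexist (the stray '\327' bytes suggest non-UTF-8 characters in the sources being rejected once the compiler treats the files differently, so re-saving the offending files as plain UTF-8 without a BOM is the first thing to try; that encoding diagnosis is an assumption from the error pattern). Keep .cpp files compiled as plain C++, and give the mixed sources a .mm extension with their own pattern rule:

        CXX = g++

        # plain C++ sources
        $(OBJDIR)/%.o: %.cpp
        	$(CXX) -c $(INC) $(CCOPT) $(DEBUG) $< -o $@

        # mixed Objective-C++ sources (rename these from .cpp to .mm)
        $(OBJDIR)/%.o: %.mm
        	$(CXX) -x objective-c++ -c $(INC) $(CCOPT) $(DEBUG) $< -o $@

    This way only the files that actually contain Objective-C pay for the dialect switch, and future mixed files just take the .mm extension.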

    Read the article
